Detroit

Detroit is the most populous city in the U.S. state of Michigan, the fourth-largest city in the Midwest, and the largest city on the United States–Canada border. It is the seat of Wayne County, the most populous county in the state.
The municipality of Detroit had a 2015 estimated population of 677,116, making it the 21st-most populous city in the United States. The metropolitan area, known as Metro Detroit, is home to 4.3 million people and lies at the heart of the Great Lakes Megalopolis area, with around 60 million people. Roughly one-half of Michigan's population lives in Metro Detroit alone. The Detroit–Windsor area, a commercial link straddling the Canada–U.S. border, has a total population of about 5.7 million.World Agglomerations Retrieved on May 5, 2009.
Detroit is a major port on the Detroit River, a strait that connects the Great Lakes system to the Saint Lawrence Seaway. The Detroit Metropolitan Airport is among the most important hubs in the United States. The City of Detroit anchors the second-largest economic region in the Midwest, behind Chicago, and the thirteenth-largest in the United States. Detroit and its neighboring Canadian city Windsor are connected through a tunnel and various bridges, with the Ambassador Bridge being the busiest international crossing in North America.
Detroit was founded on July 24, 1701, by the French explorer and adventurer Antoine de la Mothe Cadillac and a party of settlers. During the 19th century, it became an important industrial hub at the center of the Great Lakes region. With the expansion of the American automobile industry in the early 20th century, the Detroit area emerged as a significant metropolitan region within the United States, and the city was for a period the fourth-largest in the country. In the 1950s and 1960s, suburban expansion continued with the construction of a regional freeway system. Much of Detroit's public transport was abandoned in the post-war period as the city reoriented around the automobile, a trend that has only gradually reversed since the 1970s.
Due to industrial restructuring and the loss of jobs in the auto industry, Detroit has lost considerable population since the late 20th century. Between 2000 and 2010 the city's population fell by 25 percent, dropping its ranking from the nation's 10th-largest city to its 18th. In 2010, the city had a population of 713,777, more than a 60 percent drop from its peak of over 1.8 million at the 1950 census. This decline resulted from suburbanization, corruption, industrial restructuring, and the contraction of Detroit's auto industry. In 2013, the state of Michigan declared a financial emergency for the city; the emergency was resolved and control of the city's finances was returned to Detroit in December 2014. Detroit has experienced urban decay as its population and jobs have shifted to its suburbs or elsewhere.
Detroit's earlier rapid growth left it with a globally significant stock of architectural monuments and historic places from the first half of the 20th century, many of which have fallen into disrepair or been torn down since the 1960s. Since the 2000s, conservation efforts have saved many architectural works and enabled several large-scale revitalizations. Downtown Detroit has taken on an increased role as a cultural destination in the 21st century, with the restoration of several historic theatres and entertainment venues, high-rise renovations, new sports stadiums, and a riverfront revitalization project. More recently, the population of Downtown Detroit, Midtown Detroit, and various other neighborhoods has increased. Other neighborhoods remain distressed by property abandonment, though some have been partly revitalized by initiatives such as Blight Busters or renovated by new residents, including students and young entrepreneurs, for affordable housing and home-sharing.
History
Native occupation
Paleo-Indian people inhabited areas near Detroit as early as 11,000 years ago. In the 17th century, the region was inhabited by Huron, Odawa, Potawatomi, and Iroquois peoples.
The first Europeans did not penetrate into the region and reach the straits of Detroit until French missionaries and traders worked their way around the League of the Iroquois, with whom they were at war, and other Iroquoian tribes in the 1630s. The north side of Lake Erie was held by the Huron and Neutral peoples until the 1650s, when the Iroquois pushed both them and the Erie people away from the lake and its beaver-rich feeder streams in the Beaver Wars of 1649–1655. By the 1670s, the war-weakened Iroquois laid claim to land as far south as the Ohio River valley in northern Kentucky as hunting grounds, and had absorbed many other Iroquoian peoples after defeating them in war. For the next hundred years, virtually no British, colonial, or French action was contemplated without consultation with, or consideration of, the Iroquois' likely response. When the French and Indian War evicted the Kingdom of France from Canada, it removed one barrier to British colonists migrating west.
British negotiations with the Iroquois would both prove critical and lead to a Crown policy limiting settlement west of the Alleghenies and below the Great Lakes, which gave many American would-be migrants a casus belli for supporting the American Revolution. The raids of 1778 and the resulting, decisive Sullivan Expedition of 1779 reopened the Ohio Country to westward emigration, which began almost immediately, and by 1800 white settlers were pouring westward.
European settlement
Ste. Anne de Détroit, founded in 1701 by French colonists, is the second-oldest continuously operating Catholic parish in the United States. The present church was completed in 1887.
The city was named by French colonists, referring to the Detroit River (French: le détroit du lac Érié, meaning "the strait of Lake Erie"), linking Lake Huron and Lake Erie; in the historical context, the strait included the St. Clair River, Lake St. Clair, and the Detroit River.
On the shores of the strait, in 1701, the French officer Antoine de la Mothe Cadillac, along with fifty-one French people and French Canadians, founded a settlement called Fort Pontchartrain du Détroit, naming it after Louis Phélypeaux, comte de Pontchartrain, Minister of Marine under Louis XIV. France offered free land to colonists to attract families to Detroit; when it reached a total population of 800 in 1765, it was the largest city between Montreal and New Orleans, both also French settlements.French Ontario in the 17th and 18th centuries – Detroit. Archives of Ontario, July 14, 2008. Retrieved July 23, 2008. By 1773, the population of Detroit was 1,400. By 1778, its population was up to 2,144 and it was the third-largest city in the Province of Quebec.Jacqueline Peterson, Jennifer S. H. Brown, Many Roads to Red River (2001), p. 69.
The region grew based on the lucrative fur trade, in which numerous Native American people played important roles. Detroit's city flag reflects its French colonial heritage (see Flag of Detroit). Descendants of the earliest French and French Canadian settlers formed a cohesive community, which was gradually displaced as the dominant population after more Anglo-American settlers came to the area in the early 19th century. Living along the shores of Lake St. Clair and south to Monroe and the downriver suburbs, the French Canadians of Detroit, also known as Muskrat French, remain a subculture in the region today.
During the French and Indian War (1754–63), the North American front of the Seven Years' War between Britain and France, British troops gained control of the settlement in 1760. They shortened the name to Detroit. Several Native American tribes launched Pontiac's Rebellion (1763), and conducted a siege of Fort Detroit, but failed to capture it. In defeat, France ceded its territory in North America east of the Mississippi to Britain following the war.
Following the American Revolutionary War and United States independence, Britain ceded Detroit along with other territory in the area under the Jay Treaty (1796), which established the northern border with Canada. In 1805, fire destroyed most of the Detroit settlement, which consisted mostly of wooden buildings. A river warehouse and brick chimneys of the former wooden homes were the sole structures to survive."Ste. Anne of Detroit" , St. Anne Church. Retrieved on April 29, 2006.
19th century
Surrender of Detroit, a painting by John Wycliffe Lowes Forster depicting the aftermath of the Siege of Detroit in 1812
From 1805 to 1847, Detroit was the capital of Michigan (first the territory, then the state). Detroit surrendered without a fight to British troops during the War of 1812 in the Siege of Detroit. The Battle of Frenchtown (January 18–23, 1813) was part of a United States effort to retake the city, and American troops suffered their highest fatalities of any battle in the war. This battle is commemorated at River Raisin National Battlefield Park south of Detroit in Monroe County. Detroit was finally recaptured by the United States later that year.
It was incorporated as a city in 1815. As the city expanded, a geometric street plan developed by Augustus B. Woodward was followed, featuring grand boulevards as in Paris.
Prior to the American Civil War, the city's access to the Canada–US border made it a key stop for refugee slaves gaining freedom in the North along the Underground Railroad. Many went across the Detroit River to Canada to escape pursuit by slave catchers. There were estimated to be 20,000 to 30,000 African-American refugees who settled in Canada.Underground Railroad, US Department of Interior, National Park Service, Denver Service Center. DIANE Publishing, Feb 1, 1995, p168 George DeBaptiste was considered to be the "president" of the Detroit Underground Railroad, William Lambert the "vice president" or "secretary" and Laura Haviland the "superintendent".Tobin, Jacqueline L. From Midnight to Dawn: The Last Tracks of the Underground Railroad. Anchor, 2008. p200-209
Numerous men from Detroit volunteered to fight for the Union during the American Civil War, including the 24th Michigan Infantry Regiment (part of the legendary Iron Brigade), which fought with distinction and suffered 82% casualties at the Battle of Gettysburg in 1863. When the First Michigan Volunteer Infantry Regiment arrived to fortify Washington, D.C., President Abraham Lincoln is quoted as saying "Thank God for Michigan!" George Armstrong Custer led the Michigan Brigade during the Civil War and called them the "Wolverines".Rosentreter, Roger (July/August 1998). "Come on you Wolverines, Michigan at Gettysburg," Michigan History magazine.
During the late 19th century, several Gilded Age mansions reflecting the wealth of industry and shipping magnates were built east and west of the current downtown, along the major avenues of the Woodward plan. Most notable among them was the David Whitney House at 4421 Woodward Avenue, an avenue that became a prime location for mansions. During this period some referred to Detroit as the Paris of the West for its architecture, its grand avenues in the Paris style, and Washington Boulevard, recently electrified by Thomas Edison. The city had grown steadily from the 1830s with the rise of the shipping, shipbuilding, and manufacturing industries. Strategically located along the Great Lakes waterway, Detroit emerged as a major port and transportation hub.
In 1896, a thriving carriage trade prompted Henry Ford to build his first automobile in a rented workshop on Mack Avenue. During this growth period, Detroit expanded its borders by annexing all or part of several surrounding villages and townships.
20th century
A 4 p.m. change of work shift at the Ford Motor Company assembly plant in Highland Park, Michigan, in the 1910s
In 1903, Henry Ford founded the Ford Motor Company. Ford's manufacturing—and that of automotive pioneers William C. Durant, the Dodge Brothers, Packard, and Walter Chrysler—established Detroit's status in the early 20th century as the world's automotive capital. The growth of the auto industry was reflected in changes in businesses throughout the Midwest and the nation, with the development of garages to service vehicles, gas stations, and factories for parts and tires.
With the rapid growth of industrial workers in the auto factories, labor unions such as the American Federation of Labor and the United Auto Workers fought to organize workers and win them better working conditions and wages. They initiated strikes and other tactics in support of improvements such as the 8-hour day and 40-hour work week, increased wages, greater benefits, and improved working conditions. The labor activism of those years increased the influence of union leaders in the city, such as Jimmy Hoffa of the Teamsters and Walter Reuther of the United Auto Workers.
With the influence of the booming auto industry, the city became the fourth-largest in the nation in 1920, behind only New York City, Chicago, and Philadelphia.
The prohibition of alcohol from 1920 to 1933 resulted in the Detroit River becoming a major conduit for smuggling of illegal Canadian spirits.Nolan, Jenny (June 15, 1999).How Prohibition made Detroit a bootlegger's dream town. Michigan History, The Detroit News. Retrieved on November 23, 2007.
Detroit, like many places in the United States, developed racial conflict and discrimination in the 20th century following rapid demographic changes as hundreds of thousands of new workers were attracted to the industrial city; in a short period it became the 4th-largest city in the nation. The Great Migration brought rural blacks from the South; they were outnumbered by southern whites who also migrated to the city. Immigration brought southern and eastern Europeans of Catholic and Jewish faith; these new groups competed with native-born whites for jobs and housing in the booming city. Detroit was one of the major Midwest cities that was a site for the dramatic urban revival of the Ku Klux Klan beginning in 1915. "By the 1920s the city had become a stronghold of the KKK," whose members opposed Catholic and Jewish immigrants, as well as black Americans."Detroit Race Riots 1943". Eleanor Roosevelt, WGBH American Experience, PBS (June 20, 1983). Retrieved on 2013-09-05. The Black Legion, a secret vigilante group, was active in the Detroit area in the 1930s, when one-third of its estimated 20,000 to 30,000 members in Michigan were based in the city. It was defeated after numerous prosecutions following the kidnapping and murder in 1936 of Charles Poole, a Catholic Works Progress Administration organizer. A total of 49 men of the Black Legion were convicted of numerous crimes, with many sentenced to life in prison for murder.
Looking south down Woodward Avenue, with the Detroit skyline in the distance, July 1942
In the 1940s, the Davison, the world's first urban depressed freeway, was constructed in Detroit.Route Listings: M-8. Michigan Highways. Retrieved on July 16, 2013. During World War II, the government encouraged the retooling of the American automobile industry in support of the Allied powers, leading to Detroit's key role in the American Arsenal of Democracy.Nolan, Jenny (January 28, 1997). Willow Run and the Arsenal of Democracy. Michigan History, The Detroit News. Retrieved on November 23, 2007.
Jobs expanded so rapidly that 400,000 people were attracted to the city from 1941 to 1943, including 50,000 blacks in the second wave of the Great Migration and 350,000 whites, many of them from the South. Some European immigrants and their descendants feared black competition for jobs and housing. The federal government prohibited discrimination in defense work, but when Packard promoted three black workers to positions next to whites on its assembly lines in June 1943, 25,000 whites walked off the job.Philip A. Klinkner, Rogers M. Smith, The Unsteady March: The Rise and Decline of Racial Equality in America. Retrieved on July 16, 2013. The Detroit race riot of 1943 took place three weeks after the Packard plant protest. Over the course of three days, 34 people were killed, of whom 25 were African American, and approximately 600 were injured, 75 percent of them black.The 1943 Detroit race riots – Michigan History, The Detroit News (February 10, 1999). Retrieved on July 16, 2013.
Postwar era
Industrial mergers in the 1950s, especially in the automobile sector, increased oligopoly in the American auto industry. Detroit manufacturers such as Packard and Hudson merged into other companies and eventually disappeared. At its peak population of 1,849,568 in the 1950 Census, the city was the 5th-largest in the United States, after New York City, Chicago, Philadelphia, and Los Angeles.
As in other major American cities in the postwar era, construction of an extensive highway and freeway system around Detroit and pent-up demand for new housing stimulated suburbanization; highways made commuting by car easier. In 1956, Detroit's last heavily used electric streetcar line, along the length of Woodward Avenue, was removed and replaced with gas-powered buses. It was the last line of what had once been a 534-mile network of electric streetcars. In 1941, at peak times, a streetcar ran on Woodward Avenue every 60 seconds.Peter Gavrilovich & Bill McGraw (2000), The Detroit Almanac: 300 Years of Life in the Motor City, p. 232. News+Views: Back track, Metro Times. Retrieved on July 16, 2013.
All of these changes in the area's transportation system favored low-density, auto-oriented development rather than high-density urban development, and industry also moved to the suburbs. By the 21st century, metro Detroit had developed into one of the most sprawling job markets in the United States; combined with poor public transport, this left many jobs beyond the reach of urban low-income workers."Metro Detroit job sprawl worst in U.S.; many jobs beyond reach of poor", Detroit Free Press. Retrieved on July 16, 2013.
The Packard Automotive Plant, an automobile factory closed and abandoned in 1958; redevelopment efforts have been under way since 2014.
In 1950, the city held about one-third of the state's population, anchored by its industries and workers. Over the next sixty years, the city's population declined to less than 10 percent of the state's population. During the same time period, the sprawling Detroit metropolitan area, which surrounds and includes the city, grew to contain more than half of Michigan's population. The shift of population and jobs eroded Detroit's tax base.
In June 1963, Rev. Martin Luther King Jr. gave a major speech in Detroit that foreshadowed his "I Have a Dream" speech in Washington, D.C., two months later. While the African-American Civil Rights Movement gained significant federal civil rights laws in 1964 and 1965, longstanding inequities resulted in confrontations between the police and inner-city black youth who wanted change. These longstanding tensions in Detroit culminated in the Twelfth Street riot of July 1967. Governor George W. Romney ordered the Michigan National Guard into Detroit, and President Johnson sent in U.S. Army troops. The result was 43 dead, 467 injured, over 7,200 arrests, and more than 2,000 buildings destroyed, mostly in black residential and business areas. Thousands of small businesses closed permanently or relocated to safer neighborhoods. The affected district lay in ruins for decades.Sidney Fine, Violence in the Model City: The Cavanaugh Administration, Race Relations, and the Detroit Riot of 1967 (1989). It was among the most costly riots in United States history.
On August 18, 1970, the NAACP filed suit against Michigan state officials, including Governor William Milliken, charging de facto public school segregation. The NAACP argued that although schools were not legally segregated, the city of Detroit and its surrounding counties had enacted policies to maintain racial segregation in public schools. The NAACP also suggested a direct relationship between unfair housing practices and educational segregation, since school composition followed segregated neighborhoods. The District Court held all levels of government accountable for the segregation in its ruling. The Sixth Circuit Court affirmed some of the decision, holding that it was the state's responsibility to integrate across the segregated metropolitan area. The U.S. Supreme Court took up the case on February 27, 1974. The subsequent Milliken v. Bradley decision had wide national influence. In a narrow decision, the Court found that schools were a subject of local control and that suburbs could not be forced to help solve problems in the city's school district.
"Milliken was perhaps the greatest missed opportunity of that period," said Myron Orfield, professor of law at the University of Minnesota. "Had that gone the other way, it would have opened the door to fixing nearly all of Detroit's current problems.""Squandered opportunities leave Detroit isolated", Remapping Debate website. Retrieved on July 16, 2013. John Mogk, a professor of law and an expert in urban planning at Wayne State University in Detroit, says, "Everybody thinks that it was the riots [in 1967] that caused the white families to leave. Some people were leaving at that time but, really, it was after Milliken that you saw mass flight to the suburbs. If the case had gone the other way, it is likely that Detroit would not have experienced the steep decline in its tax base that has occurred since then."
1970s and decline
New cars built in Detroit loaded for rail transport, 1973
In November 1973, the city elected Coleman Young as its first black mayor. After taking office, Young emphasized increasing racial diversity in the police department. Young also worked to improve Detroit's transportation system, but tension between Young and his suburban counterparts over regional matters was problematic throughout his mayoral term. In 1976, the federal government offered $600 million for building a regional rapid transit system, under a single regional authority. But the inability of Detroit and its suburban neighbors to solve conflicts over transit planning resulted in the region losing the majority of funding for rapid transit. Following the failure to reach an agreement over the larger system, the City moved forward with construction of the elevated downtown circulator portion of the system, which became known as the Detroit People Mover.
Michigan Central Station, which lost its Amtrak service in 1988, photographed in 2010; the building has been undergoing renovation since the 2010s.
The gasoline crises of 1973 and 1979 also affected Detroit and the U.S. auto industry. Buyers chose smaller, more fuel-efficient cars made by foreign makers as the price of gas rose. Efforts to revive the city were stymied by the struggles of the auto industry, as their sales and market share declined. Automakers laid off thousands of employees and closed plants in the city, further eroding the tax base. To counteract this, the city used eminent domain to build two large new auto assembly plants in the city.
As mayor, Young sought to revive the city by increasing investment in its declining downtown. The Renaissance Center, a mixed-use office and retail complex, opened in 1977. This group of skyscrapers was an attempt to keep businesses downtown. Young also gave city support to other large developments to attract middle- and upper-class residents back to the city. Despite the Renaissance Center and other projects, the downtown area continued to lose businesses to the automobile-dependent suburbs. Major stores and hotels closed, and many large office buildings went vacant. Young was criticized for being too focused on downtown development and not doing enough to lower the city's high crime rate and improve city services.
Long a major population center and site of worldwide automobile manufacturing, Detroit has suffered a long economic decline produced by numerous factors. Like many industrial American cities, Detroit reached its population peak in the 1950 census. The peak population was 1.8 million people. Following suburbanization, industrial restructuring, and loss of jobs (as described above), by the 2010 census, the city had less than 40 percent of that number, with just over 700,000 residents. The city has declined in population in each census since 1950.
High unemployment was compounded by middle-class flight to the suburbs, and some residents leaving the state to find work. The city was left with a higher proportion of poor in its population, reduced tax base, depressed property values, abandoned buildings, abandoned neighborhoods, high crime rates and a pronounced demographic imbalance.
1990s–2000s
The Renaissance Center, home of the world headquarters of General Motors and the tallest hotel in the Western Hemisphere, sits along the International Riverfront.
In 1993 Young retired as Detroit's longest serving mayor, deciding not to seek a sixth term. That year the city elected Dennis Archer, a former Michigan Supreme Court justice. Archer prioritized downtown development and easing tensions with Detroit's suburban neighbors. A referendum to allow casino gambling in the city passed in 1996; several temporary casino facilities opened in 1999, and permanent downtown casinos with hotels opened in 2007–08.
Campus Martius, a reconfiguration of downtown's main intersection as a new park, opened in 2004. The park has been cited as one of the best public spaces in the United States. The city's riverfront has been the focus of redevelopment, following successful examples of other older industrial cities. In 2001, the first portion of the International Riverfront was completed as part of the city's 300th anniversary celebration, with miles of parks and associated landscaping completed in succeeding years. In 2011, the Port Authority Passenger Terminal opened, with the riverwalk connecting Hart Plaza to the Renaissance Center.Bailey, Ruby L. (August 22, 2007). "The D is a draw: Most suburbanites are repeat visitors," Detroit Free Press. Quote: A Local 4 poll conducted by Selzer and Co. finds, "nearly two-thirds of residents of suburban Wayne, Oakland, and Macomb counties say they at least occasionally dine, attend cultural events or take in professional games in Detroit."
Since 2006, $9 billion has been invested in downtown and surrounding neighborhoods; $5.2 billion of that came in 2013 and 2014. Construction activity, particularly rehabilitation of historic downtown buildings, has increased markedly. The number of vacant downtown buildings has dropped from nearly 50 to around 13.Kramer, Mary (September 28, 2014). "Rebuilding city takes patience, vision," Crain's Detroit Business. http://www.crainsdetroit.com/article/20140928/BLOG018/309289997/rebuilding-city-takes-patience-vision Among the most notable redevelopment projects are the Book Cadillac Hotel and the Fort Shelby Hotel, the David Broderick Tower, and the David Whitney Building.
Little Caesars Arena, a new home for the Detroit Red Wings and the Detroit Pistons with attached residential, hotel, and retail use, is under construction and set to open in fall 2017.Gallagher, John (July 14, 2014). "Hockey, basketball, housing and more: Ilitches unveil 'bold vision' for Red Wings & Pistons arena district," Detroit Free Press. http://archive.freep.com/article/20140720/BUSINESS06/307200102/Ilitch-Red-Wings-Pistons-arena-Midtown The plans for the project call for mixed-use residential development on the blocks surrounding the arena and the renovation of the vacant 14-story Eddystone Hotel.
21st century
Detroit's protracted decline has resulted in severe urban decay and thousands of empty buildings around the city. Some parts of Detroit are so sparsely populated that the city has difficulty providing municipal services. The city has considered various solutions, such as demolishing abandoned homes and buildings; removing street lighting from large portions of the city; and encouraging the small population in certain areas to move to more populated locations. Roughly half of the owners of Detroit's 305,000 properties failed to pay their 2011 tax bills, resulting in about $246 million in taxes and fees going uncollected, nearly half of which was due to Detroit; the rest of the money would have been earmarked for Wayne County, Detroit Public Schools, and the library system.
The Westin Book-Cadillac Hotel during extensive restoration. The hotel tower reopened in 2008.
In September 2008, Mayor Kwame Kilpatrick (who had served for six years) resigned following felony convictions. In 2013, Kilpatrick was convicted on 24 federal felony counts, including mail fraud, wire fraud, and racketeering, and was sentenced to 28 years in federal prison. The former mayor's activities cost the city an estimated $20 million.http://www.usatoday.com/story/news/nation/2013/10/06/how-corruption-deepened-detroits-crisis/2929137/ In 2013, felony bribery charges were brought against seven building inspectors.http://www.huffingtonpost.com/2013/08/29/detroit-corruption_n_3837180.html In 2016, further corruption charges were brought against 12 principals, a former school superintendent, and a supply vendor for a $12 million kickback scheme.http://www.freep.com/story/news/local/michigan/detroit/2016/04/17/vendor-dps-corruption-case-lived-like-king/82767944/ http://www.npr.org/sections/ed/2016/04/22/474737468/-the-latest-corruption-charges-in-detroits-struggling-schools http://thinkprogress.org/education/2016/03/30/3764706/detroit-school-principals-kickbacks/ Law professor Peter Henning argues that Detroit's corruption is not unusual for a city its size, especially when compared with Chicago.http://www.metrotimes.com/detroit/how-corrupt-is-detroit/Content?oid=2149028
The city's financial crisis resulted in the state of Michigan taking over administrative control of its government. The state governor declared a financial emergency in March 2013, appointing Kevyn Orr as emergency manager. On July 18, 2013, Detroit became the largest U.S. city to file for bankruptcy. It was declared bankrupt by the U.S. Bankruptcy Court on December 3, 2013, in light of the city's $18.5 billion debt and its inability to fully repay its thousands of creditors. On November 7, 2014, the city's plan for exiting bankruptcy was approved, and on December 11 the city officially exited bankruptcy. The plan allowed the city to eliminate $7 billion in debt and invest $1.7 billion in improved city services.
One of the largest post-bankruptcy efforts to improve city services has been work to fix the city's broken street lighting system. At one time it was estimated that 40 percent of the lights were not working. The plan calls for replacing outdated high-pressure sodium lights with 65,000 LED lights. Construction began in late 2014, and by the end of 2015 around 60,000 lights had been replaced. Work is scheduled to be complete by the end of 2016.
In the 2010s, Detroit's citizens and new residents undertook several initiatives to improve the cityscape by renovating and revitalizing neighborhoods. These include the Motor City Blight Busters and various urban gardening movements. Michigan Central Station, long a symbol of the city's decades-long decline, has been renovated with new windows, elevators, and facilities since 2015. Several other landmark buildings have been fully renovated and converted into condominiums, hotels, offices, or cultural uses. Detroit is increasingly described as a city undergoing a renaissance.
Geography
Metropolitan area
Detroit is the center of a three-county urban area (2010 Census population 3,734,090), a six-county metropolitan statistical area (2010 Census population 4,296,250), and a nine-county Combined Statistical Area (2010 Census population 5,218,852).Detroit Metro Convention & Visitors Bureau
Topography
A simulated-color satellite image of the Detroit metro area, including Windsor across the river, taken by NASA's Landsat 7 satellite
According to the U.S. Census Bureau, the city's total area consists mostly of land, with a small portion of water. Detroit is the principal city of Metro Detroit and Southeast Michigan, situated in the Midwestern United States and the Great Lakes region.
The Detroit River International Wildlife Refuge is the only international wildlife preserve in North America, uniquely located in the heart of a major metropolitan area. The Refuge includes islands, coastal wetlands, marshes, shoals, and waterfront lands along the Detroit River and the western Lake Erie shoreline.
The city slopes gently from the northwest to the southeast on a till plain composed largely of glacial and lake clay. The most notable topographical feature in the city is the Detroit Moraine, a broad clay ridge on which the older portions of Detroit and Windsor sit, rising above the river at its highest point. The highest elevation in the city is directly north of Gorham Playground on the northwest side, approximately three blocks south of 8 Mile Road. Detroit's lowest elevation is along the Detroit River.
A view of the city from Belle Isle Park in April 2008
Belle Isle Park is an island park in the Detroit River, between Detroit and Windsor, Ontario. It is connected to the mainland by the MacArthur Bridge in Detroit. Belle Isle Park contains such attractions as the James Scott Memorial Fountain, the Belle Isle Conservatory, the Detroit Yacht Club on an adjacent island, a half-mile (800 m) beach, a golf course, a nature center, monuments, and gardens. The city skyline may be viewed from the island.
Three road systems cross the city: the original French template, with avenues radiating from the waterfront; and true north–south roads based on the Northwest Ordinance township system. The city is north of Windsor, Ontario. Detroit is the only major city along the Canada–US border in which one travels south in order to cross into Canada.
Detroit has four border crossings: the Ambassador Bridge and the Detroit–Windsor Tunnel provide motor vehicle thoroughfares, with the Michigan Central Railway Tunnel providing railroad access to and from Canada. The fourth border crossing is the Detroit–Windsor Truck Ferry, located near the Windsor Salt Mine and Zug Island. Near Zug Island, the southwest part of the city was developed over a salt mine that lies below the surface. The Detroit Salt Company mine has an extensive network of roads within.Zacharias, Patricia (January 23, 2000). The ghostly salt city beneath Detroit. Michigan History, The Detroit News. Retrieved on November 23, 2007.
Climate
Detroit and the rest of southeastern Michigan have a humid continental climate (Köppen Dfa) which is influenced by the Great Lakes; the city and close-in suburbs are part of USDA Hardiness zone 6b, with farther-out northern and western suburbs generally falling in zone 6a. Winters are cold, with moderate snowfall and temperatures not rising above freezing on an average of 44 days annually, while dropping to or below on an average of 4.4 days a year; summers are warm to hot, with temperatures exceeding on 12 days. The warm season runs from May to September. The monthly daily mean temperature ranges from in January to in July. Official temperature extremes range from on July 24, 1934, down to on January 21, 1984; the record low maximum is on January 19, 1994, while, conversely, the record high minimum is on August 1, 2006, the most recent of five occurrences. A decade or two may pass between readings of or higher, which last occurred on July 17, 2012. The average window for freezing temperatures is October 20 through April 22, allowing a growing season of 180 days.
Precipitation is moderate and somewhat evenly distributed throughout the year, although the warmer months such as May and June average more, averaging annually, but historically ranging from in 1963 to in 2011. Snowfall, which typically falls in measurable amounts between November 15 and April 4 (occasionally in October and very rarely in May), averages per season, although it has historically ranged from in 1881–82 to in 2013–14. A thick snowpack is not often seen, with an average of only 27.5 days with or more of snow cover. Thunderstorms are frequent in the Detroit area; they usually occur during spring and summer.
Cityscape
Architecture
Cadillac Place (1923) (left) and the Fisher Building (1928) in the New Center district are among the city's National Historic Landmarks
The Wayne County Building (completed 1902) downtown, by John and Arthur Scott, with One Detroit Center (1993) behind it
Seen in panorama, Detroit's waterfront shows a variety of architectural styles. The postmodern neo-Gothic spires of One Detroit Center (1993) were designed to blend with the city's Art Deco skyscrapers. Together with the Renaissance Center, they form a distinctive and recognizable skyline. Examples of the Art Deco style include the Guardian Building and Penobscot Building downtown, as well as the Fisher Building and Cadillac Place in the New Center area near Wayne State University. Among the city's prominent structures are the Fox Theatre (the largest of the United States' Fox theatres), the Detroit Opera House, and the Detroit Institute of Arts.
While the Downtown and New Center areas contain high-rise buildings, the majority of the surrounding city consists of low-rise structures and single-family homes. Outside of the city's core, residential high-rises are found in upper-class neighborhoods such as the East Riverfront extending toward Grosse Pointe and the Palmer Park neighborhood just west of Woodward. The University Commons-Palmer Park district in northwest Detroit, near the University of Detroit Mercy and Marygrove College, anchors historic neighborhoods including Palmer Woods, Sherwood Forest, and the University District.
The National Register of Historic Places lists several area neighborhoods and districts. Neighborhoods constructed prior to World War II feature the architecture of the times, with wood-frame and brick houses in the working-class neighborhoods, larger brick homes in middle-class neighborhoods, and ornate mansions in upper-class neighborhoods such as Brush Park, Woodbridge, Indian Village, Palmer Woods, Boston-Edison, and others.
The interior of St. Joseph Catholic Church (1873), a notable example of Detroit's ecclesiastical architecture
Some of the oldest neighborhoods are along the Woodward and East Jefferson corridors. Some newer residential construction may also be found along the Woodward corridor, the far west, and northeast. Some of the oldest extant neighborhoods include West Canfield and Brush Park, which have both seen multimillion-dollar restorations and construction of new homes and condominiums.. City of Detroit Partnership. Retrieved on November 24, 2007.Pfeffer, Jaime (September 12, 2006).Falling for Brush Park.Model D Media. Retrieved on April 21, 2009.
The Detroit Financial District viewed from Windsor
Many of the city's architecturally significant buildings have been listed on the National Register of Historic Places; the city has one of the United States' largest surviving collections of late 19th- and early 20th-century buildings. Architecturally significant churches and cathedrals in the city include St. Joseph's, Old St. Mary's, the Sweetest Heart of Mary, and the Cathedral of the Most Blessed Sacrament.
The city has substantial activity in urban design, historic preservation, and architecture.Cityscape Detroit. www.cityscapedetroit.org. Retrieved on April 8, 2007. A number of downtown redevelopment projects—of which Campus Martius Park is one of the most notable—have revitalized parts of the city. Grand Circus Park stands near the city's theater district, Ford Field, home of the Detroit Lions, and Comerica Park, home of the Detroit Tigers. Other projects include the demolition of the Ford Auditorium off Jefferson Avenue.
The Detroit International Riverfront includes a partially completed three-and-one-half-mile riverfront promenade with a combination of parks, residential buildings, and commercial areas. It extends from Hart Plaza to the MacArthur Bridge, which accesses Belle Isle Park (the largest island park in a U.S. city). The riverfront includes Tri-Centennial State Park and Harbor, Michigan's first urban state park. The second phase is an extension westward from Hart Plaza to the Ambassador Bridge, completing a bridge-to-bridge parkway. Civic planners envision that the pedestrian parks will stimulate residential redevelopment of riverfront properties condemned under eminent domain.
Other major parks include River Rouge (on the southwest side), the largest park in Detroit; Palmer (north of Highland Park); and Chene Park (on the east riverfront downtown).Editorial: "At Last, Sensible Dream for Detroit's Riverfront", Detroit News, December 13, 2002.
Neighborhoods
Restored historic homes in the East Ferry Avenue neighborhood in Midtown
Detroit has a variety of neighborhood types. The revitalized Downtown, Midtown, and New Center areas feature many historic buildings and are high density, while further out, particularly in the northeast and on the fringes, high vacancy levels are problematic, for which a number of solutions have been proposed. In 2007, Downtown Detroit was recognized as a best city neighborhood in which to retire among the United States' largest metro areas by CNN Money Magazine editors.Bigda, Carolyn, Erin Chambers, Lawrence Lanahan, Joe Light, Sarah Max, and Jennifer Merritt. Detroit Best place to retire: Downtown. CNN Money Magazine. Retrieved July 5, 2012.
Lafayette Park is a revitalized neighborhood on the city's east side, part of the Ludwig Mies van der Rohe residential district.Vitullo-Martin, Julio (December 22, 2007). The Biggest Mies Collection: His Lafayette Park residential development thrives in Detroit. The Wall Street Journal. Retrieved July 5, 2012. The development was originally called Gratiot Park. Planned by Mies van der Rohe, Ludwig Hilberseimer, and Alfred Caldwell, it includes a landscaped park with no through traffic, in which these and other low-rise apartment buildings are situated. Immigrants have contributed to the city's neighborhood revitalization, especially in southwest Detroit. Southwest Detroit has experienced a thriving economy in recent years, as evidenced by new housing, increased business openings, and the recently opened Mexicantown International Welcome Center.Williams, Corey (February 28, 2008). New Latino Wave Helps Revitalize Detroit. USA Today. Retrieved July 5, 2012.
Historic restoration of the Lucien Moore House (1885), in Brush Park, completed in 2006. Pfeffer, Jaime (September 12, 2006). Falling for Brush Park. Model D Media. Retrieved July 5, 2012.
The city has numerous neighborhoods consisting of vacant properties resulting in low inhabited density in those areas, stretching city services and infrastructure. These neighborhoods are concentrated in the northeast and on the city's fringes. A 2009 parcel survey found about a quarter of residential lots in the city to be undeveloped or vacant, and about 10% of the city's housing to be unoccupied.Detroit Parcel Survey. Retrieved on July 23, 2011. The survey also reported that most (86%) of the city's homes are in good condition with a minority (9%) in fair condition needing only minor repairs.Associated Press (February 10, 2010).Survey.Mlive.com. Retrieved July 5, 2012.Kavanaugh, Kelli B. (March 2, 2010).Intensive property survey captures state of Detroit housing, vacancy. Model D. Retrieved July 5, 2012.
To deal with vacancy issues, the city has begun demolishing the derelict houses, razing 3,000 of the total 10,000 in 2010, but the resulting low density creates a strain on the city's infrastructure. To remedy this, a number of solutions have been proposed including resident relocation from more sparsely populated neighborhoods and converting unused space to urban agricultural use, including Hantz Woodlands, though the city expects to be in the planning stages for up to another two years.. City of Detroit. Retrieved July 5, 2012.
Public funding and private investment have also been made with promises to rehabilitate neighborhoods. In April 2008, the city announced a $300-million stimulus plan to create jobs and revitalize neighborhoods, financed by city bonds and paid for by earmarking about 15% of the wagering tax. The city's working plans for neighborhood revitalizations include 7-Mile/Livernois, Brightmoor, East English Village, Grand River/Greenfield, North End, and Osborn. Private organizations have pledged substantial funding to the efforts..DEGA. Retrieved on January 2, 2009.Detroit Neighborhood Fund .Community Foundation for Southeast Michigan. Retrieved January 2, 2009. Additionally, the city has cleared a section of land for large-scale neighborhood construction, which the city is calling the Far Eastside Plan.Rose, Judy (May 11, 2003). Detroit to revive 1 neighborhood at a time. Chicago Tribune. Retrieved November 29, 2011. In 2011, Mayor Dave Bing announced a plan to categorize neighborhoods by their needs and prioritize the most needed services for those neighborhoods.
Demographics
In the 2010 United States Census, the city had 713,777 residents, ranking it the 18th most populous city in the United States.
Of the large shrinking cities of the United States, Detroit has had the most dramatic decline in population over the past 60 years (down 1,135,791) and the second-largest percentage decline (down 61.4%, second only to St. Louis, Missouri's 62.7%). While the drop in Detroit's population has been ongoing since 1950, the most dramatic period was the 25% decline between the 2000 and 2010 Censuses.
The population collapse has resulted in large numbers of abandoned homes and commercial buildings, and areas of the city hit hard by urban decay.
Detroit's 713,777 residents represent 269,445 households and 162,924 families residing in the city. The population density was 5,144.3 people per square mile (1,986.2/km²). There were 349,170 housing units at an average density of 2,516.5 units per square mile (971.6/km²). Housing density has declined. The city has demolished thousands of Detroit's abandoned houses, planting some areas and allowing others to revert to urban prairie.
Of the 269,445 households, 34.4% had children under the age of 18 living with them, 21.5% were married couples living together, 31.4% had a female householder with no husband present, 39.5% were non-families, 34.0% were made up of individuals, and 3.9% had someone living alone who was 65 years of age or older. The average household size was 2.59, and the average family size was 3.36.
There is a wide distribution of age in the city, with 31.1% under the age of 18, 9.7% from 18 to 24, 29.5% from 25 to 44, 19.3% from 45 to 64, and 10.4% 65 years of age or older. The median age was 31 years. For every 100 females there were 89.1 males. For every 100 females age 18 and over, there were 83.5 males.
According to a 2014 study, 67% of the city's population identified as Christian, with 49% professing attendance at Protestant churches and 16% professing Roman Catholic beliefs,Major U.S. metropolitan areas differ in their religious profiles, Pew Research Center while 24% claimed no religious affiliation. Other religions collectively make up about 8% of the population.
Income and employment
The loss of industrial and working-class jobs in the city has resulted in high rates of poverty and associated problems. From 2000 to 2009, the city's estimated median household income fell from $29,526 to $26,098. The mean income of Detroit is below the overall U.S. average by several thousand dollars. Of every three Detroit residents, one lives in poverty. Luke Bergmann, author of Getting Ghost: Two Young Lives and the Struggle for the Soul of an American City, said in 2010, "Detroit is now one of the poorest big cities in the country."Bergmann, p. 39
In the 2010 American Community Survey, median household income in the city was $25,787, and the median income for a family was $31,011. The per capita income for the city was $14,118. 32.3% of families had income at or below the federally defined poverty level. Out of the total population, 53.6% of those under the age of 18 and 19.8% of those 65 and older had income at or below the federally defined poverty line.
Oakland County in Metro Detroit, once rated amongst the wealthiest US counties per household, is no longer shown in the top 25 listing of Forbes magazine. But internal county statistical methods—based on measuring per capita income for counties with more than one million residents—show that Oakland is still within the top 12, slipping from the 4th-most affluent such county in the U.S. in 2004 to 11th-most affluent in 2009.Hopkins, Carol (March 28, 2010).Oakland still ranks among the nation's wealthiest counties. Daily Tribune. Retrieved November 27, 2011. Detroit dominates Wayne County, which has an average household income of about $38,000, compared to Oakland County's $62,000.
Race and ethnicity
Map of racial distribution in Metro Detroit, 2010 U.S. Census. Each dot is 25 people.
Detroit racial composition

Demographic profile             2010    1990    1970    1950    1940    1930    1920    1910
White                           10.6%   21.6%   55.5%   83.6%   90.7%   92.2%   95.8%   98.7%
  Non-Hispanic White            7.8%    20.7%   54.0%*  n/a     90.4%   n/a     n/a     n/a
Black or African American       82.7%   75.7%   43.7%   16.2%   9.2%    7.7%    4.1%    1.2%
Hispanic or Latino (any race)   6.8%    2.8%    1.8%    n/a     0.3%    n/a     n/a     n/a
Asian                           1.1%    0.8%    0.3%    0.1%    0.1%    0.1%    0.1%    n/a

*From 15% sample
The city's population increased more than sixfold during the first half of the 20th century, fed largely by an influx of European, Middle Eastern (Lebanese, Assyrian/Chaldean), and Southern migrants to work in the burgeoning automobile industry.Baulch, Vivian M. (September 4, 1999). Michigan's greatest treasure – Its people . Michigan History, The Detroit News. Retrieved on October 22, 2007. In 1940, Whites were 90.4% of the city's population. Since 1950 the city has seen a major shift in its population to the suburbs. In 1910, fewer than 6,000 blacks called the city home;Vivian M. Baulch, "How Detroit got its first black hospital," The Detroit News, November 28, 1995. in 1930 more than 120,000 blacks lived in Detroit."Important Cities in Black History". Infoplease.com. The thousands of African Americans who came to Detroit were part of the Great Migration of the 20th century."Detroit and the Great Migration, 1916–1929 by Elizabeth Anne Martin ". Bentley Historical Library, University of Michigan.
Detroit remains one of the most racially segregated cities in the United States. From the 1940s to the 1970s a second wave of Blacks moved to Detroit to escape Jim Crow laws in the south and find jobs. However, they soon found themselves excluded from white areas of the city—through violence, laws, and economic discrimination (e.g., redlining). White residents attacked black homes: breaking windows, starting fires, and exploding bombs. The pattern of segregation was later magnified by white migration to the suburbs. One of the implications of racial segregation, which correlates with class segregation, may be overall worse health for some populations.
While Blacks/African-Americans comprised only 13 percent of Michigan's population in 2010, they made up nearly 82 percent of Detroit's population. The next largest population groups were Whites, at 10 percent, and Hispanics, at 6 percent. According to the 2010 Census, segregation in Detroit has decreased in absolute and in relative terms. In the first decade of the 21st century, about two-thirds of the total black population in the metropolitan area resided within the city limits of Detroit.Towbridge, Gordon. "Racial divide widest in U.S." The Detroit News. January 14, 2002. Retrieved on March 30, 2009. The number of integrated neighborhoods increased from 100 in 2000 to 204 in 2010. The city has also moved down the ranking, from the most segregated to the fourth-most segregated. A 2011 op-ed in The New York Times attributed the decreased segregation rating to the overall exodus from the city, cautioning that these areas may soon become more segregated. This pattern had already occurred in the 1970s, when apparent integration was actually a precursor to white flight and resegregation. Over a 60-year period, white flight occurred in the city. According to an estimate of the Michigan Metropolitan Information Center, from 2008 to 2009 the percentage of non-Hispanic White residents increased from 8.4% to 13.3%. Some empty nesters and many younger White people moved into the city while many African Americans moved to the suburbs.Wisely, John. "Number of whites living in Detroit goes up for first time in 60 years." Detroit Free Press at KSDK. September 29, 2010. Retrieved on January 7, 2013.
Detroit has a Mexican-American population. In the early 20th century thousands of Mexicans came to Detroit to work in agricultural, automotive, and steel jobs. During the Mexican Repatriation of the 1930s many Mexicans in Detroit were willingly repatriated or forced to repatriate. By the 1940s the Mexican community began to settle what is now Mexicantown. The population significantly increased in the 1990s due to immigration from Jalisco. In 2010 Detroit had 48,679 Hispanics, including 36,452 Mexicans. The number of Hispanics was a 70% increase from the number in 1990.Denvir, Daniel. "The Paradox of Mexicantown: Detroit's Uncomfortable Relationship With the Immigrants it Desperately Needs." (Archive) The Atlantic Cities. September 24, 2012. Retrieved on January 15, 2013.
After World War II, many people from Appalachia settled in Detroit. Appalachians formed communities and their children acquired southern accents.Detroitblogger John. "Southland." (Archive) Metro Times. April 28, 2010. Retrieved on May 12, 2012. Many Lithuanians settled in Detroit during the World War II era, especially on the city's Southwest side in the West Vernor area, where the renovated Lithuanian Hall reopened in 2006.Model D Media (November 28, 2006). Southwest Detroit's Lithuanian Hall to reopen after $2 million renovationBello, Marisol. "Lithuanian center to reopen Thursday" Detroit Free Press. November 28, 2006.
In 2001, 103,000 Jews, or about 1.9% of the population, were living in the Detroit area, in both Detroit and Ann Arbor.
Asians and Asian Americans
As of 2002, of all of the municipalities in the Wayne County-Oakland County-Macomb County area, Detroit had the second largest Asian population. As of that year Detroit's percentage of Asians was 1%, far lower than the 13.3% of Troy.Metzger, Kurt and Jason Booza. "Asians in the United States, Michigan and Metropolitan Detroit ." Center for Urban Studies, Wayne State University. January 2002 Working Paper Series, No. 7. p. 8. Retrieved on November 6, 2013. By 2000 Troy had the largest Asian American population in the tricounty area, surpassing Detroit.Metzger, Kurt and Jason Booza. "Asians in the United States, Michigan and Metropolitan Detroit ." Center for Urban Studies, Wayne State University. January 2002 Working Paper Series, No. 7. p. 10. Retrieved on November 6, 2013.
As of 2002, there were four areas in Detroit with significant Asian and Asian American populations. Northeast Detroit has a population of Hmong with a smaller group of Lao people. A portion of Detroit next to eastern Hamtramck includes Bangladeshi Americans, Indian Americans, and Pakistani Americans; nearly all of the Bangladeshi population in Detroit lives in that area. Many of those residents own small businesses or work in blue-collar jobs, and the population in that area is mostly Muslim. The area north of Downtown Detroit, including the region around the Henry Ford Hospital, the Detroit Medical Center, and Wayne State University, has transient residents of Asian national origin who are university students or hospital workers. Few of them have permanent residency after schooling ends. They are mostly Chinese and Indian, but the population also includes Filipinos, Koreans, and Pakistanis. In Southwest Detroit and western Detroit there are smaller, scattered Asian communities, including an area on the west side adjacent to Dearborn and Redford Township that has a mostly Indian Asian population, and a community of Vietnamese and Laotians in Southwest Detroit.
The city has one of the U.S.'s largest concentrations of Hmong Americans. In 2006, the city had about 4,000 Hmong and other Asian immigrant families. Most Hmong live east of Coleman Young Airport near Osborn High School. Hmong immigrant families generally have lower incomes than those of suburban Asian families.Archambault, Dennis. "Young and Asian in Detroit." (Archive) Model D Media. Issue Media Group, LLC. November 14, 2006. Retrieved on November 5, 2012.
Economy
thumb|left|The Renaissance Center is the headquarters of General Motors.
Several major corporations are based in the city, including three Fortune 500 companies. The most heavily represented sectors are manufacturing (particularly automotive), finance, technology, and health care. The most significant companies based in Detroit include: General Motors, Quicken Loans, Ally Financial, Compuware, Shinola, American Axle, Little Caesars, DTE Energy, Lowe Campbell Ewald, Blue Cross Blue Shield of Michigan, and Rossetti Architects.
About 80,500 people work in downtown Detroit, comprising one-fifth of the city's employment base.The Urban Markets Initiative, Brookings Institution Metropolitan Policy Program, The Social Compact Inc., University of Michigan Graduate Real Estate Program, (October 2006).Downtown Detroit in Focus: A Profile of Market Opportunity.Detroit Economic Growth Corporation and Downtown Detroit Partnership. Retrieved on June 14, 2008. Henion, Andy (March 22, 2007). City puts transit idea in motion.The Detroit News.(About 80,500 people work in downtown Detroit which is 21% of the city's employment base). Retrieved on May 14, 2007. Aside from the numerous Detroit-based companies listed above, downtown contains large offices for Comerica, Chrysler, HP Enterprise, Deloitte, PricewaterhouseCoopers, KPMG, and Ernst & Young. Ford Motor Company is located in the adjacent city of Dearborn.
thumb|left|The Metropolitan Center for High Technology at Wayne State University provides space for startup companies.
Thousands more employees work in Midtown, north of the central business district. Midtown's anchors are the Detroit Medical Center, the city's largest single employer, along with Wayne State University and the Henry Ford Health System in New Center. Midtown is also home to the watchmaker Shinola and an array of small and startup companies. New Center is home to TechTown, a research and business incubator hub that is part of the WSU system. Like downtown and Corktown, Midtown also has a fast-growing retail and restaurant scene.
A number of the city's downtown employers are relatively new, as there has been a marked trend of companies moving from satellite suburbs around Metropolitan Detroit into the downtown core. Compuware completed its world headquarters in downtown in 2003. OnStar, Blue Cross Blue Shield, and HP Enterprise Services are located at the Renaissance Center. PricewaterhouseCoopers Plaza offices are adjacent to Ford Field, and Ernst & Young completed its office building at One Kennedy Square in 2006. Perhaps most prominently, in 2010, Quicken Loans, one of the largest mortgage lenders, relocated its world headquarters and 4,000 employees to downtown Detroit, consolidating its suburban offices.Howes, Daniel (November 12, 2007).Quicken moving to downtown Detroit.The Detroit News. Retrieved on November 12, 2007. In July 2012, the U.S. Patent and Trademark Office opened its Elijah J. McCoy Satellite Office in the Rivertown/Warehouse District as its first location outside Washington, D.C.'s metropolitan area.
In April 2014, the Department of Labor reported the city's unemployment rate at 14.5%.
The city of Detroit and public-private partnerships have attempted to catalyze the region's growth by facilitating the construction and historic rehabilitation of residential high-rises downtown, creating a zone that offers many business tax incentives, and creating recreational spaces such as the Detroit RiverWalk, Campus Martius Park, the Dequindre Cut Greenway, and the Green Alleys in Midtown. The city itself has cleared sections of land while retaining a number of historically significant vacant buildings in order to spur redevelopment;Morice, Zach (September 21, 2007).Planting community in fallow fields.American Institute of Architects. Retrieved on December 23, 2009. though it has struggled with finances, the city issued bonds in 2008 to provide funding for ongoing work to demolish blighted properties. Two years earlier, downtown reported $1.3 billion in restorations and new developments, which increased the number of construction jobs in the city. In the decade prior to 2006, downtown gained more than $15 billion in new investment from the private and public sectors.The Urban Markets Initiative, Brookings Institution Metropolitan Policy Program The Social Compact, Inc. University of Michigan Graduate Real Estate Program (October 2006).Downtown Detroit In Focus: A Profile of Market Opportunity. Downtown Detroit Partnership. Retrieved on July 10, 2010.
thumb|left|The Westin Book Cadillac Hotel completed a $200-million reconstruction in 2008, and is in Detroit's Washington Boulevard Historic District
Despite the city's recent financial issues, many developers remain unfazed by Detroit's problems.Maynard, Micheline. (July 29, 2013) Detroit's Developers Unfazed by Bankruptcy | TIME.com. Nation.time.com. Retrieved on 2013-09-05. Midtown is one of Detroit's most successful areas, with a residential occupancy rate of 96%.Lawrence Tech anchoring Midtown Detroit development, joining neighborhood's boom. MLive.com (May 7, 2013). Retrieved on 2013-09-05. Numerous developments have recently been completed or are in various stages of construction. These include the $82 million reconstruction of downtown's David Whitney Building (now an Aloft Hotel and luxury residences), the Woodward Garden Block Development in Midtown, the residential conversion of the David Broderick Tower in downtown, the rehabilitation of the Book Cadillac Hotel (now a Westin and luxury condos) and Fort Shelby Hotel (now Doubletree) also in downtown, and various smaller projects.Detroit Development Projects, Real Estate Investments Are Booming In 2013. Huffingtonpost.com. Retrieved on September 5, 2013.
Downtown's population of young professionals is growing and retail is expanding. A 2007 study found that Downtown's new residents are predominantly young professionals (57% are ages 25 to 34, 45% have bachelor's degrees, and 34% have a master's or professional degree), a trend that has accelerated over the last decade. John Varvatos is set to open a downtown store in 2015, and Restoration Hardware is rumored to be opening a store nearby.Haimerl, Amy (December 11, 2014).Restoration Hardware to Open.Crain's Detroit Business. Retrieved on February 5, 2015.
On July 25, 2013, Meijer, a Midwestern retail chain, opened its first supercenter store in Detroit, a $20 million, 190,000-square-foot store in the northern portion of the city that is also the centerpiece of a new $72 million shopping center named Gateway Marketplace.New $20M Meijer Store Opens In Detroit " CBS Detroit. Detroit.cbslocal.com (July 25, 2013). Retrieved on 2013-09-05. On June 11, 2015, Meijer opened its second supercenter store in the city.
On May 21, 2014, JPMorgan Chase announced that it was injecting $100 million over five years into Detroit's economy, providing development funding for a variety of projects that would increase employment. It is the largest commitment made to any one city by the nation's biggest bank. Of the $100 million, $50 million will go toward development projects, $25 million toward city blight removal, $12.5 million toward job training, $7 million toward small businesses in the city, and $5.5 million toward the M-1 light rail project (QLine). On May 19, 2015, JPMorgan Chase announced that it had invested $32 million in two redevelopment projects in the city's Capitol Park district, the Capitol Park Lofts (the former Capitol Park Building) and the Detroit Savings Bank building at 1212 Griswold. Those investments are separate from Chase's five-year, $100-million commitment.
Culture and contemporary life
thumb|Detroit's Broadway Area, a cultural link in Downtown
In the central portions of Detroit, the population of young professionals, artists, and other transplants is growing and retail is expanding.Harrison, Sheena (June 25, 2007). DEGA enlists help to spur Detroit retail. Crain's Detroit Business. Retrieved on November 28, 2007. "New downtown residents are largely young professionals according to Social Compact."Halaas, Jaime (December 20, 2005).Inside Detroit Lofts. Model D Media. Retrieved on November 28, 2007. This dynamic is luring additional new residents, and former residents returning from other cities, to the city's Downtown along with the revitalized Midtown and New Center areas.Reppert, Joe (October 2007).Detroit Neighborhood Market Drill Down . Social Compact. Retrieved on July 10, 2010.
A desire to be closer to the urban scene has also attracted some young professionals to reside in inner-ring suburbs such as Ferndale and Royal Oak, Michigan. Detroit's proximity to Windsor, Ontario, offers views and nightlife, along with Ontario's minimum drinking age of 19. A 2011 study by Walk Score recognized Detroit for its above-average walkability among large U.S. cities. About two-thirds of suburban residents occasionally dine, attend cultural events, or take in professional games in the city of Detroit.Bailey, Ruby L (August 22, 2007). The D is a draw: Most suburbanites are repeat visitors.Detroit Free Press. New Detroit Free Press-Local 4 poll conducted by Selzer and Co., finds, "nearly two-thirds of residents of suburban Wayne, Oakland, and Macomb counties say they at least occasionally dine, attend cultural events or take in professional games in Detroit."
Nicknames
Known as the world's automotive center,Lawrence, Peter (2009).Interview with Michigan's Governor, Corporate Design Foundation. Retrieved on May 1, 2009. "Detroit" is a metonym for that industry. Detroit's auto industry, some of which was converted to wartime defense production, was an important element of the American "Arsenal of Democracy" supporting the Allied powers during World War II. The city is also an important source of popular music legacies, celebrated by its two familiar nicknames, the Motor City and Motown. Other nicknames arose in the 20th century, including City of Champions, beginning in the 1930s for its successes in individual and team sport; The D; Hockeytown (a trademark owned by the city's NHL club, the Red Wings); Rock City (after the Kiss song "Detroit Rock City"); and The 313 (its telephone area code).Commemorated in the movie 8 Mile (2002).
Music
Live music has been a prominent feature of Detroit's nightlife since the late 1940s, bringing the city recognition under the nickname 'Motown'.Discogs – Motown's 217 Recording Labels The metropolitan area has many nationally prominent live music venues, and Live Nation hosts concerts throughout the Detroit area. Large concerts are held at DTE Energy Music Theatre and The Palace of Auburn Hills. The city's theatre circuit is the second largest in the United States and hosts Broadway performances. Detroit Tourism Economic Development Council. Retrieved on July 24, 2008.Arts & Culture Detroit Economic Growth Corporation. Retrieved on July 24, 2008. "Detroit is home to the second largest theatre district in the United States."
thumb|upright|Greektown Historic District in Detroit
The city of Detroit has a rich musical heritage and has contributed to a number of different genres over the decades leading into the new millennium. Important music events in the city include: the Detroit International Jazz Festival, the Detroit Electronic Music Festival, the Motor City Music Conference (MC2), the Urban Organic Music Conference, the Concert of Colors, and the hip-hop Summer Jamz festival.
In the 1940s, Detroit blues artist John Lee Hooker became a long-term resident of the city's southwest Delray neighborhood. Hooker, like other important blues musicians, migrated from his home in Mississippi, bringing the Delta blues to northern cities such as Detroit. Hooker recorded for Fortune Records, the biggest pre-Motown blues and soul label. During the 1950s, the city became a center for jazz, with stars performing in the Black Bottom neighborhood. Prominent jazz musicians who emerged in the 1960s included the trumpeter Donald Byrd, who attended Cass Tech and performed with Art Blakey and the Jazz Messengers early in his career, and the saxophonist Pepper Adams, who enjoyed a solo career and accompanied Byrd on several albums. The Graystone International Jazz Museum documents jazz in Detroit.
Other prominent Motor City R&B stars in the 1950s and early 1960s were Nolan Strong, Andre Williams, and Nathaniel Mayer, who all scored local and national hits on the Fortune Records label. According to Smokey Robinson, Strong was a primary influence on his voice as a teenager. The Fortune label was a family-operated label located on Third Avenue in Detroit, owned by the husband-and-wife team of Jack Brown and Devora Brown. Fortune, which also released country, gospel and rockabilly LPs and 45s, laid the groundwork for Motown, which became Detroit's most legendary record label.
thumb|The MGM Grand Detroit, one of Detroit's three casino resorts and the 16th largest employer in the city
Berry Gordy, Jr. founded Motown Records, which rose to prominence during the 1960s and early 1970s with acts such as Stevie Wonder, The Temptations, The Four Tops, Smokey Robinson & The Miracles, Diana Ross & The Supremes, the Jackson 5, Martha and the Vandellas, The Spinners, Gladys Knight & the Pips, The Marvelettes, The Elgins, The Monitors, The Velvelettes and Marvin Gaye. Artists were backed by the in-house vocalists The Andantes (see Girl Groups -- Fabulous Females Who Rocked The World, by John Clemente) and by The Funk Brothers, the Motown house band that was featured in Paul Justman's 2002 documentary film Standing in the Shadows of Motown, based on Allan Slutsky's book of the same name.
The Motown Sound played an important role in crossover appeal with popular music, since Motown was the first African-American-owned record label to primarily feature African-American artists. Gordy moved Motown to Los Angeles in 1972 to pursue film production, but the company has since returned to Detroit. Aretha Franklin, another Detroit R&B star, carried the Motown Sound; however, she did not record on Gordy's Motown label.
Local artists and bands rose to prominence in the 1960s and 70s, including the MC5, The Stooges, Bob Seger, the Amboy Dukes featuring Ted Nugent, Mitch Ryder and The Detroit Wheels, Rare Earth, Alice Cooper, and Suzi Quatro. The group Kiss emphasized the city's connection with rock in the song "Detroit Rock City" and the 1999 film of the same name. In the 1980s, Detroit was an important center of the hardcore punk rock underground, with many nationally known bands coming out of the city and its suburbs, such as The Necros, The Meatmen, and Negative Approach.
In the 1990s and the new millennium, the city has produced a number of influential hip hop artists, including Eminem, the hip-hop artist with the highest cumulative sales; hip-hop producer J Dilla; rapper and producer Esham; and the hip hop duo Insane Clown Posse. The city is also home to rappers Big Sean and Danny Brown. The band Sponge toured and produced music with artists such as Kid Rock and Uncle Kracker. The city also has an active garage rock scene that has generated national attention with acts such as The White Stripes, The Von Bondies, The Detroit Cobras, The Dirtbombs, Electric Six, and The Hard Lessons.
Detroit is cited as the birthplace of techno music in the early 1980s. The city also lends its name to an early and pioneering genre of electronic dance music, "Detroit techno". Featuring science fiction imagery and robotic themes, its futuristic style was greatly influenced by the geography of Detroit's urban decline and its industrial past. Prominent Detroit techno artists include Juan Atkins, Derrick May, Kevin Saunderson, and Jeff Mills. The Detroit Electronic Music Festival, now known as "Movement", occurs annually in late May on Memorial Day Weekend, and takes place in Hart Plaza. In the early years (2000–2002), this was a landmark event, boasting over a million estimated attendees annually, coming from all over the world to celebrate Techno music in the city of its birth.
Entertainment and performing arts
thumbnail|right|Fox Theatre at night in Downtown Detroit
Major theaters in Detroit include the Fox Theatre (5,174 seats), Music Hall (1,770 seats), the Gem Theatre (451 seats), Masonic Temple Theatre (4,404 seats), the Detroit Opera House (2,765 seats), the Fisher Theatre (2,089 seats), The Fillmore Detroit (2,200 seats), Saint Andrew's Hall, the Majestic Theater, and Orchestra Hall (2,286 seats) which hosts the renowned Detroit Symphony Orchestra. The Nederlander Organization, the largest controller of Broadway productions in New York City, originated with the purchase of the Detroit Opera House in 1922 by the Nederlander family.
Motown Motion Picture Studios, based at the Pontiac Centerpoint Business Campus, produces movies in Detroit and the surrounding area; the film industry is expected to employ over 4,000 people in the metro area.Gallaher, John and Kathleen Gray and Chris Christoff (February 3, 2009). "Pontiac film studio to bring jobs". Detroit Free Press.
Tourism
300px|thumb|Detroit Institute of Arts
Many of the area's prominent museums are located in the historic cultural center neighborhood around Wayne State University and the College for Creative Studies. These museums include the Detroit Institute of Arts, the Detroit Historical Museum, Charles H. Wright Museum of African American History, the Detroit Science Center, as well as the main branch of the Detroit Public Library. Other cultural highlights include Motown Historical Museum, the Ford Piquette Avenue Plant museum (birthplace of the Ford Model T and the world's oldest car factory building open to the public), the Pewabic Pottery studio and school, the Tuskegee Airmen Museum, Fort Wayne, the Dossin Great Lakes Museum, the Museum of Contemporary Art Detroit (MOCAD), the Contemporary Art Institute of Detroit (CAID), and the Belle Isle Conservatory.
In 2010, the G.R. N'Namdi Gallery opened in a complex in Midtown. Important aspects of American and Detroit-area history are exhibited at The Henry Ford in Dearborn, the United States' largest indoor-outdoor museum complex.America's Story, Explore the States: Michigan (2006). Henry Ford Museum and Greenfield Village Library of Congress Retrieved August 14, 2011. The Detroit Historical Society provides information about tours of area churches, skyscrapers, and mansions. Inside Detroit, meanwhile, hosts tours, educational programming, and a downtown welcome center. Other sites of interest are the Detroit Zoo in Royal Oak, the Cranbrook Art Museum in Bloomfield Hills, the Anna Scripps Whitcomb Conservatory on Belle Isle, and the Walter P. Chrysler Museum in Auburn Hills.
upright|thumb|Eastern Market
The city's Greektown and three downtown casino resort hotels serve as part of an entertainment hub. The Eastern Market farmers' distribution center is the largest open-air flowerbed market in the United States and has more than 150 food and specialty businesses.Eastern Market Merchant's Association. Retrieved on March 8, 2006. On Saturdays, about 45,000 people shop at the city's historic Eastern Market.Model D Media (April 5, 2008). Retrieved January 24, 2011. The Midtown and New Center areas are centered on Wayne State University and Henry Ford Hospital. Midtown has about 50,000 residents and attracts millions of visitors each year to its museums and cultural centers;Model D Media (April 4, 2008). Retrieved on January 24, 2011. for example, the Detroit Festival of the Arts in Midtown draws about 350,000 people.
Annual events include the Electronic Music Festival, the International Jazz Festival, the Woodward Dream Cruise, the African World Festival, the country music Hoedown, Noel Night, and Dally in the Alley. Within downtown, Campus Martius Park hosts large events, including the annual Motown Winter Blast. As the world's traditional automotive center, the city hosts the North American International Auto Show. Held since 1924, America's Thanksgiving Parade is one of the nation's largest.The Parade Company. Retrieved on October 28, 2007. River Days, a five-day summer festival on the International Riverfront, leads up to the Windsor–Detroit International Freedom Festival fireworks, which draw crowds ranging from hundreds of thousands to over three million people.Fifth Third Bank rocks the Winter Blast.Michigan Chronicle. (March 14, 2006).
An important civic sculpture in Detroit is "The Spirit of Detroit" by Marshall Fredericks at the Coleman Young Municipal Center. The image is often used as a symbol of Detroit and the statue itself is occasionally dressed in sports jerseys to celebrate when a Detroit team is doing well.Baulch, Vivian M. (August 4, 1998). Marshall Fredericks – the Spirit of Detroit . Michigan History, The Detroit News. Retrieved on November 23, 2007. A memorial to Joe Louis at the intersection of Jefferson and Woodward Avenues was dedicated on October 16, 1986. The sculpture, commissioned by Sports Illustrated and executed by Robert Graham, is a long arm with a fisted hand suspended by a pyramidal framework.Sarah Karush, The Associated Press (February 23, 2004). Police arrest two men suspected of vandalizing Joe Louis statue. USA Today.
Artist Tyree Guyton created the controversial street art exhibit known as the Heidelberg Project in 1986, using found objects, including cars, clothing, and shoes, gathered in the neighborhood on and near Heidelberg Street on the near East Side of Detroit. Guyton continues to work with neighborhood residents and visiting tourists, constantly evolving the neighborhood-wide art installation.
Sports
thumb|Looking toward Ford Field the night of Super Bowl XL
Detroit is one of 13 American metropolitan areas that are home to professional teams representing the four major sports in North America. All these teams but one play within the city of Detroit itself (the NBA's Detroit Pistons play in suburban Auburn Hills at The Palace of Auburn Hills). However, the Pistons will be moving into Little Caesars Arena in Detroit in 2017. There are three active major sports venues within the city: Comerica Park (home of the Major League Baseball team Detroit Tigers), Ford Field (home of the NFL's Detroit Lions), and Joe Louis Arena (home of the NHL's Detroit Red Wings). A 1996 marketing campaign promoted the nickname "Hockeytown".
The Detroit Tigers have won four World Series titles. The Detroit Red Wings have won 11 Stanley Cups (the most by an American NHL franchise). The Detroit Lions have won four NFL titles. The Detroit Pistons have won three NBA titles. With the Pistons' first of three NBA titles in 1989, the city of Detroit has won titles in all four of the major professional sports leagues. Two new downtown stadiums for the Detroit Tigers and Detroit Lions opened in 2000 and 2002, respectively, returning the Lions to the city proper.
In college sports, Detroit's central location within the Mid-American Conference has made it a frequent site for the league's championship events. While the MAC Basketball Tournament moved permanently to Cleveland starting in 2000, the MAC Football Championship Game has been played at Ford Field in Detroit since 2004 and annually attracts 25,000 to 30,000 fans. The University of Detroit Mercy has an NCAA Division I program, and Wayne State University has both NCAA Division I and II programs. The NCAA football Little Caesars Pizza Bowl is held at Ford Field each December.
The local soccer team is called the Detroit City Football Club and was founded in 2012. The team plays in the National Premier Soccer League, and its nickname is Le Rouge.
thumb|Ford Field, home of the Detroit Lions
The city hosted the 2005 MLB All-Star Game, 2006 Super Bowl XL, 2006 and 2012 World Series, WrestleMania 23 in 2007, and the NCAA Final Four in April 2009.
The city hosted the Detroit Indy Grand Prix on Belle Isle Park from 1989 to 2001, 2007 to 2008, and 2012 and beyond. In 2007, open-wheel racing returned to Belle Isle with both Indy Racing League and American Le Mans Series Racing.
In the years following the mid-1930s, Detroit was referred to as the "City of Champions" after the Tigers, Lions, and Red Wings captured all three major professional sports championships in a seven-month period of time (the Tigers won the World Series in October 1935; the Lions won the NFL championship in December 1935; the Red Wings won the Stanley Cup in April 1936). In 1932, Eddie "The Midnight Express" Tolan from Detroit won the 100- and 200-meter races and two gold medals at the 1932 Summer Olympics. Joe Louis won the heavyweight championship of the world in 1937.
Detroit has made the most bids to host the Summer Olympics without ever being awarded the games: seven unsuccessful bids for the 1944, 1952, 1956, 1960, 1964, 1968 and 1972 games.
Law and government
thumb|The Coleman A. Young Municipal Center houses the City of Detroit offices. Shown here is The Spirit of Detroit statue.
thumb|left|The Guardian Building serves as the headquarters of Wayne County, Michigan.
The city is governed pursuant to the Home Rule Charter of the City of Detroit. The city government is run by a mayor and a nine-member city council and clerk elected on an at-large nonpartisan ballot. Since voters approved the city's charter in 1974, Detroit has had a "strong mayoral" system, with the mayor approving departmental appointments. The council approves budgets but the mayor is not obligated to adhere to any earmarking. City ordinances and substantially large contracts must be approved by the council.Ward, George E. (July 1993). Detroit Charter Revision – A Brief History. Citizens Research Council of Michigan (pdf file). The Detroit City Code is the codification of Detroit's local ordinances.
The city clerk supervises elections and is formally charged with the maintenance of municipal records. Municipal elections for mayor, city council and city clerk are held at four-year intervals, in the year after presidential elections. Following a November 2009 referendum, seven council members will be elected from districts beginning in 2013 while two will continue to be elected at-large.Nelson, Gabe (November 3, 2009).Voters overwhelmingly approve Detroit Proposal D.Crains Detroit Business. Retrieved on December 23, 2009.
Detroit's courts are state-administered and elections are nonpartisan. The Probate Court for Wayne County is located in the Coleman A. Young Municipal Center in downtown Detroit. The Circuit Court is located across Gratiot Avenue in the Frank Murphy Hall of Justice, in downtown Detroit. The city is home to the Thirty-Sixth District Court, as well as the First District of the Michigan Court of Appeals and the United States District Court for the Eastern District of Michigan. The city provides law enforcement through the Detroit Police Department and emergency services through the Detroit Fire Department.
Crime
thumb|Theodore Levin United States Courthouse, Downtown
Detroit has struggled with high crime for decades. The city held the title of the nation's murder capital from 1985 to 1987, with a murder rate of around 58 per 100,000.http://mediamatters.org/research/2015/07/07/conservative-media-link-chicagos-crime-wave-to/204285 Crime has since decreased and, in 2014, the murder rate was 43.4 per 100,000, lower than in St. Louis, Missouri.
About half of all murders in Michigan in 2015 occurred in Detroit.https://ucr.fbi.gov/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/tables/table8statecuts/table_8_offenses_known_to_law_enforcement_michigan_by_city_2011.xlshttp://www.disastercenter.com/crime/micrime.html Although the rate of violent crime dropped 11% in 2008, violent crime in Detroit has not declined as much as the national average from 2007 to 2011. The violent crime rate is one of the highest in the United States. Neighborhoodscout.com reported a crime rate of 62.18 per 1,000 residents for property crimes, and 16.73 per 1,000 for violent crimes (compared to national figures of 32 per 1,000 for property crimes and 5 per 1,000 for violent crime in 2008). Annual statistics released by the Detroit Police Department for 2016 indicate that while the city's overall crime rate declined that year, the murder rate rose from 2015; there were 302 homicides in Detroit in 2016, a 2.37% increase in the number of murder victims from the preceding year.Williams, Corey (January 3, 2017). "Crime in Detroit is down overall in 2016; homicide up by 7." Detroit Free Press. Retrieved January 13, 2017.
The city's downtown typically has lower crime than national and state averages.Booza, Jason C. (July 23, 2008).Reality v. Perceptions: An Analysis of Crime and Safety in Downtown Detroit. (Archive) Michigan Metropolitan Information Center, Wayne State University Center for Urban Studies. Retrieved August 14, 2011. According to a 2007 analysis, Detroit officials note that about 65 to 70 percent of homicides in the city were drug related,Shelton, Steve Malik (January 30, 2008).. Michigan Chronicle. Retrieved on March 17, 2008. with the rate of unsolved murders roughly 70%.
Areas of the city closer to the Detroit River are also patrolled by the United States Border Patrol.
In 2012, crime in the city was among the reasons for more expensive car insurance.
Politics
thumb|In 2013 Mike Duggan was elected Mayor of Detroit
Beginning with its incorporation in 1802, Detroit has had a total of 74 mayors. Detroit's last mayor from the Republican Party was Louis Miriani, who served from 1957 to 1962. In 1973, the city elected its first black mayor, Coleman Young. Despite development efforts, his combative style during his five terms in office was not well received by many suburban residents.Detroit's 'great warrior,' Coleman Young, dies (November 29, 1997). CNN.com. Mayor Dennis Archer, a former Michigan Supreme Court Justice, refocused the city's attention on redevelopment with a plan to permit three casinos downtown. By 2008, three major casino resort hotels established operations in the city.
In 2000, the city requested an investigation by the United States Justice Department into the Detroit Police Department over allegations regarding its use of force and civil rights violations; the investigation was concluded in 2003. The city proceeded with a major reorganization of the Detroit Police Department.Lin, Judy and David Joser, (August 30, 2005). Detroit to trim 150 cops, precincts. Detroit News.
Public finances
In March 2013, Governor Rick Snyder declared a financial emergency in the city, stating that the city had a $327 million budget deficit and faced more than $14 billion in long-term debt. The city had been making ends meet on a month-to-month basis with the help of bond money held in a state escrow account and had instituted mandatory unpaid days off for many city workers. Those troubles, along with underfunded city services such as the police and fire departments, and ineffective turnaround plans from Mayor Dave Bing and the City Council, led the state of Michigan to appoint an emergency manager for Detroit on March 14, 2013. On June 14, 2013, Detroit defaulted on $2.5 billion of debt by withholding $39.7 million in interest payments, while Emergency Manager Kevyn Orr met with bondholders and other creditors in an attempt to restructure the city's $18.5 billion debt and avoid bankruptcy. On July 18, 2013, the City of Detroit filed for Chapter 9 bankruptcy protection.See generally Chapter 9 bankruptcy petition, July 18, 2013, docket entry 1, In re City of Detroit, Michigan, case no. 13-53846-swr, U.S. Bankr. Court for the Eastern District of Michigan (Detroit Div.), U.S. Bankr. Judge Steven W. Rhodes, Presiding. On December 3, U.S. Bankruptcy Judge Steven Rhodes declared the city bankrupt with its $18.5 billion in debt, accepting the city's contention that it was broke and that negotiations with its thousands of creditors were infeasible.
Education
Colleges and universities
thumb|Old Main, a historic building at Wayne State University, originally built as Detroit Central High School.
thumbnail|Sacred Heart Major Seminary
thumb|Commons at University of Detroit Mercy
Detroit is home to several institutions of higher learning including Wayne State University, a national research university with medical and law schools in the Midtown area offering hundreds of academic degrees and programs. The University of Detroit Mercy, located in Northwest Detroit in the University District, is a prominent Roman Catholic co-educational university affiliated with the Society of Jesus (the Jesuits) and the Sisters of Mercy. The University of Detroit Mercy offers more than a hundred academic degrees and programs of study including business, dentistry, law, engineering, architecture, nursing and allied health professions. The University of Detroit Mercy School of Law is located Downtown across from the Renaissance Center.
Sacred Heart Major Seminary, originally founded in 1919, is affiliated with the Pontifical University of Saint Thomas Aquinas, Angelicum in Rome and offers pontifical degrees as well as civil undergraduate and graduate degrees. Sacred Heart Major Seminary offers a variety of academic programs for both clerical and lay students. Other institutions in the city include the College for Creative Studies, Lewis College of Business, Marygrove College, and Wayne County Community College. In June 2009, the Michigan State University College of Osteopathic Medicine, which is based in East Lansing, opened a satellite campus at the Detroit Medical Center. The University of Michigan was established in 1817 in Detroit and later moved to Ann Arbor in 1837. In 1959, University of Michigan–Dearborn was established in neighboring Dearborn.
Primary and secondary schools
Public schools and charter schools
With about 66,000 public school students (2011–12), the Detroit Public Schools (DPS) district is the largest school district in Michigan. Detroit has an additional 56,000 charter school students, for a combined enrollment of about 122,000 students.Dawsey, Chastity Pratt (October 20, 2011). Detroit Public Schools hits enrollment goal. Detroit Free Press. There are about as many students in charter schools as there are in district schools.Winerip, Michael. "For Detroit Schools, Mixed Picture on Reforms." The New York Times. March 13, 2011. Retrieved on November 9, 2012.
In 1999, the Michigan Legislature removed the locally elected board of education amid allegations of mismanagement and replaced it with a reform board appointed by the mayor and governor. The elected board of education was re-established following a city referendum in 2005. The first election of the new 11-member board of education occurred on November 8, 2005.LewAllen, Dave (August 3, 2005). Detroiters Vote for New School Board. WXYZ.com.
Due to growing charter school enrollment in Detroit, as well as a continued exodus of population, the city planned to close many public schools.Hing, Julianne (March 17, 2010).Where Have All The Students Gone?.Color Lines.com. Retrieved on August 19, 2010. State officials report a 68% graduation rate for Detroit's public schools, adjusted for students who change schools.Shultz, Marissa and Greg Wilkerson (June 13, 2007).Graduation rate. Detroit News. Retrieved on March 17, 2009.Detroit Public Schools news (June 15, 2007). Retrieved March 9, 2011.
Public and charter school students in the city have performed poorly on standardized tests. While Detroit public schools scored a record low on national tests, the publicly funded charter schools did even worse than the public schools.Resmovits, Joy. "Detroit Charter High Schools Underperform Public Counterparts, Analysis Shows." Huffington Post. July 8, 2011. Updated September 7, 2011.Erb, Robin and Chastity Pratt Dawsey. "Detroit students' scores a record low on national test." Detroit Free Press. December 8, 2009.
In 2015, Detroit public school students scored the lowest on tests of reading and writing of all major cities in the United States. Among eighth-graders, only 27% showed basic proficiency in math and 44% in reading.http://www.detroitnews.com/story/news/local/detroit-city/2015/10/28/national-assessment-educational-progress-detroit-math-reading-results/74718372/ Nearly half of Detroit's adults are functionally illiterate.http://www.huffingtonpost.com/2011/05/07/detroit-illiteracy-nearly-half-education_n_858307.html
Private schools
Detroit is served by various private schools, as well as parochial Roman Catholic schools operated by the Archdiocese of Detroit. As of 2013, there are four Catholic grade schools and three Catholic high schools in the City of Detroit, all of them on the city's west side."Detroit area's Catholic schools shrink, but tradition endures" (Archive). Detroit Free Press. February 1, 2013. Retrieved on September 13, 2014. The Archdiocese of Detroit lists a number of primary and secondary schools in the metro area, as Catholic education has migrated to the suburbs.Pratt, Chastity, Patricia Montemurri, and Lori Higgins. "PARENTS, KIDS SCRAMBLE AS EDUCATION OPTIONS NARROW." Detroit Free Press. March 17, 2005. A1 News. Retrieved on April 30, 2011. Of the three Catholic high schools in the city, two are operated by the Society of Jesus and the third is co-sponsored by the Sisters, Servants of the Immaculate Heart of Mary and the Congregation of St. Basil.
In the 1964–1965 school year there were about 110 Catholic grade schools in Detroit, Hamtramck, and Highland Park and 55 Catholic high schools in those three cities. The Catholic school population in Detroit has decreased due to the increase of charter schools, increasing tuition at Catholic schools, the small number of African-American Catholics, White Catholics moving to suburbs, and the decreased number of teaching nuns.
Media
thumb|The Detroit Public Library
The Detroit Free Press and The Detroit News are the major daily newspapers, both broadsheet publications published together under a joint operating agreement called the Detroit Newspaper Partnership. Media philanthropy includes the Detroit Free Press high school journalism program and the Old Newsboys' Goodfellow Fund of Detroit.Old Newsboys' Goodfellow Fund of Detroit. Retrieved on April 21, 2009. In March 2009, the two newspapers reduced home delivery to three days a week, printed reduced newsstand issues of the papers on non-delivery days, and focused resources on Internet-based news delivery. The Metro Times, founded in 1980, is a weekly publication covering news, arts, and entertainment.
Founded in 1935 and based in Detroit, the Michigan Chronicle is one of the oldest and most respected African-American weekly newspapers in America, covering politics, entertainment, sports, and community events. The Detroit television market is the 11th largest in the United States,Nielsen Media Research Local Universe Estimates (September 24, 2005). The Nielsen Company. according to estimates that do not include audiences located in large areas of Ontario, Canada (Windsor and its surrounding area on broadcast and cable TV, as well as several other cable markets in Ontario, such as the city of Ottawa), which receive and watch Detroit television stations.
Detroit has the 11th largest radio market in the United States, though this ranking does not take into account Canadian audiences. Nearby Canadian stations such as Windsor's CKLW (whose jingles formerly proclaimed "CKLW-the Motor City") are popular in Detroit.
Hardcore Pawn, an American documentary reality television series produced for truTV, features the day-to-day operations of American Jewelry and Loan, a family-owned pawn shop on Greenfield Road.
Infrastructure
Health systems
Within the city of Detroit, there are over a dozen major hospitals which include the Detroit Medical Center (DMC), Henry Ford Health System, St. John Health System, and the John D. Dingell VA Medical Center. The DMC, a regional Level I trauma center, consists of Detroit Receiving Hospital and University Health Center, Children's Hospital of Michigan, Harper University Hospital, Hutzel Women's Hospital, Kresge Eye Institute, Rehabilitation Institute of Michigan, Sinai-Grace Hospital, and the Karmanos Cancer Institute. The DMC has more than 2,000 licensed beds and 3,000 affiliated physicians. It is the largest private employer in the City of Detroit. Wayne State University Retrieved January 24, 2011. The center is staffed by physicians from the Wayne State University School of Medicine, the largest single-campus medical school in the United States, and the United States' fourth largest medical school overall.
Detroit Medical Center formally became a part of Vanguard Health Systems on December 30, 2010, as a for-profit corporation. Vanguard has agreed to invest nearly $1.5 billion in the Detroit Medical Center complex, including $417 million to retire debts, at least $350 million in capital expenditures, and an additional $500 million for new capital investment.Anstett, Patricia (March 20, 2010).$1.5 billion for new DMC.Detroit Free Press. DMC.org. Retrieved on June 12, 2010. Vanguard has agreed to assume all debts and pension obligations. The metro area has many other hospitals, including William Beaumont Hospital, St. Joseph's, and the University of Michigan Medical Center.
In 2011, Detroit Medical Center and Henry Ford Health System substantially increased investments in medical research facilities and hospitals in the city's Midtown and New Center.Greene, Jay (April 5, 2010).Henry Ford Health System plans $500 million expansion. Crains Detroit Business. Retrieved on June 12, 2010.
In 2012, two major construction projects began in New Center: the Henry Ford Health System started the first phase of a $500 million, 300-acre revitalization project with the construction of a new $30 million, 275,000-square-foot Medical Distribution Center for Cardinal Health, Inc., and Wayne State University started construction on a new $93 million, 207,000-square-foot Integrative Biosciences Center (IBio).Henderson, Tom (April 15, 2012). WSU to build $93M biotech hub. Crains Detroit Business. Retrieved on March 15, 2015. As many as 500 researchers and staff will work out of the IBio Center.
Transportation
thumb|Rosa Parks bus terminal downtown
With its proximity to Canada and its facilities, ports, major highways, rail connections and international airports, Detroit is an important transportation hub. The city has three international border crossings, the Ambassador Bridge, Detroit–Windsor Tunnel and Michigan Central Railway Tunnel, linking Detroit to Windsor, Ontario. The Ambassador Bridge is the single busiest border crossing in North America, carrying 27% of the total trade between the U.S. and Canada.Ambassador Bridge Crossing Summary (May 11, 2005). U.S. Department of Transportation. Retrieved on April 8, 2007.
On February 18, 2015, Canadian Transport Minister Lisa Raitt announced that Canada has agreed to pay the entire cost to build a $250 million U.S. Customs plaza adjacent to the planned new Detroit–Windsor bridge, now the Gordie Howe International Bridge. Canada had already planned to pay for 95 per cent of the bridge, which will cost $2.1 billion, and is expected to open in 2020. "This allows Canada and Michigan to move the project forward immediately to its next steps which include further design work and property acquisition on the U.S. side of the border," Raitt said in a statement issued after she spoke in the House of Commons.
Transit systems
thumb|People Mover train comes into the Renaissance Center station
Mass transit in the region is provided by bus services. The Detroit Department of Transportation (DDOT) provides service to the outer edges of the city. From there, the Suburban Mobility Authority for Regional Transportation (SMART) provides service to the suburbs. Cross border service between the downtown areas of Windsor and Detroit is provided by Transit Windsor via the Tunnel Bus.
An elevated rail system known as the People Mover, completed in 1987, provides daily service around a loop downtown. The QLINE, which is expected to open in mid-2017, will serve as a link between the Detroit People Mover and Detroit Amtrak station via Woodward Avenue. The SEMCOG Commuter Rail line will extend from Detroit's New Center, connecting to Ann Arbor via Dearborn, Wayne, and Ypsilanti when it is opened.Ann Arbor – Detroit Regional Rail Project SEMCOG. Retrieved on February 4, 2010.
The Regional Transit Authority (RTA) was established by an act of the Michigan legislature in December 2012 to oversee and coordinate all existing regional mass transit operations and to develop new transit services in the region. The RTA's first project was the introduction of RefleX, a limited-stop, cross-county bus service connecting downtown and midtown Detroit with Oakland and Macomb counties via Woodward and Gratiot avenues.
Amtrak provides service to Detroit, operating its Wolverine service between Chicago and Pontiac. The Amtrak station is located in New Center north of downtown. The J. W. Westcott II, which delivers mail to lake freighters on the Detroit River, is the world's only floating post office.America's Floating ZIP Code 48222 J.W. Westcott Homepage. Retrieved on April 8, 2007.
Airports
thumb|Aerial of Detroit Metro Airport, one of the largest air traffic hubs in the US
Detroit Metropolitan Wayne County Airport (DTW), the principal airport serving Detroit, is located in nearby Romulus. DTW is a primary hub for Delta Air Lines (following its acquisition of Northwest Airlines), and a secondary hub for Spirit Airlines.
Coleman A. Young International Airport (DET), previously called Detroit City Airport, is on Detroit's northeast side; the airport now maintains only charter service and general aviation.Sapte, Benjamin (2003). . Embry-Riddle Aeronautical University Retrieved on April 20, 2006. Retrieved January 24, 2011. Willow Run Airport, in far-western Wayne County near Ypsilanti, is a general aviation and cargo airport.
Freeways
Metro Detroit has an extensive toll-free network of freeways administered by the Michigan Department of Transportation. Four major Interstate Highways surround the city. Detroit is connected via Interstate 75 (I-75) and I-96 to Kings Highway 401 and to major Southern Ontario cities such as London, Ontario and the Greater Toronto Area. I-75 (Chrysler and Fisher freeways) is the region's main north–south route, serving Flint, Pontiac, Troy, and Detroit, before continuing south (as the Detroit–Toledo and Seaway Freeways) to serve many of the communities along the shore of Lake Erie.
I-94 (Edsel Ford Freeway) runs east–west through Detroit and serves Ann Arbor to the west (where it continues to Chicago) and Port Huron to the northeast. The stretch of the current I-94 freeway from Ypsilanti to Detroit was one of America's earlier limited-access highways. Henry Ford built it to link the factories at Willow Run and Dearborn during World War II. A portion was known as the Willow Run Expressway. The I-96 freeway runs northwest–southeast through Livingston, Oakland and Wayne counties and (as the Jeffries Freeway through Wayne County) has its eastern terminus in downtown Detroit.
I-275 runs north–south from I-75 in the south to the junction of I-96 and I-696 in the north, providing a bypass through the western suburbs of Detroit. I-375 is a short spur route in downtown Detroit, an extension of the Chrysler Freeway. I-696 (Reuther Freeway) runs east–west from the junction of I-96 and I-275, providing a route through the northern suburbs of Detroit. Taken together, I-275 and I-696 form a semicircle around Detroit. Michigan state highways designated with the letter M serve to connect major freeways.
Notable people
Sister cities
Online Directory: Michigan, United States (2011). Sister Cities International. Retrieved August 14, 2011.
Chongqing, China
Dubai, United Arab Emirates
Kitwe, Zambia
Minsk, Belarus
Nassau, Bahamas
Toyota, Aichi Prefecture, Japan
Turin, Italy
See also
Decline of Detroit
History of Detroit
Notes
References
Further reading
Barrow, Heather B. Henry Ford's Plan for the American Suburb: Dearborn and Detroit. DeKalb, IL: Northern Illinois University Press, 2015.
Bates, Beth Tompkins. The Making of Black Detroit in the Age of Henry Ford. Chapel Hill, NC: University of North Carolina Press, 2012.
Cangany, Catherine (2014). Frontier Seaport: Detroit's Transformation into an Atlantic Entrepôt. Chicago: University of Chicago Press.
Farmer, Silas (1884; reprinted July 1969). The history of Detroit and Michigan, or, The metropolis illustrated: a chronological cyclopaedia of the past and present, including a full record of territorial days in Michigan, and the annals of Wayne County. Available in various formats at Open Library.
Galster, George (2012). Driving Detroit: The Quest for Respect in the Motor City. Philadelphia: University of Pennsylvania Press.
Powell, L. P. (1901). "Detroit, the Queen City," in Historic Towns of the Western States (New York).
External links
Historical research and current events
Detroit Entertainment District
Detroit Historical Museums & Society
Detroit Riverfront Conservancy
Experience Detroit
Labor, Urban Affairs and Detroit History archival collections at the Walter P. Reuther Library
Virtual Motor City Collection at Wayne State University Library, contains over 30,000 images of Detroit from 1890 to 1980
Municipal government and local Chamber of Commerce
Official website
Detroit Metro Convention & Visitors Bureau
Detroit Regional Chamber
Category:Canada–United States border towns
Category:Cities in Michigan
Category:Cities in Wayne County, Michigan
Category:County seats in Michigan
Category:Detroit River
Category:Government units that have filed for Chapter 9 bankruptcy
Category:Inland port cities and towns of the United States
Category:Metro Detroit
Category:Michigan Neighborhood Enterprise Zone
Category:Populated places established in 1701
Category:Populated places on the Great Lakes
Category:Populated places on the Underground Railroad
Category:1701 establishments in New France | 8,687 | 2017-01 |
Culture | Culture () can be defined in numerous ways. In the words of anthropologist E.B. Tylor, it is "that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society." Alternatively, in a contemporary variant, "Culture is defined as a social domain that emphasizes the practices, discourses and material expressions, which, over time, express the continuities and discontinuities of social meaning of a life held in common."
The Cambridge English Dictionary states that culture is "the way of life, especially the general customs and beliefs, of a particular group of people at a particular time."
Terror management theory posits that culture is a series of activities and worldviews that provide humans with the basis for perceiving themselves as "person[s] of worth within the world of meaning"—raising themselves above the merely physical aspects of existence, in order to deny the animal insignificance and death that Homo sapiens became aware of when they acquired a larger brain.
As a defining aspect of what it means to be human, culture is a central concept in anthropology, encompassing the range of phenomena that are transmitted through social learning in human societies. The word is used in a general sense as the evolved ability to categorize and represent experiences with symbols and to act imaginatively and creatively. This ability arose with the evolution of behavioral modernity in humans around 50,000 years ago, and is often thought to be unique to humans, although some other species have demonstrated similar, though much less complex, abilities for social learning. It is also used to denote the complex networks of practices and accumulated knowledge and ideas that are transmitted through social interaction and exist in specific human groups, or cultures, using the plural form. Some aspects of human behavior, such as language, social practices such as kinship, gender and marriage, expressive forms such as art, music, dance, ritual, and religion, and technologies such as cooking, shelter, and clothing are said to be cultural universals, found in all human societies. The concept of material culture covers the physical expressions of culture, such as technology, architecture and art, whereas the immaterial aspects of culture such as principles of social organization (including practices of political organization and social institutions), mythology, philosophy, literature (both written and oral), and science make up the intangible cultural heritage of a society.
In the humanities, one sense of culture as an attribute of the individual has been the degree to which they have cultivated a particular level of sophistication in the arts, sciences, education, or manners. The level of cultural sophistication has also sometimes been seen to distinguish civilizations from less complex societies. Such hierarchical perspectives on culture are also found in class-based distinctions between a high culture of the social elite and a low culture, popular culture, or folk culture of the lower classes, distinguished by the stratified access to cultural capital. In common parlance, culture is often used to refer specifically to the symbolic markers used by ethnic groups to distinguish themselves visibly from each other such as body modification, clothing or jewelry. Mass culture refers to the mass-produced and mass mediated forms of consumer culture that emerged in the 20th century. Some schools of philosophy, such as Marxism and critical theory, have argued that culture is often used politically as a tool of the elites to manipulate the lower classes and create a false consciousness, and such perspectives are common in the discipline of cultural studies. In the wider social sciences, the theoretical perspective of cultural materialism holds that human symbolic culture arises from the material conditions of human life, as humans create the conditions for physical survival, and that the basis of culture is found in evolved biological dispositions.
When used as a count noun, "a culture" is the set of customs, traditions, and values of a society or community, such as an ethnic group or nation. In this sense, multiculturalism is a concept that values the peaceful coexistence and mutual respect between different cultures inhabiting the same planet. Sometimes "culture" is also used to describe specific practices within a subgroup of a society, a subculture (e.g. "bro culture"), or a counterculture. Within cultural anthropology, the ideology and analytical stance of cultural relativism holds that cultures cannot easily be objectively ranked or evaluated because any evaluation is necessarily situated within the value system of a given culture.
Etymology
The modern term "culture" is based on a term used by the Ancient Roman orator Cicero in his Tusculanae Disputationes, where he wrote of a cultivation of the soul or "cultura animi," using an agricultural metaphor for the development of a philosophical soul, understood teleologically as the highest possible ideal for human development. Samuel Pufendorf took over this metaphor in a modern context, meaning something similar, but no longer assuming that philosophy was man's natural perfection. His use, and that of many writers after him, "refers to all the ways in which human beings overcome their original barbarism, and through artifice, become fully human."
Philosopher Edward S. Casey (1996) describes: "The very word culture meant 'place tilled' in Middle English, and the same word goes back to Latin colere, 'to inhabit, care for, till, worship' and cultus, 'A cult, especially a religious one.' To be cultural, to have a culture, is to inhabit a place sufficiently intensive to cultivate it—to be responsible for it, to respond to it, to attend to it caringly."https://read.amazon.com/?asin=B00DG8M7EU
As described by Velkley, the term "culture," which originally meant the cultivation of the soul or mind, acquires most of its later modern meaning in the writings of the 18th-century German thinkers, who were on various levels developing Rousseau's criticism of "modern liberalism and Enlightenment". Thus a contrast between "culture" and "civilization" is usually implied in these authors, even when not expressed as such.
Change
thumb|left|A 19th-century engraving showing Australian natives opposing the arrival of Captain James Cook in 1770
thumb|175px|An Assyrian child wearing traditional clothing.
Cultural invention has come to mean any innovation that is new and found to be useful to a group of people and expressed in their behavior but which does not exist as a physical object. Humanity is in a global "accelerating culture change period," driven by the expansion of international commerce, the mass media, and above all, the human population explosion, among other factors. Culture repositioning means the reconstruction of the cultural concept of a society.
thumb|Full-length profile portrait of Turkman woman, standing on a carpet at the entrance to a yurt, dressed in traditional clothing and jewelry
Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to both social structures and natural events, and are involved in the perpetuation of cultural ideas and practices within current structures, which themselves are subject to change. (See structuration.)
Social conflict and the development of technologies can produce changes within a society by altering social dynamics and promoting new cultural models, and spurring or enabling generative action. These social shifts may accompany ideological shifts and other types of cultural change. For example, the U.S. feminist movement involved new practices that produced a shift in gender relations, altering both gender and economic structures. Environmental conditions may also enter as factors. For example, after tropical forests returned at the end of the last ice age, plants suitable for domestication were available, leading to the invention of agriculture, which in turn brought about many cultural innovations and shifts in social dynamics.
Cultures are externally affected via contact between societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer from one society to another, through diffusion or acculturation. In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another. For example, hamburgers, a fast-food item in the United States, seemed exotic when introduced into China. "Stimulus diffusion" (the sharing of ideas) refers to an element of one culture leading to an invention or propagation in another. "Direct borrowing," on the other hand, tends to refer to technological or tangible diffusion from one culture to another. Diffusion of innovations theory presents a research-based model of why and when individuals and cultures adopt new ideas, practices, and products.
Acculturation has different meanings, but in this context it refers to replacement of the traits of one culture with those of another, such as what happened to certain Native American tribes and to many indigenous peoples across the globe during the process of colonization. Related processes on an individual level include assimilation (adoption of a different culture by an individual) and transculturation. The transnational flow of culture has played a major role in merging different cultures and sharing thoughts, ideas, and beliefs.
Early modern discourses
German Romanticism
thumb|upright|Johann Herder called attention to national cultures.
Immanuel Kant (1724–1804) formulated an individualist definition of "enlightenment" similar to the concept of bildung: "Enlightenment is man's emergence from his self-incurred immaturity."Kant, Immanuel. 1784. "Answering the Question: What is Enlightenment?" (German: "Beantwortung der Frage: Was ist Aufklärung?") Berlinische Monatsschrift, December (Berlin Monthly) He argued that this immaturity comes not from a lack of understanding, but from a lack of courage to think independently. Against this intellectual cowardice, Kant urged: Sapere aude, "Dare to be wise!" In reaction to Kant, German scholars such as Johann Gottfried Herder (1744–1803) argued that human creativity, which necessarily takes unpredictable and highly diverse forms, is as important as human rationality. Moreover, Herder proposed a collective form of bildung: "For Herder, Bildung was the totality of experiences that provide a coherent identity, and sense of common destiny, to a people."Michael Eldridge, "The German Bildung Tradition" UNC Charlotte
thumb|left|upright|Adolf Bastian developed a universal model of culture.
In 1795, the Prussian linguist and philosopher Wilhelm von Humboldt (1767–1835) called for an anthropology that would synthesize Kant's and Herder's interests. During the Romantic era, scholars in Germany, especially those concerned with nationalist movements—such as the nationalist struggle to create a "Germany" out of diverse principalities, and the nationalist struggles by ethnic minorities against the Austro-Hungarian Empire—developed a more inclusive notion of culture as "worldview" (Weltanschauung). According to this school of thought, each ethnic group has a distinct worldview that is incommensurable with the worldviews of other groups. Although more inclusive than earlier views, this approach to culture still allowed for distinctions between "civilized" and "primitive" or "tribal" cultures.
In 1860, Adolf Bastian (1826–1905) argued for "the psychic unity of mankind." He proposed that a scientific comparison of all human societies would reveal that distinct worldviews consisted of the same basic elements. According to Bastian, all human societies share a set of "elementary ideas" (Elementargedanken); different cultures, or different "folk ideas" (Völkergedanken), are local modifications of the elementary ideas."Adolf Bastian", Today in Science History; "Adolf Bastian", Encyclopædia Britannica This view paved the way for the modern understanding of culture. Franz Boas (1858–1942) was trained in this tradition, and he brought it with him when he left Germany for the United States.Liron, Tal (May 2003). Franz Boas and the Discovery of Culture (senior honors thesis). Retrieved from http://home.uchicago.edu/~tliron/boas/boas.pdf.
English Romanticism
thumb|left|upright|British poet and critic Matthew Arnold viewed "culture" as the cultivation of the humanist ideal.
In the 19th century, humanists such as English poet and essayist Matthew Arnold (1822–1888) used the word "culture" to refer to an ideal of individual human refinement, of "the best that has been thought and said in the world."Arnold, Matthew. 1869. Culture and Anarchy. This concept of culture is also comparable to the German concept of bildung: "...culture being a pursuit of our total perfection by means of getting to know, on all the matters which most concern us, the best which has been thought and said in the world."
In practice, culture referred to an elite ideal and was associated with such activities as art, classical music, and haute cuisine. Williams (1983, p. 90; cited in Shuker, Roy. 1994. Understanding Popular Music, p. 5. ISBN 0-415-10723-7) argues that contemporary definitions of culture fall into one of the following three possibilities, or a mixture of them:
"a general process of intellectual, spiritual, and aesthetic development"
"a particular way of life, whether of a people, period, or a group"
"the works and practices of intellectual and especially artistic activity". As these forms were associated with urban life, "culture" was identified with "civilization" (from lat. civitas, city). Another facet of the Romantic movement was an interest in folklore, which led to identifying a "culture" among non-elites. This distinction is often characterized as that between high culture, namely that of the ruling social group, and low culture. In other words, the idea of "culture" that developed in Europe during the 18th and early 19th centuries reflected inequalities within European societies.Bakhtin 1981, p. 4
thumb|upright|British anthropologist Edward Tylor was one of the first English-speaking scholars to use the term culture in an inclusive and universal sense.
Matthew Arnold contrasted "culture" with anarchy; other Europeans, following philosophers Thomas Hobbes and Jean-Jacques Rousseau, contrasted "culture" with "the state of nature." According to Hobbes and Rousseau, the Native Americans who were being conquered by Europeans from the 16th century on were living in a state of nature; this opposition was expressed through the contrast between "civilized" and "uncivilized." According to this way of thinking, one could classify some countries and nations as more civilized than others and some people as more cultured than others. This contrast led to Herbert Spencer's theory of Social Darwinism and Lewis Henry Morgan's theory of cultural evolution. Just as some critics have argued that the distinction between high and low cultures is really an expression of the conflict between European elites and non-elites, other critics have argued that the distinction between civilized and uncivilized people is really an expression of the conflict between European colonial powers and their colonial subjects.
Other 19th-century critics, following Rousseau, have accepted this differentiation between higher and lower culture, but have seen the refinement and sophistication of high culture as corrupting and unnatural developments that obscure and distort people's essential nature. These critics considered folk music (as produced by "the folk," i.e., rural, illiterate peasants) to honestly express a natural way of life, while classical music seemed superficial and decadent. Equally, this view often portrayed indigenous peoples as "noble savages" living authentic and unblemished lives, uncomplicated and uncorrupted by the highly stratified capitalist systems of the West.
In 1870 the anthropologist Edward Tylor (1832–1917) applied these ideas of higher versus lower culture to propose a theory of the evolution of religion. According to this theory, religion evolves from more polytheistic to more monotheistic forms.McClenon, pp. 528–29 In the process, he redefined culture as a diverse set of activities characteristic of all human societies. This view paved the way for the modern understanding of culture.
Anthropology
thumb|upright|Petroglyphs in modern-day Gobustan, Azerbaijan, dating back to 10,000 BCE and indicating a thriving culture
Although anthropologists worldwide refer to Tylor's definition of culture (Giulio Angioni, L'antropologia evoluzionistica di Edward B. Tylor, in Tre saggi..., cit. in Related Studies), in the 20th century "culture" emerged as the central and unifying concept of American anthropology, where it most commonly refers to the universal human capacity to classify and encode human experiences symbolically, and to communicate symbolically encoded experiences socially. American anthropology is organized into four fields, each of which plays an important role in research on culture: biological anthropology, linguistic anthropology, cultural anthropology, and in the United States, archaeology.
Sociology
The sociology of culture concerns culture as manifested in society. For sociologist Georg Simmel (1858–1918), culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history." As such, culture in the sociological field can be defined as the ways of thinking, the ways of acting, and the material objects that together shape a people's way of life. Culture can be either of two types, non-material culture or material culture. Non-material culture refers to the non-physical ideas that individuals have about their culture, including values, belief systems, rules, norms, morals, language, organizations, and institutions, while material culture is the physical evidence of a culture in the objects and architecture they make or have made. The latter term is used mainly in archaeological and anthropological studies, where it specifically means all material evidence which can be attributed to culture, past or present.
Cultural sociology first emerged in Weimar Germany (1918–1933), where sociologists such as Alfred Weber used the term Kultursoziologie (cultural sociology). Cultural sociology was then "reinvented" in the English-speaking world as a product of the "cultural turn" of the 1960s, which ushered in structuralist and postmodern approaches to social science. This type of cultural sociology may be loosely regarded as an approach incorporating cultural analysis and critical theory. Cultural sociologists tend to reject scientific methods, instead hermeneutically focusing on words, artifacts and symbols. Physicist Alan Sokal published a paper in a journal of cultural sociology stating that gravity was a social construct that should be examined hermeneutically. See Sokal affair for further details. "Culture" has since become an important concept across many branches of sociology, including resolutely scientific fields like social stratification and social network analysis. As a result, there has been a recent influx of quantitative sociologists to the field. Thus, there is now a growing group of sociologists of culture who are, confusingly, not cultural sociologists. These scholars reject the abstracted postmodern aspects of cultural sociology, and instead look for a theoretical backing in the more scientific vein of social psychology and cognitive science.
Early researchers and development of cultural sociology
The sociology of culture grew from the intersection between sociology (as shaped by early theorists like Marx, Durkheim, and Weber) with the growing discipline of anthropology, wherein researchers pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world. Part of the legacy of the early development of the field lingers in the methods (much of cultural sociological research is qualitative), in the theories (a variety of critical approaches to sociology are central to current research communities), and in the substantive focus of the field. For instance, relationships between popular culture, political control, and social class were early and lasting concerns in the field.
Cultural studies
In the United Kingdom, sociologists and other scholars influenced by Marxism such as Stuart Hall (1932–2014) and Raymond Williams (1921–1988) developed cultural studies. Following nineteenth-century Romantics, they identified "culture" with consumption goods and leisure activities (such as art, music, film, food, sports, and clothing). Nevertheless, they saw patterns of consumption and leisure as determined by relations of production, which led them to focus on class relations and the organization of production.
In the United States, cultural studies focuses largely on the study of popular culture; that is, on the social meanings of mass-produced consumer and leisure goods. Richard Hoggart coined the term in 1964 when he founded the Birmingham Centre for Contemporary Cultural Studies or CCCS. It has since become strongly associated with Stuart Hall, who succeeded Hoggart as Director. Cultural studies in this sense, then, can be viewed as a limited concentration scoped on the intricacies of consumerism, which belongs to a wider culture sometimes referred to as "Western civilization" or "globalism."
From the 1970s onward, Stuart Hall's pioneering work, along with that of his colleagues Paul Willis, Dick Hebdige, Tony Jefferson, and Angela McRobbie, created an international intellectual movement. As the field developed, it began to combine political economy, communication, sociology, social theory, literary theory, media theory, film/video studies, cultural anthropology, philosophy, museum studies, and art history to study cultural phenomena or cultural texts. In this field researchers often concentrate on how particular phenomena relate to matters of ideology, nationality, ethnicity, social class, and/or gender. Cultural studies is concerned with the meaning and practices of everyday life. These practices comprise the ways people do particular things (such as watching television, or eating out) in a given culture. It also studies the meanings and uses people attribute to various objects and practices. Specifically, culture involves those meanings and practices held independently of reason. Watching television in order to view a public perspective on a historical event should not be thought of as culture, unless referring to the medium of television itself, which may have been selected culturally; however, schoolchildren watching television after school with their friends in order to "fit in" certainly qualifies, since there is no grounded reason for one's participation in this practice.
In the context of cultural studies, the idea of a text includes not only written language, but also films, photographs, fashion or hairstyles: the texts of cultural studies comprise all the meaningful artifacts of culture. Similarly, the discipline widens the concept of "culture." "Culture" for a cultural-studies researcher not only includes traditional high culture (the culture of ruling social groups) and popular culture, but also everyday meanings and practices. The last two, in fact, have become the main focus of cultural studies. A further and recent approach is comparative cultural studies, based on the disciplines of comparative literature and cultural studies.
Scholars in the United Kingdom and the United States developed somewhat different versions of cultural studies after the late 1970s. The British version of cultural studies had originated in the 1950s and 1960s, mainly under the influence of Richard Hoggart, E. P. Thompson, and Raymond Williams, and later that of Stuart Hall and others at the Centre for Contemporary Cultural Studies at the University of Birmingham. This included overtly political, left-wing views, and criticisms of popular culture as "capitalist" mass culture; it absorbed some of the ideas of the Frankfurt School critique of the "culture industry" (i.e. mass culture). This emerges in the writings of early British cultural-studies scholars and their influences: see the work of (for example) Raymond Williams, Stuart Hall, Paul Willis, and Paul Gilroy.
In the United States, Lindlof and Taylor write, "Cultural studies [were] grounded in a pragmatic, liberal-pluralist tradition." The American version of cultural studies initially concerned itself more with understanding the subjective and appropriative side of audience reactions to, and uses of, mass culture; for example, American cultural-studies advocates wrote about the liberatory aspects of fandom. The distinction between American and British strands, however, has faded. Some researchers, especially in early British cultural studies, apply a Marxist model to the field. This strain of thinking has some influence from the Frankfurt School, but especially from the structuralist Marxism of Louis Althusser and others. The main focus of an orthodox Marxist approach concentrates on the production of meaning. This model assumes a mass production of culture and identifies power as residing with those producing cultural artifacts. In a Marxist view, those who control the means of production (the economic base) essentially control a culture. Other approaches to cultural studies, such as feminist cultural studies and later American developments of the field, distance themselves from this view. They criticize the Marxist assumption of a single, dominant meaning, shared by all, for any cultural product. The non-Marxist approaches suggest that different ways of consuming cultural artifacts affect the meaning of the product. This view comes through in the book Doing Cultural Studies: The Story of the Sony Walkman (by Paul du Gay et al.),
which seeks to challenge the notion that those who produce commodities control the meanings that people attribute to them. Feminist cultural analyst, theorist, and art historian Griselda Pollock contributed to cultural studies from the viewpoints of art history and psychoanalysis. The writer Julia Kristeva is among the influential voices at the turn of the century, contributing to cultural studies from the field of art and psychoanalytical French feminism.
Petrakis and Kostis (2013) divide cultural background variables into two main groups:
The first group covers the variables that represent the "efficiency orientation" of the societies: performance orientation, future orientation, assertiveness, power distance and uncertainty avoidance.
The second covers the variables that represent the "social orientation" of societies, i.e., the attitudes and lifestyles of their members. These variables include gender egalitarianism, institutional collectivism, in-group collectivism and human orientation.
Cultural dynamics
thumb|The Beatles exemplified changing cultural dynamics, not only in music, but also in fashion and lifestyle. Over half a century after their emergence, they continue to have a worldwide cultural impact.
Raimon Panikkar identified 29 ways in which cultural change can be brought about, including growth, development, evolution, involution, renovation, reconception, reform, innovation, revivalism, revolution, mutation, progress, diffusion, osmosis, borrowing, eclecticism, syncretism, modernization, indigenization, and transformation. In this context, modernization could be viewed as adoption of Enlightenment era beliefs and practices, such as science, rationalism, industry, commerce, democracy, and the notion of progress.
See also
Animal culture
Anthropology
Cultural area
Outline of culture
Semiotics of culture
Notes
Additional sources
Books
"Adolf Bastian", Encyclopædia Britannica Online, January 27, 2009
Arnold, Matthew. 1869. Culture and Anarchy. New York: Macmillan. Third edition, 1882, available online. Retrieved: 2006-06-28.
Bakhtin, M. M. (1981) The Dialogic Imagination: Four Essays. Ed. Michael Holquist. Trans. Caryl Emerson and Michael Holquist. ISBN 978-0-252-06445-6.
Barzilai, Gad. 2003. Communities and Law: Politics and Cultures of Legal Identities University of Michigan Press. ISBN 0-472-11315-1
Bourdieu, Pierre. 1977. Outline of a Theory of Practice. Cambridge University Press. ISBN 978-0-521-29164-4
Cohen, Anthony P. 1985. The Symbolic Construction of Community. New York: Routledge.
Dawkins, R. 1982. The Extended Phenotype: The Long Reach of the Gene. Paperback ed., 1999. Oxford Paperbacks. ISBN 978-0-19-288051-2
Findley & Rothney. Twentieth-Century World (Houghton Mifflin, 1986)
Geertz, Clifford. 1973. The Interpretation of Cultures: Selected Essays. New York. ISBN 978-0-465-09719-7.
1957. "Ritual and Social Change: A Javanese Example", American Anthropologist, Vol. 59, No. 1.
Goodall, J. 1986. The Chimpanzees of Gombe: Patterns of Behavior. Cambridge, MA: Belknap Press of Harvard University Press. ISBN 978-0-674-11649-8
Hoult, T. F., ed. 1969. Dictionary of Modern Sociology. Totowa, New Jersey, United States: Littlefield, Adams & Co.
Jary, D. and J. Jary. 1991. The HarperCollins Dictionary of Sociology. New York: HarperCollins. ISBN 0-06-271543-7
Keiser, R. Lincoln 1969. The Vice Lords: Warriors of the Streets. Holt, Rinehart, and Winston. ISBN 978-0-03-080361-1.
Kroeber, A. L. and C. Kluckhohn, 1952. Culture: A Critical Review of Concepts and Definitions. Cambridge, MA: Peabody Museum
Kim, Uichol (2001). "Culture, science and indigenous psychologies: An integrated analysis." In D. Matsumoto (Ed.), Handbook of culture and psychology. Oxford: Oxford University Press
McClenon, James. "Tylor, Edward B(urnett)". Encyclopedia of Religion and Society. Ed. William Swatos and Peter Kivisto. Walnut Creek: AltaMira, 1998. 528–29.
Middleton, R. 1990. Studying Popular Music. Philadelphia: Open University Press. ISBN 978-0-335-15275-9.
O'Neil, D. 2006. Cultural Anthropology Tutorials, Behavioral Sciences Department, Palomar College, San Marcos, California. Retrieved: 2006-07-10.
Reagan, Ronald. "Final Radio Address to the Nation", January 14, 1989. Retrieved June 3, 2006.
Reese, W.L. 1980. Dictionary of Philosophy and Religion: Eastern and Western Thought. New Jersey U.S., Sussex, U.K: Humanities Press.
UNESCO. 2002. Universal Declaration on Cultural Diversity, issued on International Mother Language Day, February 21, 2002. Retrieved: 2006-06-23.
White, L. 1949. The Science of Culture: A study of man and civilization. New York: Farrar, Straus and Giroux.
Wilson, Edward O. (1998). Consilience: The Unity of Knowledge. Vintage: New York. ISBN 978-0-679-76867-8.
Wolfram, Stephen. 2002 A New Kind of Science. Wolfram Media, Inc. ISBN 978-1-57955-008-0
Articles
"Adolf Bastian". Today in Science History. January 27, 2009 Today in Science History
The Meaning of "Culture" (2014-12-27), Joshua Rothman, The New Yorker
External links
Cultura: International Journal of Philosophy of Culture and Axiology
Religion and Culture: Differences (Table)
New York City
The City of New York, often called New York City or simply New York, is the most populous city in the United States. With an estimated 2015 population of 8,550,405 distributed over a land area of just over 300 square miles, New York City is also the most densely populated major city in the United States. Located at the southern tip of the state of New York, the city is the center of the New York metropolitan area, one of the most populous urban agglomerations in the world. A global power city, New York City exerts a significant impact upon commerce, finance, media, art, fashion, research, technology, education, and entertainment, its fast pace defining the term New York minute. Home to the headquarters of the United Nations, New York is an important center for international diplomacy and has been described as the cultural and financial capital of the world.
Situated on one of the world's largest natural harbors, New York City consists of five boroughs, each of which is a separate county of New York State. The five boroughs – Brooklyn, Queens, Manhattan, The Bronx, and Staten Island – were consolidated into a single city in 1898. The city and its metropolitan area constitute the premier gateway for legal immigration to the United States, and as many as 800 languages are spoken in New York, making it the most linguistically diverse city in the world. By 2015 estimates, the New York City metropolitan region remains by a significant margin the most populous in the United States, as defined by both the Metropolitan Statistical Area (20.2 million residents) and the Combined Statistical Area (23.7 million residents). In 2013, the MSA produced a gross metropolitan product (GMP) of nearly US$1.39 trillion, while in 2012, the CSA generated a GMP of over US$1.55 trillion, both ranking first nationally by a wide margin and behind the GDP of only twelve and eleven countries, respectively.
New York City traces its origin to its 1624 founding in Lower Manhattan as a trading post by colonists of the Dutch Republic and was named New Amsterdam in 1626. The city and its surroundings came under English control in 1664 and were renamed New York after King Charles II of England granted the lands to his brother, the Duke of York. New York served as the capital of the United States from 1785 until 1790. It has been the country's largest city since 1790. The Statue of Liberty greeted millions of immigrants as they came to the Americas by ship in the late 19th and early 20th centuries and is a symbol of the United States and its democracy. In the 21st century, New York has emerged as a global node of creativity and entrepreneurship, social tolerance, and environmental sustainability.
Many districts and landmarks in New York City have become well known, and the city received a record of nearly 60 million tourists in 2015, hosting three of the world's ten most visited tourist attractions in 2013. Several sources have ranked New York the most photographed city in the world. Times Square, iconic as the world's "heart" and its "Crossroads", is the brightly illuminated hub of the Broadway Theater District, one of the world's busiest pedestrian intersections, and a major center of the world's entertainment industry. The names of many of the city's bridges, tapered skyscrapers, and parks are known around the world. Anchored by Wall Street in the Financial District of Lower Manhattan, New York City has been called both the most economically powerful city and the leading financial center of the world, and the city is home to the world's two largest stock exchanges by total market capitalization, the New York Stock Exchange and NASDAQ. Manhattan's real estate market is among the most expensive in the world. Manhattan's Chinatown incorporates the highest concentration of Chinese people in the Western Hemisphere, with multiple signature Chinatowns developing across the city. Providing continuous 24/7 service, the New York City Subway is one of the most extensive metro systems worldwide, with more than 460 stations in operation. Over 120 colleges and universities are located in New York City, including Columbia University, New York University, and Rockefeller University, which have been ranked among the top 35 in the world.
History
Etymology and early history
During the Wisconsinan glaciation, the New York City region was situated at the edge of a large ice sheet over 1,000 feet in depth. The ice sheet scraped away large amounts of soil, leaving the bedrock that serves as the geologic foundation for much of New York City today. Later on, movement of the ice sheet would contribute to the separation of what are now Long Island and Staten Island.
In the precolonial era, the area of present-day New York City was inhabited by various bands of Algonquian tribes of Native Americans, including the Lenape, whose homeland, known as Lenapehoking, included Staten Island; the western portion of Long Island, including the area that would become Brooklyn and Queens; Manhattan; the Bronx; and the Lower Hudson Valley.Evan T. Pritchard: Native New Yorkers: the legacy of the Algonquin people of New York, p.27 (2002); ISBN 1-57178-107-2
The first documented visit by a European was in 1524 by Giovanni da Verrazzano, a Florentine explorer in the service of the French crown, who sailed his ship La Dauphine into New York Harbor. He claimed the area for France and named it "Nouvelle Angoulême" (New Angoulême).
thumb|Peter Minuit is credited with the purchase of the island of Manhattan in 1626.|alt=A pen drawing of two men in 16th-century Dutch clothing presenting an open box of items to a group of Native Americans in feather headdresses stereotypical of plains tribes.
A Spanish expedition led by captain Estêvão Gomes, a Portuguese sailing for Emperor Charles V, arrived in New York Harbor in January 1525 aboard the purpose-built caravel La Anunciada and charted the mouth of the Hudson River, which he named Rio de San Antonio. Heavy ice kept him from further exploration, and he returned to Spain in August. The Padrón Real of 1527, the first scientific map to show North America's east coast continuously, was informed by Gomes' expedition and labeled the Northeastern U.S. as Tierra de Esteban Gómez in his honor.Wpa Writer's Project:A Maritime History of New York, p.246;Going Coastal Productions (2004) ISBN 0-9729803-1-8
In 1609, the English explorer Henry Hudson rediscovered the region when he sailed his ship the Halve Maen ("Half Moon" in Dutch) into New York Harbor while searching for the Northwest Passage to the Orient for the Dutch East India Company. He proceeded to sail up what the Dutch would name the North River (now the Hudson River), named first by Hudson as the Mauritius after Maurice, Prince of Orange. Hudson's first mate described the harbor as "a very good Harbour for all windes" and the river as "a mile broad" and "full of fish." Hudson sailed roughly 150 miles north, past the site of the present-day Albany, in the belief that it might be an oceanic tributary before the river became too shallow to continue. He made a ten-day exploration of the area and claimed the region for the Dutch East India Company. In 1614, the area between Cape Cod and Delaware Bay would be claimed by the Netherlands and called Nieuw-Nederland (New Netherland).
The first non-Native American inhabitant of what would eventually become New York City was Dominican trader Juan Rodriguez (transliterated to Dutch as Jan Rodrigues). Born in Santo Domingo of Portuguese and African descent, he arrived in Manhattan during the winter of 1613–1614, trapping for pelts and trading with the local population as a representative of the Dutch. Broadway, from 159th Street to 218th Street, is named Juan Rodriguez Way in his honor.Juan Rodriguez monograph. Ccny.cuny.edu. Retrieved July 12, 2013.
thumb|left|New Amsterdam, centered in the eventual Lower Manhattan, in 1664, the year England took control and renamed it "New York".|alt=A painting of a coastline dotted with red roof houses and a windmill, with several masted ships sailing close to shore under blue sky.
A permanent European presence in New Netherland began in 1624 – making New York the 12th oldest continuously occupied European-established settlement in the continental United States – with the founding of a Dutch fur trading settlement on Governors Island. In 1625, construction was started on a citadel and Fort Amsterdam on Manhattan Island, later called New Amsterdam (Nieuw Amsterdam).Dutch Colonies, National Park Service. Accessed May 19, 2007. "Sponsored by the West India Company, 30 families arrived in North America in 1624, establishing a settlement on present-day Manhattan."Tolerance Park Historic New Amsterdam on Governors Island, Tolerance Park. Accessed May 12, 2007. See Legislative Resolutions Senate No. 5476 and Assembly No. 2708. The colony of New Amsterdam was centered at the site which would eventually become Lower Manhattan. In 1626, the Dutch colonial Director-General Peter Minuit, acting as charged by the Dutch West India Company, purchased the island of Manhattan from the Canarsie, a small Lenape band,Frederick M. Binder, David M. Reimers: All the Nations Under Heaven: An Ethnic and Racial History of New York City, p. 4;(1996)ISBN 0-231-07879-X for 60 guildersPieter Schaghen Letter 1626: "... hebben t'eylant Manhattes van de wilde gekocht, voor de waerde van 60 gulden: is groot 11000 morgen. ... "("... They have purchased the Island Manhattes from the Indians for the value of 60 guilders. It is 11,000 morgens in size ...) (about $1,000 in 2006). A disproved legend claims that Manhattan was purchased for $24 worth of glass beads.
Following the purchase, New Amsterdam grew slowly. To attract settlers, the Dutch instituted the patroon system in 1628, whereby wealthy Dutchmen ("patroons", or patrons) who brought 50 colonists to New Netherland would be awarded swathes of land in New Netherland, along with local political autonomy and rights to participate in the lucrative fur trade. This program had little success.
Since 1621, the Dutch West India Company had operated as a monopoly in New Netherland, on authority granted by the Dutch States General. In 1639–1640, in an effort to bolster economic growth, the Dutch West India Company relinquished its monopoly over the fur trade in New Netherland, leading to growth in the production and trade of food, timber, tobacco, and slaves (particularly with the Dutch West Indies).
In 1647, Peter Stuyvesant began his tenure as the last Director-General of New Netherland. During his tenure, the population of New Amsterdam grew from 2,000 to 8,000. Stuyvesant has been credited with improving law and order in the colony; however, he also earned a reputation as a despotic leader. He instituted regulations on liquor sales, attempted to assert control over the Dutch Reformed Church, and blocked other religious groups (including Quakers, Jews, and Lutherans) from establishing houses of worship. The Dutch West India Company would eventually attempt to ease tensions between Stuyvesant and residents of New Amsterdam.
In 1664, unable to summon any significant resistance, Stuyvesant surrendered New Amsterdam to English troops led by Colonel Richard Nicolls without bloodshed. The terms of the surrender permitted Dutch residents to remain in the colony and allowed for religious freedom. The English promptly renamed the fledgling city "New York" after the Duke of York (the future King James II of England). The transfer was confirmed in 1667 by the Treaty of Breda, which concluded the Second Anglo-Dutch War.
On August 24, 1673, during the Third Anglo-Dutch War, Dutch captain Anthony Colve seized the colony of New York from England at the behest of Cornelis Evertsen the Youngest and rechristened it "New Orange" after William III, the Prince of Orange. The Dutch would soon return the island to England under the Treaty of Westminster of November 1674.
Several intertribal wars among the Native Americans and some epidemics brought on by contact with the Europeans caused sizable population losses for the Lenape between the years 1660 and 1670."Native Americans". Penn Treaty Museum. By 1700, the Lenape population had diminished to 200."Gotham Center for New York City History" Timeline 1700–1800
New York experienced several yellow fever epidemics in the 18th century, losing ten percent of its population to the disease in 1702."The Early History of Yellow Fever" (PDF). Pedro Nogueira, Thomas Jefferson University. 2009."Timeline of Yellow Fever in America". Public Broadcasting Service (PBS).
New York grew in importance as a trading port while under British rule in the early 1700s. It also became a center of slavery, with 42% of households holding slaves by 1730, more than in any other city except Charleston, South Carolina. Most slaveholders held a few or several domestic slaves, but others hired them out as laborers. Slavery became integrally tied to New York's economy through the labor of slaves throughout the port, and the banks and shipping tied to the American South. Discovery of the African Burying Ground in the 1990s, during construction of a new federal courthouse near Foley Square, revealed that tens of thousands of Africans had been buried in the area in the colonial years.
The 1735 trial and acquittal in Manhattan of John Peter Zenger, who had been accused of seditious libel after criticizing colonial governor William Cosby, helped to establish the freedom of the press in North America. In 1754, Columbia University was founded under charter by King George II as King's College in Lower Manhattan. The Stamp Act Congress met in New York in October 1765 as the Sons of Liberty organized in the city, skirmishing over the next ten years with British troops stationed there.
thumb|The Battle of Long Island, the largest battle of the American Revolution, took place in Brooklyn in 1776.|alt=Colonial era soldiers stand and kneel while firing muskets at and advancing enemy. Behind them is a mounted soldier with a bayonet and behind them is a large flag.
The Battle of Long Island, the largest battle of the American Revolutionary War, was fought in August 1776 entirely within the modern-day borough of Brooklyn. After the battle, in which the Americans were defeated, the British made the city their military and political base of operations in North America. The city was a haven for Loyalist refugees and escaped slaves who joined the British lines for freedom newly promised by the Crown for all fighters. As many as 10,000 escaped slaves crowded into the city during the British occupation. When the British forces evacuated at the close of the war in 1783, they transported 3,000 freedmen for resettlement in Nova Scotia. They resettled other freedmen in England and the Caribbean.
The only attempt at a peaceful solution to the war took place at the Conference House on Staten Island between American delegates, including Benjamin Franklin, and British general Lord Howe on September 11, 1776. Shortly after the British occupation began, the Great Fire of New York occurred, a large conflagration on the West Side of Lower Manhattan, which destroyed about a quarter of the buildings in the city, including Trinity Church.Trinity Church bicentennial celebration, May 5, 1897 By Trinity Church (New York, N.Y.) p. 37
In 1785, the assembly of the Congress of the Confederation made New York the national capital shortly after the war. New York was the last capital of the U.S. under the Articles of Confederation and the first capital under the Constitution of the United States. In 1789, the first President of the United States, George Washington, was inaugurated; the first United States Congress and the Supreme Court of the United States each assembled for the first time, and the United States Bill of Rights was drafted, all at Federal Hall on Wall Street. By 1790, New York had surpassed Philadelphia as the largest city in the United States.
thumb|left|Broadway follows the Native American Wickquasgeck Trail through Manhattan.|alt=A painting of a snowy city street with horse-drawn sleds and a 19th-century fire truck under blue sky
Under New York State's gradual abolition act of 1799, children of slave mothers were to be eventually liberated but to be held in indentured servitude until their mid-to-late twenties."An Act for the Gradual Abolition of Negro Slavery in New York" (L. 1799, Ch. 62) Together with slaves freed by their masters after the Revolutionary War and escaped slaves, a significant free-black population gradually developed in Manhattan. Under such influential United States founders as Alexander Hamilton and John Jay, the New York Manumission Society worked for abolition and established the African Free School to educate black children.New York Divided: Slavery and the Civil War online exhibit, New-York Historical Society, (November 17, 2006 to September 3, 2007, physical exhibit), accessed May 10, 2012 It was not until 1827 that slavery was completely abolished in the state, and free blacks struggled afterward with discrimination. New York interracial abolitionist activism continued; among its leaders were graduates of the African Free School. The city's black population reached more than 16,000 in 1840.Leslie M. Harris, "African Americans in New York City, 1626–1863", Department of History, Emory University
In the 19th century, the city was transformed by development relating to its status as a trading center, as well as by European immigration.Ira Rosenwaike (1972). Population History of New York City, p.55. The city adopted the Commissioners' Plan of 1811, which expanded the city street grid to encompass all of Manhattan. The 1825 completion of the Erie Canal through central New York connected the Atlantic port to the agricultural markets and commodities of the North American interior via the Hudson River and the Great Lakes.; Lankevich (1998), pp. 67–68. Local politics became dominated by Tammany Hall, a political machine supported by Irish and German immigrants.
Several prominent American literary figures lived in New York during the 1830s and 1840s, including William Cullen Bryant, Washington Irving, Herman Melville, Rufus Wilmot Griswold, John Keese, Nathaniel Parker Willis, and Edgar Allan Poe. Public-minded members of the contemporaneous business elite lobbied for the establishment of Central Park, which in 1857 became the first landscaped park in an American city.
Modern history
thumb|Manhattan's Little Italy, Lower East Side, circa 1900.
The Great Irish Famine brought a large influx of Irish immigrants. Over 200,000 were living in New York by 1860, upwards of a quarter of the city's population."Cholera in Nineteenth Century New York". VNY, City University of New York. There was also extensive immigration from the German provinces, where revolutions had disrupted societies, and Germans comprised another 25% of New York's population by 1860.Leslie M. Harris, "The New York City Draft Riots", excerpt from In the Shadow of Slavery: African Americans in New York City, 1626–1863, University of Chicago Press, 2003
Democratic Party candidates were consistently elected to local office, increasing the city's ties to the South and its dominant party. In 1861, Mayor Fernando Wood called on the aldermen to declare independence from Albany and the United States after the South seceded, but his proposal was not acted on. Anger at new military conscription laws during the American Civil War (1861–1865), which spared wealthier men who could afford to pay a $300 commutation fee to hire a substitute,"The Draft in the Civil War", u-s-history.com.William Bryk, "The Draft Riots, Part II", New York Press blogpost, August 2, 2002. led to the Draft Riots of 1863, whose most visible participants were members of the ethnic Irish working class. The situation deteriorated into attacks on New York's elite, followed by attacks on black New Yorkers and their property after fierce competition for a decade between Irish immigrants and black people for work. Rioters burned the Colored Orphan Asylum to the ground, with more than 200 children escaping harm due to efforts of the New York City Police Department, which was mainly made up of Irish immigrants. According to historian James M. McPherson (2001), at least 120 people were killed. In all, eleven black men were lynched over five days, and the riots forced hundreds of blacks to flee the city for Williamsburg, Brooklyn, and New Jersey; the black population in Manhattan fell below 10,000 by 1865, which it had last been in 1820. The white working class had established dominance. Violence by longshoremen against black men was especially fierce in the docks area. It was one of the worst incidents of civil unrest in American history.
thumb|left|A construction worker on top of the Empire State Building as it was being built in 1930. The Chrysler Building is below and behind him.|alt=A man working on a steel girder high about a city skyline.
In 1898, the modern City of New York was formed with the consolidation of Brooklyn (until then a separate city), the County of New York (which then included parts of the Bronx), the County of Richmond, and the western portion of the County of Queens. The opening of the subway in 1904, first built as separate private systems, helped bind the new city together. Throughout the first half of the 20th century, the city became a world center for industry, commerce, and communication.
In 1904, the steamship General Slocum caught fire in the East River, killing 1,021 people on board. In 1911, the Triangle Shirtwaist Factory fire, the city's worst industrial disaster, took the lives of 146 garment workers and spurred the growth of the International Ladies' Garment Workers' Union and major improvements in factory safety standards.
thumb|upright|UN Secretary General Dag Hammarskjöld in front of the United Nations Headquarters building, completed in 1952
New York's non-white population was 36,620 in 1890.Ira Rosenwaike (1972).Population History of New York City, p.78. New York City was a prime destination in the early twentieth century for African Americans during the Great Migration from the American South, and by 1916, New York City was home to the largest urban African diaspora in North America. The Harlem Renaissance of literary and cultural life flourished during the era of Prohibition. The larger economic boom generated construction of skyscrapers competing in height and creating an identifiable skyline.
New York became the most populous urbanized area in the world in the early 1920s, overtaking London. The metropolitan area surpassed the 10 million mark in the early 1930s, becoming the first megacity in human history. The difficult years of the Great Depression saw the election of reformer Fiorello La Guardia as mayor and the fall of Tammany Hall after eighty years of political dominance.
Returning World War II veterans created a post-war economic boom and the development of large housing tracts in eastern Queens. New York emerged from the war unscathed as the leading city of the world, with Wall Street leading America's place as the world's dominant economic power. The United Nations Headquarters was completed in 1952, solidifying New York's global geopolitical influence, and the rise of abstract expressionism in the city precipitated New York's displacement of Paris as the center of the art world.
thumb|left|The Stonewall Inn in Greenwich Village, a designated U.S. National Historic Landmark and National Monument, as the site of the 1969 Stonewall Riots.|alt=A two-story building with brick on the first floor, with two arched doorways, and gray stucco on the second floor off of which hang numerous rainbow flags.
The Stonewall riots were a series of spontaneous, violent demonstrations by members of the gay community against a police raid that took place in the early morning hours of June 28, 1969, at the Stonewall Inn in the Greenwich Village neighborhood of Lower Manhattan. They are widely considered to constitute the single most important event leading to the gay liberation movement and the modern fight for LGBT rights in the United States.
thumb|United Airlines Flight 175 hits the South Tower of the original World Trade Center on September 11, 2001.|alt=Two tall, gray, rectangular buildings spewing black smoke and flames, particularly from the left of the two.
In the 1970s, job losses due to industrial restructuring caused New York City to suffer from economic problems and rising crime rates. While a resurgence in the financial industry greatly improved the city's economic health in the 1980s, New York's crime rate continued to increase through that decade and into the beginning of the 1990s. By the mid 1990s, crime rates started to drop dramatically due to revised police strategies, improving economic opportunities, gentrification, and new residents, both American transplants and new immigrants from Asia and Latin America. Important new sectors, such as Silicon Alley, emerged in the city's economy. New York's population reached all-time highs in the 2000 Census and then again in the 2010 Census.
The city and surrounding area suffered the bulk of the economic damage and largest loss of human life in the aftermath of the September 11, 2001 attacks when 10 of the 19 terrorists associated with Al-Qaeda piloted American Airlines Flight 11 into the North Tower of the World Trade Center and United Airlines Flight 175 into the South Tower of the World Trade Center, destroying both towers and killing 2,192 civilians, 343 firefighters, and 71 law enforcement officers who were in the towers and in the surrounding area. The rebuilding of the area has created a new One World Trade Center, a 9/11 memorial and museum, and other new buildings and infrastructure. The World Trade Center PATH station, which opened on July 19, 1909 as the Hudson Terminal, was also destroyed in the attack. A temporary station was built and opened on November 23, 2003. A permanent station, the World Trade Center Transportation Hub, is currently under construction. The new One World Trade Center is the tallest skyscraper in the Western Hemisphere and the fourth-tallest building in the world by pinnacle height, with its spire reaching a symbolic 1,776 feet (541 m) in reference to the year of American independence.
The Occupy Wall Street protests in Zuccotti Park in the Financial District of Lower Manhattan began on September 17, 2011, receiving global attention and spawning the Occupy movement against social and economic inequality worldwide.
Geography
thumb|upright|Satellite imagery illustrating the core of the New York City Metropolitan Area, with Manhattan Island at its center
New York City is situated in the Northeastern United States, in southeastern New York State, approximately halfway between Washington, D.C. and Boston. The location at the mouth of the Hudson River, which feeds into a naturally sheltered harbor and then into the Atlantic Ocean, has helped the city grow in significance as a trading port. Most of New York City is built on the three islands of Long Island, Manhattan, and Staten Island.
The Hudson River flows through the Hudson Valley into New York Bay. Between New York City and Troy, New York, the river is an estuary. The Hudson River separates the city from the U.S. state of New Jersey. The East River—a tidal strait—flows from Long Island Sound and separates the Bronx and Manhattan from Long Island. The Harlem River, another tidal strait between the East and Hudson Rivers, separates most of Manhattan from the Bronx. The Bronx River, which flows through the Bronx and Westchester County, is the only entirely freshwater river in the city.
The city's land has been altered substantially by human intervention, with considerable land reclamation along the waterfronts since Dutch colonial times; reclamation is most prominent in Lower Manhattan, with developments such as Battery Park City in the 1970s and 1980s. Some of the natural relief in topography has been evened out, especially in Manhattan.
The city's total area is approximately 468 square miles (1,213 km²), of which about 166 square miles (430 km²) is water and about 303 square miles (784 km²) is land.
The highest point in the city is Todt Hill on Staten Island, which, at about 410 feet (125 m) above sea level, is the highest point on the Eastern Seaboard south of Maine. The summit of the ridge is mostly covered in woodlands as part of the Staten Island Greenbelt.
Cityscape
Architecture
thumb|left|upright|Modern architecture juxtaposed with historic architecture is seen often in New York City.
thumb|upright=0.6|The Chrysler Building, built in 1930, is an example of the Art Deco style, with ornamental hub caps and a spire.
thumb|upright=0.6|The Empire State Building is a solitary icon of New York. It was the world's tallest building 1931–70 and is defined by its setbacks, Art Deco details and the spire.
thumb|left|Landmark 19th-century rowhouses, including brownstones, on tree-lined Kent Street in the Greenpoint Historic District, Brooklyn.|alt=A view down a street with rowhouses in brown, white, and various shades of red.
New York has architecturally noteworthy buildings in a wide range of styles and from distinct time periods, from the saltbox style Pieter Claesen Wyckoff House in Brooklyn, the oldest section of which dates to 1656, to the modern One World Trade Center, the skyscraper at Ground Zero in Lower Manhattan and the most expensive office tower in the world by construction cost.
Manhattan's skyline, with its many skyscrapers, is universally recognized, and the city has been home to several of the tallest buildings in the world. As of the early 2010s, New York City had 5,937 high-rise buildings, of which 550 completed structures were at least 330 feet (100 m) high, both second in the world after Hong Kong, with over 50 completed skyscrapers taller than 656 feet (200 m). These include the Woolworth Building (1913), an early gothic revival skyscraper built with massively scaled gothic detailing.
The 1916 Zoning Resolution required setbacks in new buildings and restricted towers to a percentage of the lot size, to allow sunlight to reach the streets below. The Art Deco style of the Chrysler Building (1930) and Empire State Building (1931), with their tapered tops and steel spires, reflected the zoning requirements. The buildings have distinctive ornamentation, such as the eagles at the corners of the 61st floor on the Chrysler Building, and are considered some of the finest examples of the Art Deco style. A highly influential example of the international style in the United States is the Seagram Building (1957), distinctive for its façade using visible bronze-toned I-beams to evoke the building's structure. The Condé Nast Building (2000) is a prominent example of green design in American skyscrapers and has received an award from the American Institute of Architects and AIA New York State for its design.
The character of New York's large residential districts is often defined by the elegant brownstone rowhouses and townhouses and shabby tenements that were built during a period of rapid expansion from 1870 to 1930. In contrast, New York City also has neighborhoods that are less densely populated and feature free-standing dwellings. In neighborhoods such as Riverdale (in the Bronx), Ditmas Park (in Brooklyn), and Douglaston (in Queens), large single-family homes are common in various architectural styles such as Tudor Revival and Victorian.
Stone and brick became the city's building materials of choice after the construction of wood-frame houses was limited in the aftermath of the Great Fire of 1835.Lankevich (1998), pp. 82–83; A distinctive feature of many of the city's buildings is the wooden roof-mounted water towers. In the 1800s, the city required their installation on buildings higher than six stories to prevent the need for excessively high water pressures at lower elevations, which could break municipal water pipes. Garden apartments became popular during the 1920s in outlying areas, such as Jackson Heights.
According to the United States Geological Survey, an updated analysis of seismic hazard in July 2014 revealed a "slightly lower hazard for tall buildings" in New York City than previously assessed. Scientists estimated this lessened risk based upon a lower likelihood than previously thought of slow shaking near the city, which would be more likely to cause damage to taller structures from an earthquake in the vicinity of the city.
Boroughs
thumb|236px|The five boroughs of New York City|alt=A map with five insular regions of different colors.
New York City is often referred to collectively as the five boroughs, and in turn, there are hundreds of distinct neighborhoods throughout the boroughs, many with a definable history and character to call their own. If the boroughs were each independent cities, four of them (Brooklyn, Queens, Manhattan, and the Bronx) would be among the ten most populous cities in the United States (Staten Island would rank 37th); these same boroughs are coterminous with the four most densely populated counties in the United States (New York [Manhattan], Kings [Brooklyn], Bronx, and Queens).
Manhattan (New York County) is the geographically smallest and most densely populated borough and is home to Central Park and most of the city's skyscrapers. Manhattan's (New York County's) population density of 72,033 people per square mile (27,812/km²) in 2015 makes it the highest of any county in the United States and higher than the density of any individual American city. Manhattan is the cultural, administrative, and financial center of New York City and contains the headquarters of many major multinational corporations, the United Nations Headquarters, Wall Street, and a number of important universities. Manhattan is often described as the financial and cultural center of the world. Most of the borough is situated on Manhattan Island, at the mouth of the Hudson River. Several small islands are also part of the borough of Manhattan, including Randall's Island, Wards Island, and Roosevelt Island in the East River, and Governors Island and Liberty Island to the south in New York Harbor. Manhattan Island is loosely divided into Lower, Midtown, and Uptown regions. Uptown Manhattan is divided by Central Park into the Upper East Side and the Upper West Side, and above the park is Harlem. The borough also includes a small neighborhood on the United States mainland, called Marble Hill, which is contiguous with The Bronx. New York City's remaining four boroughs are collectively referred to as the outer boroughs.
Brooklyn (Kings County), on the western tip of Long Island, is the city's most populous borough. Brooklyn is known for its cultural, social, and ethnic diversity, an independent art scene, distinct neighborhoods, and a distinctive architectural heritage. Downtown Brooklyn is the only central core neighborhood in the outer boroughs. The borough has a long beachfront shoreline including Coney Island, established in the 1870s as one of the earliest amusement grounds in the country. Marine Park and Prospect Park are the two largest parks in Brooklyn.
Queens (Queens County), on Long Island north and east of Brooklyn, is geographically the largest borough, the most ethnically diverse county in the United States, and the most ethnically diverse urban area in the world. Historically a collection of small towns and villages founded by the Dutch, the borough has since developed both commercial and residential prominence. Queens is the site of Citi Field, the baseball stadium of the New York Mets, and hosts the annual U.S. Open tennis tournament at Flushing Meadows-Corona Park. Additionally, two of the three busiest airports serving the New York metropolitan area, John F. Kennedy International Airport and LaGuardia Airport, are located in Queens. (The third is Newark Liberty International Airport in Newark, New Jersey.)
Staten Island (Richmond County) is the most suburban in character of the five boroughs. Staten Island is connected to Brooklyn by the Verrazano-Narrows Bridge and to Manhattan by way of the free Staten Island Ferry, a daily commuter ferry which provides unobstructed views of the Statue of Liberty, Ellis Island, and Lower Manhattan. In central Staten Island, the Staten Island Greenbelt spans approximately , including of walking trails and one of the last undisturbed forests in the city. Designated in 1984 to protect the island's natural lands, the Greenbelt comprises seven city parks.
The Bronx (Bronx County) is New York City's northernmost borough and the only New York City borough that lies mostly on the mainland United States. It is the location of Yankee Stadium, the baseball park of the New York Yankees, and home to the largest cooperatively owned housing complex in the United States, Co-op City. It is also home to the Bronx Zoo, the world's largest metropolitan zoo, which spans and houses over 6,000 animals. The Bronx is also the birthplace of rap and hip hop culture. Pelham Bay Park is the largest park in New York City, at .
Climate
thumb|Avenue C in Manhattan after flooding caused by Hurricane Sandy on October 29, 2012.
Under the Köppen climate classification, using the isotherm, New York City features a humid subtropical climate (Cfa), and is thus the northernmost major city on the North American continent with this categorization. The suburbs to the immediate north and west lie in the transitional zone between humid subtropical and humid continental climates (Dfa). The city averages 234 days with at least some sunshine annually, and averages 57% of possible sunshine annually, accumulating 2,535 hours of sunshine per annum. The city lies in the USDA 7b plant hardiness zone.
Winters are cold and damp, and prevailing wind patterns that blow offshore minimize the moderating effects of the Atlantic Ocean; yet the Atlantic and the partial shielding from colder air by the Appalachians keep the city warmer in the winter than inland North American cities at similar or lesser latitudes such as Pittsburgh, Cincinnati, and Indianapolis. The daily mean temperature in January, the area's coldest month, is ; temperatures usually drop to several times per winter, and reach several days in the coldest winter month. Spring and autumn are unpredictable and can range from chilly to warm, although they are usually mild with low humidity. Summers are typically warm to hot and humid, with a daily mean temperature of in July. Nighttime conditions are often exacerbated by the urban heat island phenomenon, while daytime temperatures exceed on an average of 17 days each summer and in some years exceed . Extreme temperatures have ranged from , recorded on February 9, 1934, up to on July 9, 1936.
The city receives of precipitation annually, which is spread fairly evenly throughout the year.
Average winter snowfall between 1981 and 2010 has been ; this varies considerably from year to year. Hurricanes and tropical storms are rare in the New York area, but they are not unheard of and always have the potential to strike the area. Hurricane Sandy brought a destructive storm surge to New York City on the evening of October 29, 2012, flooding numerous streets, tunnels, and subway lines in Lower Manhattan and other areas of the city and cutting off electricity in many parts of the city and its suburbs. The storm and its profound impacts have prompted the discussion of constructing seawalls and other coastal barriers around the shorelines of the city and the metropolitan area to minimize the risk of destructive consequences from another such event in the future.
Parks
thumb|220px|right|alt=A spherical sculpture and several attractions line a park during a World's Fair.|Flushing Meadows–Corona Park was used in the 1964 New York World's Fair, with the Unisphere as its centerpiece.
The City of New York has a complex park system, with various lands operated by the National Park Service, the New York State Office of Parks, Recreation and Historic Preservation, and the New York City Department of Parks and Recreation.
In its 2013 ParkScore ranking, the Trust for Public Land reported that the park system in New York City was the second-best park system among the 50 most populous US cities, behind the park system of Minneapolis. ParkScore ranks urban park systems by a formula that analyzes median park size, park acres as a percent of city area, the percent of city residents within a half-mile of a park, spending on park services per resident, and the number of playgrounds per 10,000 residents.
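The composite nature of that formula can be illustrated with a short sketch. This is an illustration only, not the Trust for Public Land's actual rubric: the normalization caps, the equal weighting, and the sample inputs below are all assumptions.

<syntaxhighlight lang="python">
# Illustrative sketch of a ParkScore-style composite. The caps used to
# normalize each factor and the equal weighting are assumptions; the real
# ranking assigns points according to the Trust for Public Land's own rubric.

def park_score(median_park_acres, park_pct_of_city_area,
               pct_residents_within_half_mile, spending_per_resident,
               playgrounds_per_10k):
    """Combine the five factors named in the text into a 0-100 composite."""
    factors = [
        min(median_park_acres / 10.0, 1.0),        # median park size
        min(park_pct_of_city_area / 0.25, 1.0),    # park land as share of city area
        min(pct_residents_within_half_mile, 1.0),  # share of residents within a half-mile walk
        min(spending_per_resident / 300.0, 1.0),   # park spending per resident (dollars)
        min(playgrounds_per_10k / 5.0, 1.0),       # playgrounds per 10,000 residents
    ]
    return 100.0 * sum(factors) / len(factors)     # equal weights assumed

# Example: a dense city with small parks but strong access and funding.
print(round(park_score(1.1, 0.21, 0.97, 250.0, 3.4), 1))
</syntaxhighlight>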
National parks
thumb|left|The Statue of Liberty on Liberty Island in New York Harbor is a symbol of the United States and its ideals of freedom, democracy, and opportunity.
Gateway National Recreation Area contains over in total, most of it surrounded by New York City, including the Jamaica Bay Wildlife Refuge. In Brooklyn and Queens, the park contains over of salt marsh, wetlands, islands, and water, including most of Jamaica Bay. Also in Queens, the park includes a significant portion of the western Rockaway Peninsula, most notably Jacob Riis Park and Fort Tilden. In Staten Island, Gateway National Recreation Area includes Fort Wadsworth, with historic pre-Civil War era Battery Weed and Fort Tompkins, and Great Kills Park, with beaches, trails, and a marina.
The Statue of Liberty National Monument and Ellis Island Immigration Museum are managed by the National Park Service and are in both the states of New York and New Jersey. They are joined in the harbor by Governors Island National Monument, in New York. Historic sites under federal management on Manhattan Island include Castle Clinton National Monument; Federal Hall National Memorial; Theodore Roosevelt Birthplace National Historic Site; General Grant National Memorial ("Grant's Tomb"); African Burial Ground National Monument; and Hamilton Grange National Memorial. Hundreds of private properties are listed on the National Register of Historic Places or as National Historic Landmarks, such as the Stonewall Inn in Greenwich Village, part of the Stonewall National Monument and the catalyst of the modern gay rights movement.
State parks
There are seven state parks within the confines of New York City, including Clay Pit Ponds State Park Preserve, a natural area that includes extensive riding trails, and Riverbank State Park, a facility that rises over the Hudson River.
City parks
thumb|right|upright|Reindeer at the Bronx Zoo, the world's largest metropolitan zoo.
New York City has over of municipal parkland and of public beaches. The largest municipal park in the city is Pelham Bay Park in the Bronx, with .
Central Park, an park in middle-upper Manhattan, is the most visited urban park in the United States and one of the most filmed locations in the world, with 40 million visitors in 2013. The park contains a myriad of attractions; there are several lakes and ponds, two ice-skating rinks, the Central Park Zoo, the Central Park Conservatory Garden, and the Jackie Onassis Reservoir. Indoor attractions include Belvedere Castle with its nature center, the Swedish Cottage Marionette Theater, and the historic Carousel. On October 23, 2012, hedge fund manager John A. Paulson announced a $100 million gift to the Central Park Conservancy, the largest ever monetary donation to New York City's park system.
Washington Square Park is a prominent landmark in the Greenwich Village neighborhood of Lower Manhattan. The Washington Square Arch at the northern gateway to the park is an iconic symbol of both New York University and Greenwich Village.
Prospect Park in Brooklyn has a meadow, a lake, and extensive woodlands. Within the park is the historic Battle Pass, prominent in the Battle of Long Island.
Flushing Meadows–Corona Park in Queens, the city's third largest park, was the setting for the 1939 World's Fair and the 1964 World's Fair and is host to the annual United States Open Tennis Championships tournament.
Over a fifth of the Bronx's area, , is given over to open space and parks, including Pelham Bay Park, Van Cortlandt Park, the Bronx Zoo, and the New York Botanical Garden.Ladies and gentlemen, the Bronx is blooming! by Beth J. Harpaz, Travel Editor of The Associated Press (AP), June 30, 2008, Retrieved July 11, 2008
In Staten Island, Conference House Park contains the historic Conference House, site of the only attempt at a peaceful resolution of the American Revolution, attended by Benjamin Franklin representing the Americans and Lord Howe representing the British Crown. The historic Burial Ridge, the largest Native American burial ground within New York City, is within the park.
Military installations
New York City is home to Fort Hamilton, the U.S. military's only active duty installation within the city. Established in 1825 in Brooklyn on the site of a small battery utilized during the American Revolution, it is one of America's longest serving military forts. Today Fort Hamilton serves as the headquarters of the North Atlantic Division of the United States Army Corps of Engineers and for the New York City Recruiting Battalion. It also houses the 1179th Transportation Brigade, the 722nd Aeromedical Staging Squadron, and a military entrance processing station. Other formerly active military reservations still utilized for National Guard and military training or reserve operations in the city include Fort Wadsworth in Staten Island and Fort Totten in Queens.
Demographics
{| class="wikitable"
|+ New York City compared to New York State and the United States, 2000 Census (these figures were adopted by the U.S. Census Bureau in September 2006)
! !! NY City !! NY State !! U.S.
|-
! Total population
| 8,213,839 || 18,976,457 || 281,421,906
|-
! Population change, 1990 to 2000
| +9.4% || +5.5% || +13.1%
|-
! Population density
| 26,403/sq mi || 402/sq mi || 80/sq mi
|-
! Median household income (1999)
| $38,293 || $43,393 || $41,994
|-
! Bachelor's degree or higher
| 27% || 27% || 29%
|-
! Foreign born
| 36% || 20% || 11%
|-
! White (non-Hispanic)
| 35% || 62% || 67%
|-
! Black
| 28% || 16% || 12%
|-
! Hispanic (any race)
| 27% || 15% || 11%
|-
! Asian
| 10% || 6% || 4%
|}
{| class="wikitable"
|+ Racial composition of New York City
! Racial composition !! 2010 !! 1990 !! 1970 !! 1940
|-
! White
| 44.0% || 52.3% || 76.6% || 93.6%
|-
! —Non-Hispanic
| 33.3% || 43.2% || 62.9% (from 15% sample) || 92.0%
|-
! Black or African American
| 25.5% || 28.7% || 21.1% || 6.1%
|-
! Hispanic or Latino (of any race)
| 28.6% || 24.4% || 16.2% || 1.6%
|-
! Asian
| 12.7% || 7.0% || 1.2% || −
|}
thumb|300px|New York City had an estimated population density of 28,053 people per square mile (10,756/km²) in 2015, with Manhattan alone at 72,033/sq mi (27,812/km²).
New York City is the most populous city in the United States, with an estimated record high of 8,550,405 residents, as immigration into the city has exceeded out-migration since the 2010 United States Census. More than twice as many people live in New York City as in the second-most populous U.S. city (Los Angeles), and within a smaller area. New York City gained more residents between April 2010 and July 2014 (316,000) than any other U.S. city. New York City's population amounts to about 40% of New York State's population and a similar percentage of the New York metropolitan area population.
Population density
In 2015, the city had an estimated population density of 28,053 people per square mile (10,756/km²), rendering it the most densely populated of all municipalities housing over 100,000 residents in the United States, with several small cities (of fewer than 100,000) in adjacent Hudson County, New Jersey having greater density, as per the 2000 Census. Geographically co-extensive with New York County, the borough of Manhattan's 2015 population density of 72,033 people per square mile (27,812/km²) makes it the highest of any county in the United States"Population Density", Geographic Information Systems – GIS of Interest. Accessed May 17, 2007. "What I discovered is that out of the 3140 counties listed in the Census population data only 178 counties were calculated to have a population density over one person per acre. Not surprisingly, New York County (which contains Manhattan) had the highest population density with a calculated 104.218 persons per acre." and higher than the density of any individual American city.
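For readers who want to check the arithmetic, a minimal sketch follows. The Manhattan inputs used here (a population of roughly 1.64 million and a land area of roughly 22.8 square miles) are approximations chosen for illustration; published Census land areas vary slightly by boundary vintage.

<syntaxhighlight lang="python">
# Minimal sketch of the population-density arithmetic behind the figures above.
# The inputs are approximations used only for illustration.

SQ_KM_PER_SQ_MI = 2.589988  # square kilometres in one square mile

def density_per_sq_mi(population, land_area_sq_mi):
    """People per square mile."""
    return population / land_area_sq_mi

def to_per_sq_km(per_sq_mi):
    """Convert a per-square-mile density to per-square-kilometre."""
    return per_sq_mi / SQ_KM_PER_SQ_MI

manhattan = density_per_sq_mi(1_644_518, 22.83)  # approximate 2015 Manhattan inputs
print(f"{manhattan:,.0f}/sq mi ≈ {to_per_sq_km(manhattan):,.0f}/km²")
# Prints roughly 72,033/sq mi ≈ 27,812/km², in line with the Manhattan figure above.
</syntaxhighlight>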
Race and ethnicity
The city's population in 2010 was 44% white (33.3% non-Hispanic white), 25.5% black (23% non-Hispanic black), 0.7% Native American, and 12.7% Asian. Hispanics of any race represented 28.6% of the population, while Asians constituted the fastest-growing segment of the city's population between 2000 and 2010; the non-Hispanic white population declined 3 percent, the smallest recorded decline in decades; and for the first time since the Civil War, the number of blacks declined over a decade.
Throughout its history, the city has been a major port of entry for immigrants into the United States; more than 12 million European immigrants were received at Ellis Island between 1892 and 1924. The term "melting pot" was first coined to describe densely populated immigrant neighborhoods on the Lower East Side. By 1900, Germans constituted the largest immigrant group, followed by the Irish, Jews, and Italians. In 1940, whites represented 92% of the city's population.
Approximately 37% of the city's population is foreign born. In New York, no single country or region of origin dominates. The ten largest sources of foreign-born individuals in the city were the Dominican Republic, China, Mexico, Guyana, Jamaica, Ecuador, Haiti, India, Russia, and Trinidad and Tobago, while the Bangladeshi immigrant population has since become one of the fastest growing in the city, counting over 74,000 by 2013.
Asian Americans in New York City, according to the 2010 Census, number more than one million, greater than the combined totals of San Francisco and Los Angeles. New York contains the highest total Asian population of any U.S. city proper. The New York City borough of Queens is home to the state's largest Asian American population and the largest Andean (Colombian, Ecuadorian, Peruvian, and Bolivian) populations in the United States, and is also the most ethnically diverse urban area in the world. The Chinese population constitutes the fastest-growing nationality in New York State; multiple satellites of the original Manhattan Chinatown (紐約華埠), in Brooklyn (布鲁克林華埠), and around Flushing, Queens (法拉盛華埠), are thriving as traditionally urban enclaves, while also expanding rapidly eastward into suburban Nassau County (拿騷縣) on Long Island (長島), as the New York metropolitan region and New York State have become the top destinations for new Chinese immigrants, respectively, and large-scale Chinese immigration continues into New York City and surrounding areas. In 2012, 6.3% of New York City was of Chinese ethnicity, with nearly three-fourths living in either Queens or Brooklyn, geographically on Long Island. A community numbering 20,000 Korean-Chinese (Chaoxianzu (Chinese: 朝鲜族) or Joseonjok (Hangul: 조선족)) is centered in Flushing, Queens, while New York City is also home to the largest Tibetan population outside China, India, and Nepal, also centered in Queens. Koreans made up 1.2% of the city's population, and Japanese 0.3%. Filipinos were the largest Southeast Asian ethnic group at 0.8%, followed by Vietnamese, who made up 0.2% of New York City's population in 2010. Indians are the largest South Asian group, comprising 2.4% of the city's population, with Bangladeshis and Pakistanis at 0.7% and 0.5%, respectively. Queens is the preferred borough of settlement for Asian Indians, Koreans, Filipinos, and Malaysians and other Southeast Asians; while Brooklyn is receiving large numbers of both West Indian and Asian Indian immigrants.
New York City has the largest European and non-Hispanic white population of any American city. At 2.7 million in 2012, New York's non-Hispanic white population is larger than the non-Hispanic white populations of Los Angeles (1.1 million), Chicago (865,000), and Houston (550,000) combined. The non-Hispanic white population was 6.6 million in 1940. The non-Hispanic white population has begun to increase since 2010. The European diaspora residing in the city is very diverse. According to 2012 Census estimates, there were roughly 560,000 Italian Americans, 385,000 Irish Americans, 253,000 German Americans, 223,000 Russian Americans, 201,000 Polish Americans, and 137,000 English Americans. Additionally, Greek and French Americans numbered 65,000 each, with those of Hungarian descent estimated at 60,000 people. Ukrainian and Scottish Americans numbered 55,000 and 35,000, respectively. People identifying ancestry from Spain numbered 30,838 total in 2010. People of Norwegian and Swedish descent both stood at about 20,000 each, while people of Czech, Lithuanian, Portuguese, Scotch-Irish, and Welsh descent all numbered between 12,000–14,000 people. Arab Americans number over 160,000 in New York City, with the highest concentration in Brooklyn. Central Asians, primarily Uzbek Americans, are a rapidly growing segment of the city's non-Hispanic white population, enumerating over 30,000, and including over half of all Central Asian immigrants to the United States, most settling in Queens or Brooklyn. Albanian Americans are most highly concentrated in the Bronx.
The wider New York City metropolitan statistical area, with over 20 million people, about 50% greater than the second-place Los Angeles metropolitan area in the United States, is also ethnically diverse, with the largest foreign-born population of any metropolitan region in the world. The New York region continues to be by far the leading metropolitan gateway for legal immigrants admitted into the United States, substantially exceeding the combined totals of Los Angeles and Miami. It is home to the largest Jewish and Israeli communities outside Israel, with the Jewish population in the region numbering over 1.5 million in 2012 and including many diverse Jewish sects from around the Middle East and Eastern Europe. The metropolitan area is also home to 20% of the nation's Indian Americans and at least 20 Little India enclaves, and 15% of all Korean Americans and four Koreatowns; the largest Asian Indian population in the Western Hemisphere; the largest Russian American, Italian American, and African American populations; the largest Dominican American, Puerto Rican American, and South American and second-largest overall Hispanic population in the United States, numbering 4.8 million; and includes at least six established Chinatowns within New York City alone, with the urban agglomeration comprising a population of 779,269 overseas Chinese according to Census estimates, the largest such population outside of Asia.
Ecuador, Colombia, Guyana, Peru, and Brazil were the top source countries from South America for legal immigrants to the New York City region in 2013; the Dominican Republic, Jamaica, Haiti, and Trinidad and Tobago in the Caribbean; Egypt, Ghana, and Nigeria from Africa; and El Salvador, Honduras, and Guatemala in Central America. Amidst a resurgence of Puerto Rican migration to New York City, this population had increased to approximately 1.3 million in the metropolitan area.
Sexual orientation and gender identity
The New York metropolitan area is home to a self-identifying gay and bisexual community estimated at nearly 570,000 individuals, the largest in the United States and one of the world's largest. Same-sex marriages in New York were legalized on June 24, 2011 and were authorized to take place beginning 30 days thereafter. New York City is also home to the largest transgender population in the United States, estimated at 25,000 in 2016.
Religion
Christianity (59%), made up of Roman Catholicism (33%), Protestantism (23%), and other Christians (3%), was the most prevalently practiced religion in New York,Major U.S. metropolitan areas differ in their religious profiles, Pew Research Center, Accessed July 30, 2015. followed by Judaism, with approximately 1.1 million Jews in New York City, over half living in Brooklyn. Islam ranks third in New York City, with official estimates ranging between 600,000 and 1,000,000 observers and including 10% of the city's public schoolchildren, followed by Hinduism, Buddhism, and a variety of other religions, as well as atheism. In 2014, 24% self-identified with no organized religious affiliation.
Income
New York City has a high degree of income disparity, as indicated by its Gini coefficient of 0.5 for the city overall and 0.6 for Manhattan. In the first quarter of 2014, the average weekly wage in New York County (Manhattan) was $2,749, representing the highest total among large counties in the United States. As of 2016, New York City had the second-highest number of billionaires of any city in the world, with 95, after Beijing, including former Mayor Michael Bloomberg. New York also had the highest density of millionaires per capita among major U.S. cities in 2014, at 4.6% of residents. Lower Manhattan has been experiencing a baby boom, with the area south of Canal Street witnessing 1,086 births in 2010, 12% greater than in 2009 and over twice the number born in 2001.
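As a rough illustration of what a Gini coefficient of 0.5 or 0.6 represents, the sketch below applies the standard mean-absolute-difference formula to an invented five-household income sample; the sample values are not New York data.

<syntaxhighlight lang="python">
# Gini coefficient via the mean-absolute-difference formula:
#   G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
# The income sample is invented purely for illustration.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(round(gini([18_000, 32_000, 55_000, 90_000, 400_000]), 2))  # ≈ 0.55, a highly unequal sample
</syntaxhighlight>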
Economy
City economic overview
{| class="wikitable"
|+ Top publicly traded companies in New York City (ranked by 2015 revenues), with city and U.S. ranks
! NYC rank !! Corporation !! U.S. rank
|-
| 1 || Verizon Communications || 13
|-
| 2 || JPMorgan Chase || 23
|-
| 3 || Citigroup || 29
|-
| 4 || MetLife || 40
|-
| 5 || American International Group || 49
|-
| 6 || Pfizer (pharmaceuticals) || 55
|-
| 7 || New York Life || 61
|-
| 8 || Goldman Sachs || 74
|-
| 9 || Morgan Stanley || 78
|-
| 10 || TIAA (Teachers Ins. & Annuity) || 82
|-
| 11 || INTL FCStone || 83
|-
| 12 || American Express || 85
|}
Every firm's revenue exceeded $30 billion. Full table at Economy of New York City. Source: Fortune 500Fortune, Volume 173, Number 8 (June 15, 2016), page F-40
New York is a global hub of international business and commerce. In 2012, New York City topped the first Global Economic Power Index, published by The Atlantic (to be differentiated from a namesake list published by the Martin Prosperity Institute), with cities ranked according to criteria reflecting their presence on similar lists as published by other entities. The city is a major center for banking and finance, retailing, world trade, transportation, tourism, real estate, new media, traditional media, advertising, legal services, accountancy, insurance, theater, fashion, and the arts in the United States; while Silicon Alley, metonymous for New York's broad-spectrum high technology sphere, continues to expand. The Port of New York and New Jersey is also a major economic engine, handling record cargo volume in the first half of 2014.
Many Fortune 500 corporations are headquartered in New York City,Fortune 500 web site (cities), retrieved July 21, 2011; Fortune, Vol. 163, no. 7 (May 23, 2011), page F-45 as are a large number of foreign corporations. One out of ten private sector jobs in the city is with a foreign company. New York City has been ranked first among cities across the globe in attracting capital, business, and tourists. This ability to attract foreign investment helped New York City top the FDi Magazine American Cities of the Future ranking for 2013.
Real estate is a major force in the city's economy, as the total value of all New York City property was assessed at US$914.8 billion for the 2015 fiscal year. The Time Warner Center is the property with the highest-listed market value in the city, at US$1.1 billion in 2006. New York City is home to some of the nation's—and the world's—most valuable real estate. 450 Park Avenue was sold on July 2, 2007 for US$510 million, about $1,589 per square foot ($17,104/m²), breaking the barely month-old record for an American office building of $1,476 per square foot ($15,887/m²) set in the June 2007 sale of 660 Madison Avenue.Quirk, James. , The Record (Bergen County), July 5, 2007. Accessed July 5, 2007. "On Monday, a 26-year-old, 33-story office building at 450 Park Ave. sold for a stunning $1,589 per square foot, or about $510 million. The price is believed to be the most ever paid for a U.S. office building on a per-square-foot basis. That broke the previous record—set four weeks earlier—when 660 Madison Ave. sold for $1,476 a square foot." According to Forbes, in 2014, Manhattan was home to six of the top ten zip codes in the United States by median housing price. Fifth Avenue in Midtown Manhattan commands the highest retail rents in the world, at US in 2017.
The global advertising agencies of Omnicom Group and Interpublic Group, both based in Manhattan, had combined annual revenues of approximately US$21 billion, reflecting New York City's role as the top global center for the advertising industry, which is metonymously referred to as "Madison Avenue". The city's fashion industry provides approximately 180,000 employees with $11 billion in annual wages.
Other important sectors include medical research and technology, non-profit institutions, and universities. Manufacturing accounts for a significant but declining share of employment, although the city's garment industry is showing a resurgence in Brooklyn. Food processing is a US$5 billion industry that employs more than 19,000 residents.
Chocolate is New York City's leading specialty-food export, with up to US$234 million worth of exports each year. Entrepreneurs were forming a "Chocolate District" in Brooklyn , while Godiva, one of the world's largest chocolatiers, continues to be headquartered in Manhattan.
Wall Street
thumb|left|The New York Stock Exchange on Wall Street, the world's largest stock exchange per total market capitalization of its listed companies.|alt=A large flag is stretched over Roman style columns on the front of a large building.
New York City's most important economic sector lies in its role as the headquarters for the U.S. financial industry, metonymously known as Wall Street. The city's securities industry, enumerating 163,400 jobs in August 2013, continues to form the largest segment of the city's financial sector and an important economic engine, accounting in 2012 for 5 percent of the city's private sector jobs, 8.5 percent (US$3.8 billion) of its tax revenue, and 22 percent of the city's total wages, including an average salary of US$360,700. Many large financial companies are headquartered in New York City, and the city is also home to a burgeoning number of financial startup companies.
Lower Manhattan is the third-largest central business district in the United States and is home to the New York Stock Exchange, on Wall Street, and the NASDAQ, at 165 Broadway, representing the world's largest and second-largest stock exchanges, respectively, when measured both by overall average daily trading volume and by total market capitalization of their listed companies in 2013. Investment banking fees on Wall Street totaled approximately $40 billion in 2012, while in 2013, senior New York City bank officers who manage risk and compliance functions earned as much as $324,000 annually. In fiscal year 2013–14, Wall Street's securities industry generated 19% of New York State's tax revenue. New York City remains the largest global center for trading in public equity and debt capital markets, driven in part by the size and financial development of the U.S. economy. In July 2013, NYSE Euronext, the operator of the New York Stock Exchange, took over the administration of the London interbank offered rate from the British Bankers Association. New York also leads in hedge fund management; private equity; and the monetary volume of mergers and acquisitions. Several investment banks and investment managers headquartered in Manhattan are important participants in other global financial centers. New York is also the principal commercial banking center of the United States.
Many of the world's largest media conglomerates are also based in the city. Manhattan contained over 500 million square feet (46.5 million m2) of office space in 2015, making it the largest office market in the United States, while Midtown Manhattan, with nearly 400 million square feet (37.2 million m2) in 2015, is the largest central business district in the world.
Silicon Alley
thumb|right|250px|Silicon Alley, once centered around the Flatiron District, is now metonymous for New York's high tech sector, which has since expanded beyond the area.
Silicon Alley, centered in Manhattan, has evolved into a metonym for the sphere encompassing the New York City metropolitan region's high technology industries involving the Internet, new media, telecommunications, digital media, software development, biotechnology, game design, financial technology ("FinTech"), and other fields within information technology that are supported by its entrepreneurship ecosystem and venture capital investments. In 2015, Silicon Alley generated over US$7.3 billion in venture capital investment across a broad spectrum of high technology enterprises, most based in Manhattan, with others in Brooklyn, Queens, and elsewhere in the region. High technology startup companies and employment are growing in New York City and the region, bolstered by the city's position in North America as the leading Internet hub and telecommunications center, including its proximity to several transatlantic fiber optic trunk lines, New York's intellectual capital, and its extensive outdoor wireless connectivity. Verizon Communications, headquartered at 140 West Street in Lower Manhattan, was in the final stages in 2014 of completing a US$3 billion fiberoptic telecommunications upgrade throughout New York City. New York City hosted 300,000 employees in the tech sector.
The biotechnology sector is also growing in New York City, based upon the city's strength in academic scientific research and public and commercial financial support. On December 19, 2011, then Mayor Michael R. Bloomberg announced his choice of Cornell University and Technion-Israel Institute of Technology to build a US$2 billion graduate school of applied sciences called Cornell Tech on Roosevelt Island with the goal of transforming New York City into the world's premier technology capital. By mid-2014, Accelerator, a biotech investment firm, had raised more than US$30 million from investors, including Eli Lilly and Company, Pfizer, and Johnson & Johnson, for initial funding to create biotechnology startups at the Alexandria Center for Life Science, which encompasses more than on East 29th Street and promotes collaboration among scientists and entrepreneurs at the center and with nearby academic, medical, and research institutions. The New York City Economic Development Corporation's Early Stage Life Sciences Funding Initiative and venture capital partners, including Celgene, General Electric Ventures, and Eli Lilly, committed a minimum of US$100 million to help launch 15 to 20 ventures in life sciences and biotechnology.
Tourism
thumb|right|Times Square is the hub of the Broadway theater district and a media center. It also has one of the highest annual attendance rates of any tourist attraction in the world, estimated at 50 million.
thumb|right|The I Love New York logo, designed by Milton Glaser in 1977
Tourism is a vital industry for New York City, which has witnessed a growing combined volume of international and domestic tourists, receiving a sixth consecutive annual record of nearly 60 million visitors in 2015. Tourism generated an all-time high of US$61.3 billion in overall economic impact for New York City in 2014, pending 2015 statistics. Approximately 12 million visitors to New York City were from outside the United States, with the highest numbers from the United Kingdom, Canada, Brazil, and China. According to Reuters, "New York City tourism climb[ed to a] record high in 2015 for [the] sixth year."
I Love New York (stylized I ❤ NY) is both a logo and a song that are the basis of an advertising campaign and have been used since 1977 to promote tourism in New York City,Interview with Milton Glaser The Believer. Accessed July 8, 2015. and later to promote New York State as well. The trademarked logo, owned by New York State Empire State Development, appears in souvenir shops and brochures throughout the city and state, some licensed, many not. The song is the state song of New York.
Major tourist destinations include Times Square; Broadway theater productions; the Empire State Building; the Statue of Liberty; Ellis Island; the United Nations Headquarters; museums such as the Metropolitan Museum of Art; greenspaces such as Central Park and Washington Square Park; Rockefeller Center; the Manhattan Chinatown; luxury shopping along Fifth and Madison Avenues; and events such as the Halloween Parade in Greenwich Village; the Macy's Thanksgiving Day Parade; the lighting of the Rockefeller Center Christmas Tree; the St. Patrick's Day parade; seasonal activities such as ice skating in Central Park in the wintertime; the Tribeca Film Festival; and free performances in Central Park at Summerstage. Major attractions in the boroughs outside Manhattan include Flushing Meadows-Corona Park and the Unisphere in Queens; the Bronx Zoo; Coney Island, Brooklyn; and the New York Botanical Garden in the Bronx. The New York Wheel, a 630-foot Ferris wheel, was under construction on the northern shore of Staten Island in 2015, overlooking the Statue of Liberty, New York Harbor, and the Lower Manhattan skyline.
Manhattan was on track to have an estimated 90,000 hotel rooms at the end of 2014, a 10% increase from 2013. In October 2014, the Anbang Insurance Group, based in China, purchased the Waldorf Astoria New York for US$1.95 billion, making it the world's most expensive hotel ever sold.
Media and entertainment
thumb|left|Rockefeller Center is home to NBC Studios.|alt=Ice skaters on a rink below a golden sculpture and a row of national flags that fly in front of a stone tower.
New York is a prominent location for the American entertainment industry, with many films, television series, books, and other media being set there. New York City was the second-largest center for filmmaking and television production in the United States, producing about 200 feature films annually and employing 130,000 individuals; the filmed entertainment industry has been growing in New York, contributing nearly US$9 billion to the New York City economy alone as of 2015, and by volume, New York is the world leader in independent film production – one-third of all American independent films are produced in New York City. The Association of Independent Commercial Producers is also based in New York. In the first five months of 2014 alone, location filming for television pilots in New York City exceeded the record production levels for all of 2013, with New York surpassing Los Angeles as the top North American city for the same distinction during the 2013/2014 cycle.
New York City is additionally a center for the advertising, music, newspaper, digital media, and publishing industries and is also the largest media market in North America. Some of the city's media conglomerates and institutions include Time Warner, the Thomson Reuters Corporation, the Associated Press, Bloomberg L.P., the News Corporation, The New York Times Company, NBCUniversal, the Hearst Corporation, AOL, and Viacom. Seven of the world's top eight global advertising agency networks have their headquarters in New York.Top 10 Consolidated Agency Networks: Ranked by 2006 Worldwide Network Revenue, Advertising Age Agency Report 2007 Index (April 25, 2007). Retrieved June 8, 2007. Two of the top three record labels' headquarters are in New York: Sony Music Entertainment and Warner Music Group. Universal Music Group also has offices in New York. New media enterprises are contributing an increasingly important component to the city's central role in the media sphere.
More than 200 newspapers and 350 consumer magazines have an office in the city, and the publishing industry employs about 25,000 people. Two of the three national daily newspapers in the United States are New York papers: The Wall Street Journal and The New York Times, which has won the most Pulitzer Prizes for journalism. Major tabloid newspapers in the city include The New York Daily News, which was founded in 1919 by Joseph Medill Patterson, and The New York Post, founded in 1801 by Alexander Hamilton.Allan Nevins, The Evening Post: Century of Journalism, Boni and Liveright, 1922, page 17. The city also has a comprehensive ethnic press, with 270 newspapers and magazines published in more than 40 languages. El Diario La Prensa is New York's largest Spanish-language daily and the oldest in the nation. The New York Amsterdam News, published in Harlem, is a prominent African American newspaper.
The Village Voice is the largest alternative newspaper.
The television industry developed in New York and is a significant employer in the city's economy. The three major American broadcast networks are all headquartered in New York: ABC, CBS, and NBC. Many cable networks are based in the city as well, including MTV, Fox News, HBO, Showtime, Bravo, Food Network, AMC, and Comedy Central. The City of New York operates a public broadcast service, NYCTV, that has produced several original Emmy Award-winning shows covering music and culture in city neighborhoods and city government.
New York is also a major center for non-commercial educational media. The oldest public-access television channel in the United States is the Manhattan Neighborhood Network, founded in 1971. WNET is the city's major public television station and a primary source of national Public Broadcasting Service (PBS) television programming. WNYC, a public radio station owned by the city until 1997, has the largest public radio audience in the United States.
Human resources
Education and scholarly activity
Primary and secondary education
The New York City Public Schools system, managed by the New York City Department of Education, is the largest public school system in the United States, serving about 1.1 million students in more than 1,700 separate primary and secondary schools. The city's public school system includes nine specialized high schools to serve academically and artistically gifted students. The city government pays the Pelham Public Schools to educate a very small, detached section of the Bronx.
thumb|right|Butler Library at Columbia University, described as one of the most beautiful college libraries in the United States.
thumb|The Washington Square Arch, an unofficial icon of both New York University (NYU) and its Greenwich Village neighborhood.
The New York City Charter School Center assists the setup of new charter schools. There are approximately 900 additional privately run secular and religious schools in the city.
Higher education and research
Over 600,000 students are enrolled in New York City's more than 120 higher education institutions, the highest number of any city in the United States, including over half a million in the City University of New York (CUNY) system alone in 2014. In 2005, three out of five Manhattan residents were college graduates, and one out of four had a postgraduate degree, forming one of the highest concentrations of highly educated people in any American city. New York City is home to such notable private universities as Barnard College, Columbia University, Cooper Union, Fordham University, New York University, New York Institute of Technology, Pace University, and Yeshiva University. The public CUNY system is one of the largest universities in the nation, comprising 24 institutions across all five boroughs: senior colleges, community colleges, and other graduate/professional schools. The public State University of New York (SUNY) system serves New York City, as well as the rest of the state. The city also has other smaller private colleges and universities, including many religious and special-purpose institutions, such as St. John's University, The Juilliard School, Manhattan College, The College of Mount Saint Vincent, Fashion Institute of Technology, Parsons School of Design, The New School, Pratt Institute, The School of Visual Arts, The King's College, and Wagner College.
Much of the scientific research in the city is done in medicine and the life sciences. New York City has the most post-graduate life sciences degrees awarded annually in the United States, with 127 Nobel laureates having roots in local institutions, while in 2012, 43,523 licensed physicians were practicing in New York City. Major biomedical research institutions include Memorial Sloan–Kettering Cancer Center, Rockefeller University, SUNY Downstate Medical Center, Albert Einstein College of Medicine, Mount Sinai School of Medicine, and Weill Cornell Medical College, being joined by the Cornell University/Technion-Israel Institute of Technology venture on Roosevelt Island.
Public library system
thumb|left|The Stephen A. Schwarzman Headquarters Building of the New York Public Library, at 5th Avenue and 42nd Street.
The New York Public Library, which has the largest collection of any public library system in the United States, serves Manhattan, the Bronx, and Staten Island. Queens is served by the Queens Borough Public Library, the nation's second largest public library system, while the Brooklyn Public Library serves Brooklyn.
Public health
thumb|right|New York-Presbyterian Hospital, white complex at center, the largest hospital and largest private employer in New York City and one of the world's busiest.
The New York City Health and Hospitals Corporation (HHC) operates the public hospitals and clinics in New York City. A public benefit corporation with $6.7 billion in annual revenues, HHC is the largest municipal healthcare system in the United States, serving 1.4 million patients, including more than 475,000 uninsured city residents. HHC was created in 1969 by the New York State Legislature as a public benefit corporation (Chapter 1016 of the Laws 1969). HHC operates 11 acute care hospitals, five nursing homes, six diagnostic and treatment centers, and more than 70 community-based primary care sites, serving primarily the poor and working class. HHC's MetroPlus Health Plan is one of the New York area's largest providers of government-sponsored health insurance and is the plan of choice for nearly half a million New Yorkers.
Each year HHC's facilities provide about 225,000 admissions, one million emergency room visits and five million clinic visits to New Yorkers. HHC facilities treat nearly one-fifth of all general hospital discharges and more than one third of emergency room and hospital-based clinic visits in New York City.
The most well-known hospital in the HHC system is Bellevue Hospital, the oldest public hospital in the United States. Bellevue is the designated hospital for treatment of the President of the United States and other world leaders if they become sick or injured while in New York City. The president of HHC is Ramanathan Raju, MD, a surgeon and former CEO of the Cook County health system in Illinois.
Public safety
Police and law enforcement
thumb|left|The New York City Police Department (NYPD) represents the largest police force in the United States.
The New York City Police Department (NYPD) is the largest police force in the United States by a significant margin, with over 35,000 sworn officers. Members of the NYPD are frequently referred to by politicians, the media, and even on their own police cars by the nickname New York's Finest.
In 2014, New York City had the third-lowest murder rate among the largest U.S. cities, having become significantly safer after a spike in crime in the 1970s through 1990s.Arthur Prager, "Worst-Case Scenario", American Heritage, February/March 2006. Violent crime in New York City decreased more than 75% from 1993 to 2005, and continued decreasing during periods when the nation as a whole saw increases. By 2002, New York City's crime rate was similar to that of Provo, Utah, and was ranked 197th in crime among the 216 U.S. cities with populations greater than 100,000. In 2005, the homicide rate was at its lowest level since 1966, and in 2007, the city recorded fewer than 500 homicides for the first time since crime statistics were first published in 1963.Fewer Killings in 2007, but Still Felt in City's Streets, The New York Times, January 1, 2008. Retrieved June 21, 2009. In 2015, 50.5% of New York City misdemeanor assault suspects were black, 33.3% Hispanic, 11.1% white, 4.8% Asian/Pacific Islander, and 0.3% Native American.http://www.nyc.gov/html/nypd/downloads/pdf/analysis_and_planning/year_end_2015_enforcement_report.pdf New York City experienced 352 homicides in 2015, its second-lowest number on record.
Sociologists and criminologists have not reached consensus on the explanation for the dramatic decrease in the city's crime rate. Some attribute the phenomenon to new tactics used by the NYPD,"Livingstone to follow methods of the NYPD". Telegraph. January 17, 2001. including its use of CompStat and the broken windows theory."Staying a beat ahead of crime". Theage.com.au. November 5, 2002. Others cite the end of the crack epidemic and demographic changes, including from immigration. Another theory is that widespread exposure to lead pollution from automobile exhaust, which can lower intelligence and increase aggression levels, incited the initial crime wave in the mid-20th century, most acutely affecting heavily trafficked cities like New York. A strong correlation was found demonstrating that violent crime rates in New York and other big cities began to fall after lead was removed from American gasoline in the 1970s. Another theory cited to explain New York City's falling homicide rate is the inverse correlation between the number of murders and the increasingly wet climate in the city.
In 2012, the NYPD came under scrutiny for its use of a stop-and-frisk program, which has undergone several policy revisions since then.
Organized crime has long been associated with New York City, beginning with the Forty Thieves and the Roach Guards in the Five Points in the 1820s. The 20th century saw a rise in the Mafia, dominated by the Five Families, as well as in gangs, including the Black Spades."Youth Gangs". Gotham Gazette. March 5, 2001. The Mafia and gang presence has declined in the city in the 21st century.
Firefighting
thumb|The New York City Fire Department (FDNY) is the largest municipal fire department in the United States.
The New York City Fire Department (FDNY) provides fire protection, technical rescue, primary response to biological, chemical, and radioactive hazards, and emergency medical services for the five boroughs of New York City. The New York City Fire Department is the largest municipal fire department in the United States and the second largest in the world after the Tokyo Fire Department. The FDNY employs approximately 11,080 uniformed firefighters and over 3,300 uniformed EMTs and paramedics. The FDNY's motto is New York's Bravest.
The New York City Fire Department faces highly multifaceted firefighting challenges in many ways unique to New York. In addition to responding to building types that range from wood-frame single family homes to high-rise structures, there are many secluded bridges and tunnels, as well as large parks and wooded areas that can give rise to brush fires. New York is also home to one of the largest subway systems in the world, consisting of hundreds of miles of tunnel with electrified track.
The FDNY headquarters is located at 9 MetroTech Center in Downtown Brooklyn,"9 Metrotech Center – FDNY Headquarters". Fresh Meadow Mechanical Corp. Retrieved on November 5, 2009. and the FDNY Fire Academy is located on Randalls Island. There are three Bureau of Fire Communications alarm offices which receive and dispatch alarms to appropriate units. One office, at 11 Metrotech Center in Brooklyn, houses Manhattan/Citywide, Brooklyn, and Staten Island Fire Communications. The Bronx and Queens offices are in separate buildings.
Culture and contemporary life
New York City has been described as the cultural capital of the world by the diplomatic consulates of Iceland and Latvia and by New York's Baruch College. A book containing a series of essays titled New York, Culture Capital of the World, 1940–1965 has also been published as showcased by the National Library of Australia. In describing New York, author Tom Wolfe said, "Culture just seems to be in the air, like part of the weather."
Numerous major American cultural movements began in the city, such as the Harlem Renaissance, which established the African-American literary canon in the United States. The city was a center of jazz in the 1940s, abstract expressionism in the 1950s, and the birthplace of hip hop in the 1970s. The city's punkHarrington, Joe S. Sonic Cool: The Life & Death of Rock 'N' Roll. pp. 324–30. 2002. Hal-Leonard. USA. and hardcore scenes were influential in the 1970s and 1980s. New York has long had a flourishing scene for Jewish American literature.
The city is the birthplace of many cultural movements, including the Harlem Renaissance in literature and visual art; abstract expressionism (also known as the New York School) in painting; and hip hop, punk, salsa, disco, freestyle, Tin Pan Alley, and jazz in music. New York City has been considered the dance capital of the world. The city is also widely celebrated in popular lore, frequently the setting for books, movies (see List of films set in New York City), and television programs. New York Fashion Week is one of the world's preeminent fashion events and is afforded extensive coverage by the media.
New York has also frequently been ranked the top fashion capital of the world on the annual list compiled by the Global Language Monitor.
Arts
New York City has more than 2,000 arts and cultural organizations and more than 500 art galleries of all sizes. The city government funds the arts with a larger annual budget than the National Endowment for the Arts. Wealthy business magnates in the 19th century built a network of major cultural institutions, such as the famed Carnegie Hall and the Metropolitan Museum of Art, that would become internationally established. The advent of electric lighting led to elaborate theater productions, and in the 1880s, New York City theaters on Broadway and along 42nd Street began featuring a new stage form that became known as the Broadway musical. Strongly influenced by the city's immigrants, productions such as those of Harrigan and Hart, George M. Cohan, and others used song in narratives that often reflected themes of hope and ambition.
Performing arts
thumb|left|Lincoln Center for the Performing Arts|alt=The corner of a lit up plaza with a fountain in the center and the ends of two brightly lit buildings with tall arches on the square.
Broadway theatre is one of the premier forms of English-language theatre in the world, named after Broadway, the major thoroughfare that crosses Times Square, also sometimes referred to as "The Great White Way".McBeth, VR. "The Great White Way" on TimesSquare.com. Quote: "Coined in 1901 by O.J. Gude, the designer of many prominent advertising displays, to describe the new light show that beckoned along Broadway, The Great White Way is a phrase known worldwide to describe Broadway's profusion of theaters in Times Square."Tell, Darcy. Times Square spectacular: lighting up Broadway New York: HarperCollins, 2007Allen, Irving Lewis. The City in Slang: New York Life and Popular Speech. New York: Oxford University Press, 1995. Quote: "By 1910, the blocks of Broadway just above 42nd Street were at the very heart of the Great White Way. The glow of Times Square symbolized the center of New York, if not of the world." Forty-one venues in Midtown Manhattan's Theatre District, each with at least 500 seats, are classified as Broadway theatres. According to The Broadway League, Broadway shows sold approximately US$1.27 billion worth of tickets in the 2013–2014 season, an 11.4% increase from US$1.139 billion in the 2012–2013 season. Attendance in 2013–2014 stood at 12.21 million, representing a 5.5% increase from the 2012–2013 season's 11.57 million.
Lincoln Center for the Performing Arts, anchoring Lincoln Square on the Upper West Side of Manhattan, is home to numerous influential arts organizations, including the Metropolitan Opera, New York City Opera, New York Philharmonic, and New York City Ballet, as well as the Vivian Beaumont Theater, the Juilliard School, Jazz at Lincoln Center, and Alice Tully Hall. The Lee Strasberg Theatre and Film Institute is in Union Square, and Tisch School of the Arts is based at New York University, while Central Park SummerStage presents free music concerts in Central Park.
thumb|The Metropolitan Museum of Art, part of Museum Mile, is one of the largest museums in the world.|alt=A very ornate multi-story stone façade rises over steps and a plaza at night.
In April 2015, New York hosted the annual Cardistry-Con, a three-day cardistry convention and interactive conference for cardists all over the world."72 Hours Inside the Eye-Popping World of Cardistry". Vanity Fair. Retrieved December 6, 2015.
Visual arts
New York City is home to hundreds of cultural institutions and historic sites, many of which are internationally known.
Museum Mile is the name for a section of Fifth Avenue running from 82nd to 105th streets on the Upper East Side of Manhattan, in an area sometimes called Upper Carnegie Hill. The Mile, which contains one of the densest displays of culture in the world, is actually three blocks longer than one mile (1.6 km). Ten museums occupy the length of this section of Fifth Avenue. The tenth museum, the Museum for African Art, joined the ensemble in 2009, although its building at 110th Street, the first new museum constructed on the Mile since the Guggenheim in 1959, did not open until late 2012. In addition to other programming, the museums collaborate for the annual Museum Mile Festival, held each year in June, to promote the museums and increase visitation. Many of the world's most lucrative art auctions are held in New York City.
Cuisine
thumb|Smorgasburg opened in 2011 as an open air food market and is part of the Brooklyn Flea.|alt=People crowd around white tents in the foreground next to a red brick wall with arched windows. Above and to the left is a towering stone bride.
New York City's food culture includes a variety of international cuisines influenced by the city's immigrant history. Central European and Italian immigrants brought bagels, cheesecake, and New York-style pizza into the city, while Chinese and other Asian restaurants, sandwich joints, trattorias, diners, and coffeehouses have become ubiquitous. Some 4,000 mobile food vendors licensed by the city, many immigrant-owned, have made Middle Eastern foods such as falafel and kebabs examples of modern New York street food. The city is home to "nearly one thousand of the finest and most diverse haute cuisine restaurants in the world", according to Michelin. The New York City Department of Health and Mental Hygiene assigns letter grades to the city's 24,000 restaurants based upon their inspection results.
Accent and dialect
The New York area is home to a distinctive regional speech pattern called the New York dialect, alternatively known as Brooklynese or New Yorkese. It has generally been considered one of the most recognizable accents within American English. The classic version of this dialect is centered on middle and working-class people of European descent. The influx of non-European immigrants in recent decades has led to changes in this distinctive dialect, and the traditional form of this speech pattern is no longer as prevalent among general New Yorkers as in the past.
The traditional New York area accent is characterized as non-rhotic, so that the r sound does not appear at the end of a syllable or immediately before a consonant; hence the pronunciation of the city name as "New Yawk." The r is likewise dropped in words like park, butter, or here. In another feature called the low back chain shift, the vowel sound of words like talk, law, cross, chocolate, and coffee and the often homophonous vowel in core and more are tensed and usually raised more than in General American. In the most old-fashioned and extreme versions of the New York dialect, the vowel sounds of words like "girl" and of words like "oil" became a diphthong. This would often be misperceived by speakers of other accents as a reversal of the er and oy sounds, so that girl is pronounced "goil" and oil is pronounced "erl"; this leads to the caricature of New Yorkers saying things like "Joizey" (Jersey), "Toidy-Toid Street" (33rd St.) and "terlet" (toilet). The character Archie Bunker from the 1970s sitcom All in the Family (played by Carroll O'Connor) was a well-known example of this pattern of speech, which continues to fade in its overall presence.
Sports
New York City is home to the headquarters of the National Football League, Major League Baseball, the National Basketball Association, the National Hockey League, and Major League Soccer. The New York metropolitan area hosts the most sports teams in these five professional leagues. Participation in professional sports in the city predates all professional leagues, and the city has been continuously hosting professional sports since the birth of the Brooklyn Dodgers in 1882. The city has played host to over forty major professional teams in the five sports and their respective competing leagues, both current and historic. Four of the ten most expensive stadiums ever built worldwide (MetLife Stadium, the new Yankee Stadium, Madison Square Garden, and Citi Field) are located in the New York metropolitan area. Madison Square Garden and its predecessor, along with the original Yankee Stadium and Ebbets Field, are celebrated sporting venues of New York City, the latter two having been commemorated on U.S. postage stamps.
New York has been described as the "Capital of Baseball". There have been 35 Major League Baseball World Series and 73 pennants won by New York teams. It is one of only five metro areas (Los Angeles, Chicago, Baltimore–Washington, and the San Francisco Bay Area being the others) to have two baseball teams. Additionally, there have been 14 World Series in which two New York City teams played each other, known as a Subway Series and occurring most recently in 2000. No other metropolitan area has had this happen more than once (Chicago in 1906, St. Louis in 1944, and the San Francisco Bay Area in 1989). The city's two current Major League Baseball teams are the New York Mets, who play at Citi Field in Queens, and the New York Yankees, who play at Yankee Stadium in the Bronx. The two teams meet in six games of interleague play every regular season, a series that has also come to be called the Subway Series. The Yankees have won a record 27 championships, while the Mets have won the World Series twice. The city also was once home to the Brooklyn Dodgers (now the Los Angeles Dodgers), who won the World Series once, and the New York Giants (now the San Francisco Giants), who won the World Series five times. Both teams moved to California in 1958. There are also two Minor League Baseball teams in the city, the Brooklyn Cyclones and Staten Island Yankees.
The city is represented in the National Football League by the New York Giants and the New York Jets, although both teams play their home games at MetLife Stadium in nearby East Rutherford, New Jersey, which hosted Super Bowl XLVIII in 2014.
The New York Islanders and the New York Rangers represent the city in the National Hockey League. Also within the metropolitan area are the New Jersey Devils, who play in nearby Newark, New Jersey.
The city's National Basketball Association teams are the Brooklyn Nets and the New York Knicks, while the New York Liberty is the city's Women's National Basketball Association team. The first national college-level basketball championship, the National Invitation Tournament, was held in New York in 1938 and remains in the city. The city is well known for its links to basketball, which is played in nearly every park in the city by local youth, many of whom have gone on to play for major college programs and in the NBA.
In soccer, New York City is represented by New York City FC of Major League Soccer, who play their home games at Yankee Stadium. The New York Red Bulls play their home games at Red Bull Arena in nearby Harrison, New Jersey. Historically, the city is known for the New York Cosmos, the highly successful former professional soccer team which was the American home of Pelé. A new version of the New York Cosmos was formed in 2010, and began play in the second division North American Soccer League in 2013. The Cosmos play their home games at James M. Shuart Stadium on the campus of Hofstra University, just outside the New York City limits in Hempstead, New York.
The annual United States Open Tennis Championships is one of the world's four Grand Slam tennis tournaments and is held at the National Tennis Center in Flushing Meadows–Corona Park, Queens. The New York Marathon is one of the world's largest, and the 2004–2006 events hold the top three places in the marathons with the largest number of finishers, including 37,866 finishers in 2006.World's Largest Marathons, Association of International Marathons and Road Races (AIMS). Retrieved June 28, 2007. The Millrose Games is an annual track and field meet whose featured event is the Wanamaker Mile. Boxing is also a prominent part of the city's sporting scene, with events like the Golden Gloves amateur boxing tournament being held at Madison Square Garden each year. The city is also considered the host of the Belmont Stakes, the last, longest and oldest of horse racing's Triple Crown races, held just over the city's border at Belmont Park on the first or second Saturday of June. The city also hosted the 1932 U.S. Open golf tournament and the 1930 and 1939 PGA Championships, and has been host city for both events several times, most notably at nearby Winged Foot Golf Club. The Gaelic games are played in Riverdale, Bronx at Gaelic Park, home to the New York GAA, the only North American team to compete at the senior inter-county level.
Many sports are associated with New York's immigrant communities. Stickball, a street version of baseball, was popularized by youths in the 1930s, and a street in the Bronx was renamed Stickball Boulevard in the late 2000s to memorialize this.
Transportation
thumb|left|New York City is home to the two busiest rail stations in the US, including Grand Central Terminal.|alt=A row of yellow taxis in front of a multi-story ornate stone building with three huge arched windows.
New York City's comprehensive transportation system is both complex and extensive.
Rapid transit
Mass transit in New York City, most of which runs 24 hours a day, accounts for one in every three users of mass transit in the United States, and two-thirds of the nation's rail riders live in the New York City Metropolitan Area.
Rail
thumb|The New York City Subway is the world's largest rapid transit system by length of routes and by number of stations.|alt=The back end of a subway train, with a red E on a LED display on the top. To the left of the train is a platform with a person walking away.
The iconic New York City Subway system is the largest rapid transit system in the world when measured by stations in operation and by length of routes. Nearly all of New York's subway system is open 24 hours a day, in contrast to the overnight shutdown common to systems in most cities, including Hong Kong, London, Paris, Seoul, and Tokyo. The New York City Subway is also the busiest metropolitan rail transit system in the Western Hemisphere, with 1.76 billion passenger rides in 2015, while Grand Central Terminal, also referred to as "Grand Central Station", is the world's largest railway station by number of train platforms.
Public transport is essential in New York City. 54.6% of New Yorkers commuted to work in 2005 using mass transit. This is in contrast to the rest of the United States, where about 90% of commuters drive automobiles to their workplace. According to the New York City Comptroller, workers in the New York City area spend an average of 6 hours and 18 minutes getting to work each week, the longest commute time in the nation among large cities. New York is the only US city in which a majority (52%) of households do not have a car; only 22% of Manhattanites own a car. Due to their high usage of mass transit, New Yorkers spend less of their household income on transportation than the national average, saving $19 billion annually on transportation compared to other urban Americans.
New York City's commuter rail network is the largest in North America. The rail network, connecting New York City to its suburbs, consists of the Long Island Rail Road, Metro-North Railroad, and New Jersey Transit. The combined systems converge at Grand Central Terminal and Pennsylvania Station and contain more than 250 stations and 20 rail lines. In Queens, the elevated AirTrain people mover system connects JFK International Airport to the New York City Subway and the Long Island Rail Road; a separate AirTrain system is planned alongside the Grand Central Parkway to connect LaGuardia Airport to these transit systems. For intercity rail, New York City is served by Amtrak, whose busiest station by a significant margin is Pennsylvania Station on the West Side of Manhattan, from which Amtrak provides connections to Boston, Philadelphia, and Washington, D.C. along the Northeast Corridor, and long-distance train service to other North American cities.
The Staten Island Railway rapid transit system solely serves Staten Island, operating 24 hours a day. The Port Authority Trans-Hudson (PATH train) links Midtown and Lower Manhattan to northeastern New Jersey, primarily Hoboken, Jersey City, and Newark. Like the New York City Subway, the PATH operates 24 hours a day, meaning that three of the six rapid transit systems in the world which operate on 24-hour schedules are wholly or partly in New York (the others are a portion of the Chicago 'L', the PATCO Speedline serving Philadelphia, and the Copenhagen Metro).
Multibillion-dollar heavy-rail transit projects under construction in New York City include the Second Avenue Subway, the East Side Access project, and the 7 Subway Extension.
Buses
thumb|The Port Authority Bus Terminal, the world's busiest bus station, at 8th Avenue and 42nd Street.
New York City's public bus fleet is the largest in North America, and the Port Authority Bus Terminal, the main intercity bus terminal of the city, serves 7,000 buses and 200,000 commuters daily, making it the busiest bus station in the world.
Aviation
thumb|left|John F. Kennedy Airport in Queens, the busiest international air passenger gateway to the United States.|alt=Five jumbo airplanes wait in a line on a runway next to a small body of water. Behind them in the distance is the airport and control tower.
New York's airspace is the busiest in the United States and one of the world's busiest air transportation corridors. The three busiest airports in the New York metropolitan area are John F. Kennedy International Airport, Newark Liberty International Airport, and LaGuardia Airport; 109 million travelers used these three airports in 2012. JFK and Newark Liberty were the busiest and fourth busiest U.S. gateways for international air passengers, respectively, in 2012, and JFK has been the busiest airport for international passengers in North America. Plans have advanced to expand passenger volume at a fourth airport, Stewart International Airport near Newburgh, New York, by the Port Authority of New York and New Jersey. Plans were announced in July 2015 to entirely rebuild LaGuardia Airport in a multibillion-dollar project to replace its aging facilities. Other commercial airports in or serving the New York metropolitan area include Long Island MacArthur Airport, Trenton–Mercer Airport and Westchester County Airport. The primary general aviation airport serving the area is Teterboro Airport.
Ferries
The Staten Island Ferry is the world's busiest ferry route, carrying over 23 million passengers from July 2015 through June 2016 on the route between Staten Island and Lower Manhattan and running 24 hours a day. Other ferry systems shuttle commuters between Manhattan and other locales within the city and the metropolitan area.
Citywide Ferry Service, a NYCEDC initiative with routes that are proposed to go to all five boroughs, is expected to start operations in 2017–2018, with a few routes opening in 2017 and the rest in 2018, transporting an anticipated 4.5 million passengers annually. Meanwhile, Seastreak ferry announced construction of a 600-passenger high-speed luxury ferry in September 2016, to shuttle riders between the Jersey Shore and Manhattan, anticipated to start service in 2017; this would be the largest vessel in its class.
Taxis, transport startups, and trams
Other features of the city's transportation infrastructure encompass more than 12,000 yellow taxicabs; various competing startup transportation network companies; and an aerial tramway that transports commuters between Roosevelt Island and Manhattan Island.
Streets and highways
thumb|8th Avenue, looking northward ("uptown"). Most streets and avenues in Manhattan's grid plan incorporate a one-way traffic configuration.
Despite New York's heavy reliance on its vast public transit system, streets are a defining feature of the city. Manhattan's street grid plan greatly influenced the city's physical development. Several of the city's streets and avenues, like Broadway, Wall Street, Madison Avenue, and Seventh Avenue, are also used as metonyms for the national industries based there: the theater, finance, advertising, and fashion industries, respectively.
New York City also has an extensive web of expressways and parkways, which link the city's boroughs to each other and to northern New Jersey, Westchester County, Long Island, and southwestern Connecticut through various bridges and tunnels. Because these highways serve millions of outer borough and suburban residents who commute into Manhattan, it is quite common for motorists to be stranded for hours in traffic jams that are a daily occurrence, particularly during rush hour.George Washington Bridge turns 75 years old: Huge flag, cake part of celebration, Times Herald-Record, October 24, 2006. "The party, however, will be small in comparison to the one that the Port Authority of New York and New Jersey organized for 5,000 people to open the bridge to traffic in 1931. And it won't even be on what is now the world's busiest bridge for fear of snarling traffic."
River crossings
thumb|left|The Verrazano-Narrows Bridge, one of the world's longest suspension bridges, connects Brooklyn and Staten Island across The Narrows.|alt=A tall suspension bridge connects a distant piece of land at night.
thumb|left|The George Washington Bridge, connecting Upper Manhattan (background) with Fort Lee, New Jersey, across the Hudson River, is the world's busiest motor vehicle bridge.
New York City is located on one of the world's largest natural harbors,New York Harbor Video – How the Earth Was Made. HISTORY.com. Retrieved on April 12, 2014. and the boroughs of Manhattan and Staten Island are (primarily) coterminous with islands of the same names, while Queens and Brooklyn are located at the west end of the larger Long Island, and The Bronx is located at the southern tip of New York State's mainland. This situation of boroughs separated by water led to the development of an extensive infrastructure of bridges and tunnels. Nearly all of the city's major bridges and tunnels are notable, and several have broken or set records.
The George Washington Bridge is the world's busiest motor vehicle bridge, connecting Manhattan to Bergen County, New Jersey. The Verrazano-Narrows Bridge is the longest suspension bridge in the Americas and one of the world's longest. The Brooklyn Bridge is an icon of the city itself. The towers of the Brooklyn Bridge are built of limestone, granite, and Rosendale cement, and their architectural style is neo-Gothic, with characteristic pointed arches above the passageways through the stone towers. This bridge was also the longest suspension bridge in the world from its opening until 1903, and is the first steel-wire suspension bridge. The Queensboro Bridge is an important piece of cantilever architecture. The Manhattan Bridge, opened in 1909, is considered to be the forerunner of modern suspension bridges, and its design served as the model for many of the long-span suspension bridges around the world; the Manhattan Bridge, Throgs Neck Bridge, Triborough Bridge, and Verrazano-Narrows Bridge are all examples of Structural Expressionism.New York Architecture Images-Manhattan Bridge. Nyc-architecture.com (December 31, 1909). Retrieved on April 12, 2014.New York Architecture Images. Nyc-architecture.com. Retrieved on April 12, 2014.
Manhattan Island is linked to New York City's outer boroughs and New Jersey by several tunnels as well. The Lincoln Tunnel, which carries 120,000 vehicles a day under the Hudson River between New Jersey and Midtown Manhattan, is the busiest vehicular tunnel in the world. The tunnel was built instead of a bridge to allow unfettered passage of large passenger and cargo ships that sailed through New York Harbor and up the Hudson River to Manhattan's piers. The Holland Tunnel, connecting Lower Manhattan to Jersey City, New Jersey, was the world's first mechanically ventilated vehicular tunnel when it opened in 1927.Holland Tunnel (I-78). Nycroads.com. Retrieved on April 12, 2014. The Queens-Midtown Tunnel, built to relieve congestion on the bridges connecting Manhattan with Queens and Brooklyn, was the largest non-federal project in its time when it was completed in 1940. President Franklin D. Roosevelt was the first person to drive through it."President the 'First' to Use Midtown Tube; Precedence at Opening Denied Hundreds of Motorists", The New York Times, November 9, 1940. p. 19. The Hugh L. Carey Tunnel runs underneath Battery Park and connects the Financial District at the southern tip of Manhattan to Red Hook in Brooklyn.
Environment
thumb|As of July 2010, the city had 3,715 hybrid taxis in service, the largest number of any city in North America.|alt=Two yellow taxis on a narrow street lined with shops.
Environmental impact reduction
New York City has focused on reducing its environmental impact and carbon footprint. Mass transit use in New York City is the highest in the United States. Also, by 2010, the city had 3,715 hybrid taxis and other clean diesel vehicles, representing around 28% of New York's taxi fleet in service, the most of any city in North America.
New York's high rate of public transit use, over 200,000 daily cyclists, and many pedestrian commuters make it the most energy-efficient major city in the United States. Walk and bicycle modes of travel account for 21% of all modes for trips in the city; nationally the rate for metro regions is about 8%. In both its 2011 and 2015 rankings, Walk Score named New York City the most walkable large city in the United States. Citibank sponsored the introduction of 10,000 public bicycles for the city's bike-share project in the summer of 2013. Research conducted by Quinnipiac University showed that a majority of New Yorkers support the initiative. New York City's numerical "in-season cycling indicator" of bicycling in the city hit an all-time high in 2013.
The city government was a petitioner in the landmark Massachusetts v. Environmental Protection Agency Supreme Court case forcing the EPA to regulate greenhouse gases as pollutants. The city is also a leader in the construction of energy-efficient green office buildings, including the Hearst Tower among others. Mayor Bill de Blasio has committed to an 80% reduction in greenhouse gas emissions between 2014 and 2050 to reduce the city's contributions to climate change, beginning with a comprehensive "Green Buildings" plan.
Water purity and availability
New York City is supplied with drinking water by the protected Catskill Mountains watershed. As a result of the watershed's integrity and undisturbed natural water filtration system, New York is one of only four major cities in the United States the majority of whose drinking water is pure enough not to require purification by water treatment plants. The Croton Watershed north of the city is undergoing construction of a US$3.2 billion water purification plant to augment New York City's water supply by an estimated 290 million gallons daily, representing a greater than 20% addition to the city's current availability of water. The ongoing expansion of New York City Water Tunnel No. 3, an integral part of the New York City water supply system, is the largest capital construction project in the city's history, with segments serving Manhattan and The Bronx completed, and with segments serving Brooklyn and Queens planned for construction in 2020.
Environmental revitalization
Newtown Creek, an estuary that forms part of the border between the boroughs of Brooklyn and Queens, has been designated a Superfund site for environmental clean-up and remediation of the waterway's recreational and economic resources for many communities. One of the most heavily used bodies of water in the Port of New York and New Jersey, it had been one of the most contaminated industrial sites in the country, containing years of discarded toxins, a large volume of spilled oil (including the Greenpoint oil spill), raw sewage from New York City's sewer system, and other accumulation.
Government and politics
Government
thumb|right|New York City Hall is the oldest City Hall in the United States that still houses its original governmental functions.|alt=A wide white building in a colonial style with a cupola in the center.
New York City has been a metropolitan municipality with a mayor–council form of government since its consolidation in 1898. The government of New York is more centralized than that of most other U.S. cities. In New York City, the city government is responsible for public education, correctional institutions, public safety, recreational facilities, sanitation, water supply, and welfare services.
The mayor and council members are elected to four-year terms, subject to a limit of three consecutive terms that is reset after a four-year break. The City Council is a unicameral body consisting of 51 council members whose districts are defined by geographic population boundaries. The New York City Administrative Code, the New York City Rules, and the City Record are the code of local laws, compilation of regulations, and official journal, respectively.
thumb|left|The New York County Courthouse houses the New York Supreme Court and other offices.
Each borough is coextensive with a judicial district of the state Unified Court System, of which the Criminal Court and the Civil Court are the local courts, while the New York Supreme Court conducts major trials and appeals. Manhattan hosts the First Department of the Supreme Court, Appellate Division while Brooklyn hosts the Second Department. There are also several extrajudicial administrative courts, which are executive agencies and not part of the state Unified Court System.
Uniquely among major American cities, New York is divided between, and is host to the main branches of, two different US district courts: the District Court for the Southern District of New York, whose main courthouse is on Foley Square near City Hall in Manhattan and whose jurisdiction includes Manhattan and the Bronx, and the District Court for the Eastern District of New York, whose main courthouse is in Brooklyn and whose jurisdiction includes Brooklyn, Queens, and Staten Island. The US Court of Appeals for the Second Circuit and US Court of International Trade are also based in New York, also on Foley Square in Manhattan.
Politics
thumb|upright|Bill de Blasio, the current and 109th Mayor of New York City
The present mayor is Bill de Blasio, the first Democrat to hold the office since 1993. He was elected in 2013 with over 73% of the vote and assumed office on January 1, 2014.
The Democratic Party holds the majority of public offices. As of April 2016, 69% of registered voters in the city are Democrats and 10% are Republicans. New York City has not been carried by a Republican in a statewide or presidential election since President Calvin Coolidge won the five boroughs in 1924. In 2012, Democrat Barack Obama became the first presidential candidate of any party to receive more than 80% of the overall vote in New York City, sweeping all five boroughs. Party platforms center on affordable housing, education, and economic development, and labor politics are of importance in the city.
New York is the most important source of political fundraising in the United States, as four of the top five ZIP codes in the nation for political contributions are in Manhattan. The top ZIP code, 10021 on the Upper East Side, generated the most money for the 2004 presidential campaigns of George W. Bush and John Kerry. The city has a strong imbalance of payments with the national and state governments. It receives 83 cents in services for every $1 it sends to the federal government in taxes (or annually sends $11.4 billion more than it receives back). City residents and businesses also sent an additional $4.1 billion to the state of New York in the 2009–2010 fiscal year beyond what the city received in return.
Global outreach
In 2006, the Sister City Program of the City of New York, Inc. was restructured and renamed New York City Global Partners. New York City has expanded its international outreach via this program to a network of cities worldwide, promoting the exchange of ideas and innovation between their citizenry and policymakers, according to the city's website. New York's historic sister cities are denoted below by the year they joined New York City's partnership network.
New York City Global Partners network
Africa
Accra, Ghana
Addis Ababa, Ethiopia
Cairo, Egypt (1982)
Cape Town, South Africa
Lagos, Nigeria
Libreville, Gabon
Johannesburg, South Africa (2003)
Nairobi, Kenya
Asia
(East)
Bangkok, Thailand
Beijing, People's Republic of China (1980)
Biên Hòa, Vietnam
Changwon, South Korea
Chongqing, People's Republic of China
Guangzhou, People's Republic of China
Ho Chi Minh City, Vietnam
Hong Kong, People's Republic of China
Jakarta, Indonesia
Kuala Lumpur, Malaysia
Manila, Philippines
Seoul, South Korea
Shanghai, People's Republic of China
Shenyang, People's Republic of China
Singapore, Singapore
Taipei, Taiwan
Tokyo, Japan (1960)
(South)
Bangalore, India
Delhi, India
Dhaka, Bangladesh
Karachi, Pakistan
Mumbai, India
(West)
Dubai, United Arab Emirates
Istanbul, Turkey (transcontinental)
Jerusalem, Israel (1993)
Tel Aviv, Israel
Australia
Melbourne, Australia
Sydney, Australia
Europe
(East)
Bucharest, Romania
Budapest, Hungary (1992)
Istanbul, Turkey (transcontinental)
Kiev, Ukraine
Moscow, Russia
Prague, Czech Republic
St. Petersburg, Russia
Vienna, Austria
Warsaw, Poland
(Scandinavia)
Copenhagen, Denmark
Helsinki, Finland
Oslo, Norway
Stockholm, Sweden
(South)
Barcelona, Spain
Lisbon, Portugal
Madrid, Spain (1982)
Milan, Italy
Pristina, Kosovo
Rome, Italy (1992)
(West)
Amsterdam, Netherlands
Antwerp, Belgium
Belfast, Northern Ireland
Berlin, Germany
Brussels, Belgium
Dublin, Ireland
Düsseldorf, Germany
Edinburgh, Scotland
Geneva, Switzerland
Glasgow, Scotland
Hamburg, Germany
Heidelberg, Germany
London, England (2001)
Luxembourg City, Luxembourg
Lyon, France
Munich, Germany
Paris, France
Rotterdam, Netherlands
The Hague, Netherlands
North America
(Canada)
Calgary, Alberta, Canada
Edmonton, Alberta, Canada
Montreal, Quebec, Canada
Ottawa, Ontario, Canada
Quebec City, Quebec, Canada
Toronto, Ontario, Canada
Vancouver, British Columbia, Canada
Victoria, British Columbia, Canada
Winnipeg, Manitoba, Canada
(Mexico, Central America, and Caribbean)
Cuernavaca, Morelos, Mexico
Mexico City, Distrito Federal, Mexico
Monterrey, Nuevo León, Mexico
Panama City, Panama
Santo Domingo, Dominican Republic (1983)
(United States)
Baltimore, Maryland, United States
Boston, Massachusetts, United States
Chicago, Illinois, United States
Los Angeles, California, United States
Philadelphia, Pennsylvania, United States
South America
Bogotá, Colombia
Brasilia, Brazil (2004)
Buenos Aires, Argentina
Caracas, Venezuela
Córdoba, Argentina
Curitiba, Brazil
Lima, Peru
Medellín, Colombia
Rio de Janeiro, Brazil
Santiago, Chile
São Paulo, Brazil
External links
NYC Go, official tourism website of New York City
Collections, 145,000 NYC photographs at Museum of the City of New York
Marshall Islands
The Marshall Islands, officially the Republic of the Marshall Islands, is an island country located near the equator in the Pacific Ocean, slightly west of the International Date Line. Geographically, the country is part of the larger island group of Micronesia. The country's population of 53,158 people (at the 2011 Census) is spread out over 29 coral atolls, comprising 1,156 individual islands and islets. The islands share maritime boundaries with the Federated States of Micronesia to the west, Wake Island to the north (Wake Island is claimed as a territory of the Marshall Islands, but is also claimed as an unorganized, unincorporated territory of the United States, with de facto control vested in the Office of Insular Affairs), Kiribati to the south-east, and Nauru to the south. About 27,797 of the islanders (at the 2011 Census) live on Majuro, which contains the capital.
Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BC, with inter-island navigation made possible using traditional stick charts. Islands in the archipelago were first explored by Europeans in the 1520s, with Spanish explorer Alonso de Salazar sighting an atoll in August 1526. Other expeditions by Spanish and English ships followed. The islands derive their name from British explorer John Marshall, who visited in 1788. The islands were historically known by the inhabitants as "jolet jen Anij" (Gifts from God).
The European powers recognized Spanish sovereignty over the islands in 1874. They had been part of the Spanish East Indies formally since 1528. Later, Spain sold the islands to the German Empire in 1884, and they became part of German New Guinea in 1885. In World War I the Empire of Japan occupied the Marshall Islands, which in 1919 the League of Nations combined with other former German territories to form the South Pacific Mandate. In World War II, the United States conquered the islands in the Gilbert and Marshall Islands campaign. Along with other Pacific Islands, the Marshall Islands were then consolidated into the Trust Territory of the Pacific Islands governed by the US. Self-government was achieved in 1979, and full sovereignty in 1986, under a Compact of Free Association with the United States. Marshall Islands has been a United Nations member state since 1991.
Politically, the Marshall Islands is a presidential republic in free association with the United States, with the US providing defense, subsidies, and access to U.S. based agencies such as the FCC and the USPS. With few natural resources, the islands' wealth is based on a service economy, as well as some fishing and agriculture; aid from the United States represents a large percentage of the islands' gross domestic product. The country uses the United States dollar as its currency.
The majority of the citizens of the Marshall Islands are of Marshallese descent, though there are small numbers of immigrants from the United States, China, Philippines, and other Pacific islands. The two official languages are Marshallese, which is a member of the Malayo-Polynesian languages, and English. Almost the entire population of the islands practises some religion, with three-quarters of the country either following the United Church of Christ – Congregational in the Marshall Islands (UCCCMI) or the Assemblies of God.
History
thumb|Marshall Islanders sailing in traditional costume, circa 1899–1900.
Micronesians settled the Marshall Islands in the 2nd millennium BC, but there are no historical or oral records of that period. Over time, the Marshall Island people learned to navigate over long ocean distances by canoe using traditional stick charts.The History of Mankind by Professor Friedrich Ratzel, Book II, Section A, The Races of Oceania page 165, picture of a stick chart from the Marshall Islands. MacMillan and Co., published 1896.
Spanish colony
Spanish explorer Alonso de Salazar was the first European to see the islands in 1526, commanding the ship Santa Maria de la Victoria, the only surviving vessel of the Loaísa Expedition. On August 21, he sighted an island (probably Taongi) at 14°N that he named "San Bartolome".Sharp, pp. 11–3
On September 21, 1529, Álvaro de Saavedra Cerón commanded the Spanish ship Florida, on his second attempt to recross the Pacific from the Maluku Islands. He stood off a group of islands from which local inhabitants hurled stones at his ship. These islands, which he named "Los Pintados", may have been Ujelang. On October 1, he found another group of islands where he went ashore for eight days, exchanged gifts with the local inhabitants and took on water. These islands, which he named "Los Jardines", may have been Enewetak or Bikini Atoll.Wright 1951: 109–10Sharp, pp. 19–23
The Spanish ship San Pedro and two other vessels in an expedition commanded by Miguel López de Legazpi discovered an island on January 9, 1565, possibly Mejit, at 10°N, which they named "Los Barbudos". The Spaniards went ashore and traded with the local inhabitants. On January 10, the Spaniards sighted another island that they named "Placeres", perhaps Ailuk; ten leagues away, they sighted another island that they called "Pajares" (perhaps Jemo). On January 12, they sighted another island at 10°N that they called "Corrales" (possibly Wotho). On January 15, the Spaniards sighted another low island, perhaps Ujelang, at 10°N, where they described people resembling those of "Los Barbudos".Filipiniana Book Guild 1965: 46–8, 91, 240Sharp, pp. 36–9 After that, ships including the San Jeronimo, Los Reyes and Todos los Santos also visited the islands in different years.
The islanders had no immunity to European diseases and many died as a result of contact with the Spanish.
Other European contact
Captain John Charles Marshall and Thomas Gilbert visited the islands in 1788. The islands were named for Marshall on Western charts, although the natives have historically named their home "jolet jen Anij" (Gifts from God). Around 1820, Russian explorer Adam Johann von Krusenstern and the French explorer Louis Isidore Duperrey named the islands after John Marshall, and drew maps of the islands. The designation was repeated later on British maps. In 1824 the crew of the American whaler Globe mutinied, and some of the crew put ashore on Mulgrave Island. One year later, the American schooner Dolphin arrived and picked up two boys, the last survivors of a massacre by the islanders provoked by the mutineers' brutal treatment of the local women.
A number of vessels visiting the islands were attacked and their crews killed. In 1834, Captain DonSette and his crew were killed. Similarly, in 1845 the schooner Naiad punished a native for stealing with such violence that the natives attacked the ship. Later that year a whaler's boat crew were killed. In 1852 the San Francisco-based ships Glencoe and Sea Nymph were attacked and everyone aboard except for one crew member was killed. The violence was usually attributed to harsh treatment of the natives in retaliation for petty theft, which was a common practice. In 1857, two missionaries successfully settled on Ebon, living among the natives through at least 1870.
The international community in 1874 recognized the Spanish Empire's claim of sovereignty over the islands as part of the Spanish East Indies.
German protectorate
Although the Spanish Empire had a residual claim on the Marshalls in 1874, when she began asserting her sovereignty over the Carolines, she made no effort to prevent the German Empire from gaining a foothold there. Britain also raised no objection to a German protectorate over the Marshalls in exchange for German recognition of Britain's rights in the Gilbert and Ellice Islands.Hezel, Francis X. The First Taint of Civilization: A History of the Caroline and Marshall Islands in Pre-colonial Days, 1521–1885 University of Hawaii Press, 1994. pp. 304–06. On October 13, 1885, the gunboat under Captain Fritz Rötger brought German emissaries to Jaluit. They signed a treaty with Kabua, whom the Germans had earlier recognized as "King of the Ralik Islands," on October 15.
Subsequently, seven other chiefs on seven other islands signed a treaty in German and Marshallese and a final copy witnessed by Rötger on November 1 was sent to the German Foreign Office.Dirk H. R. Spennemann, Marshall Islands History Sources No. 18: Treaty of friendship between the Marshallese chiefs and the German Empire (1885). marshall.csu.edu.au The Germans erected a sign declaring a "Imperial German Protectorate" at Jaluit. It has been speculated that the crisis over the Carolines with Spain, which almost provoked a war, was in fact "a feint to cover the acquisition of the Marshall Islands", which went almost unnoticed at the time, despite the islands being the largest source of copra in Micronesia.Hezel, Francis X. (2003) Strangers in Their Own Land: A Century of Colonial Rule in the Caroline and Marshall Islands, University of Hawaii Press, pp. 45–46, ISBN 0824828046. Spain sold the islands to Germany in 1884 through papal mediation.
A German trading company, the Jaluit Gesellschaft, administered the islands from 1887 until 1905. They conscripted the islanders as laborers. After the German–Spanish Treaty of 1899, in which Germany acquired the Carolines, Palau, and the Marianas from Spain, Germany placed all of its Micronesian islands, including the Marshalls, under the governor of German New Guinea.
Catholic missionary Father A. Erdland, from the Missionaries of the Sacred Heart based in Hiltrup, Germany, lived on Jaluit from around 1904 to 1914. He was very interested in the islands and conducted considerable research on the Marshallese culture and language. He published a 376-page monograph on the islands in 1914. Father H. Linckens, another Missionary of the Sacred Heart visited the Marshall Islands in 1904 and 1911 for several weeks. He published a small work in 1912 about the Catholic mission activities and the people of the Marshall Islands.
Japanese mandate
Japanese traders and fishermen visited the Marshall Islands from time to time under German control, and even before then, although contact with the islanders was irregular. After the Meiji Restoration (1868), the Japanese government adopted a policy of turning the Japanese Empire into a great economic and military power in East Asia.
In 1914, Japan joined the Entente during World War I and captured various German Empire colonies, including several in Micronesia. On September 29, 1914, Japanese troops occupied the Enewetak Atoll, and on September 30, 1914, the Jaluit Atoll, the administrative centre of the Marshall Islands. After the war, on June 28, 1919, Germany signed (under protest) the Treaty of Versailles. It renounced all of its Pacific possessions, including the Marshall Islands.Full text (German), Artikel 119 On December 17, 1920, the Council of the League of Nations approved the South Pacific Mandate for Japan to take over all former German colonies in the Pacific Ocean located north of the Equator. The Administrative Centre of the Marshall Islands archipelago remained Jaluit.
The German Empire had primarily economic interests in Micronesia. The Japanese interests were in land. Despite the Marshalls' small area and few resources, the absorption of the territory by Japan would to some extent alleviate Japan's problem of an increasing population with a diminishing amount of available land to house it. During its years of colonial rule, Japan moved more than 1,000 Japanese to the Marshall Islands although they never outnumbered the indigenous peoples as they did in the Mariana Islands and Palau.
The Japanese enlarged administration and appointed local leaders, which weakened the authority of local traditional leaders. Japan also tried to change the social organization in the islands from matrilineality to the Japanese patriarchal system, but with no success. Moreover, during the 1930s, one third of all land up to the high water level was declared the property of the Japanese government. Before Japan banned foreign traders on the archipelago, the activities of Catholic and Protestant missionaries were allowed.
Indigenous people were educated in Japanese schools, and studied the Japanese language and Japanese culture. This policy was the government strategy not only in the Marshall Islands, but on all the other mandated territories in Micronesia. On March 27, 1933, Japan gave notice of withdrawal from the League of Nations;League of Nations chronology, United Nations. under the League's rules (article 1, section 3), the withdrawal became effective exactly two years later. Japan nevertheless continued to manage the islands, and in the late 1930s began building air bases on several atolls. The Marshall Islands were in an important geographic position, being the easternmost point in Japan's defensive ring at the beginning of World War II.
World War II
thumb|right|US troops inspecting an enemy bunker, Kwajalein Atoll. 1944.
In the months before the attack on Pearl Harbor, Kwajalein Atoll was the administrative center of the Japanese 6th Fleet Forces Service, whose task was the defense of the Marshall Islands.
In World War II, the United States, during the Gilbert and Marshall Islands campaign, invaded and occupied the islands in 1944, destroying or isolating the Japanese garrisons. In just one month in 1944, Americans captured Kwajalein Atoll, Majuro and Enewetak, and, in the next two months, the rest of the Marshall Islands, except for Wotje, Mili, Maloelap and Jaluit.
The battle in the Marshall Islands caused irreparable damage, especially on Japanese bases. During the American bombing, the islands' population suffered from lack of food and various injuries. U.S. attacks started in mid-1943, and caused half the Japanese garrison of 5,100 people in the Mili Atoll to die from hunger by August 1945.
thumb|Shipping Lane Patrol Kwajalein Island (Marshall Islands-April 1945)
Trust Territory of the Pacific Islands
Following capture and occupation by the United States during World War II, the Marshall Islands, along with several other island groups located in Micronesia, passed formally to the United States under United Nations auspices in 1947 as part of the Trust Territory of the Pacific Islands established pursuant to Security Council Resolution 21.
Nuclear testing during the Cold War
thumb|Mushroom cloud from the largest atmospheric nuclear test the United States ever conducted, Castle Bravo.
From 1946 to 1958, the early years of the Cold War, the United States tested 67 nuclear weapons at its Pacific Proving Grounds located in the Marshall Islands,"Nuclear Weapons Test Map", Public Broadcasting Service including the largest atmospheric nuclear test ever conducted by the U.S., code named Castle Bravo. "The bombs had a total yield of 108,496 kilotons, over 7,200 times more powerful than the atomic weapons used during World War II." With the 1952 test of the first U.S. hydrogen bomb, code named "Ivy Mike," the island of Elugelab in the Enewetak atoll was destroyed. In 1956, the United States Atomic Energy Commission regarded the Marshall Islands as "by far the most contaminated place in the world."Stephanie Cooke (2009). In Mortal Hands: A Cautionary History of the Nuclear Age, Black Inc., p. 168, ISBN 978-1-59691-617-3.
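The "over 7,200 times" comparison above can be checked with simple arithmetic; the following is a minimal sketch, assuming a reference yield of roughly 15 kilotons for the Hiroshima bomb, a figure not stated in the text.

```python
# A minimal sketch checking the "over 7,200 times" comparison quoted above.
# The ~15 kiloton reference yield for the WWII atomic bomb is an assumption,
# not a figure given in the text.
total_yield_kt = 108_496          # combined yield of the 67 tests, in kilotons
reference_yield_kt = 15           # assumed approximate yield of a WWII atomic bomb

ratio = total_yield_kt / reference_yield_kt
print(f"Combined yield is about {ratio:,.0f} times the assumed WWII bomb yield")
# prints: Combined yield is about 7,233 times the assumed WWII bomb yield
```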
Nuclear claims between the U.S. and the Marshall Islands are ongoing, and health effects from these nuclear tests linger. Project 4.1 was a medical study conducted by the United States of those residents of the Bikini Atoll exposed to radioactive fallout. From 1956 to August 1998, at least $759 million was paid to the Marshallese Islanders in compensation for their exposure to U.S. nuclear weapon testing.
Independence
In 1979, the Government of the Marshall Islands was officially established and the country became self-governing.
In 1986, the Compact of Free Association with the United States entered into force, granting the Republic of the Marshall Islands (RMI) its sovereignty. The Compact provided for aid and U.S. defense of the islands in exchange for continued U.S. military use of the missile testing range at Kwajalein Atoll. The independence procedure was formally completed under international law in 1990, when the UN officially ended the Trusteeship status pursuant to Security Council Resolution 683.
Climate change
In 2008, extreme waves and high tides caused widespread flooding in the capital city of Majuro and other urban centres, which lie barely above sea level. On Christmas morning in 2008, the government declared a state of emergency."Marshall atolls declare emergency ", BBC News, December 25, 2008. In 2013, heavy waves once again breached the city walls of Majuro.
In 2013, the northern atolls of the Marshall Islands experienced drought. The drought left 6,000 people surviving on severely limited daily water supplies. This resulted in the failure of food crops and the spread of diseases such as diarrhea, pink eye, and influenza. These emergencies resulted in the United States President declaring an emergency in the islands. This declaration activated support from US government agencies under the Republic's "free association" status with the United States, which provides humanitarian and other vital support.President Obama Signs a Disaster Declaration for the Republic of the Marshall Islands | The White House. Whitehouse.gov (June 14, 2013). Retrieved on September 11, 2013.
Following the 2013 emergencies, the Minister of Foreign Affairs Tony deBrum was encouraged by the Obama administration in the United States to turn the crises into an opportunity to promote action against climate change. DeBrum demanded new commitment and international leadership to stave off further climate disasters from battering his country and other similarly vulnerable countries. In September 2013, the Marshall Islands hosted the 44th Pacific Islands Forum summit. DeBrum proposed a Majuro Declaration for Climate Leadership to galvanize concrete action on climate change.NEWS: Marshall Islands call for "New wave of climate leadership" at upcoming Pacific Islands Forum Climate & Development Knowledge Network. Downloaded July 31, 2013.
Government
thumb|The Marshall Islands Capitol building
The government of the Marshall Islands operates under a mixed parliamentary-presidential system as set forth in its Constitution. Elections are held every four years under universal suffrage (for all citizens above 18), with each of the twenty-four constituencies (see below) electing one or more representatives (senators) to the lower house of RMI's unicameral legislature, the Nitijela. (Majuro, the capital atoll, elects five senators.) The President, who is head of state as well as head of government, is elected by the 33 senators of the Nitijela. Four of the five Marshallese presidents who have been elected since the Constitution was adopted in 1979 have been traditional paramount chiefs.
Legislative power lies with the Nitijela. The upper house of Parliament, called the Council of Iroij, is an advisory body comprising twelve tribal chiefs. The executive branch consists of the President and the Presidential Cabinet, which consists of ten ministers appointed by the President with the approval of the Nitijela. The twenty-four electoral districts into which the country is divided correspond to the inhabited islands and atolls. There are currently four political parties in the Marshall Islands: Aelon̄ Kein Ad (AKA), United People's Party (UPP), Kien Eo Am (KEA) and United Democratic Party (UDP). Rule is shared by the AKA and the UDP. The following senators are in the legislative body:
Ailinglaplap Atoll – Christopher Loeak (AKA), Ruben R. Zackhras (UDP)
Ailuk Atoll – Maynard Alfred (UDP)
Arno Atoll – Nidel Lorak (UDP), Jiba B. Kabua (AKA)
Aur Atoll – Hilda C. Heine (AKA)
Ebon Atoll – John M. Silk (UDP)
Enewetak Atoll – Jack J. Ading (KEA)
Jabat Island – Kessai H. Note (UDP)
Jaluit Atoll – Rien J. Morris (UDP), Alvin T. Jacklick (KEA)
Kili Island – Vice Speaker Tomaki Juda (UDP)
Kwajalein Atoll – Michael Kabua (AKA), Tony A. deBrum (AKA), Jeban Riklon (AKA)
Lae Atoll – Thomas Heine (AKA)
Lib Island – Jerakoj Jerry Bejang (AKA)
Likiep Atoll – Speaker Donald F. Capelle (UDP)
Majuro Atoll – Phillip H. Muller (AKA), David Kramer (KEA), Brenson S. Wase (KEA), Anthony Muller (KEA), Jurelang Zedkaia (KEA)
Maloelap Atoll – Michael Konelios (UDP)
Mejit Island – Dennis Momotaro (AKA)
Mili Atoll – Wilbur Heine (AKA)
Namdrik Atoll – Mattlan Zackhras (UDP)
Namu Atoll – Tony Aiseia (AKA)
Rongelap Atoll – Kenneth A. Kedi (IND)
Ujae Atoll – Caious Lucky (AKA)
Utirik Atoll – Hiroshi V. Yamamura (AKA)
Wotho Atoll – David Kabua (AKA)
Wotje Atoll – Litokwa Tomeing (UPP)
Foreign affairs and defense
The Compact of Free Association with the United States gives the U.S. sole responsibility for international defense of the Marshall Islands. It allows islanders to live and work in the United States and establishes economic and technical aid programs.
The Marshall Islands was admitted to the United Nations based on the Security Council's recommendation on August 9, 1991, in Resolution 704 and the General Assembly's approval on September 17, 1991, in Resolution 46/3.United Nations General Assembly Resolution 46/3, Admission of the Republic of the Marshall Islands to Membership in the United Nations, adopted 17 September 1991. In international politics within the United Nations, the Marshall Islands has often voted consistently with the United States with respect to General Assembly resolutions.General Assembly – Overall Votes – Comparison with U.S. vote lists the Marshall Islands as the country with the second highest incidence of votes. Micronesia has always been in the top two.
On 28 April 2015, the Iranian navy seized the Marshall Islands-flagged MV Maersk Tigris near the Strait of Hormuz. The ship had been chartered by Germany's Rickmers Ship Management, which stated that the ship contained no special cargo and no military weapons. According to the Pentagon, the ship was taken under the control of the Iranian Revolutionary Guard. Tensions escalated in the region due to the intensifying of Saudi-led coalition attacks in Yemen. The Pentagon reported that the destroyer USS Farragut and a maritime reconnaissance aircraft were dispatched upon receiving a distress call from the Tigris, and that all 34 crew members were detained. US defense officials said that they would review U.S. defense obligations to the Government of the Marshall Islands in the wake of these events, and also condemned the shots fired at the bridge as "inappropriate". It was reported in May 2015 that Tehran would release the ship after it paid a penalty.
Geography
right|thumb|Map of the Marshall Islands
thumb|right|Aerial view of Majuro, one of the many atolls that makes up the Marshall Islands
thumb|Beach scenery at Laura, Majuro.
The Marshall Islands sit atop ancient submerged volcanoes rising from the ocean floor, about halfway between Hawaii and Australia, north of Nauru and Kiribati, east of the Federated States of Micronesia, and south of the U.S. territory of Wake Island, to which it lays claim. The atolls and islands form two groups: the Ratak (sunrise) and the Ralik (sunset). The two island chains lie approximately parallel to one another, running northwest to southeast across a vast expanse of ocean, yet they comprise only a small total land mass. Each includes 15 to 18 islands and atolls. The country consists of a total of 29 atolls and five isolated islands situated in about 180,000 square miles of the Pacific. The largest atoll by land area is Kwajalein, which surrounds a 655-square-mile lagoon.
Twenty-four of the atolls and islands are inhabited. The remaining atolls and islands are uninhabited due to poor living conditions, lack of rain, or nuclear contamination. The uninhabited atolls are:
Ailinginae Atoll
Bikar (Bikaar) Atoll
Bikini Atoll
Bokak Atoll
Erikub Atoll
Jemo Island
Nadikdik Atoll
Rongerik Atoll
Toke Atoll
Ujelang Atoll
The entire country lies at a very low average altitude above sea level.
Shark sanctuary
In October 2011, the government declared a vast portion of its surrounding ocean a shark sanctuary. This is the world's largest shark sanctuary, and it substantially extends the worldwide ocean area in which sharks are protected. In protected waters, all shark fishing is banned and all by-catch must be released. However, some have questioned the ability of the Marshall Islands to enforce this zone.
Territorial claim on Wake Island
The Marshall Islands also lays claim to Wake Island. While Wake has been administered by the United States since 1899, the Marshallese government refers to it by the name Enen-kio.
Climate
thumb|left|150px|Average monthly temperatures (red) and precipitation (blue) on Majuro.
The climate has a dry season from December to April and a wet season from May to November. Many Pacific typhoons begin as tropical storms in the Marshall Islands region, and grow stronger as they move west toward the Mariana Islands and the Philippines.
Due to its very low elevation, the Marshall Islands are threatened by the potential effects of sea level rise. According to the president of Nauru, the Marshall Islands are the most endangered nation in the world due to flooding from climate change.
Population has outstripped the supply of freshwater, usually from rainfall. The northern atolls get of rainfall annually; the southern atolls about twice that. The threat of drought is commonplace throughout the island chains.
Culture
Image: Marshallese fans
Although the ancient skills are now in decline, the Marshallese were once able navigators, using the stars and stick-and-shell charts.
Economy
Image: Graphical depiction of the Marshall Islands' product exports in 28 colour-coded categories.
The islands have few natural resources, and their imports far exceed exports.
Labour
In 2007, the Marshall Islands joined the International Labour Organization, which means its labour laws will comply with international benchmarks. This may impact business conditions in the islands.
Taxation
The income tax has two brackets, with rates of 8% and 12%. The corporate tax is 3% of revenue.
Foreign assistance
United States government assistance is the mainstay of the economy. Under terms of the Amended Compact of Free Association, the U.S. is committed to provide US$57.7 million per year in assistance to the Marshall Islands (RMI) through 2013, and then US$62.7 million through 2023, at which time a trust fund, made up of U.S. and RMI contributions, will begin perpetual annual payouts.
The United States Army maintains the Ronald Reagan Ballistic Missile Defense Test Site on Kwajalein Atoll. Marshallese land owners receive rent for the base.
Agriculture
Agricultural production is concentrated on small farms. The most important commercial crops are coconuts, tomatoes, melons, and breadfruit.
Industry
Small-scale industry is limited to handicrafts, fish processing, and copra.
Fishing
Fishing has been critical to the economy of this island nation since its settlement.
In 1999, a private company built a tuna loining plant with more than 400 employees, mostly women. The plant closed in 2005 after a failed attempt to convert it to produce tuna steaks, a process that requires half as many employees. Operating costs exceeded revenue, and the plant's owners tried to partner with the government to prevent closure, but government officials personally interested in an economic stake in the plant refused to help. After the plant closed, it was taken over by the government, which had been the guarantor of a $2 million loan to the business.
Energy
On September 15, 2007, Witon Barry (of the Tobolar Copra processing plant in the Marshall Islands capital of Majuro) said power authorities, private companies, and entrepreneurs had been experimenting with coconut oil as an alternative to diesel fuel for vehicles, power generators, and ships. Coconut trees abound in the Pacific's tropical islands. Copra, the meat of the coconut, yields coconut oil (1 liter for every 6 to 10 coconuts). In 2009, a 57 kW solar power plant was installed, the largest in the Pacific at the time, including New Zealand.College of the Marshall Islands (PDF). reidtechnology.co.nz. June 2009 It is estimated that 330 kW of solar and 450 kW of wind power would be required to make the College of the Marshall Islands energy self-sufficient.College of the Marshall Islands: Reiher Returns from Japan Solar Training Program with New Ideas. Yokwe.net. Retrieved on September 11, 2013. Marshalls Energy Company (MEC), a government entity, provides the islands with electricity. In 2008, 420 solar home systems of 200 Wp each were installed on Ailinglaplap Atoll, sufficient for limited electricity use.
Demographics
Historical population figures are unknown. In 1862, the population was estimated at about 10,000. In 1960, the entire population was about 15,000. In the 2011 Census, the number of island residents was 53,158. Over two-thirds of the population live in the capital, Majuro, and in Ebeye, the secondary urban center, located on Kwajalein Atoll. This excludes many who have relocated elsewhere, primarily to the United States. The Compact of Free Association allows them to freely relocate to the United States and obtain work there. About 4,300 Marshall Islanders have relocated to Springdale, Arkansas, the largest population concentration of natives outside their island home.
Most of the residents are Marshallese, who are of Micronesian origin and migrated from Asia several thousand years ago. A minority of Marshallese have some recent Asian ancestry, mainly Japanese. About one-half of the nation's population lives on Majuro, the capital, and Ebeye, a densely populated island.David Vine (January 7, 2004) Exile in the Indian Ocean: Documenting the Injuries of Involuntary Displacement. Ralph Bunche Institute for International Studies. Web.gc.cuny.edu. Retrieved on September 11, 2013. The outer islands are sparsely populated due to lack of employment opportunities and economic development. Life on the outer atolls is generally traditional.
The official language of the Marshall Islands is Marshallese, but English is also widely spoken.
Religion
Major religious groups in the Republic of the Marshall Islands include the United Church of Christ – Congregational in the Marshall Islands, with 51.5% of the population; the Assemblies of God, 24.2%; the Roman Catholic Church, 8.4%;International Religious Freedom Report 2009: Marshall Islands. United States Bureau of Democracy, Human Rights and Labor (September 14, 2007). This article incorporates text from this source, which is in the public domain. and The Church of Jesus Christ of Latter-day Saints (Mormons), 8.3%. Also represented are Bukot Nan Jesus (also known as Assembly of God Part Two), 2.2%; Baptist, 1.0%; Seventh-day Adventists, 0.9%; Full Gospel, 0.7%; and the Baha'i Faith, 0.6%. Persons without any religious affiliation account for a very small percentage of the population. There is also a small community of Ahmadiyya Muslims based in Majuro, with the first mosque opening in the capital in September 2012.First Mosque opens up in Marshall Islands by Radio New Zealand International, September 21, 2012
Health
A 2007–2008 lifestyle intervention study demonstrated that strict plant-based nutritional interventions with monitored daily exercise could significantly counter behaviorally related diabetes and pre-diabetes.Davis, B. Defeating Diabetes: Lessons From the Marshall Islands, Today's Dietitian, August 2008, Vol. 10 No. 8 P. 24. Accessed online 10/03/2016
Education
The Ministry of Education (Marshall Islands) operates the state schools in the Marshall Islands.Education. Office of the President, Republic of the Marshall Islands. rmigovernment.org. Retrieved on May 25, 2012. There are two tertiary institutions operating in the Marshall Islands, the College of the Marshall IslandsCollege of the Marshall Islands (CMI). Cmi.edu. Retrieved on September 11, 2013. and the University of the South Pacific.
Transportation
The Marshall Islands are served by the Marshall Islands International Airport in Majuro, the Bucholz Army Airfield in Kwajalein, and other small airports and airstrips.
In 2005, Aloha Airlines canceled its flight services to the Marshall Islands.
Media
The Marshall Islands have several AM and FM radio stations.
AM: V7AB 1098 • 1557
FM: V7AB 97.9 • V7AA 104.1 (formerly 96.3)
AFRTS: AM 1224 (NPR) • 99.9 (Country) • 101.1 (Active Rock) • 102.1 (Hot AC)
See also
Outline of the Marshall Islands
Index of Marshall Islands-related articles
Visa policy of the Marshall Islands
List of island countries
The Plutonium Files
Notes
References
Bibliography
Further reading
Barker, H. M. (2004). Bravo for the Marshallese: Regaining Control in a Post-nuclear, Post-colonial World. Belmont, California: Thomson/Wadsworth.
Carucci, L. M. (1997). Nuclear Nativity: Rituals of Renewal and Empowerment in the Marshall Islands. DeKalb: Northern Illinois University Press.
Hein, J. R., F. L. Wong, and D. L. Mosier (2007). Bathymetry of the Republic of the Marshall Islands and Vicinity. Miscellaneous Field Studies; Map-MF-2324. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey.
Niedenthal, J. (2001). For the Good of Mankind: A History of the People of Bikini and Their Islands. Majuro, Marshall Islands: Bravo Publishers.
Rudiak-Gould, P. (2009). Surviving Paradise: One Year on a Disappearing Island. New York: Union Square Press.
Woodard, Colin (2000). Ocean's End: Travels Through Endangered Seas. New York: Basic Books. (Contains extended account of sea-level rise threat and the legacy of U.S. Atomic testing.)
External links
Government
Embassy of the Republic of the Marshall Islands Washington, DC official government site
Chief of State and Cabinet Members
General information
Country Profile from New Internationalist
Marshall Islands from UCB Libraries GovPubs
Marshall Islands from the BBC News
News media
Marshall Islands Journal Weekly independent national newspaper
Other
Digital Micronesia – Marshalls by Dirk HR Spennemann, Associate Professor in Cultural Heritage Management
Plants & Environments of the Marshall Islands Book turned website by Dr. Mark Merlin of the University of Hawaii
Atomic Testing Information
Pictures of victims of U.S. nuclear testing in the Marshall Islands on Nuclear Files.org
"Kenner hearing: Marshall Islands-flagged rig in Gulf oil spill was reviewed in February"
NOAA's National Weather Service – Marshall Islands
Canoes of the Marshall Islands
Alele Museum – Museum of the Marshall Islands
WUTMI – Women United Together Marshall Islands
Category:Archipelagoes of the Pacific Ocean
Category:Associated states of the United States
Category:Countries in Micronesia
Category:English-speaking countries and territories
Category:Former German colonies
Category:Former Japanese colonies
Category:Island countries
Category:Liberal democracies
Category:Member states of the United Nations
Category:Republics
Category:Small Island Developing States
Category:States and territories established in 1986
Category:1986 establishments in Oceania
Category:World War II sites
Hyderabad | Hyderabad is the capital of the southern Indian state of Telangana and de jure capital of Andhra Pradesh. Occupying along the banks of the Musi River, it has a population of about 6.8 million and a metropolitan population of about 7.7 million, making it the fourth most populous city and sixth most populous urban agglomeration in India. At an average altitude of , much of Hyderabad is situated on hilly terrain around artificial lakes, including Hussain Sagar—predating the city's founding—north of the city centre.
Established in 1591 by Muhammad Quli Qutb Shah, Hyderabad remained under the rule of the Qutb Shahi dynasty for nearly a century before the Mughals captured the region. In 1724, Mughal viceroy Asif Jah I declared his sovereignty and created his own dynasty, known as the Nizams of Hyderabad. The Nizam's dominions became a princely state during the British Raj, and remained so for , with the city serving as its capital. The city continued as the capital of Hyderabad State after it was brought into the Indian Union in 1948, and became the capital of Andhra Pradesh after the States Reorganisation Act, 1956. Since 1956, Rashtrapati Nilayam in the city has been the winter office of the President of India. In 2014, the newly formed state of Telangana split from Andhra Pradesh and the city became joint capital of the two states, a transitional arrangement scheduled to end by 2025.
Relics of Qutb Shahi and Nizam rule remain visible today; the Charminar—commissioned by Muhammad Quli Qutb Shah—has come to symbolise Hyderabad. Golconda fort is another major landmark. The influence of Mughlai culture is also evident in the region's distinctive cuisine, which includes Hyderabadi biryani and Hyderabadi haleem. The Qutb Shahis and Nizams established Hyderabad as a cultural hub, attracting men of letters from different parts of the world. Hyderabad emerged as the foremost centre of culture in India with the decline of the Mughal Empire in the mid-19th century, with artists migrating to the city from the rest of the Indian subcontinent. The Telugu film industry based in the city is the country's second-largest producer of motion pictures.
Hyderabad was historically known as a pearl and diamond trading centre, and it continues to be known as the City of Pearls. Many of the city's traditional bazaars remain open, including Laad Bazaar, Begum Bazaar and Sultan Bazaar. Industrialisation throughout the 20th century attracted major Indian manufacturing, research and financial institutions, including Bharat Heavy Electricals Limited, the National Geophysical Research Institute and the Centre for Cellular and Molecular Biology. Special economic zones dedicated to information technology have encouraged companies from India and around the world to set up operations in Hyderabad. The emergence of pharmaceutical and biotechnology industries in the 1990s led to the area's naming as India's "Genome Valley". With an output of US$74 billion, Hyderabad is the fifth-largest contributor to India's overall gross domestic product.
History
Toponymy
According to John Everett-Heath, the author of Oxford Concise Dictionary of World Place Names, Hyderabad means "Haydar's city" or "lion city", from haydar (lion) and ābād (city). It was named to honour the Caliph Ali Ibn Abi Talib, who was also known as Haydar because of his lion-like valour in battles. Andrew Petersen, a scholar of Islamic architecture, says the city was originally called Baghnagar (city of gardens). One popular theory suggests that Muhammad Quli Qutb Shah, the founder of the city, named it "Bhagyanagar" or "Bhāgnagar" after Bhagmati, a local nautch (dancing) girl with whom he had fallen in love. She converted to Islam and adopted the title Hyder Mahal. The city was renamed Hyderabad in her honour.
According to German traveller Heinrich von Poser, whose travelogue of the Deccan was translated by Gita Dharampal-Frick of Heidelberg University, there were two names for the city: "On 3 December 1622, we reached the city of Bagneger or Hederabat, the seat of the king Sultan Mehemet Culi Cuttub Shah and the capital of the kingdom". French traveller Jean de Thévenot, who visited the Deccan region in 1666–1667, refers to the city in his book Travels in India as "Bagnagar and Aiderabad".
Early and medieval history
Archaeologists excavating near the city have unearthed Iron Age sites that may date from 500 BCE. The region comprising modern Hyderabad and its surroundings was known as Golkonda (Golla Konda-"shepherd's hill"), and was ruled by the Chalukya dynasty from 624 CE to 1075 CE. Following the dissolution of the Chalukya empire into four parts in the 11th century, Golkonda came under the control of the Kakatiya dynasty from 1158, whose seat of power was at Warangal, northeast of modern Hyderabad.
Image: The Qutb Shahi Tombs at Ibrahim Bagh are the tombs of the seven Qutb Shahi rulers.
The Kakatiya dynasty was reduced to a vassal of the Khilji dynasty in 1310 after its defeat by Sultan Alauddin Khilji of the Delhi Sultanate. This lasted until 1321, when the Kakatiya dynasty was annexed by Malik Kafur, Alauddin Khilji's general. During this period, Alauddin Khilji took the Koh-i-Noor diamond, which is said to have been mined from the Kollur Mines of Golkonda, to Delhi. Muhammad bin Tughluq succeeded to the Delhi Sultanate in 1325, bringing Warangal under the rule of the Tughlaq dynasty until 1347 when Ala-ud-Din Bahman Shah, a governor under bin Tughluq, rebelled against Delhi and established the Bahmani Sultanate in the Deccan Plateau, with Gulbarga, west of Hyderabad, as its capital. The Hyderabad area was under the control of the Musunuri Nayaks at this time, who, however, were forced to cede it to the Bahmani Sultanate in 1364. The Bahmani kings ruled the region until 1518 and were the first independent Muslim rulers of the Deccan.
Sultan Quli, a governor of Golkonda, revolted against the Bahmani Sultanate and established the Qutb Shahi dynasty in 1518; he rebuilt the mud-fort of Golconda and named the city "Muhammad nagar". The fifth sultan, Muhammad Quli Qutb Shah, established Hyderabad on the banks of the Musi River in 1591, to avoid the water shortages experienced at Golkonda. During his rule, he had the Charminar and Mecca Masjid built in the city. On 21 September 1687, the Golkonda Sultanate came under the rule of the Mughal emperor Aurangzeb after a year-long siege of the Golkonda fort. The annexed area was renamed Deccan Suba (Deccan province) and the capital was moved from Golkonda to Aurangabad, about northwest of Hyderabad.
Modern history
Image: A mill with a canal connecting to Hussain Sagar lake. Following the introduction of railways in the 1880s, factories were built around the lake.
In 1714 Farrukhsiyar, the Mughal emperor, appointed Asif Jah I to be Viceroy of the Deccan, with the title Nizam-ul-Mulk (Administrator of the Realm). In 1724, Asif Jah I defeated Mubariz Khan to establish autonomy over the Deccan Suba, named the region Hyderabad Deccan, and started what came to be known as the Asif Jahi dynasty. Subsequent rulers retained the title Nizam ul-Mulk and were referred to as Asif Jahi Nizams, or Nizams of Hyderabad. The death of Asif Jah I in 1748 resulted in a period of political unrest as his sons, backed by opportunistic neighbouring states and colonial foreign forces, contended for the throne. The accession of Asif Jah II, who reigned from 1762 to 1803, ended the instability. In 1768 he signed the treaty of Masulipatnam, surrendering the coastal region to the East India Company in return for a fixed annual rent.
In 1769 Hyderabad city became the formal capital of the Nizams. In response to regular threats from Hyder Ali (Dalwai of Mysore), Baji Rao I (Peshwa of the Maratha Empire), and Basalath Jung (Asif Jah II's elder brother, who was supported by the Marquis de Bussy-Castelnau), the Nizam signed a subsidiary alliance with the East India Company in 1798, allowing the British Indian Army to occupy Bolarum (modern Secunderabad) to protect the state's capital, for which the Nizams paid an annual maintenance to the British.
Until 1874 there were no modern industries in Hyderabad. With the introduction of railways in the 1880s, four factories were built to the south and east of Hussain Sagar lake, and during the early 20th century, Hyderabad was transformed into a modern city with the establishment of transport services, underground drainage, running water, electricity, telecommunications, universities, industries, and Begumpet Airport. The Nizams ruled their princely state from Hyderabad during the British Raj.
After India gained independence, the Nizam declared his intention to remain independent rather than become part of the Indian Union. The Hyderabad State Congress, with the support of the Indian National Congress and the Communist Party of India, began agitating against Nizam VII in 1948. On 17 September that year, the Indian Army took control of Hyderabad State after an invasion codenamed Operation Polo. With the defeat of his forces, Nizam VII capitulated to the Indian Union by signing an Instrument of Accession, which made him the Rajpramukh (Princely Governor) of the state until 31 October 1956. Between 1946 and 1951, the Communist Party of India fomented the Telangana uprising against the feudal lords of the Telangana region. The Constitution of India, which became effective on 26 January 1950, made Hyderabad State one of the part B states of India, with Hyderabad city continuing to be the capital. In his 1955 report Thoughts on Linguistic States, B. R. Ambedkar, then chairman of the Drafting Committee of the Indian Constitution, proposed designating the city of Hyderabad as the second capital of India because of its amenities and strategic central location. Since 1956, the Rashtrapati Nilayam in Hyderabad has been the second official residence and business office of the President of India; the President stays once a year in winter and conducts official business particularly relating to Southern India.
On 1 November 1956 the states of India were reorganised by language. Hyderabad state was split into three parts, which were merged with neighbouring states to form the modern states of Maharashtra, Karnataka and Andhra Pradesh. The nine Telugu- and Urdu-speaking districts of Hyderabad State in the Telangana region were merged with the Telugu-speaking Andhra State to create Andhra Pradesh, with Hyderabad as its capital. Several protests, known collectively as the Telangana movement, attempted to invalidate the merger and demanded the creation of a new Telangana state. Major actions took place in 1969 and 1972, and a third began in 2010. The city suffered several explosions: one at Dilsukhnagar in 2002 claimed two lives; terrorist bombs in May and August 2007 caused communal tension and riots; and two bombs exploded in February 2013. On 30 July 2013 the government of India declared that part of Andhra Pradesh would be split off to form a new Telangana state, and that Hyderabad city would be the capital city and part of Telangana, while the city would also remain the capital of Andhra Pradesh for no more than ten years. On 3 October 2013 the Union Cabinet approved the proposal,
and in February 2014 both houses of Parliament passed the Telangana Bill. With the final assent of the President of India in June 2014, Telangana state was formed.
Geography
Image: Hussain Sagar lake, built during the reign of the Qutb Shahi dynasty, was once the source of drinking water for Hyderabad.
Situated in the southern part of Telangana in southeastern India, Hyderabad is south of Delhi, southeast of Mumbai, and north of Bangalore by road. It lies on the banks of the Musi River, in the northern part of the Deccan Plateau. Greater Hyderabad covers , making it one of the largest metropolitan areas in India. With an average altitude of , Hyderabad lies on predominantly sloping terrain of grey and pink granite, dotted with small hills, the highest being Banjara Hills at . The city has numerous lakes referred to as sagar, meaning "sea". Examples include artificial lakes created by dams on the Musi, such as Hussain Sagar (built in 1562 near the city centre), Osman Sagar and Himayat Sagar. As of 1996, the city had 140 lakes and 834 water tanks (ponds).
Climate
Hyderabad has a tropical wet and dry climate (Köppen Aw) bordering on a hot semi-arid climate (Köppen BSh). The annual mean temperature is ; monthly mean temperatures are . Summers (March–June) are hot and humid, with average highs in the mid-to-high 30s Celsius; maximum temperatures often exceed between April and June. The coolest temperatures occur in December and January, when the lowest temperature occasionally dips to . May is the hottest month, when daily temperatures range from 26 to 39 °C (79–102 °F); December, the coldest, has temperatures varying from 14.5 to 28 °C (57–82 °F).
Heavy rain from the south-west summer monsoon falls between June and September, supplying Hyderabad with most of its mean annual rainfall. Since records began in November 1891, the heaviest rainfall recorded in a 24-hour period was on 24 August 2000. The highest temperature ever recorded was on 2 June 1966, and the lowest was on 8 January 1946. The city receives 2,731 hours of sunshine per year; maximum daily sunlight exposure occurs in February.
Conservation
Image: Blackbucks grazing at Mahavir Harina Vanasthali National Park
Hyderabad's lakes and the sloping terrain of its low-lying hills provide habitat for an assortment of flora and fauna. As of 2016, the tree cover is 1.66% of total city area, a decrease from 2.71% in 1996. The forest region in and around the city encompasses areas of ecological and biological importance, which are preserved in the form of national parks, zoos, mini-zoos and a wildlife sanctuary. Nehru Zoological Park, the city's one large zoo, is the first in India to have a lion and tiger safari park. Hyderabad has three national parks (Mrugavani National Park, Mahavir Harina Vanasthali National Park and Kasu Brahmananda Reddy National Park), and the Manjira Wildlife Sanctuary is about from the city. Hyderabad's other environmental reserves are: Kotla Vijayabhaskara Reddy Botanical Gardens, Shamirpet Lake, Hussain Sagar, Fox Sagar Lake, Mir Alam Tank and Patancheru Lake, which is home to regional birds and attracts seasonal migratory birds from different parts of the world.
Organisations engaged in environmental and wildlife preservation include the Telangana Forest Department, Indian Council of Forestry Research and Education, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the Animal Welfare Board of India, the Blue Cross of Hyderabad and the University of Hyderabad.
Administration
Common capital status
Image: The Telangana and Andhra Pradesh legislatures are housed in the State Assembly Building.
According to the Andhra Pradesh Reorganisation Act, 2014 part 2 Section 5: "(1) On and from the appointed day, Hyderabad in the existing State of Andhra Pradesh, shall be the common capital of the State of Telangana and the State of Andhra Pradesh for such period not exceeding ten years. (2) After expiry of the period referred to in sub-section (1), Hyderabad shall be the capital of the State of Telangana and there shall be a new capital for the State of Andhra Pradesh."
The same sections also define that the common capital includes the existing area designated as the Greater Hyderabad Municipal Corporation under the Hyderabad Municipal Corporation Act, 1955. As stipulated in sections 3 and 18(1) of the Reorganisation Act, city MLAs are members of Telangana state assembly.
Local government
The Greater Hyderabad Municipal Corporation (GHMC) oversees the civic infrastructure of the city's 18 "circles", which together encompass 150 municipal wards. Each ward is represented by a corporator, elected by popular vote. The corporators elect the Mayor, who is the titular head of GHMC; executive powers rest with the Municipal Commissioner, appointed by the state government. The GHMC carries out the city's infrastructural work such as building and maintenance of roads and drains, town planning including construction regulation, maintenance of municipal markets and parks, solid waste management, the issuing of birth and death certificates, the issuing of trade licences, collection of property tax, and community welfare services such as mother and child healthcare, and pre-school and non-formal education. The GHMC was formed in April 2007 by merging the Municipal Corporation of Hyderabad (MCH) with 12 municipalities of the Hyderabad, Ranga Reddy and Medak districts covering a total area of . In the 2016 municipal election, the Telangana Rashtra Samithi formed the majority and the present Mayor is Bonthu Ram Mohan. The Secunderabad Cantonment Board is a civic administration agency overseeing an area of , where there are several military camps. The Osmania University campus is administered independently by the university authority.
Law and order in Hyderabad city is supervised by the governor of Telangana. The jurisdiction is divided into three police commissionerates: Hyderabad, Cyberabad, and Rachakonda. Each zone is headed by a deputy commissioner.
The jurisdictions of the city's administrative agencies are, in ascending order of size: the Hyderabad Police area, Hyderabad district, the GHMC area ("Hyderabad city") and the area under the Hyderabad Metropolitan Development Authority (HMDA). The HMDA is an apolitical urban planning agency that covers the GHMC and its suburbs, extending to 54 mandals in five districts encircling the city. It coordinates the development activities of GHMC and suburban municipalities and manages the administration of bodies such as the Hyderabad Metropolitan Water Supply and Sewerage Board (HMWSSB).
As the seat of the government of Telangana, Hyderabad is home to the state's legislature, secretariat and high court, as well as various local government agencies. The Lower City Civil Court and the Metropolitan Criminal Court are under the jurisdiction of the High Court. The GHMC area contains 24 State Legislative Assembly constituencies, which form five constituencies of the Lok Sabha (the lower house of the Parliament of India).
Utility services
Image: A GHMC sweeper cleaning the Tank Bund Road
The HMWSSB regulates rainwater harvesting, sewerage services and water supply, which is sourced from several dams located in the suburbs. In 2005, the HMWSSB started operating a water supply pipeline from Nagarjuna Sagar Dam to meet increasing demand. The Telangana Southern Power Distribution Company Limited manages electricity supply. As of October 2014, there were 15 fire stations in the city, operated by the Telangana State Disaster and Fire Response Department. The government-owned India Post has five head post offices and many sub-post offices in Hyderabad, which are complemented by private courier services.
Pollution control
Hyderabad produces around 4,500 tonnes of solid waste daily, which is transported from collection units in Imlibun, Yousufguda and Lower Tank Bund to the dumpsite in Jawaharnagar. Disposal is managed by the Integrated Solid Waste Management project which was started by the GHMC in 2010. Rapid urbanisation and increased economic activity has also led to increased industrial waste, air, noise and water pollution, which is regulated by the Telangana Pollution Control Board (TPCB). The contribution of different sources to air pollution in 2006 was: 20–50% from vehicles, 40–70% from a combination of vehicle discharge and road dust, 10–30% from industrial discharges and 3–10% from the burning of household rubbish. Deaths resulting from atmospheric particulate matter are estimated at 1,700–3,000 each year.
Ground water around Hyderabad, which has a hardness of up to 1000 ppm (around three times the desirable level), is the main source of drinking water, but the increasing population and the consequent rise in demand have led to declines not only in ground water but also in river and lake levels. This shortage is further exacerbated by inadequately treated effluent discharged from industrial treatment plants, which pollutes the city's water sources.
Healthcare
Image: The Nizamia Unani Hospital provides medical care using regular medicine along with Unani medicine
The Commissionerate of Health and Family Welfare is responsible for planning, implementation and monitoring of all facilities related to health and preventive services. In 2010–11, the city had 50 government hospitals, 300 private and charity hospitals and 194 nursing homes providing around 12,000 hospital beds, fewer than half the required 25,000. For every 10,000 people in the city, there are 17.6 hospital beds (a rate derived from the roughly 12,000 available beds and the census city population of 6,809,970), 9 specialist doctors, 14 nurses and 6 physicians. The city also has about 4,000 individual clinics and 500 medical diagnostic centres. Private clinics are preferred by many residents because of the distance to, poor quality of care at, and long waiting times in government facilities, despite the high proportion of the city's residents being covered by government health insurance: 24% according to a National Family Health Survey in 2005 (which surveyed Delhi, Meerut, Kolkata, Indore, Mumbai, Nagpur, Chennai and Hyderabad).
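As a quick arithmetic check of the quoted rate, using only the bed count and census population given above (an illustration, not an additional sourced figure):

$$\frac{12{,}000\ \text{beds}}{6{,}809{,}970\ \text{people}} \times 10{,}000 \approx 17.6\ \text{beds per 10,000 residents}$$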
Many new private hospitals of various sizes have been opened or are being built. Hyderabad also has outpatient and inpatient facilities that use Unani, homoeopathic and Ayurvedic treatments.
In the 2005 National Family Health Survey, it was reported that the city's total fertility rate is 1.8, which is below the replacement rate. Only 61% of children had been provided with all basic vaccines (BCG, measles and full courses of polio and DPT), fewer than in all other surveyed cities except Meerut. The infant mortality rate was 35 per 1,000 live births, and the mortality rate for children under five was 41 per 1,000 live births. The survey also reported that a third of women and a quarter of men are overweight or obese, 49% of children below 5 years are anaemic, and up to 20% of children are underweight, while more than 2% of women and 3% of men suffer from diabetes.
Demographics
When the GHMC was created in 2007, the area occupied by the municipality increased from to . Consequently, the population increased by 87%, from 3,637,483 in the 2001 census to 6,809,970 in the 2011 census, 24% of which are migrants from elsewhere in India, making Hyderabad the nation's fourth most populous city. , the population density is . At the same 2011 census, the Hyderabad Urban Agglomeration had a population of 7,749,334, making it the sixth most populous urban agglomeration in the country.
The population of the Hyderabad urban agglomeration has since been estimated by electoral officials to be 9.1 million as of early 2013 but is expected to exceed 10 million by the end of the year. There are 3,500,802 male and 3,309,168 female citizens—a sex ratio of 945 females per 1000 males, higher than the national average of 926 per 1000. Among children aged 0–6 years, 373,794 are boys and 352,022 are girls—a ratio of 942 per 1000. Literacy stands at 82.96% (male 85.96%; female 79.79%), higher than the national average of 74.04%. The socio-economic strata consist of 20% upper class, 50% middle class and 30% working class.
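For reference, the quoted ratios follow directly from the counts above (a simple check, not an additional census figure):

$$\frac{3{,}309{,}168}{3{,}500{,}802} \times 1000 \approx 945 \qquad\text{and}\qquad \frac{352{,}022}{373{,}794} \times 1000 \approx 942$$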
Language and religion
Referred to as "Hyderabadi", the residents of Hyderabad are predominantly Telugu and Urdu speaking people, with minority Bengali, Gujarati (including Memon), Kannada (including Nawayathi), Malayalam, Marathi, Marwari, Odia, Punjabi, Tamil and Uttar Pradeshi communities. Hyderabad is home to a unique dialect of Urdu called Hyderabadi Urdu, which is a type of Dakhini, and is the mother tongue of most Hyderabadi Muslims, a unique community who owe much of their history, language, cuisine, and culture to Hyderabad, and the various dynasties who previously ruled. Hadhrami Arabs, African Arabs, Armenians, Abyssinians, Iranians, Pathans and Turkish people are also present; these communities, of which the Hadhrami are the largest, declined after Hyderabad State became part of the Indian Union, as they lost the patronage of the Nizams.
Telugu and Urdu are both official languages of the city, and most Hyderabadis are bilingual. The Telugu dialect spoken in Hyderabad is called Telangana Mandalika, and the Urdu spoken is called Dakhini. English is also used. A significant minority speak other languages, including Hindi, Marathi, Odia, Tamil, Bengali and Kannada.
Hindus are in the majority. Muslims form a very large minority, and are present throughout the city and predominate in and around the Old City. There are also Christian, Sikh, Jain, Buddhist and Parsi communities and iconic temples, mosques and churches can be seen.
According to the 2011 census, in Greater Hyderabad (the extended city area governed by GHMC plus the outlying districts), religious make-up was: Hindus (64.93%), Muslims (30.13%), Christians (2.75%), Jains (0.29%), Sikhs (0.25%) and Buddhists (0.04%); 1.56% did not state any religion.
Slums
Image: Labourers in a rural area of Hyderabad
In the greater metropolitan area, 13% of the population live below the poverty line. According to a 2012 report submitted by GHMC to the World Bank, Hyderabad has 1,476 slums with a total population of 1.7 million, of whom 66% live in 985 slums in the "core" of the city (the part that formed Hyderabad before the April 2007 expansion) and the remaining 34% live in 491 suburban tenements. About 22% of the slum-dwelling households had migrated from different parts of India in the last decade of the 20th century, and 63% claimed to have lived in the slums for more than 10 years. Overall literacy in the slums is and female literacy is . A third of the slums have basic service connections, and the remainder depend on general public services provided by the government. There are 405 government schools, 267 government aided schools, 175 private schools and 528 community halls in the slum areas.
According to a 2008 survey by the Centre for Good Governance, 87.6% of the slum-dwelling households are nuclear families, 18% are very poor, with an income up to per annum, 73% live below the poverty line (a standard poverty line recognised by the Andhra Pradesh Government is per annum), 27% of the chief wage earners (CWE) are casual labour and 38% of the CWE are illiterate. About 3.72% of the slum children aged 5–14 do not go to school and 3.17% work as child labour, of whom 64% are boys and 36% are girls. The largest employers of child labour are street shops and construction sites. Among the working children, 35% are engaged in hazardous jobs.
Cityscape
Neighbourhoods
Image: Optimist and Laser dinghies during the Hyderabad Sailing Week Regatta at Hussain Sagar
The historic city established by Muhammad Quli Qutb Shah on the southern side of the Musi River forms the "Old City", while the "New City" encompasses the urbanised area on the northern banks. The two are connected by many bridges across the river, the oldest of which is Purana Pul ("old bridge"). Hyderabad is twinned with neighbouring Secunderabad, to which it is connected by Hussain Sagar.
Many historic and tourist sites lie in south central Hyderabad, such as the Charminar, the Mecca Masjid, the Salar Jung Museum, the Nizam's Museum, the Falaknuma Palace, and the traditional retail corridor comprising the Pearl Market, Laad Bazaar and Madina Circle. North of the river are hospitals, colleges, major railway stations and business areas such as Begum Bazaar, Koti, Abids, Sultan Bazaar and Moazzam Jahi Market, along with administrative and recreational establishments such as the Reserve Bank of India, the Telangana Secretariat, the India Government Mint, Hyderabad, the Telangana Legislature, the Public Gardens, the Nizam Club, the Ravindra Bharathi, the State Museum, the Birla Temple and the Birla Planetarium.
North of central Hyderabad lie Hussain Sagar, Tank Bund Road, Rani Gunj and the Secunderabad Railway Station. Most of the city's parks and recreational centres, such as Sanjeevaiah Park, Indira Park, Lumbini Park, NTR Gardens, the Buddha statue and Tankbund Park are located here. In the northwest part of the city there are upscale residential and commercial areas such as Banjara Hills, Jubilee Hills, Begumpet, Khairatabad and Miyapur. The northern end contains industrial areas such as Sanathnagar, Moosapet, Balanagar, Patancheru and Chanda Nagar. The northeast end is dotted with residential areas. In the eastern part of the city lie many defence research centres and Ramoji Film City. The "Cyberabad" area in the southwest and west of the city has grown rapidly since the 1990s. It is home to information technology and bio-pharmaceutical companies and to landmarks such as Hyderabad Airport, Osman Sagar, Himayath Sagar and Kasu Brahmananda Reddy National Park.
Landmarks
Heritage buildings constructed during the Qutb Shahi and Nizam eras showcase Indo-Islamic architecture influenced by Medieval, Mughal and European styles. After the 1908 flooding of the Musi River, the city was expanded and civic monuments constructed, particularly during the rule of Mir Osman Ali Khan (the VIIth Nizam), whose patronage of architecture led to him being referred to as the maker of modern Hyderabad. In 2012, the government of India declared Hyderabad the first "Best heritage city of India".
Qutb Shahi architecture of the 16th and early 17th centuries followed classical Persian architecture featuring domes and colossal arches. The oldest surviving Qutb Shahi structure in Hyderabad is the ruins of Golconda fort built in the 16th century. Most of the historical bazaars that still exist were constructed on the street north of Charminar towards the fort. The Charminar has become an icon of the city; located in the centre of old Hyderabad, it is a square structure with sides long and four grand arches each facing a road. At each corner stands a -high minaret. The Charminar, Golconda fort and the Qutb Shahi tombs are considered to be monuments of national importance in India; in 2010 the Indian government proposed that the sites be listed for UNESCO World Heritage status.
Among the oldest surviving examples of Nizam architecture in Hyderabad is the Chowmahalla Palace, which was the seat of royal power. It showcases a diverse array of architectural styles, from the Baroque Harem to its Neoclassical royal court. The other palaces include Falaknuma Palace (inspired by the style of Andrea Palladio), Purani Haveli, King Kothi and Bella Vista Palace all of which were built at the peak of Nizam rule in the 19th century. During Mir Osman Ali Khan's rule, European styles, along with Indo-Islamic, became prominent. These styles are reflected in the Falaknuma Palace and many civic monuments such as the Hyderabad High Court, Osmania Hospital, Osmania University, the State Central Library, City College, the Telangana Legislature, the State Archaeology Museum, Jubilee Hall, and Hyderabad and Kachiguda railway stations. Other landmarks of note are Paigah Palace, Asman Garh Palace, Basheer Bagh Palace, Errum Manzil and the Spanish Mosque, all constructed by the Paigah family.
Economy
Image: A scene of bridalware shops in Laad Bazaar, near the Charminar
Image: Inorbit Mall—Hyderabad, a modern shopping facility
Hyderabad is the largest contributor to the gross domestic product (GDP), tax and other revenues of Telangana, and the sixth largest deposit centre and fourth largest credit centre nationwide, as ranked by the Reserve Bank of India (RBI) in June 2012. Its US$74 billion GDP made it the fifth-largest contributor city to India's overall GDP in 2011–12. Its per capita annual income in 2011 was . The largest employers in the city were the governments of Andhra Pradesh (113,098 employees) and India (85,155). According to a 2005 survey, 77% of males and 19% of females in the city were employed. The service industry remains dominant in the city, and 90% of the employed workforce is engaged in this sector.
Hyderabad's role in the pearl trade has given it the name "City of Pearls" and up until the 18th century, the city was also the only global trading centre for large diamonds. Industrialisation began under the Nizams in the late 19th century, helped by railway expansion that connected the city with major ports. From the 1950s to the 1970s, Indian enterprises, such as Bharat Heavy Electricals Limited (BHEL), Nuclear Fuel Complex (NFC), National Mineral Development Corporation (NMDC), Bharat Electronics (BEL), Electronics Corporation of India Limited (ECIL), Defence Research and Development Organisation (DRDO), Hindustan Aeronautics Limited (HAL), Centre for Cellular and Molecular Biology (CCMB), Centre for DNA Fingerprinting and Diagnostics (CDFD), State Bank of Hyderabad (SBH) and Andhra Bank (AB) were established in the city. The city is home to Hyderabad Securities formerly known as Hyderabad Stock Exchange (HSE), and houses the regional office of the Securities and Exchange Board of India (SEBI). In 2013, the Bombay Stock Exchange (BSE) facility in Hyderabad was forecast to provide operations and transactions services to BSE-Mumbai by the end of 2014. The growth of the financial services sector has helped Hyderabad evolve from a traditional manufacturing city to a cosmopolitan industrial service centre. Since the 1990s, the growth of information technology (IT), IT-enabled services (ITES), insurance and financial institutions has expanded the service sector, and these primary economic activities have boosted the ancillary sectors of trade and commerce, transport, storage, communication, real estate and retail.
Hyderabad's commercial markets are divided into four sectors: central business districts, sub-central business centres, neighbourhood business centres and local business centres. Many traditional and historic bazaars are located throughout the city; Laad Bazaar, the most prominent among them, is popular for selling a variety of traditional and cultural antique wares, along with gems and pearls.
Image: HITEC City, the hub of information technology companies
The establishment of Indian Drugs and Pharmaceuticals Limited (IDPL), a public sector undertaking, in 1961 was followed over the decades by many national and global companies opening manufacturing and research facilities in the city. The city manufactured one third of India's bulk drugs and 16% of biotechnology products, contributing to its reputation as "India's pharmaceutical capital" and the "Genome Valley of India".
Hyderabad is a global centre of information technology, for which it is known as Cyberabad (Cyber City). It contributed 15% of India's and 98% of Andhra Pradesh's exports in the IT and ITES sectors, and 22% of NASSCOM's total membership is from the city. The development of HITEC City, a township with extensive technological infrastructure, prompted multinational companies to establish facilities in Hyderabad. The city is home to more than 1300 IT and ITES firms, including global conglomerates such as Microsoft, Apple, Amazon, Google, IBM, Yahoo!, Oracle Corporation, Dell, Facebook,
and major Indian firms including Tech Mahindra, Infosys, Tata Consultancy Services (TCS), Polaris and Wipro. In 2009 the World Bank Group ranked the city as the second best Indian city for doing business. The city and its suburbs contain the highest number of special economic zones of any Indian city.
Like the rest of India, Hyderabad has a large informal economy that employs 30% of the labour force. According to a survey published in 2007, it had 40–50,000 street vendors, and their numbers were increasing. Among the street vendors, 84% are male and 16% female, and four fifths are "stationary vendors" operating from a fixed pitch, often with their own stall. Most are financed through personal savings; only 8% borrow from moneylenders. Vendor earnings vary from to per day. Other unorganised economic sectors include dairy, poultry farming, brick manufacturing, casual labour and domestic help. Those involved in the informal economy constitute a major portion of urban poor.
Culture
Image: Makkah Masjid constructed during the Qutb Shahi and Mughal rule in Hyderabad
Hyderabad emerged as the foremost centre of culture in India with the decline of the Mughal Empire. After the fall of Delhi in 1857, the migration of performing artists to the city, particularly from the north and west of the Indian subcontinent, under the patronage of the Nizam, enriched the cultural milieu. This migration resulted in a mingling of North and South Indian languages, cultures and religions, which has since led to a co-existence of Hindu and Muslim traditions, for which the city has become noted. A further consequence of this north–south mix is that both Telugu and Urdu are official languages of Telangana. The mixing of religions has also resulted in many festivals being celebrated in Hyderabad, such as Ganesh Chaturthi, Diwali and Bonalu of Hindu tradition and Eid ul-Fitr and Eid al-Adha by Muslims.
Traditional Hyderabadi garb also reveals a mix of Muslim and South Asian influences with men wearing sherwani and kurta–paijama and women wearing khara dupatta and salwar kameez. Most Muslim women wear burqa and hijab outdoors. In addition to the traditional Indian and Muslim garments, increasing exposure to western cultures has led to a rise in the wearing of western style clothing among youths.
Literature
In the past, Qutb Shahi rulers and Nizams attracted artists, architects and men of letters from different parts of the world through patronage. The resulting ethnic mix popularised cultural events such as mushairas (poetic symposia). The Qutb Shahi dynasty particularly encouraged the growth of Deccani Urdu literature leading to works such as the Deccani Masnavi and Diwan poetry, which are among the earliest available manuscripts in Urdu. Lazzat Un Nisa, a book compiled in the 15th century at Qutb Shahi courts, contains erotic paintings with diagrams for secret medicines and stimulants in the eastern form of ancient sexual arts. The reign of the Nizams saw many literary reforms and the introduction of Urdu as a language of court, administration and education. In 1824, a collection of Urdu Ghazal poetry, named Gulzar-e-Mahlaqa, authored by Mah Laqa Bai—the first female Urdu poet to produce a Diwan—was published in Hyderabad.
Hyderabad has continued with these traditions in its annual Hyderabad Literary Festival, held since 2010, showcasing the city's literary and cultural creativity. Organisations engaged in the advancement of literature include the Sahitya Akademi, the Urdu Academy, the Telugu Academy, the National Council for Promotion of Urdu Language, the Comparative Literature Association of India, and the Andhra Saraswata Parishad. Literary development is further aided by state institutions such as the State Central Library, the largest public library in the state which was established in 1891, and other major libraries including the Sri Krishna Devaraya Andhra Bhasha Nilayam, the British Library and the Sundarayya Vignana Kendram.
Music and films
South Indian music and dances such as the Kuchipudi and Bharatanatyam styles are popular in the Deccan region. As a result of their cultural policies, North Indian music and dance gained popularity during the rule of the Mughals and Nizams, and it was also during their reign that it became a tradition among the nobility to associate themselves with tawaif (courtesans). These courtesans were revered as the epitome of etiquette and culture, and were appointed to teach singing, poetry and classical dance to many children of the aristocracy. This gave rise to certain styles of court music, dance and poetry. Besides western and Indian popular music genres such as filmi music, the residents of Hyderabad play city-based marfa music, dholak ke geet (household songs based on local folklore), and qawwali, especially at weddings, festivals and other celebratory events. The state government organises the Golconda Music and Dance Festival, the Taramati Music Festival and the Premavathi Dance Festival to further encourage the development of music.
Although the city is not particularly noted for theatre and drama, the state government promotes theatre with multiple programmes and festivals in such venues as the Ravindra Bharati, Shilpakala Vedika and Lalithakala Thoranam. Although not a purely music-oriented event, Numaish, a popular annual exhibition of local and national consumer products, does feature some musical performances. The city is home to the Telugu film industry, popularly known as Tollywood, which produces the second largest number of films in India, behind Bollywood. Films in the local Hyderabadi dialect are also produced and have been gaining popularity since 2005. The city has also hosted international film festivals such as the International Children's Film Festival and the Hyderabad International Film Festival. In 2005, Guinness World Records declared Ramoji Film City to be the world's largest film studio.
Art and handicrafts
Image: An 18th century Bidriware cup with lid, displayed at the V&A Museum
The region is well known for its Golconda and Hyderabad painting styles which are branches of Deccani painting. Developed during the 16th century, the Golconda style is a native style blending foreign techniques and bears some similarity to the Vijayanagara paintings of neighbouring Mysore. A significant use of luminous gold and white colours is generally found in the Golconda style. The Hyderabad style originated in the 17th century under the Nizams. Highly influenced by Mughal painting, this style makes use of bright colours and mostly depicts regional landscape, culture, costumes and jewellery.
Although not a centre for handicrafts itself, the patronage of the arts by the Mughals and Nizams attracted artisans from the region to Hyderabad. Such crafts include: Bidriware, a metalwork handicraft from neighbouring Karnataka, which was popularised during the 18th century and has since been granted a Geographical Indication (GI) tag under the auspices of the WTO act; and Zari and Zardozi, embroidery works on textile that involve making elaborate designs using gold, silver and other metal threads. Another example of a handicraft drawn to Hyderabad is Kalamkari, a hand-painted or block-printed cotton textile that comes from cities in Andhra Pradesh. This craft is distinguished in having both a Hindu style, known as Srikalahasti and entirely done by hand, and an Islamic style, known as Machilipatnam that uses both hand and block techniques. Examples of Hyderabad's arts and crafts are housed in various museums including the Salar Jung Museum (housing "one of the largest one-man-collections in the world"), the AP State Archaeology Museum, the Nizam Museum, the City Museum and the Birla Science Museum.
Cuisine
Image: Hyderabadi biryani
Hyderabadi cuisine comprises a broad repertoire of rice, wheat and meat dishes and the skilled use of various spices. Hyderabadi biryani and Hyderabadi haleem, with their blend of Mughlai and Arab cuisines, carry the national Geographical Indications tag.
Hyderabadi cuisine is influenced to some extent by French, but more by Arabic, Turkish, Iranian and native Telugu and Marathwada cuisines. Popular native dishes include nihari, chakna, baghara baingan and the desserts qubani ka meetha, double ka meetha and kaddu ki kheer (a sweet porridge made with sweet gourd).
Media
One of Hyderabad's earliest newspapers, The Deccan Times, was established in the 1780s. In modern times, the major Telugu dailies published in Hyderabad are Eenadu, Andhra Jyothy, Sakshi and Namaste Telangana, while the major English papers are The Times of India, The Hindu and The Deccan Chronicle. The major Urdu papers include The Siasat Daily, The Munsif Daily and Etemaad. Many coffee table magazines, professional magazines and research journals are also regularly published. The Secunderabad Cantonment Board established the first radio station in Hyderabad State around 1919. Deccan Radio was the first radio public broadcast station in the city starting on 3 February 1935, with FM broadcasting beginning in 2000. The available channels in Hyderabad include All India Radio, Radio Mirchi, Radio City, Red FM, Big FM and Fever FM.
Television broadcasting in Hyderabad began in 1974 with the launch of Doordarshan, the Government of India's public service broadcaster, which transmits two free-to-air terrestrial television channels and one satellite channel. Private satellite channels started in July 1992 with the launch of Star TV. Satellite TV channels are accessible via cable subscription, direct-broadcast satellite services or internet-based television. Hyderabad's first dial-up internet access became available in the early 1990s and was limited to software development companies. The first public internet access service began in 1995, with the first private sector internet service provider (ISP) starting operations in 1998. In 2015, high-speed public WiFi was introduced in parts of the city.
Education
Image: Osmania University College of Arts
Public and private schools in Hyderabad are governed by the Central Board of Secondary Education and follow a "10+2+3" plan. About two-thirds of pupils attend privately run institutions. Languages of instruction include English, Hindi, Telugu and Urdu. Depending on the institution, students are required to sit the Secondary School Certificate or the Indian Certificate of Secondary Education. After completing secondary education, students enroll in schools or junior colleges with a higher secondary facility. Admission to professional graduation colleges in Hyderabad, many of which are affiliated with either Jawaharlal Nehru Technological University Hyderabad (JNTUH) or Osmania University (OU), is through the Engineering Agricultural and Medical Common Entrance Test (EAM-CET).
There are 13 universities in Hyderabad: two private universities, two deemed universities, six state universities and three central universities. The central universities are the University of Hyderabad, Maulana Azad National Urdu University and the English and Foreign Languages University. Osmania University, established in 1918, was the first university in Hyderabad and is India's second most popular institution for international students. The Dr. B. R. Ambedkar Open University, established in 1982, is the first distance-learning open university in India.
Hyderabad is also home to a number of centres specialising in particular fields such as biomedical sciences, biotechnology and pharmaceuticals, including the National Institute of Pharmaceutical Education and Research (NIPER) and the National Institute of Nutrition (NIN). Hyderabad has five major medical schools—Osmania Medical College, Gandhi Medical College, Nizam's Institute of Medical Sciences, Deccan College of Medical Sciences and Shadan Institute of Medical Sciences—and many affiliated teaching hospitals. The Government Nizamia Tibbi College is a college of Unani medicine. Hyderabad is also the headquarters of the Indian Heart Association, a non-profit foundation for cardiovascular education. Indian Heart Association Webpage. Retrieved 30 April 2015.
Institutes in Hyderabad include the National Institute of Rural Development, the Indian School of Business, the Institute of Public Enterprise, the Administrative Staff College of India and the Sardar Vallabhbhai Patel National Police Academy. Technical and engineering schools include the International Institute of Information Technology, Hyderabad (IIITH), Birla Institute of Technology and Science, Pilani – Hyderabad (BITS Hyderabad) and Indian Institute of Technology, Hyderabad (IIT-H) as well as agricultural engineering institutes such as the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Acharya N. G. Ranga Agricultural University. Hyderabad also has schools of fashion design including Raffles Millennium International, NIFT Hyderabad and Wigan and Leigh College. The National Institute of Design, Hyderabad (NID-H), will offer undergraduate and postgraduate courses from 2015.
Sports
Image: Backyard cricket, an informal variant of cricket played in the bylanes of the city by almost all age groups
The most popular sports played in Hyderabad are cricket and association football. At the professional level, the city has hosted national and international sports events such as the 2002 National Games of India, the 2003 Afro-Asian Games, the 2004 AP Tourism Hyderabad Open women's tennis tournament, the 2007 Military World Games, the 2009 World Badminton Championships and the 2009 IBSF World Snooker Championship. The city hosts a number of venues suitable for professional competition such as the Swarnandhra Pradesh Sports Complex for field hockey, the G. M. C. Balayogi Stadium in Gachibowli for athletics and football, and for cricket, the Lal Bahadur Shastri Stadium and Rajiv Gandhi International Cricket Stadium, home ground of the Hyderabad Cricket Association. Hyderabad has hosted many international cricket matches, including matches in the 1987 and the 1996 ICC Cricket World Cups. The Hyderabad cricket team represents the city in the Ranji Trophy—a first-class cricket tournament among India's states and cities. Hyderabad is also home to the Indian Premier League franchise Sunrisers Hyderabad, champions of the 2016 Indian Premier League. A previous franchise was the Deccan Chargers, which won the 2009 Indian Premier League held in South Africa.
During British rule, Secunderabad became a well-known sporting centre and many race courses, parade grounds and polo fields were built. Many elite clubs formed by the Nizams and the British such as the Secunderabad Club, the Nizam Club and the Hyderabad Race Club, which is known for its horse racing especially the annual Deccan derby, still exist. In more recent times, motorsports has become popular with the Andhra Pradesh Motor Sports Club organising popular events such as the Deccan ¼ Mile Drag, TSD Rallies and 4x4 off-road rallying.
International-level sportspeople from Hyderabad include: cricketers Ghulam Ahmed, M. L. Jaisimha, Mohammed Azharuddin, V. V. S. Laxman, Venkatapathy Raju, Shivlal Yadav, Arshad Ayub, Syed Abid Ali, Mithali Raj and Noel David; football players Syed Abdul Rahim, Syed Nayeemuddin and Shabbir Ali; tennis player Sania Mirza; badminton players S. M. Arif, Pullela Gopichand, Saina Nehwal, P. V. Sindhu, Jwala Gutta and Chetan Anand; hockey players Syed Mohammad Hadi and Mukesh Kumar; rifle shooters Gagan Narang and Asher Noria and bodybuilder Mir Mohtesham Ali Khan.
Transport
Image: Map representing the Intermediate Ring Road that connects the Inner Ring Road with the Outer Ring Road
The most commonly used forms of medium distance transport in Hyderabad include government owned services such as light railways and buses, as well as privately operated taxis and auto rickshaws. Bus services operate from the Mahatma Gandhi Bus Station in the city centre and carry over 130 million passengers daily across the entire network. Hyderabad's light rail transportation system, the Multi-Modal Transport System (MMTS), is a three line suburban rail service used by over 160,000 passengers daily. Complementing these government services are minibus routes operated by Setwin (Society for Employment Promotion & Training in Twin Cities). Intercity rail services also operate from Hyderabad; the main, and largest, station is Secunderabad Railway Station, which serves as Indian Railways' South Central Railway zone headquarters and a hub for both buses and MMTS light rail services connecting Secunderabad and Hyderabad. Other major railway stations in Hyderabad are Hyderabad Deccan Station, Kachiguda Railway Station, Begumpet Railway Station, Malkajgiri Railway Station and Lingampally Railway Station. The Hyderabad Metro, a new rapid transit system, is to be added to the existing public transport infrastructure and is scheduled to operate three lines by 2015.
There are over 3.5 million vehicles operating in the city, of which 74% are two-wheelers, 15% cars and 3% three-wheelers. The remaining 8% include buses, goods vehicles and taxis. The large number of vehicles, coupled with relatively low road coverage (roads occupy only 9.5% of the total city area), has led to widespread traffic congestion, especially since 80% of passengers and 60% of freight are transported by road. The Inner Ring Road, the Outer Ring Road, the Hyderabad Elevated Expressway (the longest flyover in India) and various interchanges, overpasses and underpasses were built to ease the congestion. Maximum speed limits within the city differ by vehicle class, with separate limits set for two-wheelers and cars, for auto rickshaws, and for light commercial vehicles and buses.
Hyderabad sits at the junction of three National Highways linking it to six other states: NH-7 runs from Varanasi, Uttar Pradesh, in the north to Kanyakumari, Tamil Nadu, in the south; NH-9 runs east-west between Machilipatnam, Andhra Pradesh, and Pune, Maharashtra; and NH-163 links Hyderabad to Bhopalpatnam, Chhattisgarh. NH-765 links Hyderabad to Srisailam. Five state highways, SH-1, SH-2, SH-4, SH-5 and SH-6, either start from, or pass through, Hyderabad.
Air traffic was previously handled via Begumpet Airport, but this was replaced by Rajiv Gandhi International Airport (RGIA) in 2008, with the capacity of handling 12 million passengers and 100,000 tonnes of cargo per annum. In 2011, Airports Council International, an autonomous body representing the world's airports, judged RGIA the world's best airport in the passenger category and the world's fifth best airport for service quality.
To reduce traffic and its emissions, the government has undertaken a metro rail project, the Hyderabad Metro Rail, which is under construction in phases.
See also
List of tourist attractions in Hyderabad
List of people from Hyderabad
List of tallest buildings in Hyderabad
Notes
References
Bibliography
Further reading
External links
A guide to Hyderabad
Hyderabad Metro
Category:Capitals of former nations
Category:Cities in Telangana
Category:Cities and towns in Hyderabad district, India
Category:Former national capitals
Category:High-technology business districts
Category:Historic districts
Category:Metropolitan cities in India
Category:Populated places established in 1591
Category:1591 establishments in Asia
Pharmaceutical industry
Image: Glivec, a drug used in the treatment of several cancers, is marketed by Novartis, one of the world's major pharmaceutical companies.
The pharmaceutical industry discovers, develops, produces, and markets drugs or pharmaceutical drugs for use as medications. Pharmaceutical companies may deal in generic or brand medications and medical devices. They are subject to a variety of laws and regulations that govern the patenting, testing, safety, efficacy and marketing of drugs.
History
Mid-1800s – 1945: From botanicals to the first synthetic drugs
The modern pharmaceutical industry traces its roots to two sources. The first of these were local apothecaries that expanded from their traditional role distributing botanical drugs such as morphine and quinine to wholesale manufacture in the mid-1800s. Rational drug discovery from plants started particularly with the isolation of morphine, an analgesic and sleep-inducing agent, from opium by the German apothecary assistant Friedrich Sertürner, who named the compound after the Greek god of dreams, Morpheus. Multinational corporations including Merck, Hoffman-La Roche, Burroughs-Wellcome (now part of Glaxo Smith Kline), Abbott Laboratories, Eli Lilly and Upjohn (now part of Pfizer) began as local apothecary shops in the mid-1800s. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds from coal tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis. The development of synthetic chemical methods allowed scientists to systematically vary the structure of chemical substances, and growth in the emerging science of pharmacology expanded their ability to evaluate the biological effects of these structural changes.
Epinephrine, norepinephrine, and amphetamine
By the 1890s, the profound effect of adrenal extracts on many different tissue types had been discovered, setting off a search both for the mechanism of chemical signalling and efforts to exploit these observations for the development of new drugs. The blood pressure raising and vasoconstrictive effects of adrenal extracts were of particular interest to surgeons as hemostatic agents and as treatment for shock, and a number of companies developed products based on adrenal extracts containing varying purities of the active substance. In 1897, John Abel of Johns Hopkins University identified the active principle as epinephrine, which he isolated in an impure state as the sulfate salt. Industrial chemist Jokichi Takamine later developed a method for obtaining epinephrine in a pure state, and licensed the technology to Parke Davis. Parke Davis marketed epinephrine under the trade name Adrenalin. Injected epinephrine proved to be especially efficacious for the acute treatment of asthma attacks, and an inhaled version was sold in the United States until 2011 (Primatene Mist). By 1929 epinephrine had been formulated into an inhaler for use in the treatment of nasal congestion.
While highly effective, the requirement for injection limited the use of norepinephrine, and orally active derivatives were sought. A structurally similar compound, ephedrine, was identified by Japanese chemists in the Ma Huang plant and marketed by Eli Lilly as an oral treatment for asthma. Following the work of Henry Dale and George Barger at Burroughs-Wellcome, academic chemist Gordon Alles synthesized amphetamine and tested it in asthma patients in 1929. The drug proved to have only modest anti-asthma effects, but produced sensations of exhilaration and palpitations. Amphetamine was developed by Smith, Kline and French as a nasal decongestant under the trade name Benzedrine Inhaler. Amphetamine was eventually developed for the treatment of narcolepsy, post-encephalitic parkinsonism, and mood elevation in depression and other psychiatric indications. It received approval as a New and Nonofficial Remedy from the American Medical Association for these uses in 1937 and remained in common use for depression until the development of tricyclic antidepressants in the 1960s.
Discovery and development of the barbiturates
Image: Diethylbarbituric acid was the first marketed barbiturate. It was sold by Bayer under the trade name Veronal.
In 1903, Hermann Emil Fischer and Joseph von Mering disclosed their discovery that diethylbarbituric acid, formed from the reaction of diethylmalonic acid, phosphorus oxychloride and urea, induces sleep in dogs. The discovery was patented and licensed to Bayer pharmaceuticals, which marketed the compound under the trade name Veronal as a sleep aid beginning in 1904. Systematic investigations of the effect of structural changes on potency and duration of action led to the discovery of phenobarbital at Bayer in 1911 and the discovery of its potent anti-epileptic activity in 1912. Phenobarbital was among the most widely used drugs for the treatment of epilepsy through the 1970s, and as of 2014, remains on the World Health Organization's List of Essential Medicines. The 1950s and 1960s saw increased awareness of the addictive properties and abuse potential of barbiturates and amphetamines and led to increasing restrictions on their use and growing government oversight of prescribers. Today, amphetamine is largely restricted to use in the treatment of attention deficit disorder and phenobarbital in the treatment of epilepsy.
Insulin
A series of experiments performed from the late 1800s to the early 1900s revealed that diabetes is caused by the absence of a substance normally produced by the pancreas. In 1889, Oskar Minkowski and Joseph von Mering found that diabetes could be induced in dogs by surgical removal of the pancreas. In 1921, Canadian professor Frederick Banting and his student Charles Best repeated this study, and found that injections of pancreatic extract reversed the symptoms produced by pancreas removal. Soon, the extract was demonstrated to work in people, but development of insulin therapy as a routine medical procedure was delayed by difficulties in producing the material in sufficient quantity and with reproducible purity. The researchers sought assistance from industrial collaborators at Eli Lilly and Co. based on the company's experience with large scale purification of biological materials. Chemist George Walden of Eli Lilly and Company found that careful adjustment of the pH of the extract allowed a relatively pure grade of insulin to be produced. Under pressure from the University of Toronto and a potential patent challenge by academic scientists who had independently developed a similar purification method, an agreement was reached for non-exclusive production of insulin by multiple companies. Prior to the discovery and widespread availability of insulin therapy the life expectancy of diabetics was only a few months.
Early anti-infective research: Salvarsan, Prontosil, Penicillin and vaccines
The development of drugs for the treatment of infectious diseases was a major focus of early research and development efforts; in 1900 pneumonia, tuberculosis, and diarrhea were the three leading causes of death in the United States and mortality in the first year of life exceeded 10%.
In 1911 arsphenamine, the first synthetic anti-infective drug, was developed by Paul Ehrlich and chemist Alfred Bertheim of the Institute of Experimental Therapy in Berlin. The drug was given the commercial name Salvarsan. Ehrlich, noting both the general toxicity of arsenic and the selective absorption of certain dyes by bacteria, hypothesized that an arsenic-containing dye with similar selective absorption properties could be used to treat bacterial infections. Arsphenamine was prepared as part of a campaign to synthesize a series of such compounds, and found to exhibit partially selective toxicity. Arsphenamine proved to be the first effective treatment for syphilis, a disease which prior to that time was incurable and led inexorably to severe skin ulceration, neurological damage, and death.
Ehrlich's approach of systematically varying the chemical structure of synthetic compounds and measuring the effects of these changes on biological activity was pursued broadly by industrial scientists, including Bayer scientists Josef Klarer, Fritz Mietzsch, and Gerhard Domagk. This work, also based in the testing of compounds available from the German dye industry, led to the development of Prontosil, the first representative of the sulfonamide class of antibiotics. Compared to arsphenamine, the sulfonamides had a broader spectrum of activity and were far less toxic, rendering them useful for infections caused by pathogens such as streptococci. In 1939, Domagk received the Nobel Prize in Medicine for this discovery. Nonetheless, the dramatic decrease in deaths from infectious diseases that occurred prior to World War II was primarily the result of improved public health measures such as clean water and less crowded housing, and the impact of anti-infective drugs and vaccines was significant mainly after World War II.
In 1928, Alexander Fleming discovered the antibacterial effects of penicillin, but its exploitation for the treatment of human disease awaited the development of methods for its large scale production and purification. These were developed by a U.S. and British government-led consortium of pharmaceutical companies during the Second World War.
Early progress toward the development of vaccines occurred throughout this period, primarily in the form of academic and government-funded basic research directed toward the identification of the pathogens responsible for common communicable diseases. In 1885 Louis Pasteur and Pierre Paul Émile Roux created the first rabies vaccine. The first diphtheria vaccines were produced in 1914 from a mixture of diphtheria toxin and antitoxin (produced from the serum of an inoculated animal), but the safety of the inoculation was marginal and it was not widely used. The United States recorded 206,000 cases of diphtheria in 1921 resulting in 15,520 deaths. In 1923 parallel efforts by Gaston Ramon at the Pasteur Institute and Alexander Glenny at the Wellcome Research Laboratories (later part of GlaxoSmithKline) led to the discovery that a safer vaccine could be produced by treating diphtheria toxin with formaldehyde. In 1944, Maurice Hilleman of Squibb Pharmaceuticals developed the first vaccine against Japanese encephalitis. Hilleman would later move to Merck where he would play a key role in the development of vaccines against measles, mumps, chickenpox, rubella, hepatitis A, hepatitis B, and meningitis.
Unsafe drugs and early industry regulation
Image: In 1937 over 100 people died after ingesting a solution of the antibacterial sulfanilamide formulated in the toxic solvent diethylene glycol.
Prior to the 20th century drugs were generally produced by small scale manufacturers with little regulatory control over manufacturing or claims of safety and efficacy. To the extent that such laws did exist, enforcement was lax. In the United States, increased regulation of vaccines and other biological drugs was spurred by tetanus outbreaks and deaths caused by the distribution of contaminated smallpox vaccine and diphtheria antitoxin. The Biologics Control Act of 1902 required that the federal government grant premarket approval for every biological drug and for the process and facility producing such drugs. This was followed in 1906 by the Pure Food and Drugs Act, which forbade the interstate distribution of adulterated or misbranded foods and drugs. A drug was considered misbranded if it contained alcohol, morphine, opium, cocaine, or any of several other potentially dangerous or addictive drugs, and if its label failed to indicate the quantity or proportion of such drugs. The government's attempts to use the law to prosecute manufacturers for making unsupported claims of efficacy were undercut by a Supreme Court ruling restricting the federal government's enforcement powers to cases of incorrect specification of the drug's ingredients.
In 1937 over 100 people died after ingesting "Elixir Sulfanilamide" manufactured by S.E. Massengill Company of Tennessee. The product was formulated in diethylene glycol, a highly toxic solvent that is now widely used as antifreeze. Under the laws extant at that time, prosecution of the manufacturer was possible only under the technicality that the product had been called an "elixir", which literally implied a solution in ethanol. In response to this episode, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act of 1938, which for the first time required pre-market demonstration of safety before a drug could be sold, and explicitly prohibited false therapeutic claims.
The post-war years, 1945–1970
Further advances in anti-infective research
The aftermath of World War II saw an explosion in the discovery of new classes of antibacterial drugs including the cephalosporins (developed by Eli Lilly based on the seminal work of Giuseppe Brotzu and Edward Abraham), streptomycin (discovered during a Merck-funded research program in Selman Waksman's laboratory), the tetracyclines (discovered at Lederle Laboratories, now a part of Pfizer), erythromycin (discovered at Eli Lilly and Co.) and their extension to an increasingly wide range of bacterial pathogens. Streptomycin, discovered during a Merck-funded research program in Selman Waksman's laboratory at Rutgers in 1943, became the first effective treatment for tuberculosis. At the time of its discovery, sanatoriums for the isolation of tuberculosis-infected people were a ubiquitous feature of cities in developed countries, with 50% dying within 5 years of admission.
A Federal Trade Commission report issued in 1958 attempted to quantify the effect of antibiotic development on American public health. The report found that over the period 1946-1955, there was a 42% drop in the incidence of diseases for which antibiotics were effective and only a 20% drop in those for which antibiotics were not effective. The report concluded that "it appears that the use of antibiotics, early diagnosis, and other factors have limited the epidemic spread and thus the number of these diseases which have occurred". The study further examined mortality rates for eight common diseases for which antibiotics offered effective therapy (syphilis, tuberculosis, dysentery, scarlet fever, whooping cough, meningococcal infections, and pneumonia), and found a 56% decline over the same period (Federal Trade Commission Report of Antibiotics Manufacture, June 1958, Washington D.C.: Government Printing Office, 1958, pages 98-120). Notable among these was a 75% decline in deaths due to tuberculosis (Federal Trade Commission Report of Antibiotics Manufacture, 1958, page 277).
Image: Measles cases reported in the United States before and after introduction of the vaccine; annual cases fell from 150,000-850,000 before the 1963 vaccine to a few dozen per year by the 2000s.
Image: Percent surviving by age in the United States in 1900, 1950, and 1997.
During the years 1940-1955, the rate of decline in the U.S. death rate accelerated from 2% per year to 8% per year, then returned to the historical rate of 2% per year. The dramatic decline in the immediate post-war years has been attributed to the rapid development of new treatments and vaccines for infectious disease that occurred during these years.
Vaccine development continued to accelerate, with the most notable achievement of the period being Jonas Salk's 1954 development of the polio vaccine under the funding of the non-profit National Foundation for Infantile Paralysis. The vaccine process was never patented, but was instead given to pharmaceutical companies to manufacture as a low-cost generic. In 1960 Maurice Hilleman of Merck Sharp & Dohme identified the SV40 virus, which was later shown to cause tumors in many mammalian species. It was later determined that SV40 was present as a contaminant in polio vaccine lots that had been administered to 90% of the children in the United States. The contamination appears to have originated both in the original cell stock and in monkey tissue used for production. In 2004 the United States National Cancer Institute announced that it had concluded that SV40 is not associated with cancer in people.
Other notable new vaccines of the period include those for measles (1962, John Franklin Enders of Children's Medical Center Boston, later refined by Maurice Hilleman at Merck), rubella (1969, Hilleman, Merck) and mumps (1967, Hilleman, Merck). The United States incidences of rubella, congenital rubella syndrome, measles, and mumps all fell by >95% in the immediate aftermath of widespread vaccination. The first 20 years of licensed measles vaccination in the U.S. prevented an estimated 52 million cases of the disease, 17,400 cases of mental retardation, and 5,200 deaths.
Development and marketing of antihypertensive drugs
Hypertension is a risk factor for atherosclerosis, heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease, and is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries. Prior to 1940 approximately 23% of all deaths among persons over age 50 were attributed to hypertension. Severe cases of hypertension were treated by surgery.
Early developments in the field of treating hypertension included quaternary ammonium ion sympathetic nervous system blocking agents, but these compounds were never widely used due to their severe side effects, because the long term health consequences of high blood pressure had not yet been established, and because they had to be administered by injection.
In 1952 researchers at Ciba discovered the first orally available vasodilator, hydralazine. A major shortcoming of hydralazine monotherapy was that it lost its effectiveness over time (tachyphylaxis). In the mid-1950s Karl H. Beyer, James M. Sprague, John E. Baer, and Frederick C. Novello of Merck and Co. discovered and developed chlorothiazide, which remains the most widely used antihypertensive drug today. This development was associated with a substantial decline in the mortality rate among people with hypertension. The inventors were recognized by a Public Health Lasker Award in 1975 for "the saving of untold thousands of lives and the alleviation of the suffering of millions of victims of hypertension".
A 2009 Cochrane review concluded that thiazide antihypertensive drugs reduce the risk of death (RR 0.89), stroke (RR 0.63), coronary heart disease (RR 0.84), and cardiovascular events (RR 0.70) in people with high blood pressure. In the ensuing years other classes of antihypertensive drug were developed and found wide acceptance in combination therapy, including loop diuretics (Lasix/furosemide, Hoechst Pharmaceuticals, 1963), beta blockers (ICI Pharmaceuticals, 1964), ACE inhibitors, and angiotensin receptor blockers. ACE inhibitors reduce the risk of new onset kidney disease (RR 0.71) and death (RR 0.84) in diabetic patients, irrespective of whether they have hypertension.
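For readers unfamiliar with the notation, a relative risk (RR) is the event rate in the treated group divided by the event rate in the control group, so values below 1 favour treatment. The worked reading below uses an illustrative baseline rate that is not taken from the review.

```latex
% Relative risk, and what RR = 0.63 for stroke means in practice.
% The 10% control-group rate is an illustrative assumption.
\[
\mathrm{RR} = \frac{p_{\text{treated}}}{p_{\text{control}}}, \qquad
\mathrm{RR} = 0.63 \;\Rightarrow\; \text{relative risk reduction} = 1 - 0.63 = 37\%.
\]
\[
\text{For example, } p_{\text{control}} = 10\% \;\Rightarrow\;
p_{\text{treated}} \approx 0.63 \times 10\% = 6.3\%.
\]
```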
Oral contraceptives
Prior to the Second World War, birth control was prohibited in many countries, and in the United States even the discussion of contraceptive methods sometimes led to prosecution under Comstock laws. The history of the development of oral contraceptives is thus closely tied to the birth control movement and the efforts of activists Margaret Sanger, Mary Dennett, and Emma Goldman. Based on fundamental research performed by Gregory Pincus and synthetic methods for progesterone developed by Carl Djerassi at Syntex and by Frank Colton at G.D. Searle & Co., the first oral contraceptive, Enovid, was developed by G.D. Searle and Co. and approved by the FDA in 1960. The original formulation incorporated vastly excessive doses of hormones, and caused severe side effects. Nonetheless, by 1962, 1.2 million American women were on the pill, and by 1965 the number had increased to 6.5 million. The availability of a convenient form of temporary contraceptive led to dramatic changes in social mores including expanding the range of lifestyle options available to women, reducing the reliance of women on men for contraceptive practice, encouraging the delay of marriage, and increasing pre-marital co-habitation.
Thalidomide and the Kefauver-Harris Amendments
Image: Baby born to a mother who had taken thalidomide while pregnant.
In the U.S., a push for revisions of the FD&C Act emerged from Congressional hearings led by Senator Estes Kefauver of Tennessee in 1959. The hearings covered a wide range of policy issues, including advertising abuses, questionable efficacy of drugs, and the need for greater regulation of the industry. While momentum for new legislation temporarily flagged under extended debate, a new tragedy emerged that underscored the need for more comprehensive regulation and provided the driving force for the passage of new laws.
On 12 September 1960, an American licensee, the William S. Merrell Company of Cincinnati, submitted to the FDA a new drug application for Kevadon (thalidomide), a sedative that had been marketed in Europe since 1956. The FDA medical officer in charge of this review, Frances Kelsey, believed the data were too incomplete to support the safety of this drug.
The firm continued to pressure Kelsey and the agency to approve the application—until November 1961, when the drug was pulled off the German market because of its association with grave congenital abnormalities. Several thousand newborns in Europe and elsewhere suffered the teratogenic effects of thalidomide. Though the drug was never approved in the USA, the firm distributed Kevadon to over 1,000 physicians there under the guise of investigational use. Over 20,000 Americans received thalidomide in this "study," including 624 pregnant patients, and about 17 known newborns suffered the effects of the drug.
The thalidomide tragedy resurrected Kefauver's bill to enhance drug regulation that had stalled in Congress, and the Kefauver-Harris Amendment became law on 10 October 1962. Manufacturers henceforth had to prove to FDA that their drugs were effective as well as safe before they could go on the US market. The FDA received authority to regulate advertising of prescription drugs and to establish good manufacturing practices. The law required that all drugs introduced between 1938 and 1962 had to be effective. An FDA - National Academy of Sciences collaborative study showed that nearly 40 percent of these products were not effective. A similarly comprehensive study of over-the-counter products began ten years later.
1970–1980s
Statins
In 1971, Akira Endo, a Japanese biochemist working for the pharmaceutical company Sankyo, identified mevastatin (ML-236B), a molecule produced by the fungus Penicillium citrinum, as an inhibitor of HMG-CoA reductase, a critical enzyme used by the body to produce cholesterol. Animal trials showed very good inhibitory effects, as did clinical trials; however, a long-term study in dogs found toxic effects at higher doses, and as a result mevastatin was believed to be too toxic for human use. Mevastatin was never marketed because of its adverse effects of tumors, muscle deterioration, and sometimes death in laboratory dogs.
P. Roy Vagelos, chief scientist and later CEO of Merck & Co, was interested, and made several trips to Japan starting in 1975. By 1978, Merck had isolated lovastatin (mevinolin, MK803) from the fungus Aspergillus terreus, first marketed in 1987 as Mevacor.
In April 1994, the results of a Merck-sponsored study, the Scandinavian Simvastatin Survival Study, were announced. Researchers tested simvastatin, later sold by Merck as Zocor, on 4,444 patients with high cholesterol and heart disease. After five years, the study concluded the patients saw a 35% reduction in their cholesterol, and their chances of dying of a heart attack were reduced by 42%. In 1995, Zocor and Mevacor both made Merck over US$1 billion. Endo was awarded the 2006 Japan Prize and the 2008 Lasker-DeBakey Clinical Medical Research Award for his "pioneering research into a new class of molecules" for "lowering cholesterol".
Research and development
Drug discovery is the process by which potential drugs are discovered or designed. In the past most drugs have been discovered either by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology often focuses on understanding the metabolic pathways related to a disease state or pathogen, and manipulating these pathways using molecular biology or biochemistry. A great deal of early-stage drug discovery has traditionally been carried out by universities and research institutions.
Drug development refers to activities undertaken after a compound is identified as a potential drug in order to establish its suitability as a medication. Objectives of drug development are to determine appropriate formulation and dosing, as well as to establish safety. Research in these areas generally includes a combination of in vitro studies, in vivo studies, and clinical trials. The cost of late stage development has meant it is usually done by the larger pharmaceutical companies.
Often, large multinational corporations exhibit vertical integration, participating in a broad range of drug discovery and development, manufacturing and quality control, marketing, sales, and distribution. Smaller organizations, on the other hand, often focus on a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements between research organizations and large pharmaceutical companies are formed to explore the potential of new drug substances. More recently, multi-nationals are increasingly relying on contract research organizations to manage drug development.
The cost of innovation
Drug discovery and development is very expensive; of all compounds investigated for use in humans, only a small fraction are eventually approved in most nations by government-appointed medical institutions or boards, which must approve new drugs before they can be marketed in those countries. In 2010 the FDA approved 18 NMEs (new molecular entities) and three biologics, 21 in total, down from 26 in 2009 and 24 in 2008. On the other hand, there were only 18 approvals in total in 2007 and 22 in 2006. Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year.
This approval comes only after heavy investment in pre-clinical development and clinical trials, as well as a commitment to ongoing safety monitoring. Drugs which fail part-way through this process often incur large costs, while generating no revenue in return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug (new chemical entity, or NCE) has been estimated at about US$1.3 billion (not including marketing expenses). Professors Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has been a relatively stable average rate of 15 to 25 for decades.
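The arithmetic behind a figure like this is easy to sketch: because most candidates fail, the money spent on failures has to be recovered by the few successes. The per-candidate cost and success rate below are illustrative assumptions, not figures from the studies cited here.

```python
# Illustrative sketch: how failed programmes inflate the cost per approved drug.
# Both inputs are assumptions chosen for illustration only.
cost_per_candidate = 130e6   # assumed average spend per development candidate, USD
success_rate = 0.10          # assumed fraction of candidates that reach approval

cost_per_approval = cost_per_candidate / success_rate
print(f"Implied cost per approved drug: ${cost_per_approval / 1e9:.2f} billion")
# With these assumptions, roughly ten candidates at ~$130 million each are
# funded for every approval, i.e. about $1.3 billion per successful drug.
```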
Industry-wide research and investment reached a record $65.3 billion in 2009. While the cost of research in the U.S. was about $34.2 billion between 1995 and 2010, revenues rose faster (revenues rose by $200.4 billion in that time).
A study by the consulting firm Bain & Company reported that the cost for discovering, developing and launching a new drug (which factored in marketing and other business expenses, along with the prospective drugs that fail) rose over a five-year period to nearly $1.7 billion in 2003. According to Forbes, by 2010 development costs were between $4 billion and $11 billion per drug.
Some of these estimates also take into account the opportunity cost of investing capital many years before revenues are realized (see Time-value of money). Because of the very long time needed for discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense. A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend to increasingly outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem with biotechnology companies playing an increasingly important role, and overall strategies being redefined accordingly. Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as Line-extensions) are much less expensive to develop.
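A rough way to see why the opportunity cost of capital can approach half the total is to capitalise each year's spending forward to the launch date at an assumed cost of capital. The spending profile and the 11% rate below are illustrative assumptions, not figures from the estimates discussed above.

```python
# Illustrative sketch: capitalising pre-launch R&D spending to the launch date.
# Assumes $100 million spent in each of 12 years before launch and an 11%
# annual cost of capital; both numbers are assumptions for illustration.
annual_spend = 100e6
years_before_launch = 12
cost_of_capital = 0.11

out_of_pocket = annual_spend * years_before_launch
capitalised = sum(annual_spend * (1 + cost_of_capital) ** t
                  for t in range(1, years_before_launch + 1))

time_value_share = 1 - out_of_pocket / capitalised
print(f"Out-of-pocket: ${out_of_pocket / 1e9:.2f}B, capitalised: ${capitalised / 1e9:.2f}B")
print(f"Share of capitalised cost attributable to the time value of money: {time_value_share:.0%}")
```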
Controversies
Due to repeated accusations and findings that some clinical trials conducted or funded by pharmaceutical companies may report only positive results for the preferred medication, the industry has been looked at much more closely by independent groups and government agencies.
In response to specific cases in which unfavorable data from pharmaceutical company-sponsored research was not published, the Pharmaceutical Research and Manufacturers of America have published new guidelines urging companies to report all findings and limit the financial involvement in drug companies of researchers. The US Congress passed, and the president signed into law, a bill which requires phase II and phase III clinical trials to be registered by the sponsor on the clinicaltrials.gov website run by the NIH.
Drug researchers not directly employed by pharmaceutical companies often look to companies for grants, and companies often look to researchers for studies that will make their products look favorable. Sponsored researchers are rewarded by drug companies, for example with support for their conference/symposium costs. Lecture scripts and even journal articles presented by academic researchers may actually be "ghost-written" by pharmaceutical companies.
An investigation by ProPublica found that at least 21 doctors have been paid more than $500,000 for speeches and consulting by drugs manufacturers since 2009, with half of the top earners working in psychiatry, and about $2 billion in total paid to doctors for such services. AstraZeneca, Johnson & Johnson and Eli Lilly have paid billions of dollars in federal settlements over allegations that they paid doctors to promote drugs for unapproved uses. Some prominent medical schools have since tightened rules on faculty acceptance of such payments by drug companies.
In contrast to this viewpoint, an article and associated editorial in the New England Journal of Medicine in May 2015 emphasized the importance of pharmaceutical industry-physician interactions for the development of novel treatments, and argued that moral outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts of interest. The article noted that major healthcare organizations such as National Center for Advancing Translational Sciences of the National Institutes of Health, the President's Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged greater interactions between physicians and industry in order to bring greater benefits to patients.
Product approval
In the United States, new pharmaceutical products must be approved by the Food and Drug Administration (FDA) as being both safe and effective. This process generally involves submission of an Investigational New Drug filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval, three phases of progressively larger human clinical trials may be conducted. Phase I generally studies toxicity using healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study of efficacy in the intended patient population. Following the successful completion of phase III testing, a New Drug Application is submitted to the FDA. The FDA reviews the data and, if the product is seen as having a positive benefit-risk assessment, approval to market the product in the US is granted.
A fourth phase of post-approval surveillance is also often required because even the largest clinical trials cannot effectively predict the prevalence of rare side-effects. Postmarketing surveillance ensures that after marketing the safety of a drug is monitored closely. In certain instances, its indication may need to be limited to particular patient groups, and in others the substance is withdrawn from the market completely.
The FDA provides information about approved drugs at the Orange Book site.
In the UK, the Medicines and Healthcare Products Regulatory Agency approves drugs for use, though the evaluation is done by the European Medicines Agency, an agency of the European Union based in London. Normally an approval in the UK and other European countries comes later than one in the USA. Then it is the National Institute for Health and Care Excellence (NICE), for England and Wales, who decides if and how the National Health Service (NHS) will allow (in the sense of paying for) their use. The British National Formulary is the core guide for pharmacists and clinicians.
In many non-US western countries a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies can be provided. This focuses on the efficiency (in terms of the cost per QALY) of the technologies in question rather than their efficacy. In England and Wales NICE decides whether and in what circumstances drugs and technologies will be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland, and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness if it is to be approved. Treatments must represent 'value for money' and a net benefit to society.
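The "cost per QALY" criterion used by bodies such as NICE is an incremental cost-effectiveness ratio (ICER): the extra cost of the new treatment divided by the extra quality-adjusted life years it delivers compared with existing care. The figures below, including the willingness-to-pay threshold, are illustrative assumptions rather than values from any actual appraisal.

```python
# Illustrative sketch of an incremental cost-effectiveness ratio (ICER).
# All figures are assumptions for illustration, not real appraisal data.
cost_new, cost_old = 42_000.0, 10_000.0   # lifetime treatment cost per patient (GBP)
qaly_new, qaly_old = 5.2, 4.0             # quality-adjusted life years per patient

icer = (cost_new - cost_old) / (qaly_new - qaly_old)
threshold = 30_000.0                      # assumed willingness-to-pay per QALY (GBP)

print(f"ICER: £{icer:,.0f} per QALY gained")
print("Within the assumed threshold" if icer <= threshold else "Above the assumed threshold")
```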
Orphan drugs
There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory territories. For example, in the United States, diseases involving fewer than 200,000 patients (or, in certain circumstances, larger populations) are subject to the Orphan Drug Act.
Because medical research and development of drugs to treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers, and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected by patents.
Global sales
In 2011, global spending on prescription drugs topped $954 billion, even as growth slowed somewhat in Europe and North America. The United States accounts for more than a third of the global pharmaceutical market, with $340 billion in annual sales, followed by the EU and Japan. Emerging markets such as China, Russia, South Korea and Mexico outpaced that market, growing 81 percent.
The top ten best-selling drugs of 2013 totaled $75.6 billion in sales, with the anti-inflammatory drug Humira being the best-selling drug worldwide at $10.7 billion in sales. The second and third best selling were Enbrel and Remicade, respectively. The top three best-selling drugs in the United States in 2013 were Abilify ($6.3 billion), Nexium ($6 billion) and Humira ($5.4 billion). The best-selling drug ever, Lipitor, averaged $13 billion annually and netted $141 billion total over its lifetime before Pfizer's patent expired in November 2011.
IMS Health published an analysis of trends expected in the pharmaceutical industry in 2007, including increasing profits in most sectors despite the loss of some patents, and new 'blockbuster' drugs on the horizon.
Patents and generics
Depending on a number of considerations, a company may apply for and be granted a patent for the drug, or the process of producing the drug, granting exclusivity rights typically for about 20 years. However, only after rigorous study and testing, which takes 10 to 15 years on average, will governmental authorities grant permission for the company to market and sell the drug. Patent protection enables the owner of the patent to recover the costs of research and development through high profit margins for the branded drug. When the patent protection for the drug expires, a generic drug is usually developed and sold by a competing company. The development and approval of generics is less expensive, allowing them to be sold at a lower price. Often the owner of the branded drug will introduce a generic version before the patent expires in order to get a head start in the generic market. Restructuring has therefore become routine, driven by the patent expiration of products launched during the industry's "golden era" in the 1990s and companies' failure to develop sufficient new blockbuster products to replace lost revenues.
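Put together, these two figures imply that the effective period of market exclusivity is much shorter than the nominal patent term; the simple arithmetic below uses only the numbers quoted in this paragraph.

```latex
% Effective market exclusivity implied by the figures above.
\[
\text{effective exclusivity} \approx
\underbrace{20\ \text{years}}_{\text{patent term}}
\;-\;
\underbrace{10\text{--}15\ \text{years}}_{\text{development and approval}}
\;\approx\; 5\text{--}10\ \text{years}.
\]
```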
Prescriptions
In the U.S., the number of prescriptions filled annually increased over the period of 1995 to 2005 to 3.4 billion, a 61 percent increase. Retail sales of prescription drugs jumped 250 percent from $72 billion to $250 billion, while the average price of prescriptions more than doubled from $30 to $68.
Marketing
Advertising is common in healthcare journals as well as through more mainstream media routes. In some countries, notably the US, pharmaceutical companies are allowed to advertise directly to the general public. Pharmaceutical companies generally employ sales people (often called 'drug reps' or, an older term, 'detail men') to market directly and personally to physicians and other healthcare providers. In some countries, notably the US, pharmaceutical companies also employ lobbyists to influence politicians. Marketing of prescription drugs in the US is regulated by the federal Prescription Drug Marketing Act of 1987.
To healthcare professionals
The book Bad Pharma also discusses the influence of drug representatives, how ghostwriters are employed by the drug companies to write papers for academics to publish, how independent the academic journals really are, how the drug companies finance doctors' continuing education, and how patients' groups are often funded by industry.
Direct to consumer advertising
Since the 1980s new methods of marketing for prescription drugs to consumers have become important. Direct-to-consumer media advertising was legalised in the FDA Guidance for Industry on Consumer-Directed Broadcast Advertisements.
Controversy about drug marketing and lobbying
There has been increasing controversy surrounding pharmaceutical marketing and influence. There have been accusations and findings of influence on doctors and other health professionals through drug reps, including the constant provision of marketing 'gifts' and biased information to health professionals; highly prevalent advertising in journals and conferences; funding independent healthcare organizations and health promotion campaigns; lobbying physicians and politicians (more than any other industry in the US); sponsorship of medical schools or nurse training; sponsorship of continuing educational events, with influence on the curriculum; and hiring physicians as paid consultants on medical advisory boards.
Some advocacy groups, such as No Free Lunch and AllTrials, have criticized the effect of drug marketing to physicians because they say it biases physicians to prescribe the marketed drugs even when others might be cheaper or better for the patient.
There have been related accusations of disease mongering (over-medicalising) to expand the market for medications. An inaugural conference on that subject took place in Australia in 2006. In 2009, the government-funded National Prescribing Service launched the "Finding Evidence – Recognising Hype" program, aimed at educating GPs on methods for independent drug analysis.
A 2005 review by a special committee of the UK government came to all the above conclusions in a European Union context whilst also highlighting the contributions and needs of the industry.
Meta-analyses have shown that psychiatric studies sponsored by pharmaceutical companies are several times more likely to report positive results, and if a drug company employee is involved the effect is even larger. Influence has also extended to the training of doctors and nurses in medical schools, which is being fought.
It has been argued that the design of the Diagnostic and Statistical Manual of Mental Disorders and the expansion of the criteria represents an increasing medicalization of human nature, or "disease mongering", driven by drug company influence on psychiatry. The potential for direct conflict of interest has been raised, partly because roughly half the authors who selected and defined the DSM-IV psychiatric disorders had or previously had financial relationships with the pharmaceutical industry.
In the US, starting in 2013, under the Physician Financial Transparency Reports (part of the Sunshine Act), the Centers for Medicare & Medicaid Services has to collect information from applicable manufacturers and group purchasing organizations in order to report information about their financial relationships with physicians and hospitals. Data are made public on the Centers for Medicare & Medicaid Services website. The expectation is that the relationship between doctors and the pharmaceutical industry will become fully transparent.
Regulatory issues
Ben Goldacre has argued that regulators – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – advance the interests of the drug companies rather than the interests of the public, due to a revolving-door exchange of employees between the regulator and the companies and the friendships that develop between regulator and company employees. He argues that regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective.
Others have argued that excessive regulation suppresses therapeutic innovation, and that the current cost of regulator-required clinical trials prevents the full exploitation of new genetic and biological knowledge for the treatment of human disease. A 2012 report by the President's Council of Advisors on Science and Technology made several key recommendations to reduce regulatory burdens to new drug development, including 1) expanding the FDA's use of accelerated approval processes, 2) creating an expedited approval pathway for drugs intended for use in narrowly defined populations, and 3) undertaking pilot projects designed to evaluate the feasibility of a new, adaptive drug approval process.
Pharmaceutical fraud
Pharmaceutical fraud involves deceptions which bring financial gain to a pharmaceutical company. It affects individuals and public and private insurers. There are several different schemes used to defraud the health care system which are particular to the pharmaceutical industry. These include: Good Manufacturing Practice (GMP) Violations, Off Label Marketing, Best Price Fraud, CME Fraud, Medicaid Price Reporting, and Manufactured Compound Drugs. In FY 2010 alone, $2.5 billion was recovered through False Claims Act cases. Examples of fraud cases include the GlaxoSmithKline $3 billion settlement, Pfizer $2.3 billion settlement and Merck & Co. $650 million settlement. Damages from fraud can be recovered by use of the False Claims Act, most commonly under the qui tam provisions, which reward an individual for being a "whistleblower", or relator.
Every major company selling the antipsychotics — Bristol-Myers Squibb, Eli Lilly, Pfizer, AstraZeneca and Johnson & Johnson — has either settled recent government cases, under the False Claims Act, for hundreds of millions of dollars or is currently under investigation for possible health care fraud. Following charges of illegal marketing, two of the settlements set records at the time for the largest criminal fines ever imposed on corporations. One involved Eli Lilly's antipsychotic Zyprexa, and the other involved Bextra. In the Bextra case, the government also charged Pfizer with illegally marketing another antipsychotic, Geodon; Pfizer settled that part of the claim for $301 million, without admitting any wrongdoing.
On 2 July 2012, GlaxoSmithKline pleaded guilty to criminal charges and agreed to a $3 billion settlement of the largest health-care fraud case in the U.S. and the largest payment by a drug company. The settlement is related to the company's illegal promotion of prescription drugs, its failure to report safety data, bribing doctors, and promoting medicines for uses for which they were not licensed. The drugs involved were Paxil, Wellbutrin, Advair, Lamictal, and Zofran for off-label, non-covered uses. Those and the drugs Imitrex, Lotronex, Flovent, and Valtrex were involved in the kickback scheme.
The following is a list of the four largest settlements reached with pharmaceutical companies from 1991 to 2012, rank ordered by the size of the total settlement. Legal claims against the pharmaceutical industry have varied widely over the past two decades, including Medicare and Medicaid fraud, off-label promotion, and inadequate manufacturing practices.
Company | Settlement | Violation(s) | Year | Product(s) | Laws allegedly violated (if applicable)
GlaxoSmithKline | $3 billion | Off-label promotion / failure to disclose safety data | 2012 | Avandia/Wellbutrin/Paxil | False Claims Act/FDCA
Pfizer | $2.3 billion | Off-label promotion/kickbacks | 2009 | Bextra/Geodon/Zyvox/Lyrica | False Claims Act/FDCA
Abbott Laboratories | $1.5 billion | Off-label promotion | 2012 | Depakote | False Claims Act/FDCA
Eli Lilly | $1.4 billion | Off-label promotion | 2009 | Zyprexa | False Claims Act/FDCA
Developing world
Patents
Patents have been criticized in the developing world, as they are thought to reduce access to existing medicines.See for example: 't Hoen, Ellen. "TRIPS, Pharmaceutical Patents, and Access to Essential Medicines: A Long Way from Seattle to Doha". Chicago Journal of International Law, 27(43), 2002; Musungu, Sisule F., and Cecilia Oh. "The Use of Flexibilities in TRIPS by Developing Countries: Can They Provide Access to Medicines?" Commission on Intellectual Property Rights, Innovation and Public Health, The World Health Organization, 2005. Reconciling patents and universal access to medicine would require an efficient international policy of price discrimination. Moreover, under the TRIPS agreement of the World Trade Organization, countries must allow pharmaceutical products to be patented. In 2001, the WTO adopted the Doha Declaration, which indicates that the TRIPS agreement should be read with the goals of public health in mind, and allows some methods for circumventing pharmaceutical monopolies: via compulsory licensing or parallel imports, even before patent expiration.
In March 2001, 40 multi-national pharmaceutical companies brought litigation against South Africa for its Medicines Act, which allowed the generic production of antiretroviral drugs (ARVs) for treating HIV, despite the fact that these drugs were on-patent."Pharmaceutical Manufacturer's Association v. The President of South Africa (PMA)", 2002 (2) SA 674 (CC) (S. Africa). HIV was and is an epidemic in South Africa, and ARVs at the time cost between 10,000 and 15,000 USD per patient per year. This was unaffordable for most South African citizens, and so the South African government committed to providing ARVs at prices closer to what people could afford. To do so, they would need to ignore the patents on drugs and produce generics within the country (using a compulsory license), or import them from abroad. After international protest in favour of public health rights (including the collection of 250,000 signatures by MSF), the governments of several developed countries (including The Netherlands, Germany, France, and later the US) backed the South African government, and the case was dropped in April of that year.
In 2016, GlaxoSmithKline (the world's sixth-largest pharmaceutical company) announced that it would drop its patents in poor countries so as to allow independent companies to make and sell versions of its drugs in those areas, thereby widening public access to them. GlaxoSmithKline published a list of 50 countries in which it would no longer hold patents, affecting one billion people worldwide.
Charitable programs
Charitable programs and drug discovery & development efforts by pharmaceutical companies include:
"Merck's Gift", wherein billions of river blindness drugs were donated in Africa
Pfizer's gift of free/discounted fluconazole and other drugs for AIDS in South AfricaPfizer Will Donate Fluconazole to South Africa
GSK's commitment to give free albendazole tablets to the WHO for, and until, the elimination of lymphatic filariasis worldwide.
In 2006, Novartis committed US$755 million in corporate citizenship initiatives around the world, particularly focusing on improved access to medicines in the developing world through its Access to Medicine projects, including donations of medicines to patients affected by leprosy, tuberculosis, and malaria; Glivec patient assistance programs; and relief to support major humanitarian organisations with emergency medical needs.
The HSvj Foundation works on health-care awareness in India through training and awareness programmes conducted in schools and colleges. It also encourages Indian students to pursue pure science and scientific research in active pharmaceutical ingredients (APIs) and molecular discovery as a career, with the long-term aim of creating a facility for drug-discovery research in collaboration with Indian universities.
SASTRA University is collaborating with various life-science firms on drug efficacy and tissue culture, and the Tata Group has initiated breakthrough research in its nanotechnology department.
See also
Big Pharma conspiracy theory
Clinical trial
Drug development
Drug discovery
List of pharmaceutical companies
Pharmaceutical marketing
Pharmacy
References
Category:Pharmacology
Category:Pharmacy
Category:Life sciences industry | 560,876 | 2017-01 |
Saint Helena | Saint Helena is a volcanic tropical island in the South Atlantic Ocean, east of Rio de Janeiro and west of the Cunene River, which marks the border between Namibia and Angola in southwestern Africa. It is part of the British Overseas Territory of Saint Helena, Ascension and Tristan da Cunha. Saint Helena measures about 16 by 8 kilometres (10 by 5 mi) and has a population of 4,534 (2016 census). It was named after Saint Helena of Constantinople.
The island, one of the most remote islands in the world, was uninhabited when discovered by the Portuguese in 1502. It was an important stopover for ships sailing to Europe from Asia and South Africa for centuries. Napoleon was imprisoned there in exile by the British, as were Dinuzulu kaCetshwayo (for leading a Zulu army against British rule) and more than 5,000 Boers taken prisoner during the Second Boer War.
Between 1791 and 1833, Saint Helena became the site of a series of experiments in conservation, reforestation and attempts to boost rainfall artificially.Richard Grove, Green Imperialism: Colonial Expansion, Tropical Island Edens and the Origins of Environmentalism, 1600–1860 (Cambridge: Cambridge University Press, 1995), pp. 309–379 This environmental intervention was closely linked to the conceptualisation of the processes of environmental change and helped establish the roots of environmentalism.
Saint Helena is Britain's second-oldest remaining overseas territory after Bermuda.
History
Early history (1502–1658)
Most historical accounts state that the island was discovered on 21 May 1502 by Galician navigator João da Nova sailing at the service of Portugal, and that he named it "Santa Helena" after Helena of Constantinople. Another theory holds that the island found by da Nova was actually Tristan da Cunha, to the south,article: Tristan da Cunha (distance) and that Saint Helena was discovered by some of the ships attached to the squadron of the Estêvão da Gama expedition on 30 July 1503 (as reported in the account of clerk Thomé Lopes).A. H. Schulenburg, 'The discovery of St Helena: the search continues'. Wirebird: The Journal of the Friends of St Helena, Issue 24 (Spring 2002), pp. 13–19.Duarte Leite, História dos Descobrimentos, Vol. II (Lisbon: Edições Cosmos, 1960), 206.de Montalbodo, Paesi Nuovamente Retovati & Nuovo Mondo da Alberico Vesputio Fiorentino Intitulato (Venice: 1507) However, a paper published in 2015 reviewed the discovery date and dismissed the 18 August as too late for da Nova to make a discovery and then return to Lisbon by 11 September 1502, whether he sailed from Saint Helena or Tristan da Cunha.Ian Bruce, ‘St Helena Day’, Wirebird The Journal of the Friends of St Helena, no. 44 (2015): 32–46. It demonstrates that 21 May is probably a Protestant rather than Catholic or Orthodox feast-day, first quoted in 1596 by Jan Huyghen van Linschoten, who was probably mistaken because the island was discovered several decades before the Reformation and start of Protestantism.Jan Huyghen van Linschoten, Itinerario, voyage ofte schipvaert van Jan Huygen Van Linschoten naer Oost ofte Portugaels Indien, inhoudende een corte beschryvinghe der selver landen ende zee-custen... waer by ghevoecht zijn niet alleen die conterfeytsels van de habyten, drachten ende wesen, so van de Portugesen aldaer residerende als van de ingeboornen Indianen. (C. Claesz, 1596).Jan Huygen van Linschoten, John Huighen Van Linschoten, His Discours of Voyages Into Ye Easte [and] West Indies: Divided Into Foure Bookes (London: John Wolfe, 1598). The alternative discovery date of 3 May is suggested as being historically more credible; it is the Catholic feast-day for the finding of the True Cross by Saint Helena in Jerusalem, and cited by Odoardo Duarte LopesDuarte Lopes and Filippo Pigafetta, Relatione del Reame di Congo et delle circonvicine contrade tratta dalli scritti & ragionamenti di Odoardo Lope[S] Portoghese / per Filipo Pigafetta con disegni vari di geografiadi pianti, d’habiti d’animali, & altro. (Rome: BGrassi, 1591). and Sir Thomas Herbert.Thomas Herbert, Some Yeares Travels into Africa et Asia the Great: Especially Describing the Famous Empires of Persia and Industant as Also Divers Other Kingdoms in the Orientall Indies and I’les Adjacent (Jacob Blome & Richard Bishop, 1638), 353.
The Portuguese found the island uninhabited, with an abundance of trees and fresh water. They imported livestock, fruit trees and vegetables, and built a chapel and one or two houses. They formed no permanent settlement, but the island was an important rendezvous point and source of food for ships travelling from Asia to Europe, and frequently sick mariners were left on the island to recover before taking passage on the next ship to call on the island.
Englishman Sir Francis Drake probably located the island on the final leg of his circumnavigation of the world (1577–1580).Drake and St Helena, privately published by Robin Castell in 2005 Further visits by other English explorers followed and, once Saint Helena’s location was more widely known, English ships of war began to lie in wait in the area to attack Portuguese India carracks on their way home. In developing their Far East trade, the Dutch also began to frequent the island. The Portuguese and Spanish soon gave up regularly calling at the island, partly because they used ports along the West African coast, but also because of attacks on their shipping, the desecration of their chapel and religious icons, destruction of their livestock, and destruction of plantations by Dutch and English sailors.
The Dutch Republic formally made claim to Saint Helena in 1633, although there is no evidence that they ever occupied, colonized, or fortified it. By 1651, the Dutch had mainly abandoned the island in favour of their colony at the Cape of Good Hope.
East India Company (1658–1815)
Image: A View of the Town and Island of Saint Helena in the Atlantic Ocean belonging to the English East India Company, engraving, c. 1790.
In 1657, Oliver Cromwell granted the English East India Company a charter to govern Saint Helena and, the following year, the company decided to fortify the island and colonise it with planters. The first governor Captain John Dutton arrived in 1659, making Saint Helena one of Britain's oldest colonies outside North America and the Caribbean. A fort and houses were built. After the Restoration of the English monarchy in 1660, the East India Company received a royal charter giving it the sole right to fortify and colonise the island. The fort was renamed James Fort and the town Jamestown, in honour of the Duke of York, later James II of England.
Between January and May 1673, the Dutch East India Company forcibly took the island, before English reinforcements restored English East India Company control. The company experienced difficulty attracting new immigrants, and sentiments of unrest and rebellion fomented among the inhabitants. Ecological problems of deforestation, soil erosion, vermin and drought led Governor Isaac Pyke in 1715 to suggest that the population be moved to Mauritius, but this was not acted upon and the company continued to subsidise the community because of the island's strategic location. A census in 1723 recorded 1,110 people, including 610 slaves.
18th century governors tried to tackle the island's problems by implementing tree plantation, improving fortifications, eliminating corruption, building a hospital, tackling the neglect of crops and livestock, controlling the consumption of alcohol and introducing legal reforms. The island enjoyed a lengthy period of prosperity from about 1770. Captain James Cook visited the island in 1775 on the final leg of his second circumnavigation of the world. St. James' Church was erected in Jamestown in 1774, and Plantation House was built in 1791–92 and has since been the official residence of the Governor.
Edmond Halley visited Saint Helena on leaving the University of Oxford in 1676 and set up an astronomical observatory with an aerial telescope, with the intention of studying stars from the Southern Hemisphere.Gazetteer – p. 7. MONUMENTS IN FRANCE – page 338 The site of this telescope is near Saint Matthew's Church in Hutt's Gate in the Longwood district. The high hill there is named for him and is called Halley's Mount.
Throughout this period, Saint Helena was an important port of call of the East India Company. East Indiamen would stop there on the return leg of their voyages to British India and China. At Saint Helena, ships could replenish supplies of water and provisions and, during wartime, form convoys that would sail under the protection of vessels of the Royal Navy. Captain James Cook's vessel HMS Endeavour anchored and resupplied off the coast of Saint Helena in May 1771 on its return from the European discovery of the east coast of Australia and rediscovery of New Zealand.
The importation of slaves was made illegal in 1792. Governor Robert Patton (1802–1807) recommended that the company import Chinese labour to supplement the rural workforce. The coolie labourers arrived in 1810, and their numbers reached 600 by 1818. Many were allowed to stay, and their descendants became integrated into the population. An 1814 census recorded 3,507 people on the island.
British rule (1815–1821) and Napoleon's exile
Image: Napoléon à Sainte-Hélène by Francois-Joseph Sandmann
Image: Longwood House (photographed June 1970)
In 1815, the British government selected Saint Helena as the place of detention for Napoleon Bonaparte. He was taken to the island in October 1815. Napoleon stayed at the Briars pavilion on the grounds of the Balcombe family's home until his permanent residence at Longwood House was completed in December 1815. Napoleon died there on 5 May 1821.
British East India Company (1821–1834)
After Napoleon's death, the thousands of temporary visitors were withdrawn and the East India Company resumed full control of Saint Helena. Between 1815 and 1830, the EIC made the packet schooner St Helena available to the government of the island, which made multiple trips per year between the island and the Cape, carrying passengers both ways and supplies of wine and provisions back to the island.
Napoleon praised Saint Helena’s coffee during his exile on the island, and the product enjoyed a brief popularity in Paris in the years after his death.
The importation of slaves to Saint Helena was banned in 1792, but the phased emancipation of over 800 resident slaves did not take place until 1827, which was still some six years before the British Parliament passed legislation to ban slavery in the colonies.New research published on http://sthelena.uk.net; shortened extract published in the Saint Helena Independent on 3 June 2011.
Crown colony (1834–1981)
Under the provisions of the 1833 India Act, control of Saint Helena was passed from the East India Company to the British Crown, and it became a crown colony.The St Helena, Ascension and Tristan da Cunha Constitution Order 2009 "...the transfer of rule of the island to His Majesty’s Government on 22 April 1834 under the Government of India Act 1833, now called the Saint Helena Act 1833" (Schedule Preamble) Subsequent administrative cost cutting triggered the start of a long-term population decline whereby those who could afford to do so tended to leave the island for better opportunities elsewhere. The latter half of the 19th century saw the advent of steam ships not reliant on trade winds, as well as the diversion of Far East trade away from the traditional South Atlantic shipping lanes to a route via the Red Sea (which, prior to the building of the Suez Canal, involved a short overland section). These factors contributed to a decline in the number of ships calling at the island from 1,100 in 1855 to only 288 in 1889.
In 1840, a British naval station established to suppress the African slave trade was based on the island, and between 1840 and 1849 over 15,000 freed slaves, known as "Liberated Africans", were landed there.
In 1858, the French emperor Napoleon III successfully gained the possession, in the name of the French government, of Longwood House and the lands around it, last residence of Napoleon I (who died there in 1821). It is still French property, administered by a French representative and under the authority of the French Ministry of Foreign Affairs.
On 11 April 1898 the American Joshua Slocum, on his famous solo round-the-world voyage, arrived at Jamestown. He departed on 20 April 1898 for the final leg of his circumnavigation, having been extended hospitality by the governor, His Excellency Sir R A Standale, presented two lectures on his voyage, and been invited to Longwood by the French consular agent.
In 1900 and 1901, over 6,000 Boer prisoners were held on the island, and the population reached its all-time high of 9,850 in 1901.
A local industry manufacturing fibre from New Zealand flax was successfully reestablished in 1907 and generated considerable income during the First World War. Ascension Island was made a dependency of Saint Helena in 1922, and Tristan da Cunha followed in 1938. During the Second World War, the United States built Wideawake airport on Ascension in 1942, but no military use was made of Saint Helena.
During this period, the island enjoyed increased revenues through the sale of flax, with prices peaking in 1951. However, the industry declined because of transportation costs and competition from synthetic fibres. The decision by the British Post Office to use synthetic fibres for its mailbags was a further blow, contributing to the closure of the island's flax mills in 1965.
From 1958, the Union Castle shipping line gradually reduced its service calls to the island. Curnow Shipping, based in Avonmouth, replaced the Union-Castle Line mailship service in 1977, using the RMS (Royal Mail Ship) St Helena.
1981 to present
Image: Saint Helena seen from space (photo is oriented with south-east towards the top)
The British Nationality Act 1981 reclassified Saint Helena and the other Crown colonies as British Dependent Territories. The islanders lost their right of abode in Britain. For the next 20 years, many could find only low-paid work with the island government, and the only available employment outside Saint Helena was on the Falkland Islands and Ascension Island. The Development and Economic Planning Department (which still operates) was formed in 1988 to contribute to raising the living standards of the people of Saint Helena.
In 1989, Prince Andrew launched the replacement RMS St Helena to serve the island; the vessel was specially built for the Cardiff–Cape Town route and features a mixed cargo/passenger layout.
The Saint Helena Constitution took effect in 1989 and provided that the island would be governed by a Governor, Commander-in-Chief, and an elected Executive and Legislative Council. In 2002, the British Overseas Territories Act 2002 granted full British citizenship to the islanders, and renamed the Dependent Territories (including Saint Helena) the British Overseas Territories. In 2009, Saint Helena and its two territories received equal status under a new constitution, and the British Overseas Territory was renamed Saint Helena, Ascension and Tristan da Cunha.
The UK government has spent £250 million in the construction of the island's airport. This is aimed at helping the island become more self-sufficient, encouraging economic development while reducing dependence on British government aid. It is also expected to kick-start the tourism industry, with up to 30,000 visitors expected annually. As of August 2015, ticketing was postponed until an airline could be firmly designated.
The first plane landed on 15 September 2015, with the first large passenger jet landing on 18 April of the following year, although the airport is not yet officially open due to concerns about wind shear.
Geography
Image: Positions (north to south) of Ascension Island, Saint Helena, and Tristan da Cunha in the South Atlantic Ocean
Located in the South Atlantic Ocean on the Mid-Atlantic Ridge, more than 2,000 kilometres (1,200 mi) from the nearest major landmass, Saint Helena is one of the most remote places in the world. The nearest port on the continent is Namibe in southern Angola, and the nearest international airport is the Quatro de Fevereiro Airport of Angola's capital Luanda; connections to Cape Town in South Africa are used for most shipping needs, such as the mail boat that serves the island, the RMS St Helena. The island is associated with two other isolated islands in the southern Atlantic, also British territories: Ascension Island, about due northwest in more equatorial waters, and Tristan da Cunha, which is well outside the tropics to the south. The island is situated in the Western Hemisphere and has the same longitude as Cornwall in the United Kingdom. Despite its remote location, it is classified as being in West Africa by the United Nations.
The island of Saint Helena has a total area of about 122 km2 (47 sq mi), and is composed largely of rugged terrain of volcanic origin (the last volcanic eruptions occurred about 7 million years ago).Natural History of Saint Helena Coastal areas are covered in volcanic rock and are warmer and drier than the centre. The highest point of the island is Diana's Peak at 818 m (2,684 ft). In 1996 it became the island's first national park. Much of the island is covered by New Zealand flax, a legacy of former industry, but there are some original trees augmented by plantations, including those of the Millennium Forest project, which was established in 2002 to replant part of the lost Great Wood and is now managed by the Saint Helena National Trust. The Millennium Forest is being planted with indigenous gumwood trees. When the island was discovered, it was covered with unique indigenous vegetation, including a remarkable cabbage tree species. The island's hinterland must have been a dense tropical forest, but the coastal areas were probably also quite green. The modern landscape is very different, with widespread bare rock in the lower areas, although inland it is green, mainly due to introduced vegetation. There are no native land mammals, but cattle, cats, dogs, donkeys, goats, mice, rabbits, rats and sheep have been introduced, and native species have been adversely affected as a result. The dramatic change in landscape must be attributed to these introductions. As a result, the string tree (Acalypha rubrinervis) and the Saint Helena olive (Nesiota elliptica) are now extinct, and many of the other endemic plants are threatened with extinction.
There are several rocks and islets off the coast, including: Castle Rock, Speery Island, the Needle, Lower Black Rock, Upper Black Rock (South), Bird Island (Southwest), Black Rock, Thompson's Valley Island, Peaked Island, Egg Island, Lady's Chair, Lighter Rock (West), Long Ledge (Northwest), Shore Island, George Island, Rough Rock Island, Flat Rock (East), the Buoys, Sandy Bay Island, the Chimney, White Bird Island and Frightus Rock (Southeast), all of which are within one kilometre () of the shore.
The national bird of Saint Helena is the Saint Helena plover, known locally as the wirebird, on account of its wire-like legs. It appears on the coat of arms of Saint Helena and on the flag.
Climate
The climate of Saint Helena is tropical, marine and mild, tempered by the Benguela Current and trade winds that blow almost continuously.About St Helena, St Helena News Media Services The climate varies noticeably across the island. Temperatures in Jamestown, on the north leeward shore, range between in the summer (January to April) and during the remainder of the year. The temperatures in the central areas are, on average, lower. Jamestown also has a very low annual rainfall, while falls per year on the higher ground and the south coast, where it is also noticeably cloudier.BBC Weather Centre There are weather recording stations in the Longwood and Blue Hill districts.
Administrative divisions
Image: Districts of Saint Helena
Saint Helena is divided into eight districts,St Helena Independent, 3 October 2008 page 2 most of which contain a community centre. The districts also serve as statistical divisions. The island is a single electoral area and elects 12 representatives to the Legislative Council of 15.
District | Area (km2) | Area (sq mi) | Pop. 1998 | Pop. 2008 | Pop. 2016 | Pop./km2 (2016)
Alarm Forest | 5.4 | 2.1 | 289 | 276 | 383 | 70.4
Blue Hill | 36.8 | 14.2 | 177 | 153 | 158 | 4.3
Half Tree Hollow | 1.6 | 0.6 | 1,140 | 901 | 984 | 633.2
Jamestown | 3.9 | 1.5 | 884 | 716 | 629 | 161.9
Levelwood | 14.8 | 5.7 | 376 | 316 | 369 | 25.0
Longwood | 33.4 | 12.9 | 960 | 715 | 790 | 23.6
Sandy Bay | 16.1 | 6.2 | 254 | 205 | 193 | 12.0
Saint Paul's | 11.4 | 4.4 | 908 | 795 | 843 | 74.0
Royal Mail Ship St. Helena | – | – | 149 | 171 | 183 | –
Jamestown Harbour | – | – | 20 | 9 | 13 | –
Total | 123.3 | 47.6 | 5,157 | 4,257 | 4,349 | 35.3
Population
Demographics
Image: Jamestown, from above
Image: Jamestown, the capital of Saint Helena
Saint Helena was first settled by the English in 1659, and the island has a population of about 4,250 inhabitants, mainly descended from people from Britain – settlers ("planters") and soldiers – and slaves who were brought there from the beginning of settlement – initially from Africa (the Cape Verde Islands, Gold Coast and west coast of Africa are mentioned in early records), then India and Madagascar. The importation of slaves was made illegal in 1792, thus preventing any further increase in their numbers.
In 1840, Saint Helena became a provisioning station for the British West Africa Squadron, which worked to suppress the slave trade (mainly to Brazil), and many thousands of slaves were freed on the island. These were all African, and about 500 stayed while the rest were sent on to the West Indies and Cape Town, and eventually to Sierra Leone.
Imported Chinese labourers arrived in 1810, reaching a peak of 618 in 1818, after which numbers were reduced. Only a few older men remained after the British Crown took over the government of the island from the East India Company in 1834. The majority were sent back to China, although records in the Cape suggest that they never got any farther than Cape Town. There were also a very few Indian lascars who worked under the harbour master.
The citizens of Saint Helena hold British Overseas Territories citizenship. On 21 May 2002, full British citizenship was restored by the British Overseas Territories Act 2002.St Helena celebrates the restoration of full citizenship, Telegraph, 22 May 2002 See also British nationality law.
During periods of unemployment, there has been a long pattern of emigration from the island since the post-Napoleonic period. The majority of "Saints" emigrated to Britain, South Africa and in the early years, Australia. The population had been steadily declining since the late 1980s and dropped from 5,157 at the 1998 census to 4,257 in 2008. However, as of the 2016 census, the population has risen to 4,534. In the past emigration was characterised by young unaccompanied persons leaving to work on long-term contracts on Ascension and the Falkland Islands, but since "Saints" were re-awarded British citizenship in 2002, emigration to Britain by a wider range of wage-earners has accelerated due to the prospect of higher wages and better progression prospects.
Religion
Most residents belong to the Anglican Communion and are members of the Diocese of St Helena, which has its own bishop and includes Ascension Island. The 150th anniversary of the diocese was celebrated in June 2009.
Other Christian denominations on the island include the Roman Catholic Church (since 1852), the Salvation Army (since 1884), Baptists (since 1845) and, in more recent times, the Seventh-day Adventist Church (since 1949), the New Apostolic Church, and Jehovah's Witnesses (of which one in 33 residents is a member, the highest ratio of any country).
The Roman Catholics are pastorally served by the Mission sui iuris of Saint Helena, Ascension and Tristan da Cunha, whose office of ecclesiastical superior is vested in the Apostolic Prefecture of the Falkland Islands.
The Baha'i Faith has also been represented on the island since 1954.
Politics
Executive authority in Saint Helena is vested in Queen Elizabeth II and is exercised on her behalf by the Governor of Saint Helena. The Governor is appointed by the Queen on the advice of the British government. Defence and Foreign Affairs remain the responsibility of the United Kingdom.
There are fifteen seats in the Legislative Council of Saint Helena, a unicameral legislature, in addition to a Speaker and a Deputy Speaker. Twelve of the fifteen members are elected in elections held every four years. The three ex officio members are the Chief Secretary, Financial Secretary and Attorney General. The Executive Council is presided over by the Governor, and consists of three ex officio officers and five elected members of the Legislative Council appointed by the Governor. There is no elected Chief Minister, and the Governor acts as the head of government. In January 2013 it was proposed that the Executive Council would be led by a "Chief Councillor" who would be elected by the members of the Legislative Council and would nominate the other members of the Executive Council. These proposals were put to a referendum on 23 March 2013 where they were defeated by 158 votes to 42 on a 10% turnout.
Both Ascension Island and Tristan da Cunha have an Administrator appointed to represent the Governor of Saint Helena.
One commentator has observed that, notwithstanding the high unemployment resulting from the loss of full passports during 1981–2002, the level of loyalty to the British monarchy by the Saint Helena population is probably not exceeded in any other part of the world.Smallman, David L., Quincentenary, a Story of St Helena, 1502–2002; Jackson, E. L. St Helena: The Historic Island, Ward, Lock & Co, London, 1903 King George VI is the only reigning monarch to have visited the island. This was in 1947 when the King, accompanied by Queen Elizabeth (later the Queen Mother), Princess Elizabeth (later Queen Elizabeth II) and Princess Margaret, was travelling to South Africa. Prince Philip arrived at Saint Helena in 1957, followed by his son, Prince Andrew, who visited as a member of the armed forces in 1984, and his daughter, the Princess Royal, in 2002.
Human rights
In 2012, the government of Saint Helena funded the creation of the St. Helena Human Rights Action Plan 2012–2015. Work is being done under this action plan, including publishing awareness-raising articles in local newspapers, providing support for members of the public with human rights queries, and extending several UN Conventions on human rights to St. Helena.
Legislation to set up an Equality and Human Rights Commission was passed by Legislative Council in July 2015. This commenced operation in October 2015.
Child abuse scandal
In 2014 there were reports of child abuse in Saint Helena. Britain’s Foreign and Commonwealth Office (FCO) was accused of lying to the United Nations about child abuse in Saint Helena to cover up allegations, including cases of a police officer having raped a four-year-old girl and of a police officer having mutilated a two-year-old.
A government report was published on 10 December 2015. It found that the accusations were grossly exaggerated, and the lurid headlines in the Daily Mail had come from information from two social workers, whom the report described as incompetent.
Biodiversity
Saint Helena has long been known for its high proportion of endemic birds and vascular plants. The highland areas contain most of the 400 endemic species recognised to date. Much of the island has been identified by BirdLife International as being important for bird conservation, especially the endemic Saint Helena plover or wirebird, and for seabirds breeding on the offshore islets and stacks, in the north-east and the south-west Important Bird Areas. On the basis of these endemics and an exceptional range of habitats, Saint Helena is on the United Kingdom's tentative list for future UNESCO World Heritage Sites.
Saint Helena's biodiversity, however, also includes marine vertebrates, invertebrates (freshwater, terrestrial and marine), fungi (including lichen-forming species), non-vascular plants, seaweeds and other biological groups. To date, very little is known about these, although more than 200 lichen-forming fungi have been recorded, including 9 endemics,Aptroot, A. "Lichens of St Helena and Ascension Island". Botanical Journal of the Linnean Society, 158: 147–171, 2008 suggesting that many significant discoveries remain to be made.
Economy
Note: Some of the data in this section have been sourced from the Government of St Helena Sustainable Development Plan.News.co.sh
The island had a monocrop economy until 1966, based on the cultivation and processing of New Zealand flax for rope and string. Saint Helena's economy is now weak, and is almost entirely sustained by aid from the British government. The public sector dominates the economy, accounting for about 50% of gross domestic product. Inflation was running at 4% in 2005. There have been increases in the cost of fuel, power and all imported goods.
The tourist industry is heavily based on the promotion of Napoleon's imprisonment. A golf course also exists, and there is significant potential for sport-fishing tourism. Three hotels operate on the island, but the arrival of tourists is directly linked to the arrival and departure schedule of the RMS St Helena. Some 3,200 short-term visitors arrived on the island in 2013.
Saint Helena produces what is said to be the most expensive coffee in the world. It also produces and exports Tungi Spirit, made from the fruit of the prickly or cactus pears, Opuntia ficus-indica ("Tungi" is the local St Helenian name for the plant). Like Ascension Island and Tristan da Cunha, Saint Helena is permitted to issue its own postage stamps, an enterprise that provides a significant income.
Economic statistics
Quoted at constant 2002 prices, GDP fell from £12 million in 1999–2000 to £11 million in 2005–06. Imports are mainly from the UK and South Africa and amounted to £6.4 million in 2004–05 (quoted on an FOB basis). Exports are much smaller, amounting to £0.2 million in 2004–05. Exports are mainly fish and coffee; philatelic sales were £0.06 million in 2004–05. The limited number of visiting tourists spent about £0.4 million in 2004–05, representing a contribution to GDP of 3%.
Public expenditure rose from £10 million in 2001–02 to £12 million in 2005–06 and to £28 million in 2012–13. The contribution of UK budgetary aid to total SHG government expenditure rose from £4.6 million to £6.4 million and then to £12.1 million over the same period. Wages and salaries represent about 38% of recurrent expenditure.
Unemployment levels are low (31 individuals in 2013, compared to 50 in 2004 and 342 in 1998). Employment is dominated by the public sector, although the number of government positions has fallen from 1,142 in 2006 to just over 800 in 2013. Saint Helena's private sector employs approximately 45% of the employed labour force and is largely dominated by small and micro businesses, with 218 private businesses employing 886 people in 2004.
Household survey results suggest the percentage of households spending less than £20 per week on a per capita basis fell from 27% to 8% between 2000 and 2004, implying a decline in income poverty. Nevertheless, 22% of the population claimed social security benefit in 2006/7, most of them aged over 60, a sector that represents 20% of the population.
Banking and currency
In 1821, Saul Solomon issued 70,560 copper tokens worth a halfpenny each, inscribed "Payable at St Helena by Solomon, Dickson and Taylor" – presumably London partners – that circulated alongside the East India Company's local coinage until the Crown took over the island in 1836. The coin remains readily available to collectors.
Today Saint Helena has its own currency, the Saint Helena pound, which is at parity with the pound sterling. The government of Saint Helena produces its own coinage and banknotes. The Bank of Saint Helena was established on Saint Helena and Ascension Island in 2004. It has branches in Jamestown on Saint Helena and Georgetown on Ascension Island, and it took over the business of the St Helena Government Savings Bank and the Ascension Island Savings Bank.
For more information on currency in the wider region, see Sterling currency in the South Atlantic and the Antarctic.
Transport
Image: RMS St Helena in James Bay.
Image: Looking back at the island from the RMS St Helena.
Saint Helena is one of the most remote islands in the world, and has one commercial airport.
Sea
The ship RMS St Helena runs between Saint Helena and Cape Town on a five-day voyage, also visiting Ascension Island and Walvis Bay, and occasionally voyaging north to Tenerife and Portland, UK. It berths in James Bay, Saint Helena, approximately 30 times per year. The RMS St Helena was due for decommissioning in 2010, but its service life has been extended to July 2017.
Air
Image: St Helena Airport terminal under construction in 2014
After a long period of rumour and consultation, the British government announced plans to construct an airport in Saint Helena in March 2005. The airport was expected to be completed by 2010. However, the approved bidder, the Italian firm Impregilo, was not chosen until 2008, and then the project was put on hold in November 2008, allegedly due to new financial pressures brought on by the financial crisis of 2007–08. By January 2009, construction had not commenced and no final contracts had been signed. Governor Andrew Gurr departed for London in an attempt to speed up the process and solve the problems.
On 22 July 2010, the British government agreed to help pay for the new airport. In November 2011, a new deal was signed between the British government and South African civil engineering company Basil Read, and the airport was scheduled to open in February 2016 with flights to and from South Africa and the UK. In March 2015, South African airline Comair became the preferred bidder to provide weekly air service between the island and Johannesburg, starting from 2016.
The first aircraft landed at the new airport on 15 September 2015, a South African Beechcraft King Air 200, prior to conducting a series of flights to calibrate the airport's radio navigation equipment.Rosenberg, Zach. "Tiny, Remote St. Helena Gets Its First Airport" Air & Space/Smithsonian, 18 September 2015. Accessed: 26 September 2015.
The first helicopter landing at the new airfield was conducted by the Wildcat HMA.2 ZZ377 from 825 Squadron 201 Flight, embarked on visiting HMS Lancaster on 23 October 2015.
The airport's opening was due in May 2016, but it was announced in June 2016 that it had been delayed indefinitely due to high winds and wind shear.
Local
A minibus offers a basic service to carry people around Saint Helena, with most services designed to take people into Jamestown for a few hours on weekdays to conduct their business. Car hire is available for visitors.
Media and communications
Radio
Radio St Helena started operations on Christmas Day 1967, and provided a local radio service that had a range of about from the island, and also broadcast internationally on shortwave radio (11092.5 kHz) on one day a year. The station presented news, features, and music in collaboration with its sister newspaper the St Helena Herald. It closed on 25 December 2012 to make way for a new three-channel FM service, also funded by St. Helena Government and run by the South Atlantic Media Services (formerly St. Helena Broadcasting (Guarantee) Corporation).
South Atlantic Media Services Ltd. (SAMS; http://www.sams.sh) provides three radio channels to St Helena. SAMS Radio 1 is a music and entertainment channel; SAMS Radio 2 is a relay of the BBC World Service; and SAMS Radio 3 broadcasts continuous music. SAMS also produces a weekly newspaper, The Sentinel, and a weekly TV news broadcast.
Saint FM provided a local radio service for the island which was also available on internet radio and relayed in Ascension Island. The station was not government-funded. It was launched in January 2005 and closed on 21 December 2012. It broadcast news, features, and music in collaboration with its sister newspaper the St Helena Independent (which continues).
Saint FM Community Radio took over the radio channels vacated by Saint FM and launched on 10 March 2013. The station operates as a limited-by-guarantee company owned by its members, and is registered as a fund-raising Association. Membership is open to everyone and grants access to a live audio stream.
Occasional amateur radio operations also occur on the island. The ITU prefix used is ZD7.
Online
St Helena Online is a not-for-profit internet news service run from the UK by a former print and BBC journalist, working in partnership with Saint FM and the St Helena Independent.
St Helena Local offers a news service and online user forum offering information about St Helena. This website is run from overseas but is open to contribution from anyone who has an interest in St Helena.
Saint Helena Island Info is an online resource featuring the history of St. Helena from its discovery to the present day, plus photographs and information about life on St. Helena today.
Saint Helena Government is the official mouthpiece of the island's governing body. It includes news, information for potential visitors and investors, as well as official press releases and pages from the major government departments.
Saint Helena Tourism is a website aimed squarely at the tourist trade with advice on accommodation, transport, food and drink, events and the like.
Saint Helena Islands Property Finder - St Helena online accommodation offering self-catering, bed and breakfasts, hotels and property news.
Television
Sure South Atlantic Ltd ("Sure") offers television for the island via 17 analogue terrestrial UHF channels, offering a mix of British, US, and South African programming. The channels are from DSTV and include Mnet, SuperSport, and BBC channels. The feed signal from MultiChoice DStv in South Africa is received by a satellite dish at Bryant's Beacon from Intelsat 7 in the Ku band.
South Atlantic Media Services Ltd. (SAMS) produces a weekly TV news broadcast, Newsbyte, which is also published on YouTube.
Telecommunications
Sure provides the telecommunications service in the territory through a digital copper-based telephone network, including an ADSL broadband service. In August 2011 the first fibre-optic link was installed on the island, connecting the television reception antennas at Bryant's Beacon to the Cable & Wireless Technical Centre in the Briars.
A satellite ground station with a satellite dish installed in 1989"Cable & Wireless Carries out Major Mechanical Maintenance" The St Helena Independent Volume 1, Issue 37 Friday 21 July 2006, p. 8 at The Briars is the only international connection, providing satellite links through Intelsat 707 to Ascension Island and the United Kingdom. Since all international telephone and internet communications rely on this single satellite link, both internet and telephone services are subject to sun outages.
Saint Helena has the international calling code +290 which, since 2006, Tristan da Cunha has shared. Saint Helena telephone numbers changed from 4 to 5 digits on 1 October 2013 by being prefixed with the digit "2", i.e. 2xxxx, with the range 5xxxx being reserved for mobile numbering, and 8xxx being used for Tristan da Cunha numbers (these are still shown as 4 digits).www.itu.int
Mobile telephony was due to start operating on the island by late 2015.
Internet
Saint Helena was granted the use of .sh as its own Internet country code top-level domain (ccTLD). This is formally shared with Ascension Island and Tristan da Cunha, the other parts of the British Overseas Territory. Registrations of internationalized domain names are also accepted under this TLD so, for example, the German federal state of Schleswig-Holstein uses the .sh domain for some quasi-governmental sites..SH IDN Policy, NIC, Saint Helena. In practice several sites dedicated to aspects of life on Saint Helena are run from elsewhere in the world and so use other TLDs, such as the Saint Helena website, which is based in Sweden.
Saint Helena has a 10/3.6 Mbit/s internet link via Intelsat 707 provided by Sure. Serving a population of more than 4,000, this single satellite link is considered inadequate in terms of bandwidth.
ADSL broadband service is provided with maximum speeds of up to 1,536 kbit/s downstream and 512 kbit/s upstream, offered on contract levels ranging from "lite" at £16 per month to "gold+" at £190 per month.http://www.sure.co.sh/downloads/BroadbandPackages.pdf There are a few public WiFi hotspots in Jamestown, which are also operated by Sure (formerly Cable & Wireless).
The South Atlantic Express, a submarine communications cable connecting Africa to South America, run by the undersea fibre-optic provider eFive, will pass St Helena relatively closely. Originally there were no plans to land the cable and install a landing station ashore, which could supply St Helena's population with sufficient bandwidth to fully benefit from today's information society. In January 2012, a group of supporters petitioned the UK government to meet the cost of landing the cable at St Helena. On 6 October 2012, eFive agreed to reroute the cable through St. Helena after a successful lobbying campaign by A Human Right, a San Francisco-based NGO working on initiatives to ensure all people are connected to the internet. Islanders have sought the assistance of the UK Department for International Development and Foreign and Commonwealth Office in funding the £10m required to bridge the connection from a local junction box on the cable to the island. The UK government has announced that a review of the island's economy would be required before such funding would be agreed.
Local newspapers
The island has two local newspapers, both of which are available on the internet. The St Helena Independent has been published since November 2005. The Sentinel newspaper was introduced in 2012.
Culture and society
Education
Education is free and compulsory between the ages of 5 and 16. The island has three primary schools for students aged 4 to 11: Harford, Pilling, and St Paul's. Prince Andrew School provides secondary education for students aged 11 to 18. At the beginning of the academic year 2009–10, 230 students were enrolled in primary school and 286 in secondary school.
The Education and Employment Directorate also offers programmes for students with special needs, vocational training, adult education, evening classes, and distance learning. The island has a public library (the oldest in the Southern Hemisphere) and a mobile library service which operates weekly in rural areas.
The English national curriculum is adapted for local use. A range of qualifications are offered – from GCSE, A/S and A2, to Level 3 Diplomas and VRQ qualifications:
A/S & A2 and Level 3 Diploma
Business Studies
English
English Literature
Geography
ICT
Psychology
Maths
Accountancy
VRQ
Building and Construction
Automotive Studies
Saint Helena has no tertiary education. Scholarships are offered for students to study abroad.
Sport
Sports played on the island include football, cricket, volleyball, tennis, golf, motocross, shooting and sailing. Saint Helena has sent teams to a number of Commonwealth Games. Saint Helena is a member of the International Island Games Association.Island Games St Helena profile The Saint Helena cricket team made its debut in international cricket in Division Three of the African region of the World Cricket League in 2011.
The Governor's Cup is a yacht race between Cape Town and Saint Helena island, held every two years in December/January; the most recent event was in December 2010. In Jamestown a timed run takes place up Jacob's Ladder every year, with people coming from all over the world to take part.
Scouting
There are Scouting and Guiding Groups on Saint Helena and Ascension Island. Scouting was established on Saint Helena island in 1912.ScoutBaseUK A Scouting Timeline Lord and Lady Baden-Powell visited the Scouts on Saint Helena on the return from their 1937 tour of Africa. The visit is described in Lord Baden-Powell's book entitled African Adventures.
Notable people from St. Helena
Namesake
St Helena, the suburb of Melbourne, Victoria, Australia was named after the island.
See also
List of islands
Saint Helena manatee
Outline of Saint Helena
Saint Helena Police Service
Healthcare on Saint Helena
References
Further reading
Aptroot, Andre. Lichens of St Helena, Pisces Publications, Newbury, UK, 2012, ISBN 9781874357537
Brooke, T. H., A History of the Island of St Helena from its Discovery by the Portuguese to the Year 1806, Printed for Black, Parry and Kingsbury, London, 1808
Bruce, I. T., Thomas Bruce: St Helena Postmaster and Stamp Designer, Thirty years of St Helena, Ascension and Tristan Philately, pp 7–10, 2006, ISBN 1-890454-37-0
Cannan, Edward Churches of the South Atlantic Islands 1502–1991 ISBN 0-904614-48-4
Chaplin, Arnold, A St Helena's Who's Who or a Directory of the Island During the Captivity of Napoleon, published by the author in 1914. This has recently been republished under the title Napoleon’s Captivity on St Helena 1815–1821, Savannah Paperback Classics, 2002, ISBN 1-902366-12-3
Clements, B.; "St Helena:South Atlantic Fortress"; Fort, (Fortress Study Group), 2007 (35), pp. 75–90
Crallan, Hugh, Island of St Helena, Listing and Preservation of Buildings of Architectural and Historic Interest, 1974
Cross, Tony St Helena including Ascension Island and Tristan Da Cunha ISBN 0-7153-8075-3
Dampier, William, Piracy, Turtles & Flying Foxes, 2007, Penguin Books, 2007, pp 99–104, ISBN 0-14-102541-7
Darwin, Charles, Geological Observations on the Volcanic Islands, Chapter 4, Smith, Elder & Co., London, 1844.
Denholm, Ken, South Atlantic Haven, a Maritime History for the Island of St Helena, published and printed by the Education Department of the Government of St Helena
Duncan, Francis, A Description of the Island Of St Helena Containing Observations on its Singular Structure and Formation and an Account of its Climate, Natural History, and Inhabitants, London, Printed For R Phillips, 6 Bridge Street, Blackfriars, 1805
Eriksen, Ronnie, St Helena Lifeline, Mallet & Bell Publications, Norfolk, 1994, ISBN 0-620-15055-6
Evans, Dorothy, Schooling in the South Atlantic Islands 1661–1992, Anthony Nelson, 1994, ISBN 0-904614-51-4
George, Barbara B. St Helena — the Chinese Connection (2002) ISBN 0189948922
Gosse, Philip Saint Helena, 1502–1938 ISBN 0-904614-39-5
Hakluyt, The Principal Navigations Voyages Traffiques & Discoveries of the English Nation, from the Prosperous Voyage of M. Thomas Candish esquire into the South Sea, and so around about the circumference of the whole earth, begun in the yere 1586, and finished 1588, 1598–1600, Volume XI.
Hibbert, Edward, St Helena Postal History and Stamps, Robson Lowe Limited, London, 1979
Hearl, Trevor W., St Helena Britannica: Studies in South Atlantic Island History (ed. A.H. Schulenburg), Friends of St Helena, London, 2013
Holmes, Rachel, Scanty Particulars: The Scandalous Life and Astonishing Secret of James Barry, Queen Victoria's Most Eminent Military Doctor, Viking Press, 2002, ISBN 0-375-5055-63
Jackson, E. L. St Helena: The Historic Island, Ward, Lock & Co, London, 1903
Janisch, Hudson Ralph, Extracts from the St Helena Records, Printed and Published at the "Guardian" Office by Benjamin Grant, St Helena, 1885
Keneally, Tom, Napoleon's Last Island, ISBN 978 0 85798 460 9, Penguin Random House Australia, 2015
Kitching, G. C., A Handbook of St Helena Including a short History of the island Under the Crown
Lambdon, Phil. Flowering plants and ferns of St Helena, Pisces Publications, Newbury, UK, 2012, ISBN 9781874357520
Melliss, John C. M., St Helena: A Physical, Historical and Topographical Description of the Island Including Geology, Fauna, Flora and Meteorology, L. Reeve & Co, London, 1875
Schulenburg, A. H., 'St Helena Historiography, Philately, and the "Castella" Controversy', South Atlantic Chronicle: The Journal of the St Helena, Ascension and Tristan da Cunha Philatelic Society, Vol. XXIII, No.3, pp. 3–6, 1999
Schulenburg, A.H., '"Island of the Blessed": Eden, Arcadia and the Picturesque in the Textualizing of St Helena', Journal of Historical Geography, Vol.29, No.4 (2003), pp. 535–53
Schulenburg, A.H., 'St Helena: British Local History in the Context of Empire', The Local Historian, Vol.28, No.2 (1998), pp. 108–122
Shine, Ian, Serendipity in St Helena, a Genetical and Medical Study of an isolated Community, Pergamon Press, Oxford, 1970 ISBN 0-08-012794-0
Smallman, David L., Quincentenary, a Story of St Helena, 1502–2002 ISBN 1-872229-47-6
Van Linschoten, Iohn Huighen, His Discours of Voyages into ye Easte & West Indies, Wolfe, London, 1598
Weider, Ben & Hapgood, David The Murder of Napoleon (1999) ISBN 1-58348-150-8
Wigginton, Martin. Mosses and liverworts of St Helena, Pisces Publications, Newbury, UK, 2012, ISBN 9781874357-51-3
External links
The Official Government Website of Saint Helena
The Official Website for St Helena Tourism
Saint Helena Island Information website
St Helena Association (UK)
Friends of St Helena – supporting St Helena and providing information about the island since 1988
Radio Saint FM (live broadcasting from Saint Helena)
The Saint Helena Virtual Library and Archive
Saint Helena Travel Guide from Travellerspoint.
The first website on St Helena — since 1995
The St Helena Institute – Dedicated to St Helena and Dependencies research since 1997
BBC News: Life on one of the world's most remote islands
Main sites, habitations and occupants of the island during Napoleon's captivity
South Atlantic news, in association with the Saint Helena Independent
St Helena's online rental accommodation and property finder
Seale, Robert F. (1834) The geognosy of the island St. Helena, illustrated in a series of views, plans and sections. London: Achermann and Co. – digital facsimile from the Linda Hall Library
Isolated Islands: St. Helena (2014), Globe Trekker (Travel Documentary)
Category:Islands of Saint Helena, Ascension and Tristan da Cunha
Category:Islands of the South Atlantic Ocean
Category:Remote islands
Category:West Africa
Category:Islands of British Overseas Territories
Category:States and territories established in 1659
Category:1659 establishments in Africa
Category:1659 establishments in the British Empire
Category:Former British colonies and protectorates in Africa
Category:English-speaking countries and territories | 26,945 | 2017-01 |
Oklahoma City | Oklahoma City is the capital and largest city of the state of Oklahoma. The county seat of Oklahoma County, the city ranks 27th among United States cities in population. The population grew following the 2010 Census, with the population estimated to have increased to 631,346 as of July 2015. As of 2015, the Oklahoma City metropolitan area had a population of 1,358,452, and the Oklahoma City-Shawnee Combined Statistical Area had a population of 1,459,758 residents (Chamber of Commerce figure), making it Oklahoma's largest metropolitan area.
Oklahoma City's city limits extend into Canadian, Cleveland, and Pottawatomie counties, though much of the area outside the core Oklahoma County section is suburban or rural (watershed). The city ranks as the eighth-largest city in the United States by land area (including consolidated city-counties; it is the largest city in the United States by land area whose government is not consolidated with that of a county or borough).
Oklahoma City, lying in the Great Plains region, features one of the largest livestock markets in the world.Knapp, Adam. Stockyards City district at About.com (Retrieved April 29, 2010) Oil, natural gas, petroleum products and related industries are the largest sector of the local economy. The city is situated in the middle of an active oil field and oil derricks dot the capitol grounds. The federal government employs large numbers of workers at Tinker Air Force Base and the United States Department of Transportation's Mike Monroney Aeronautical Center (these two sites house several offices of the Federal Aviation Administration and the Transportation Department's Enterprise Service Center, respectively).
Oklahoma City is on the I-35 Corridor and is one of the primary travel corridors into neighboring Texas and Mexico. Located in the Frontier Country region of the state, the city's northeast section lies in an ecological region known as the Cross Timbers. The city was founded during the Land Run of 1889, and grew to a population of over 10,000 within hours of its founding. The city was the scene of the April 19, 1995 bombing of the Alfred P. Murrah Federal Building, in which 168 people died. It was the deadliest terror attack in the history of the United States until the attacks of September 11, 2001, and remains the deadliest act of domestic terrorism in U.S. history.
Since the time weather records have been kept, Oklahoma City has been struck by thirteen violent tornadoes: eleven F/EF4s and two F/EF5s.
History
Image: Map of Indian Territory (Oklahoma) 1889, showing Oklahoma as a train stop on a railroad line. Britannica 9th ed.
Oklahoma City was settled on April 22, 1889,http://digital.library.okstate.edu/encyclopedia/entries/l/la014.html when the area known as the "Unassigned Lands" was opened for settlement in an event known as "The Land Run".Wilson, Linda D. "Oklahoma City", Encyclopedia of Oklahoma History and Culture. Retrieved January 26, 2010. Some 10,000 homesteaders settled the area that would become the capital of Oklahoma. The town grew quickly; the population doubled between 1890 and 1900.Wilson. Encyclopedia of Oklahoma History and Culture Early leaders of the development of the city included Anton Classen, John Shartel, Henry Overholser and James W. Maney.
Image: Lithograph of Oklahoma City from 1890.
By the time Oklahoma was admitted to the Union in 1907, Oklahoma City had surpassed Guthrie, the territorial capital, as the population center and commercial hub of the new state. Soon after, the capital was moved from Guthrie to Oklahoma City.Curtis, Gene. "Only in Oklahoma: State capital location was a fight to the finish", Tulsa World. Retrieved February 4, 2010. Oklahoma City was a major stop on Route 66 during the early part of the 20th century; it was prominently mentioned in Bobby Troup's 1946 jazz classic, "(Get Your Kicks on) Route 66", later made famous by artist Nat King Cole.
Before World War II, Oklahoma City developed major stockyards, attracting jobs and revenue formerly in Chicago and Omaha, Nebraska. With the 1928 discovery of oil within the city limits (including under the State Capitol), Oklahoma City became a major center of oil production.Oklahoma Oil: Past, Present and Future Post-war growth accompanied the construction of the Interstate Highway System, which made Oklahoma City a major interchange as the convergence of I-35, I-40 and I-44. It was also aided by federal development of Tinker Air Force Base.
In 1950, the Census Bureau reported the city's population as 8.6% black and 90.7% white.
Patience Latting was elected Mayor of Oklahoma City in 1971, becoming the city's first female mayor. Latting was also the first woman to serve as mayor of a U.S. city with over 350,000 residents.
Oklahoma City National Memorial at Christmas.
As with many other American cities, the center-city population declined in the 1970s and 1980s as families followed newly constructed highways to newer housing in nearby suburbs. Urban renewal projects in the 1970s, including the Pei Plan, removed many older historic structures but failed to spark much new development, leaving the city dotted with vacant lots used for parking. A notable exception was the city's construction of the Myriad Gardens and Crystal Bridge, a botanical garden and modernistic conservatory in the heart of downtown. Architecturally significant historic buildings lost to these clearances included the Criterion Theater, the Baum Building, the Hales Building,Lackmeyer and Money, pp. 20, 42. and the Biltmore Hotel.
In 1993, the city passed a massive redevelopment package known as the Metropolitan Area Projects (MAPS), intended to rebuild the city's core with civic projects that would bring more activity and life to downtown. The city added a new baseball park; a central library; renovations to the civic center, convention center and fairgrounds; and a water canal in the Bricktown entertainment district. Water taxis transport passengers within the district, adding color and activity along the canal. MAPS has become one of the most successful public-private partnerships undertaken in the U.S., exceeding $3 billion in private investment as of 2010.Metropolitan Area Projects, Greater Oklahoma City Chamber. Retrieved February 5, 2010. As a result of MAPS, the population living in downtown housing has increased sharply, along with demand for additional residential and retail amenities such as groceries, services, and shops.
Since the MAPS projects' completion, the downtown area has seen continued development. Several downtown buildings are undergoing renovation/restoration. Notable among these was the restoration of the Skirvin Hotel in 2007. The famed First National Center is being renovated.
Residents of Oklahoma City suffered substantial losses on April 19, 1995 when Timothy McVeigh detonated a bomb in front of the Murrah building. The building was destroyed (the remnants of which had to be imploded in a controlled demolition later that year), more than 100 nearby buildings suffered severe damage, and 168 people were killed. The site has been commemorated as the Oklahoma City National Memorial and Museum. Since its opening in 2000, over three million people have visited. Every year on April 19, survivors, families and friends return to the memorial to read the names of each person lost. On June 11, 2001, McVeigh was executed by lethal injection.
The "Core-to-Shore" project was created to relocate I-40 one mile (1.6 km) south and replace it with a boulevard to create a landscaped entrance to the city. This also allows the central portion of the city to expand south and connect with the shore of the Oklahoma River. Several elements of "Core to Shore" were included in the MAPS 3 proposal approved by voters in late 2009.
Geography
Mid-May 2006 photograph of Oklahoma City taken from the International Space Station (ISS).
Oklahoma City lies along one of the primary corridors into Texas and Mexico, and is a three-hour drive from the Dallas-Fort Worth metropolitan area. The city is located in the Frontier Country region in the center of the state, making it an ideal location for state government.
According to the United States Census Bureau, the city has a total area of , of which is land and is water. The total area is 3.09 percent water.
Oklahoma City lies in the Sandstone Hills region of Oklahoma, known for hills of 250 to and two species of oak: blackjack oak (Quercus marilandica) and post oak (Q. stellata).Oklahoma Geography, NetState.com . Retrieved February 4, 2010. The northeastern part of the city and its eastern suburbs fall into an ecological region known as the Cross Timbers.
The city is roughly bisected by the North Canadian River (recently renamed the Oklahoma River inside city limits). The North Canadian once had sufficient flow to flood every year, wreaking destruction on surrounding areas, including the central business district and the original Oklahoma City Zoo.History of the Oklahoma City Zoo, Oklahoma City Life Web site. Retrieved February 5, 2010. In the 1940s, a dam was built on the river to control flooding, which reduced its level.Elmias Thomas Collection Projects Series, University of Oklahoma. Retrieved February 5, 2010. In the 1990s, as part of the citywide revitalization project known as MAPS, the city built a series of low-water dams, returning water to the portion of the river flowing near downtown.2008 Oklahoma River, City of Oklahoma City. Retrieved February 4, 2010. The city has three large lakes: Lake Hefner and Lake Overholser, in the northwestern quarter of the city; and the largest, Lake Stanley Draper, in the sparsely populated far southeast portion of the city.
The population density normally reported for Oklahoma City, which uses the full area of its city limits, can be misleading. Its urbanized zone covers roughly , resulting in a density of about 2,500 per square mile (2013 est.), compared with the larger rural watershed areas incorporated by the city, which cover the remaining of the city limits.American Fact Finder Table GCT-PH1 retrieved on July 17, 2008
Oklahoma City is one of the largest cities in the nation in compliance with the Clean Air Act.About, Modern Transit Project. Retrieved February 5, 2010.
Devon Energy Center, tallest building in the state.
Tallest buildings
Rank | Building | Height | Floors | Built
1 | Devon Energy Center | | 50 | 2012
2 | Chase Tower | | 36 | 1971
3 | First National Center | | 33 | 1931
4 | City Place Tower | | 33 | 1931
5 | Oklahoma Tower | | 31 | 1982
6 | SandRidge Center | | 30 | 1973
7 | Valliance Bank Tower | | 22 | 1984
8 | Bank of Oklahoma Plaza | | 16 | 1972
9 | AT&T Building | | 16 | 1928
10 | Leadership Square North | | 22 | 1984
Neighborhoods
Automobile Alley in Oklahoma City.
Looking up in the heart of Oklahoma City's Central Business District.
Oklahoma City neighborhoods are extremely varied; pin-neat affluent historic neighborhoods sit next to districts that have not wholly recovered from the economic and social decline of the 1970s and 1980s.
The city is bisected geographically and culturally by the North Canadian River, which essentially divides North Oklahoma City and South Oklahoma City. The two halves of the city were actually founded and platted as separate cities, but soon grew together. The north side is characterized by very diverse and fashionable urban neighborhoods near the city center and sprawling suburbs further north. South Oklahoma City is generally more blue-collar and significantly more industrial, having grown up around the Stockyards and meat-packing plants at the turn of the 20th century, and is currently the center of the city's rapidly growing Latino community.
Downtown Oklahoma City, which has 7,600 residents, is currently seeing an influx of new private investment and large-scale public works projects, which have helped to resuscitate a central business district left almost deserted by the Oil Bust of the early 1980s. The centerpiece of downtown is the newly renovated Crystal Bridge and Myriad Botanical Gardens, one of the few elements of the Pei Plan to be completed. In the next few years a new central park will link the gardens near the CBD, and the new convention center to be built just south of it, to the North Canadian River as part of a major works project known as Core to Shore; the new park is part of MAPS3, a collection of civic projects funded by a temporary (seven-year) 1-cent sales tax increase.
Climate
Oklahoma City has a humid subtropical climate (Köppen: Cfa), with frequent variations in weather daily and seasonally, except during the consistently hot and humid summer months. Prolonged and severe droughts (sometimes leading to wildfires in the vicinity) as well as very heavy rainfall leading to flash flooding and flooding occur with some regularity. Consistent winds, usually from the south or south-southeast during the summer, help temper the hotter weather. Consistent northerly winds during the winter can intensify cold periods. Severe ice storms and snowstorms happen sporadically during the winter.
The average temperature is , with the monthly daily average ranging from in January to in July. Extremes range from on February 12, 1899 to on August 11, 1936 and August 3, 2012;"Climatological averages and records" NWS Norman, Oklahoma. Retrieved August 22, 2012. the last sub-zero (°F) reading was on February 10, 2011. Temperatures reach on 10.4 days of the year, on nearly 70 days, and fail to rise above freezing on 8.3 days. The city receives about of precipitation annually, of which is snow.
The report Regional Climate Trends and Scenarios for the U.S. National Climate Assessment (NCA), published by NOAA in 2013, projects that parts of the Great Plains region can expect an increase of up to 30% in extreme precipitation days (defined as days receiving more than one inch of rainfall) by mid-century, under the high emissions scenario based on CMIP3 and NARCCAP models.
Extreme weather
Oklahoma City has a very active severe weather season from March through June, especially during April and May. Being in the center of what is colloquially referred to as Tornado Alley, it is prone to especially frequent and severe tornadoes, as well as very severe hailstorms and occasional derechos. Tornadoes have occurred in every month of the year, and a secondary, smaller peak also occurs during autumn, especially October. The Oklahoma City metropolitan area is one of the most tornado-prone major cities in the world, with about 150 tornadoes striking within the city limits since 1890. Since weather records have been kept, Oklahoma City has been struck by thirteen violent tornadoes, eleven F/EF4s and two F/EF5s. On May 3, 1999, parts of southern Oklahoma City and nearby suburban communities suffered one of the most powerful tornadoes on record, an F5 on the Fujita scale, with wind speeds estimated by radar at . On May 20, 2013, far southwest Oklahoma City, along with Newcastle and Moore, was hit again by an EF5 tornado; it was wide and killed 23 people. Less than two weeks later, on May 31, another outbreak affected the Oklahoma City area, including an EF1 and an EF0 within the city and an EF3 tornado several miles west of the city that was in width, the widest tornado ever recorded.
With 19.48 inches of rainfall, May 2015 was by far Oklahoma City's wettest month since record keeping began in 1890. Across Oklahoma and Texas generally, there was record flooding in the latter part of the month.
Demographics
According to the 2010 census, the racial composition of Oklahoma City was as follows:
White: 62.7% (56.7% Non-Hispanic White)
Black or African American: 15.1%
Native American: 3.5%
Asian: 4.0% (1.7% Vietnamese, 0.7% Indian)
Native Hawaiian and Other Pacific Islander: 0.1%
Some other race: 9.4%
Two or more races: 5.2%
Hispanic or Latino (of any race): 17.2% (14.2% Mexican, 0.7% Guatemalan)
As of the 2010 census, there were 579,999 people, 230,233 households, and 144,120 families residing in the city. The population density was 956.4 inhabitants per square mile (321.9/km²). There were 256,930 housing units at an average density of 375.9 per square mile (145.1/km²).
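As an illustrative cross-check of these figures (a back-of-the-envelope calculation, not part of the census source), dividing the reported population by the reported density gives the implied land area:

\[ \text{land area} \approx \frac{579{,}999\ \text{people}}{956.4\ \text{people per sq mi}} \approx 606\ \text{sq mi} \approx 1{,}570\ \text{km}^2. \]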
There were 230,233 households, 29.4% of which had children under the age of 18 living with them, 43.4% were married couples living together, 13.9% had a female householder with no husband present, and 37.4% were non-families. One-person households accounted for 30.5% of all households, and 8.7% of all households had someone living alone who was 65 years of age or older. The average household size was 2.47 and the average family size was 3.11.
The median income for a household in the city was $48,557 and the median income for a family was $62,527. The per capita income for the city was $26,208. 17.1% of the population and 12.4% of families were below the poverty line. Out of the total population, 23.0% of those under the age of 18 and 9.2% of those 65 and older were living below the poverty line.
In the 2000 Census, Oklahoma City's age composition was 25.5% under the age of 18, 10.7% from 18 to 24, 30.8% from 25 to 44, 21.5% from 45 to 64, and 11.5% who were 65 years of age or older. The median age was 34 years. For every 100 females there were 95.6 males. For every 100 females age 18 and over, there were 92.7 males.
Oklahoma City has experienced significant population increases since the late 1990s. Since the official Census in 2000, Oklahoma City has grown 25 percent (a 125,214 raw increase) according to the Bureau estimates. The 2015 estimate of 631,346 is the largest population Oklahoma City has ever recorded. It is the first city in the state to record a population greater than 600,000 residents and the largest municipal population of the Great Plains region (Oklahoma, Kansas, Nebraska, South Dakota, North Dakota).
Racial composition | 2010 | 1990 | 1970 | 1940
White | 62.7% | 74.8% | 84.0% | 90.4%
White, Non-Hispanic | 56.7% | 72.9% | 82.2% (from 15% sample) | n/a
Black or African American | 15.1% | 16.0% | 13.7% | 9.5%
Native American | 3.5% | 4.2% | 2.0% | 0.1%
Hispanic or Latino (of any race) | 17.2% | 5.0% | 2.0% | n/a
Asian | 4.0% | 2.4% | 0.2% | –
Metropolitan statistical area
Old Interstate 40 Crosstown, Oklahoma City
Oklahoma City is the principal city of the eight-county Oklahoma City Metropolitan Statistical Area in Central Oklahoma and is the state's largest urbanized area. As of 2015, the metropolitan area was the 41st largest in the nation based on population.
Crime
Law enforcement claims that Oklahoma City has traditionally been the territory of the notorious Juárez Cartel, but the Sinaloa Cartel has been reported as trying to establish a foothold in Oklahoma City. There are many rival gangs in Oklahoma City; one of them, the Southside Locos, traditionally known as Sureños, has established its headquarters in the city.
Oklahoma City has also had its share of brutal crimes, particularly in the 1970s. The worst occurred in 1978, when six employees of a Sirloin Stockade restaurant on the city's south side were murdered execution-style in the restaurant's freezer. An intensive investigation followed, and the three individuals involved, who also killed three others in Purcell, Oklahoma, were identified. One, Harold Stafford, died in a motorcycle accident in Tulsa not long after the restaurant murders. Another, Verna Stafford, was sentenced to life without parole after being granted a new trial; she had previously been sentenced to death. Roger Dale Stafford, considered the mastermind of the murder spree, was executed by lethal injection at the Oklahoma State Penitentiary in 1995.
The Oklahoma City Police Department has a uniformed force of 1,169 officers and more than 300 civilian employees. The department has a central police station and five substations covering 2,500 police reporting districts that average a quarter of a square mile in size.
The Murrah Federal Building after the attack.
On April 19, 1995, the Alfred P. Murrah Federal Building was destroyed by a fertilizer bomb manufactured and detonated by Timothy McVeigh. The blast and catastrophic collapse killed 168 people and injured over 680. The blast shockwave destroyed or damaged 324 buildings within a 340-meter radius, destroyed or burned 86 cars, and shattered glass in 258 nearby buildings, causing an estimated $652 million in damage. McVeigh was executed by lethal injection on June 11, 2001. It was the deadliest single domestic terrorist attack in U.S. history prior to 9/11.
Economy
The Sonic Drive-In restaurant chain is headquartered in Oklahoma City.
The economy of Oklahoma City, once just a regional power center of government and energy exploration, has since diversified to include the sectors of information technology, services, health services and administration. The city is headquarters to two Fortune 500 companies, Chesapeake Energy Corporation and Devon Energy Corporation, as well as being home to Love's Travel Stops & Country Stores, which is ranked thirteenth on Forbes' list of private companies.
As of July 2014, the top fifteen employers in the city were (with the number of employees in parentheses):
State of Oklahoma (46,900)
Mike Monroney Aeronautical Center (7,500)
Integris Health (6,000)
City of Oklahoma City (4,840)
University of Oklahoma Health Sciences Center (5,000)
Hobby Lobby Stores (5,100)
Chesapeake Energy Corporation (3,500)
Mercy Health Center (4,300)
OG+E Energy Corp (3,400)
Devon Energy Corporation (3,200)
OU Medical Center (3,200)
SSM Health Care of Oklahoma, Inc. (3,100)
AT&T (3,000)
Sonic Corp. (2,000)
LSB Industries, Inc. (1,880)
While not in Oklahoma City proper, other large employers within the MSA region include: Tinker Air Force Base (27,000); University of Oklahoma (11,900); University of Central Oklahoma (2,900); and Norman Regional Hospital (2,800).
Other major corporations with a large presence (over 1000 employees) in Oklahoma City include: Dell, The Hertz Corporation, United Parcel Service, Farmers Insurance Group, Great Plains Coca-Cola Bottling Company, Cox Communications, The Boeing Company, Deaconess Hospital, Johnson Controls, MidFirst Bank, American Fidelity Assurance, Rose State College, and Continental Resources."Oklahoma City: Economy, City-Data.com. . Retrieved January 26, 2010.
According to the Oklahoma City Chamber of Commerce, the metropolitan area's economic output grew by 33 percent between 2001 and 2005, due chiefly to economic diversification. Its gross metropolitan product was $43.1 billion in 2005"City area enjoys increase in jobs" NewsOK.com (Retrieved May 1, 2010) and grew to $61.1 billion in 2009.Bureau of Economic Analysis By 2016 the GDP had grown to $73.8 billion.US Mayors study
In 2008, Forbes magazine named Oklahoma City the most "recession proof city in America". The magazine reported that the city had falling unemployment, one of the strongest housing markets in the country and solid growth in energy, agriculture and manufacturing. However, during the early 1980s, Oklahoma City had one of the worst job and housing markets due to the bankruptcy of Penn Square Bank in 1982 and then the post-1985 crash in oil prices.
In 2013, Forbes ranked Oklahoma City No. 8 on its list of the Best Places for Business and Careers.
In 2014, Forbes ranked Oklahoma City No. 7 on its list of Best Places for Business.
Business districts
Business districts, and to a lesser extent neighborhoods, tend to maintain their boundaries and character through the application of zoning regulations and business improvement districts (districts where property owners agree to a property tax surcharge to support additional services for the community). Historic districts and other special zoning districts, including overlay districts, are established through zoning regulations. Oklahoma City currently has three business improvement districts, including one encompassing the central business district.
Culture
Museums and theaters
Water taxis in Oklahoma City's downtown Bricktown neighborhood.
The Donald W. Reynolds Visual Arts Center is the new downtown home for the Oklahoma City Museum of Art. The museum features visiting exhibits, original selections from its own collection, a theater showing a variety of foreign, independent, and classic films each week, and a restaurant. OKCMOA is also home to the most comprehensive collection of Chihuly glass in the world, including the 55-foot Eleanor Blake Kirkpatrick Memorial Tower in the museum's atrium.Dale Chihuly: The Exhibition | Oklahoma City Museum of Art The art deco Civic Center Music Hall, which was fully renovated in 2001, hosts performances by the Oklahoma City Ballet, the Oklahoma City Opera, and the Oklahoma City Philharmonic, as well as various concerts and traveling Broadway shows.
The Survivor Tree on the grounds of the Oklahoma City National Memorial.
Other theaters include Lyric Theatre, Jewel Box Theatre, Kirkpatrick Auditorium, the Poteet Theatre, the Oklahoma City Community College Bruce Owen Theater and the 488-seat Petree Recital Hall, on the Oklahoma City University campus. The university also opened the Wanda L. Bass School of Music and auditorium in April 2006.
The Science Museum Oklahoma (formerly Kirkpatrick Science and Air Space Museum at Omniplex) houses exhibits on science and aviation, as well as an IMAX theater. The museum formerly housed the International Photography Hall of Fame (IPHF), which exhibits photographs and artifacts from a large collection of cameras and other items preserving the history of photography. IPHF honors those who have made significant contributions to the art and/or science of photography; it relocated to St. Louis, Missouri in 2013.
The Museum of Osteology houses more than 300 real animal skeletons. Focusing on the form and function of the skeletal system, this museum displays hundreds of skulls and skeletons from all corners of the world. Exhibits include adaptation, locomotion, classification and diversity of the vertebrate kingdom. The Museum of Osteology is the only one of its kind in America.
The National Cowboy & Western Heritage Museum has galleries of western art and is home to the Hall of Great Western Performers. The city will also be home to the American Indian Cultural Center and Museum, which began construction in 2009 on the south side of Interstate 40, southeast of Bricktown, although completion of the facility has been held up by insufficient funding.
The Oklahoma City National Memorial in the northern part of Oklahoma City's downtown was created, as the inscription on its eastern gate reads, "to honor the victims, survivors, rescuers, and all who were changed forever on April 19, 1995"; the memorial was built on the land formerly occupied by the Alfred P. Murrah Federal Building complex prior to its 1995 bombing. The outdoor Symbolic Memorial can be visited 24 hours a day for free, and the adjacent Memorial Museum, located in the former Journal Record building damaged by the bombing, can be entered for a small fee. The site is also home to the National Memorial Institute for the Prevention of Terrorism, a non-partisan, nonprofit think tank devoted to the prevention of terrorism.
The American Banjo Museum, located in the Bricktown entertainment district, is dedicated to preserving and promoting the music and heritage of the banjo. Its collection is valued at $3.5 million, and an interpretive exhibit traces the evolution of the banjo from its roots in American slavery, to bluegrass, to folk and to world music.
The Oklahoma History Center is the history museum of the state of Oklahoma. Located across the street from the governor's mansion at 800 Nazih Zuhdi Drive in northeast Oklahoma City, the museum opened in 2005 and is operated by the Oklahoma Historical Society. It preserves the history of Oklahoma from the prehistoric to the present day.
Sports
Chickasaw Bricktown Ballpark, home of the Oklahoma City Dodgers and the Big 12 Baseball Tournament.
Oklahoma City is home to several professional sports teams, including the Oklahoma City Thunder of the National Basketball Association. The Thunder is the city's second "permanent" major professional sports franchise after the now-defunct AFL Oklahoma Wranglers and is the third major-league team to call the city home when considering the temporary hosting of the New Orleans/Oklahoma City Hornets for the 2005–06 and 2006–07 NBA seasons.
Other professional sports clubs in Oklahoma City include the Oklahoma City Dodgers, the Triple-A affiliate of the Los Angeles Dodgers; Oklahoma City Energy FC of the United Soccer League; Rayo OKC of the North American Soccer League (NASL); and the Crusaders of Oklahoma Rugby Football Club of USA Rugby. The Oklahoma City Blazers, a name used for decades by the city's Central Hockey League team, have been reborn as a Junior A team playing in the Western States Hockey League.
Chesapeake Energy Arena in downtown is the principal multipurpose arena in the city which hosts concerts, NHL exhibition games, and many of the city's pro sports teams. In 2008, the Oklahoma City Thunder became the major tenant. Located nearby in Bricktown, the Chickasaw Bricktown Ballpark is the home to the city's baseball team, the Dodgers. "The Brick", as it is locally known, is considered one of the finest minor league parks in the nation.
Oklahoma City is the annual host of the Big 12 Baseball Tournament, the World Cup of Softball, and the NCAA Women's College World Series. The city hosted the first and second rounds of the 2005 NCAA Men's Basketball Tournament and the Big 12 Men's and Women's Basketball Tournaments in 2007 and 2009. The major universities in the area – University of Oklahoma, Oklahoma City University, and Oklahoma State University – often schedule major basketball games and other sporting events at Chesapeake Energy Arena and Chickasaw Bricktown Ballpark, although most home games are played at their campus stadiums.
Other major sporting events include Thoroughbred and Quarter horse racing circuits at Remington Park and numerous horse shows and equine events that take place at the state fairgrounds each year. There are numerous golf courses and country clubs spread around the city.
High school football
The state of Oklahoma hosts a highly competitive high school football culture, with many teams in the Oklahoma City metropolitan area. The Oklahoma Secondary School Activities Association (OSSAA) organizes high school football into eight distinct classes based on the size of school enrollment. Beginning with the largest, the classes are: 6A, 5A, 4A, 3A, 2A, A, B, and C. Class 6A is broken into two divisions. Oklahoma City area schools in this division include: Edmond North, Mustang, Moore, Yukon, Edmond Memorial, Edmond Santa Fe, Norman North, Westmoore, Southmoore, Putnam City North, Norman, Putnam City, Putnam City West, U.S. Grant, Midwest City.http://www.ossaaonline.com/docs/2013-14/Football/FB_1415_1516_Classifications-3.pdf
Chesapeake Energy Arena, home of the NBA's Oklahoma City Thunder.
Thunder
Wearing away colors, NBA superstar Kevin Durant of the Thunder dunks against the Washington Wizards on March 14, 2011.
The Oklahoma City Thunder of the National Basketball Association (NBA) has called Oklahoma City home since the 2008–09 season, when owner Clay Bennett relocated the franchise from Seattle, Washington. The Thunder plays home games at the Chesapeake Energy Arena in downtown Oklahoma City, known affectionately in the national media as 'the Peake' and 'Loud City'. The Thunder is known by several nicknames, including "OKC Thunder" and simply "OKC", and its mascot is Rumble the Bison.
In its second season after arriving in Oklahoma City in 2008–09, the Thunder posted its first 50-win season and secured the eighth seed in the 2010 NBA Playoffs, winning two games in the first round against the Los Angeles Lakers. In 2012, Oklahoma City made it to the NBA Finals, but lost to the Miami Heat in five games. In 2013 the Thunder reached the Western Conference semi-finals without All-Star guard Russell Westbrook, who was injured in their first-round series against the Houston Rockets, only to lose to the Memphis Grizzlies. In 2014 Oklahoma City again reached the NBA's Western Conference Finals but eventually lost to the San Antonio Spurs in six games.
The Oklahoma City Thunder has been regarded by sports analysts as one of the elite franchises of the NBA's Western Conference and as a media darling viewed as the future of the league. Oklahoma City has earned Northwest Division titles every year since 2009 and has consistently improved its win record, reaching 59 wins in 2014. The Thunder is led by second-year head coach Billy Donovan and is anchored by All-Star point guard Russell Westbrook.
Hornets
In the aftermath of Hurricane Katrina, the NBA's New Orleans Hornets (now the New Orleans Pelicans) temporarily relocated to the Ford Center, playing the majority of its home games there during the 2005–06 and 2006–07 seasons. The team became the first NBA franchise to play regular-season games in the state of Oklahoma. The team was known as the New Orleans/Oklahoma City Hornets while playing in Oklahoma City.
The team ultimately returned to New Orleans full-time for the 2007–08 season. The Hornets played their final home game in Oklahoma City during the exhibition season on October 9, 2007 against the Houston Rockets.
2010–11 Oklahoma City Barons
Current metro area pro-teams
Club | Sport | League | Stadium
Oklahoma City Thunder | Basketball | National Basketball Association | Chesapeake Energy Arena
Oklahoma City Blue | Basketball | NBA Development League | Cox Convention Center
Oklahoma City Dodgers | Baseball | Pacific Coast League (AAA) | Chickasaw Bricktown Ballpark
Oklahoma City Energy | Men's Soccer | United Soccer League (Div. 2) | Taft Stadium
Oklahoma City Football Club | Women's Soccer | Women's Premier Soccer League | Stars Field
Parks and recreation
Sunset over Lake Hefner in northwest Oklahoma City.
Myriad Botanical Gardens, the centerpiece of downtown OKC.
One of the more prominent landmarks downtown is the Crystal Bridge at the Myriad Botanical Gardens, a large downtown urban park. Designed by I. M. Pei, the Crystal Bridge is a tropical conservatory. The park has an amphitheater, known as the Water Stage. In 2007, following a renovation of the stage, Oklahoma Shakespeare in the Park relocated to the Myriad Gardens. The Myriad Gardens will undergo a massive renovation in conjunction with the recently built Devon Tower directly north of it.
The Oklahoma City Zoo and Botanical Garden is home to numerous natural habitats, WPA era architecture and landscaping, and hosts major touring concerts during the summer at its amphitheater. Oklahoma City also has two amusement parks, Frontier City theme park and White Water Bay water park. Frontier City is an 'Old West'-themed amusement park. The park also features a recreation of a western gunfight at the 'OK Corral' and many shops that line the "Western" town's main street. Frontier City also hosts a national concert circuit at its amphitheater during the summer. Oklahoma City also has a combination racetrack and casino open year-round, Remington Park, which hosts both Quarter horse (March – June) and Thoroughbred (August – December) seasons.
Walking trails line Lake Hefner and Lake Overholser in the northwest part of the city and downtown at the canal and the Oklahoma River. The majority of the east shore area is taken up by parks and trails, including a new leashless dog park and the postwar-era Stars and Stripes Park. Lake Stanley Draper is the city's largest and most remote lake.
Oklahoma City has a major park in each quadrant of the city, dating back to the first parks master plan. Will Rogers Park, Lincoln Park, Trosper Park, and Woodson Park were once connected by the Grand Boulevard loop, some sections of which no longer exist. Martin Park Nature Center is a natural habitat in far northwest Oklahoma City. Will Rogers Park is home to the Lycan Conservatory, the Rose Garden, and Butterfly Garden, all built in the WPA era. Oklahoma City is also home to the American Banjo Museum, which houses a large collection of highly decorated banjos from the early 20th century and exhibits on the history of the banjo and its place in American history. Concerts and lectures are also held there.
In April 2005, the Oklahoma City Skate Park at Wiley Post Park was renamed the Mat Hoffman Action Sports Park to recognize Mat Hoffman, an Oklahoma City area resident and businessman who was instrumental in the design of the skate park and is a 10-time BMX World Vert champion. In March 2009, the Mat Hoffman Action Sports Park was named by the National Geographic Society Travel Guide as one of the "Ten Best."
Government
Oklahoma State Capitol seen from the OK History Center.
Oklahoma City Civic Center, including the art deco city hall building.
The City of Oklahoma City has operated under a council-manager form of city government since 1927."Mayor and Council", City of Oklahoma City. Retrieved January 27, 2010. Mick Cornett serves as Mayor, having first been elected in 2004, and re-elected in 2006, 2010, and 2014. Eight councilpersons represent each of the eight wards of Oklahoma City. City Manager Jim Couch was appointed in late 2000. Couch previously served as assistant city manager, Metropolitan Area Projects Plan (MAPS) director and utilities director prior to his service as city manager.
The city has called on residents to vote for sales tax-based projects to revitalize parts of the city. The Bricktown district is the best example of such an initiative. In the recent MAPS 3 vote, the city's fraternal order of police criticized the project proposals for not doing enough to expand the police presence to keep up with the growing residential population and increased commercial activity. In September 2013, Oklahoma City area attorney David Slane announced that he would pursue legal action regarding MAPS3, on claims that the multiple projects that made up the plan violate a state constitutional law limiting voter ballot issues to a single subject.Oklahoma City responds to David Slane's challenge of MAPS-3, KOKH-TV, September 3, 2013.
Education
Higher education
OU Health Sciences Center in Oklahoma City.
The city is home to several colleges and universities. Oklahoma City University, formerly known as Epworth University, was founded by the United Methodist Church on September 1, 1904 and is renowned for its performing arts, science, mass communications, business, law, and athletic programs. OCU has its main campus in the north-central section of the city, near the city's Chinatown area. OCU Law is located in the Midtown district near downtown, in the old Central High School building.
The University of Oklahoma has several institutions of higher learning in the city and metropolitan area, with OU Medicine and the University of Oklahoma Health Sciences Center campuses located east of downtown in the Oklahoma Health Center district, and the main campus located to the south in the suburb of Norman. OU Medicine hosts the state's only Level One trauma center. OU Health Sciences Center is one of the nation's largest independent medical centers, employing more than 12,000 people.OU Medical Center OU is one of only four major universities in the nation to operate six medical schools.
The third-largest university in the state, the University of Central Oklahoma, is located just north of the city in the suburb of Edmond. Oklahoma Christian University, one of the state's private liberal arts institutions, is located just south of the Edmond border, inside the Oklahoma City limits.
Park on the campus of the OU Health Sciences Center.
Oklahoma City Community College in south Oklahoma City is the second-largest community college in the state. Rose State College is located east of Oklahoma City in suburban Midwest City. Oklahoma State University–Oklahoma City is located in the "Furniture District" on the Westside. Northeast of the city is Langston University, the state's historically black college (HBCU). Langston also has an urban campus in the eastside section of the city. Southern Nazarene University, which was founded by the Church of the Nazarene, is a university located in suburban Bethany, which is surrounded by the Oklahoma City city limits.
Although technically not a university, the FAA's Mike Monroney Aeronautical Center has many aspects of an institution of higher learning. Its FAA Academy is accredited by the North Central Association of Colleges and Schools. Its Civil Aerospace Medical Institute (CAMI) has a medical education division responsible for aeromedical education in general as well as the education of aviation medical examiners in the U.S. and 93 other countries. In addition, the National Academy of Sciences offers Research Associateship Programs providing fellowships and other grants for CAMI research.
Primary and secondary
Bishop McGuinness Catholic High School.
Oklahoma City is home to the state's largest school district, Oklahoma City Public Schools. The district's Classen School of Advanced Studies and Harding Charter Preparatory High School rank high among public schools nationally according to a formula that looks at the number of Advanced Placement, International Baccalaureate and/or Cambridge tests taken by the school's students divided by the number of graduating seniors.The Top of the Class 2008, Newsweek, May 17, 2008. (Retrieved April 28, 2010). In addition, OKCPS's Belle Isle Enterprise Middle School was named the top middle school in the state according to the Academic Performance Index, and received the Blue Ribbon School Award in 2004 and again in 2011.Belle Isle Enterprise Middle School (Retrieved January 26, 2010). KIPP Reach College Preparatory School in Oklahoma City received the 2012 National Blue Ribbon, and its school leader, Tracy McDaniel Sr., was awarded the Terrel H. Bell Award for Outstanding Leadership.
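As a simple illustration of the ranking formula described above (the published rankings may weight or adjust it differently), the index for a school can be written as

\[ \text{index} = \frac{N_{\text{AP}} + N_{\text{IB}} + N_{\text{Cambridge}}}{N_{\text{graduating seniors}}}, \]

where the numerator counts tests taken by the school's students in a year; for example, a hypothetical school whose students take 300 such tests and graduates 150 seniors would score 2.0.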
The Oklahoma School of Science and Mathematics, a school for some of the state's most gifted math and science pupils, is also located in Oklahoma City.
Due to Oklahoma City's explosive growth, parts of several suburban districts spill into the city, including Putnam City School District in the northwest, Moore Public Schools in the south, and Mid-Del School District in the southeast. The city has a number of private and parochial schools. Casady School and Heritage Hall School are both examples of private college preparatory schools with rigorous academics that rank among the top in Oklahoma. Providence Hall is a Protestant school. Two prominent schools of the Archdiocese of Oklahoma City are Bishop McGuinness High School and Mount Saint Mary High School. Other private schools include Crossings Christian School.
CareerTech
Oklahoma City has several public career and technology education schools associated with the Oklahoma Department of Career and Technology Education, the largest of which are Metro Technology Center and Francis Tuttle Technology Center.
Private career and technology education schools in Oklahoma City include Oklahoma Technology Institute, Platt College, Vatterott College, and Heritage College. The Dale Rogers Training Center in Oklahoma City is a nonprofit vocational training center for individuals with disabilities.
Media
Print
The Oklahoman is Oklahoma City's major daily newspaper and is the most widely circulated in the state. NewsOK.com is the Oklahoman's online presence. Oklahoma Gazette is Oklahoma City's independent newsweekly, featuring such staples as local commentary, feature stories, restaurant reviews, movie listings, and music and entertainment coverage. The Journal Record is the city's daily business newspaper and okcBIZ is a monthly publication that covers business news affecting those who live and work in Central Oklahoma.
There are numerous community and international newspapers locally that cater to the city's ethnic mosaic, such as The Black Chronicle, headquartered in the Eastside, the OK VIETIMES and Oklahoma Chinese Times, located in the Asia District, and various Hispanic community publications. The Campus is the student newspaper at Oklahoma City University. Gay publications include The Gayly Oklahoman.
An upscale lifestyle publication called Slice Magazine is circulated throughout the metropolitan area. In addition, there is a magazine published by Back40 Design Group called The Edmond Outlook. It contains local commentary and human interest pieces direct-mailed to over 50,000 Edmond residents.
Broadcast
Oklahoma City was home to several pioneers in radio and television broadcasting. Oklahoma City's WKY Radio was the first radio station transmitting west of the Mississippi River and the third radio station in the United States.Oklahoma Fast Facts and Trivia. Retrieved January 26, 2009. WKY received its federal license in 1921 and has continually broadcast under the same call letters since 1922. In 1928, WKY was purchased by E.K. Gaylord's Oklahoma Publishing Company and affiliated with the NBC Red Network; in 1949, WKY-TV (channel 4) went on the air and later became the first independently owned television station in the U.S. to broadcast in color. In mid-2002, WKY radio was purchased outright by Citadel Broadcasting, which was bought out by Cumulus Broadcasting in 2011. The Gaylord family had earlier sold WKY-TV in 1976, and it has since gone through a succession of owners (what is now KFOR-TV has been owned by Tribune Broadcasting since December 2013).
The major U.S. broadcast television networks have affiliates in the Oklahoma City market (ranked 41st for television by Nielsen and 48th for radio by Arbitron, covering a 34-county area serving the central, northern-central and west-central sections of Oklahoma); including NBC affiliate KFOR-TV (channel 4), ABC affiliate KOCO-TV (channel 5), CBS affiliate KWTV-DT (channel 9, the flagship of locally based Griffin Communications), PBS station KETA-TV (channel 13, the flagship of the state-run OETA member network), Fox affiliate KOKH-TV (channel 25), CW affiliate KOCB (channel 34), independent station KAUT-TV (channel 43), MyNetworkTV affiliate KSBI-TV (channel 52), and Ion Television owned-and-operated station KOPX-TV (channel 62). The market is also home to several religious stations including TBN owned-and-operated station KTBO-TV (channel 14) and Norman-based Daystar owned-and-operated station KOCM (channel 46).
Despite the market's geographical size, none of the English-language commercial affiliates in the Oklahoma City designated market area operate full-power satellite stations serving the far northwest part of the state (requiring cable or satellite to view them), though KFOR-TV, KOCO-TV, KWTV-DT and KOKH-TV each operate low-power translators in that portion of the market. Oklahoma City is one of the few markets located between Chicago and Dallas to have affiliates of two or more of the major Spanish-language broadcast networks: Telemundo affiliate KTUZ-TV (channel 30), Woodward-based Univision affiliate KUOK 35 (whose translator KUOK-CD, channel 36, serves the immediate Oklahoma City area), Azteca affiliate KOHC-CD (channel 45) and Estrella TV affiliate KOCY-LP (channel 48).
Infrastructure
Fire department
Oklahoma City is protected by the Oklahoma City Fire Department (OKCFD), which employs 1,015 paid, professional firefighters. The current Chief of Department is G. Keith Bryant; the department is also commanded by three Deputy Chiefs, who – along with the department chief – oversee the Operational Services, Prevention Services, and Support Services bureaus. The OKCFD currently operates out of 37 fire stations, located throughout the city in six battalions. The OKCFD also operates a fire apparatus fleet of 36 engines (including 30 paramedic engines), 13 ladders, 16 brush patrol units, six water tankers, two hazardous materials units, one Technical Rescue Unit, one Air Supply Unit, six Arson Investigation Units, and one Rehabilitation Unit. Each engine is staffed with a driver, an officer, and one to two firefighters, while each ladder company is staffed with a driver, an officer, and one firefighter. Minimum staffing per shift is 213 personnel. The Oklahoma City Fire Department responds to over 70,000 emergency calls annually.City of Oklahoma City | Fire Department. Okc.gov. Retrieved on July 21, 2013.http://www.okc.gov/fire/fire_report.pdf
Transportation
Highway
Skydance Bridge crossing the newly opened Interstate 40 in Oklahoma City.
Oklahoma City is an integral point on the United States Interstate Network, with three major interstate highways – Interstate 35, Interstate 40, and Interstate 44 – bisecting the city. Interstate 240 connects Interstate 40 and Interstate 44 in south Oklahoma City, while Interstate 235 spurs from Interstate 44 in north-central Oklahoma City into downtown.
Major state expressways through the city include Lake Hefner Parkway (SH-74), the Kilpatrick Turnpike, Airport Road (SH-152), and Broadway Extension (US-77) which continues from I-235 connecting Central Oklahoma City to Edmond. Lake Hefner Parkway runs through northwest Oklahoma City, while Airport Road runs through southwest Oklahoma City and leads to Will Rogers World Airport. The Kilpatrick Turnpike loops around north and west Oklahoma City.
Oklahoma City also has several major national and state highways within its city limits. Shields Boulevard (US-77) continues from E.K. Gaylord Boulevard in downtown Oklahoma City and runs south eventually connecting to I-35 near the suburb of Moore. Northwest Expressway (Oklahoma State Highway 3) runs from North Classen Boulevard in north-central Oklahoma City to the northwestern suburbs.
Air
Oklahoma City is served by two primary airports, Will Rogers World Airport and the much smaller Wiley Post Airport (incidentally, the two honorees died in the same plane crash in Alaska)"Wiley Post ", U.S. Centennial of Flight Commission. Retrieved February 1, 2010. Will Rogers World Airport is the state's busiest commercial airport, with over 3.6 million passengers annually.Current Statistics, Will Rogers World Airport . Retrieved February 1, 2010. Tinker Air Force Base, in southeast Oklahoma City, is the largest military air depot in the nation; a major maintenance and deployment facility for the Navy and the Air Force, and the second largest military institution in the state (after Fort Sill in Lawton).
United Airlines Boeing 737 aircraft at the East Concourse of Will Rogers World Airport.
Rail and bus
Amtrak has a railway station downtown, with daily service to Fort Worth and the nation's rail network via the Heartland Flyer. Oklahoma City once was the crossroads of several interstate passenger railroads, but service at that level has long since been discontinued. Freight service is provided by BNSF and Union Pacific. Greyhound and several other intercity bus companies serve Oklahoma City at the Union Bus Station in downtown.
Public transit
Embark (formerly METRO Transit) is the city's public transit company. The main transfer terminal is located downtown at NW 5th Street and Hudson Avenue. Embark maintains limited coverage of the city's main street grid using a hub-and-spoke system from the main terminal, making many journeys impractical due to the rather small number of bus routes offered and the fact that most trips require a transfer downtown. The city has recognized transit as a major issue for the rapidly growing and urbanizing city and has initiated several studies in recent times to improve upon the existing bus system, starting with a plan known as the Fixed Guideway Study.Oklahoma Fixed Guideway Study (Retrieved April 21, 2010) This study identified several potential commuter transit routes from the suburbs into downtown OKC as well as feeder-line bus and/or rail routes throughout the city.
Riders prepare to board the Amtrak Heartland Flyer.
Though Oklahoma City currently has no light rail or commuter rail service, city residents have identified improved transit as one of their top priorities, and on the basis of the Fixed Guideway and other studies city leaders strongly desire to incorporate urban rail transit into the region's future transportation plans. The greater Oklahoma City metropolitan transit plan identified by the Fixed Guideway Study includes a streetcar in the downtown section that would be fed by enhanced city bus service and commuter rail from the suburbs, including Edmond, Norman, and Midwest City. There is a significant push for a commuter rail line connecting downtown OKC with the eastern suburbs of Del City, Midwest City, and Tinker Air Force Base. In addition to commuter rail, a short heritage rail line that would run from Bricktown, just a few blocks from the Amtrak station, to the Adventure District in northeast Oklahoma City is currently under reconstruction.
In December 2009, Oklahoma City voters passed MAPS 3, a $777 million (seven-year, 1-cent tax) initiative, which will include funding (approximately $130 million) for a modern streetcar line in downtown Oklahoma City and the establishment of a transit hub. It is believed the streetcar would begin construction in 2014 and be in operation around 2017.
On September 10, 2013, the federal government announced Oklahoma City would receive a $13.8 million grant from the US Department of Transportation's TIGER program. This is the first grant Oklahoma City has ever received for a rail-based initiative and is seen by city leaders as something of a turning point, as the city had previously applied for such grants only to be repeatedly denied. It is believed the city will use the TIGER grant, along with approximately $10 million from the MAPS 3 Transit budget, to revitalize the city's Amtrak station as an Intermodal Transportation Hub, taking over the role of the existing transit hub at NW 5th/Hudson Ave.
Walkability
A 2013 study by Walk Score ranked Oklahoma City 43rd most walkable out of the 50 largest U.S. cities.
Health
OU Physicians Center.
Oklahoma City and the surrounding metropolitan area are home to a number of health care facilities and specialty hospitals. The state's oldest and largest single-site hospital, St. Anthony Hospital and Physicians Medical Center, is located in Oklahoma City's MidTown district near downtown.
OU Medicine, an academic medical institution located on the campus of The University of Oklahoma Health Sciences Center, is home to OU Medical Center. OU Medicine operates Oklahoma's only level-one trauma center at the OU Medical Center and the state's only level-one trauma center for children at Children's Hospital at OU Medicine, both of which are located in the Oklahoma Health Center district. Other medical facilities operated by OU Medicine include OU Physicians and OU Children's Physicians, the OU College of Medicine, the Oklahoma Cancer Center and OU Medical Center Edmond, the latter being located in the northern suburb of Edmond.
INTEGRIS Baptist Medical Center, Oklahoma City, Oklahoma.
INTEGRIS Health owns several hospitals, including INTEGRIS Baptist Medical Center, the INTEGRIS Cancer Institute of Oklahoma,INTEGRIS Cancer Institute of Oklahoma. and the INTEGRIS Southwest Medical Center. INTEGRIS Health operates hospitals, rehabilitation centers, physician clinics, mental health facilities, independent living centers and home health agencies located throughout much of Oklahoma. INTEGRIS Baptist Medical Center was named in U.S. News & World Report's 2012 list of Best Hospitals. INTEGRIS Baptist Medical Center was ranked as high-performing in the following categories: Cardiology and Heart Surgery; Diabetes and Endocrinology; Ear, Nose and Throat; Gastroenterology; Geriatrics; Nephrology; Orthopedics; Pulmonology and Urology.
The Midwest Regional Medical Center is located in the suburb of Midwest City; other major hospitals in the city include the Oklahoma Heart Hospital and the Mercy Health Center. There are 347 physicians for every 100,000 people in the city.Best Places to Live in Oklahoma City, Oklahoma – Health (Retrieved May 6, 2010).
In the American College of Sports Medicine's annual ranking of the United States' 50 most populous metropolitan areas on the basis of community health, Oklahoma City took last place in 2010, falling five places from its 2009 rank of 45. The ACSM's report, published as part of its American Fitness Index program, cited, among other things, the poor diet of residents, low levels of physical fitness, higher incidences of obesity, diabetes, and cardiovascular disease than the national average, low access to recreational facilities like swimming pools and baseball diamonds, the paucity of parks and low investment by the city in their development, the high percentage of households below the poverty level, and the lack of state-mandated physical education curriculum as contributing factors.
Notable people
Sister cities
Oklahoma City has seven sister cities, as designated by Sister Cities International:http://sister-cities.org/interactive-map/Oklahoma%20City,%20Oklahoma Sister Cities International (Oklahoma City)
Haikou, China
Puebla, Mexico
Rio de Janeiro, Brazil
Tainan, Taiwan
Taipei, Taiwan
Ulyanovsk, Russia
Kigali, Rwanda
See also
Coyle v. Smith
History of Oklahoma
List of mayors of Oklahoma City
Notes
References
External links
Official City Website
Oklahoma City tourism information
Convention & Visitors' Bureau
City-Data page
Oklahoma City Historic Film Row District Website
New York Times travel article about Oklahoma City
OKC.NET Cultural commentary about Oklahoma City
Voices of Oklahoma interview with Ron Norick, mayor during the Oklahoma City bombing
Category:1889 establishments in Indian Territory
Category:Oklahoma City metropolitan area
Category:Cities in Canadian County, Oklahoma
Category:Cities in Oklahoma
Category:Cities in Cleveland County, Oklahoma
Category:Cities in Oklahoma County, Oklahoma
Category:Cities in Pottawatomie County, Oklahoma
Category:County seats in Oklahoma
Category:Populated places established in 1889 | 57,848 | 2017-01 |
Korean War | The Korean War (in South Korean, "Korean War"; in North Korean, "Fatherland Liberation War"; 25 June 1950 – 27 July 1953) began when North Korea invaded South Korea. The United Nations, with the United States as the principal force, came to the aid of South Korea. China came to the aid of North Korea, and the Soviet Union gave some assistance.
Korea was ruled by Japan from 1910 until the closing days of World War II. In August 1945, the Soviet Union declared war on Japan, as a result of an agreement with the United States, and liberated Korea north of the 38th parallel. U.S. forces subsequently moved into the south. By 1948, as a product of the Cold War between the Soviet Union and the United States, Korea was split into two regions, with separate governments. Both governments claimed to be the legitimate government of all of Korea, and neither side accepted the border as permanent. The conflict escalated into open warfare when North Korean forces—supported by the Soviet Union and China—moved into the south on 25 June 1950. On that day, the United Nations Security Council recognized this North Korean act as invasion and called for an immediate ceasefire.Derek W. Bowett, United Nations Forces: A Legal Study of United Nations Practice, Stevens, London, 1964, pp.29–60 On 27 June, the Security Council adopted S/RES/83: Complaint of aggression upon the Republic of Korea and decided the formation and dispatch of the UN Forces in Korea. Twenty-one countries of the United Nations eventually contributed to the UN force, with the United States providing 88% of the UN's military personnel.
After the first two months of the conflict, South Korean forces were on the point of defeat, forced back to the Pusan Perimeter. In September 1950, an amphibious UN counter-offensive was launched at Inchon, and cut off many of the North Korean troops. Those that escaped envelopment and capture were rapidly forced back north all the way to the border with China at the Yalu River, or into the mountainous interior. At this point, in October 1950, Chinese forces crossed the Yalu and entered the war. Chinese intervention triggered a retreat of UN forces which continued until mid-1951.
After these reversals of fortune, which saw Seoul change hands four times, the last two years of conflict became a war of attrition, with the front line close to the 38th parallel. The war in the air, however, was never a stalemate. North Korea was subject to a massive bombing campaign. Jet fighters confronted each other in air-to-air combat for the first time in history, and Soviet pilots covertly flew in defense of their communist allies.
The fighting ended on 27 July 1953, when an armistice was signed. The agreement created the Korean Demilitarized Zone to separate North and South Korea, and allowed the return of prisoners. However, no peace treaty has been signed, and the two Koreas are technically still at war. Periodic clashes, many of which are deadly, have continued to the present.
Names
In the U.S., the war was initially described by President Harry S. Truman as a "police action" as it was an undeclared military action, conducted under the auspices of the United Nations. It has been referred to in the Anglosphere as "The Forgotten War" or "The Unknown War" because of the lack of public attention it received both during and after the war, and in relation to the global scale of World War II, which preceded it, and the subsequent angst of the Vietnam War, which succeeded it.
In South Korea, the war is usually referred to as "625" or the "6–2–5 Upheaval" (yook-i-o dongnan), reflecting the date of its commencement on 25 June.
In North Korea, the war is officially referred to as the "Fatherland Liberation War" (Choguk haebang chǒnjaeng) or alternatively the "Chosǒn [Korean] War" (Chosǒn chǒnjaeng).
In China, the war is officially called the "War to Resist U.S. Aggression and Aid Korea", although the term "Chaoxian (Korean) War" is also used in unofficial contexts, along with the term "Korean Conflict", which is more common in regions such as Hong Kong and Macau.
Background
Imperial Japanese rule (1910–1945)
Japan destroyed the influence of China over Korea in the First Sino-Japanese War (1894–95), ushering in the short-lived Korean Empire. A decade later, after defeating Imperial Russia in the Russo-Japanese War (1904–05), Japan made Korea its protectorate with the Eulsa Treaty in 1905, then annexed it with the Japan–Korea Annexation Treaty in 1910.
Many Korean nationalists fled the country. A Provisional Government of the Republic of Korea was founded in 1919 in Nationalist China. It failed to achieve international recognition, failed to unite nationalist groups, and had a fractious relationship with its American-based founding President, Syngman Rhee. From 1919 to 1925 and beyond, Korean communists led internal and external warfare against the Japanese.
Korea was considered part of the Empire of Japan as an industrialized colony, along with Taiwan, and both were part of the Greater East Asia Co-Prosperity Sphere. In 1937, the colonial Governor-General, General Jirō Minami, ordered the attempted cultural assimilation of Korea's 23.5 million people by banning the use and study of the Korean language, literature, and culture, and mandating the use and study of their Japanese counterparts instead. Starting in 1939, the populace was required to use Japanese names under the Sōshi-kaimei policy. Conscription of Koreans for labor in war industries began in 1939, with as many as 2 million Koreans conscripted into either the Japanese Army or the Japanese labor force.
In China, the Nationalist National Revolutionary Army and the communist People's Liberation Army helped organize Korean refugees against the Japanese military, which had also occupied parts of China. The Nationalist-backed Koreans, led by Yi Pom-Sok, fought in the Burma Campaign (December 1941 – August 1945). The communists, led by Kim Il-sung among others, fought the Japanese in Korea and Manchuria.
At the Cairo Conference in November 1943, China, the United Kingdom, and the United States all agreed that "in due course Korea shall become free and independent".
Soviet-Japanese War (1945)
At the Tehran Conference in November 1943 and the Yalta Conference in February 1945, the Soviet Union promised to join its allies in the Pacific War within three months of the victory in Europe. Accordingly, it declared war on Japan on 9 August 1945. By 10 August, the Red Army had begun to occupy the northern part of the Korean peninsula.
On the night of 10 August in Washington, American Colonels Dean Rusk and Charles H. Bonesteel III were tasked with dividing the Korean Peninsula into Soviet and U.S. occupation zones and proposed the 38th parallel. This was incorporated into America's General Order No. 1, issued in response to the Japanese surrender on 15 August. Explaining the choice of the 38th parallel, Rusk observed, "even though it was further north than could be realistically reached by U.S. forces, in the event of Soviet disagreement ... we felt it important to include the capital of Korea in the area of responsibility of American troops". He noted that he was "faced with the scarcity of US forces immediately available, and time and space factors, which would make it difficult to reach very far north, before Soviet troops could enter the area". As Rusk's comments indicate, the Americans doubted whether the Soviet government would agree to this. Stalin, however, maintained his wartime policy of co-operation, and on 16 August the Red Army halted at the 38th parallel for three weeks to await the arrival of U.S. forces in the south.
Korea divided (1945–1949)
[Image: U.S. troops in Korea, September 1945.]
On 8 September 1945, U.S. Lt. Gen. John R. Hodge arrived in Incheon to accept the Japanese surrender south of the 38th parallel. Appointed as military governor, General Hodge directly controlled South Korea as head of the United States Army Military Government in Korea (USAMGIK 1945–48). He attempted to establish control by restoring Japanese colonial administrators to power, but in the face of Korean protests he quickly reversed this decision. The USAMGIK refused to recognize the provisional government of the short-lived People's Republic of Korea (PRK) due to its suspected Communist sympathies.
In December 1945, Korea was administered by a U.S.-Soviet Union Joint Commission, as agreed at the Moscow Conference, with the aim of granting independence after a five-year trusteeship. The idea was not popular among Koreans and riots broke out. To contain them, the USAMGIK banned strikes on 8 December 1945 and outlawed the PRK Revolutionary Government and the PRK People's Committees on 12 December 1945.
[Image: South Korean citizens protest Allied trusteeship in December 1945.]
The right-wing Representative Democratic Council, led by Syngman Rhee, who had arrived with the U.S. military, opposed the trusteeship, arguing that Korea had already suffered foreign occupation for far too long. General Hodge began to distance himself from the proposal, even though it had originated with his government.
On 23 September 1946, an 8,000-strong railroad worker strike began in Pusan. Civil disorder spread throughout the country in what became known as the Autumn uprising. On 1 October 1946, Korean police killed three students in the Daegu Uprising; protesters counter-attacked, killing 38 policemen. On 3 October, some 10,000 people attacked the Yeongcheon police station, killing three policemen and injuring some 40 more; elsewhere, some 20 landlords and pro-Japanese South Korean officials were killed. The USAMGIK declared martial law.
Citing the inability of the Joint Commission to make progress, the U.S. government decided to hold an election under United Nations auspices with the aim of creating an independent Korea. The Soviet authorities and the Korean Communists refused to co-operate on the grounds it would not be fair, and many South Korean politicians boycotted it. A general election was held in the South on 10 May 1948. It was marred by political violence and sabotage resulting in 600 deaths. North Korea held parliamentary elections three months later on 25 August.
[Image: Jeju residents awaiting execution in May 1948.]
The resultant South Korean government promulgated a national political constitution on 17 July 1948, and elected Syngman Rhee as President on 20 July 1948. The Republic of Korea (South Korea) was established on 15 August 1948. In the Russian Korean Zone of Occupation, the Soviet Union established a communist government led by Kim Il-sung. President Rhee's régime excluded communists and leftists from southern politics. Disenfranchised, they headed for the hills, to prepare for guerrilla war against the US-sponsored ROK government.
Meanwhile, on 3 April 1948, what began as a demonstration commemorating Korean resistance to Japanese rule ended with the Jeju uprising, in which between 14,000 and 60,000 people died (according to Chalmers Johnson, the death toll is 14,000–30,000). South Korean Army soldiers carried out large-scale atrocities during the suppression of the uprising. In October 1948, some South Korean soldiers mutinied against the clampdown in the Yeosu-Suncheon Rebellion.
The Soviet Union withdrew as agreed from Korea in 1948, and U.S. troops withdrew in 1949. On 24 December 1949, South Korean forces killed 86 to 88 people in the Mungyeong massacre and blamed the crime on marauding communist bands. By early 1950, Syngman Rhee had about 30,000 alleged communists in jails and about 300,000 suspected sympathizers enrolled in the Bodo League re-education movement.
Chinese Civil War (1945–1949)
With the end of the war with Japan, the Chinese Civil War resumed between the Chinese Communists and the Chinese Nationalists. While the Communists were struggling for supremacy in Manchuria, they were supported by the North Korean government with matériel and manpower. According to Chinese sources, the North Koreans donated 2,000 railway cars worth of matériel while thousands of Koreans served in the Chinese People's Liberation Army (PLA) during the war. North Korea also provided the Chinese Communists in Manchuria with a safe refuge for non-combatants and communications with the rest of China.
The North Korean contributions to the Chinese Communist victory were not forgotten after the creation of the People's Republic of China in 1949. As a token of gratitude, between 50,000 and 70,000 Korean veterans who had served in the PLA were sent back along with their weapons, and they later played a significant role in the initial invasion of South Korea. China promised to support the North Koreans in the event of a war against South Korea. The Chinese support created a deep division among the Korean Communists, and Kim Il-sung's authority within the Communist party was challenged by the Chinese faction led by Pak Il-yu, who was later purged by Kim.
After the formation of the People's Republic of China in 1949, the Chinese government named the Western nations, led by the United States, as the biggest threat to its national security. Basing this judgment on China's century of humiliation beginning in the early 19th century, American support for the Nationalists during the Chinese Civil War, and the ideological struggles between revolutionaries and reactionaries, the Chinese leadership believed that China would become a critical battleground in the United States' crusade against Communism. As a countermeasure and to elevate China's standing among the worldwide Communist movements, the Chinese leadership adopted a foreign policy that actively promoted Communist revolutions throughout territories on China's periphery.
Course of the war
Outbreak of war (1950)
[Map: Territory often changed hands early in the war, until the front stabilized. Legend: North Korean and Chinese forces; South Korean, American, Commonwealth and United Nations forces.]
By 1949, South Korean forces had reduced the active number of communist guerrillas in the South from 5,000 to 1,000. However, Kim Il-sung believed that the guerrillas had weakened the South Korean military and that a North Korean invasion would be welcomed by much of the South Korean population. Kim began seeking Stalin's support for an invasion in March 1949, travelling to Moscow to attempt to persuade him.
Initially, Stalin did not think the time was right for a war in Korea. Chinese Communist forces were still fighting in China. American forces were still stationed in South Korea (they would complete their withdrawal in June 1949) and Stalin did not want the Soviet Union to become embroiled in a war with the United States.
By spring 1950, Stalin believed the strategic situation had changed. The Soviets had detonated their first nuclear bomb in September 1949; American soldiers had fully withdrawn from Korea; the Americans had not intervened to stop the communist victory in China, and Stalin calculated that the Americans would be even less willing to fight in Korea—which had seemingly much less strategic significance. The Soviets had also cracked the codes used by the US to communicate with the US embassy in Moscow, and reading these dispatches convinced Stalin that Korea did not have the importance to the US that would warrant a nuclear confrontation. Stalin began a more aggressive strategy in Asia based on these developments, including promising economic and military aid to China through the Sino–Soviet Friendship, Alliance, and Mutual Assistance Treaty.
Throughout 1949 and 1950 the Soviets continued to arm North Korea. After the Communist victory in the Chinese Civil War, ethnic Korean units in the Chinese People's Liberation Army (PLA) were released to North Korea. The combat veterans from China, the tanks, artillery and aircraft supplied by the Soviets, and rigorous training increased North Korea's military superiority over the South, which had been armed by the American military with mostly small arms and given no heavy weaponry such as tanks.
In April 1950, Stalin gave Kim permission to invade the South under the condition that Mao would agree to send reinforcements if they became needed. Stalin made it clear that Soviet forces would not openly engage in combat, to avoid a direct war with the Americans. Kim met with Mao in May 1950. Mao was concerned that the Americans would intervene but agreed to support the North Korean invasion. China desperately needed the economic and military aid promised by the Soviets. At that time, the Chinese were in the process of demobilizing half of the PLA's 5.6 million soldiers. However, Mao sent more ethnic Korean PLA veterans to Korea and promised to move an army closer to the Korean border. Once Mao's commitment was secured, preparations for war accelerated.Mark O'Neill, "Soviet Involvement in the Korean War: A New View from the Soviet-Era Archives", OAH Magazine of History, Spring 2000, p21.
Soviet generals with extensive combat experience from the Second World War were sent to North Korea as the Soviet Advisory Group. These generals completed the plans for the attack by May. The original plans called for a skirmish to be initiated in the Ongjin Peninsula on the west coast of Korea. The North Koreans would then launch a "counterattack" that would capture Seoul and encircle and destroy the South Korean army. The final stage would involve destroying South Korean government remnants, capturing the rest of South Korea, including the ports.
On 7 June 1950, Kim Il-sung called for a Korea-wide election on 5–8 August 1950 and a consultative conference in Haeju on 15–17 June 1950. On 11 June, the North sent three diplomats to the South as a peace overture that Rhee rejected outright. On 21 June, Kim Il-sung revised his war plan to involve a general attack across the 38th parallel, rather than a limited operation in the Ongjin peninsula. Kim was concerned that South Korean agents had learned about the plans and that South Korean forces were strengthening their defenses. Stalin agreed to this change of plan.
While these preparations were underway in the North, there were frequent clashes along the 38th parallel, especially at Kaesong and Ongjin, many initiated by the South. The Republic of Korea Army (ROK Army) was being trained by the U.S. Korean Military Advisory Group (KMAG). On the eve of war, KMAG's commander General William Lynn Roberts voiced utmost confidence in the ROK Army and boasted that any North Korean invasion would merely provide "target practice". For his part, Syngman Rhee repeatedly expressed his desire to conquer the North, including when American diplomat John Foster Dulles visited Korea on 18 June.
Although some South Korean and American intelligence officers were predicting an attack from the North, similar predictions had been made before and nothing had happened. The Central Intelligence Agency did note the southward movement by the Korean People's Army (KPA), but assessed this as a "defensive measure" and concluded an invasion was "unlikely". On 23 June, UN observers inspected the border and did not detect that war was imminent.
[Image: A U.S. Air Force C-54 Skymaster transport burning in South Korea in June 1950. North Korean fighters destroyed a C-54 at Kimpo airfield on 25 June.]
At dawn on Sunday, 25 June 1950, the Korean People's Army crossed the 38th parallel behind artillery fire. The KPA justified its assault with the claim that ROK troops had attacked first, and that they were aiming to arrest and execute the "bandit traitor Syngman Rhee". Fighting began on the strategic Ongjin peninsula in the west. There were initial South Korean claims that they had captured the city of Haeju, and this sequence of events has led some scholars to argue that the South Koreans actually fired first.
Whoever fired the first shots in Ongjin, within an hour North Korean forces attacked all along the 38th parallel. The North Koreans had a combined arms force, including tanks supported by heavy artillery. The South Koreans had no tanks, anti-tank weapons, or heavy artillery that could stop such an attack. In addition, the South Koreans committed their forces piecemeal, and these were routed within a few days.
On 27 June, Rhee evacuated from Seoul with some of the government. On 28 June, at 2 am, the South Korean Army blew up the highway Hangang Bridge across the Han River in an attempt to stop the North Korean army. The bridge was detonated while 4,000 refugees were crossing it, and hundreds were killed. Destroying the bridge also trapped many South Korean military units north of the Han River. In spite of such desperate measures, Seoul fell that same day. A number of South Korean National Assemblymen remained in Seoul when it fell, and forty-eight subsequently pledged allegiance to the North.
On 28 June, Rhee ordered the massacre of suspected political opponents in his own country.
In five days, the South Korean forces, which had numbered 95,000 men on 25 June, were down to fewer than 22,000 men. In early July, when U.S. forces arrived, what was left of the South Korean forces was placed under the operational command of the U.S.-led United Nations Command.
[Image: Hundreds of thousands of South Koreans fled south in mid-1950 after the North Korean army invaded.]
Factors in US intervention
The Truman administration was unprepared for the invasion. Korea was not included in the strategic Asian Defense Perimeter outlined by Secretary of State Dean Acheson. Military strategists were more concerned with the security of Europe against the Soviet Union than East Asia. At the same time, the Administration was worried that a war in Korea could quickly widen into another world war should the Chinese or Soviets decide to get involved as well.
One facet of the changing attitude toward Korea and whether to get involved was Japan. Especially after the fall of China to the Communists, U.S. East Asian experts saw Japan as the critical counterweight to the Soviet Union and China in the region. While there was no United States policy that dealt with South Korea directly as a national interest, its proximity to Japan increased the importance of South Korea. Said Kim: "The recognition that the security of Japan required a non-hostile Korea led directly to President Truman's decision to intervene... The essential point... is that the American response to the North Korean attack stemmed from considerations of US policy toward Japan."
A major consideration was the possible Soviet reaction in the event that the US intervened. The Truman administration was fretful that a war in Korea was a diversionary assault that would escalate into a general war in Europe once the United States committed itself in Korea. At the same time, "[t]here was no suggestion from anyone that the United Nations or the United States could back away from [the conflict]". Yugoslavia, a possible Soviet target because of the Tito–Stalin split, was vital to the defense of Italy and Greece, and was first on the National Security Council's post-invasion list of "chief danger spots". Truman believed that if aggression went unchecked, a chain reaction would be initiated that would marginalize the United Nations and encourage Communist aggression elsewhere. The UN Security Council approved the use of force to help the South Koreans, and the US immediately began using what air and naval forces were in the area to that end. The Administration still refrained from committing ground forces because some advisers believed the North Koreans could be stopped by air and naval power alone.
The Truman administration was still uncertain if the attack was a ploy by the Soviet Union or just a test of U.S. resolve. The decision to commit ground troops became viable when a communiqué was received on 27 June indicating the Soviet Union would not move against U.S. forces in Korea. The Truman administration now believed it could intervene in Korea without undermining its commitments elsewhere.
United Nations Security Council Resolutions
On 25 June 1950, the United Nations Security Council unanimously condemned the North Korean invasion of the Republic of Korea, with UN Security Council Resolution 82. The Soviet Union, a veto-wielding power, had boycotted the Council meetings since January 1950, protesting that the Republic of China (Taiwan), not the People's Republic of China, held a permanent seat in the UN Security Council. After debating the matter, the Security Council, on 27 June 1950, published Resolution 83 recommending member states provide military assistance to the Republic of Korea. On 27 June President Truman ordered U.S. air and sea forces to help the South Korean regime. On 4 July the Soviet Deputy Foreign Minister accused the United States of starting armed intervention on behalf of South Korea.
The Soviet Union challenged the legitimacy of the war for several reasons. The ROK Army intelligence upon which Resolution 83 was based came from U.S. intelligence; North Korea was not invited as a sitting temporary member of the UN, which violated UN Charter Article 32; and the Korean conflict was beyond the UN Charter's scope, because the initial north–south border fighting was classed as a civil war. In addition, because the Soviet Union was boycotting the Security Council at the time, legal scholars posited that deciding upon an action of this type required the unanimous vote of all five permanent members, including the absent Soviet Union.
Comparison of military forces
[Image: In early 1951 USAF recruits arrived by the trainload, more than doubling the population of Lackland AFB in San Antonio, Texas.]
By mid-1950, North Korean forces numbered between 150,000 and 200,000 troops, organized into 10 infantry divisions, one tank division, and one air force division, with 210 fighter planes and 280 tanks. In the opening advance they captured their scheduled objectives and territory, among them Kaesong, Chuncheon, Uijeongbu, and Ongjin. Their forces included 274 T-34-85 tanks, 200 artillery pieces, 110 attack bombers, some 150 Yak fighter planes, 78 Yak trainers, and 35 reconnaissance aircraft. In addition to the invasion force, the KPA had 114 fighters, 78 bombers, 105 T-34-85 tanks, and some 30,000 soldiers stationed in reserve in North Korea. Although each navy consisted of only several small warships, the North and South Korean navies fought in the war as sea-borne artillery for their in-country armies.
In contrast, the ROK Army defenders were relatively unprepared and ill-equipped. In South to the Naktong, North to the Yalu (1961), R.E. Appleman reports the ROK forces' low combat readiness as of 25 June 1950. The ROK Army had 98,000 soldiers (65,000 combat, 33,000 support), no tanks (they had been requested from the U.S. military, but requests were denied), and a 22-piece air force comprising 12 liaison-type and 10 AT6 advanced-trainer airplanes. There were no large foreign military garrisons in Korea at the time of the invasion, but there were large U.S. garrisons and air forces in Japan.
Within days of the invasion, masses of ROK Army soldiers, of dubious loyalty to the Syngman Rhee regime, retreated southward or defected en masse to the northern side, the KPA.
United Nations response (July – August 1950)
[Image: A U.S. howitzer position near the Kum River, 15 July.]
On Saturday, 24 June 1950, U.S. Secretary of State Dean Acheson informed President Truman that the North Koreans had invaded South Korea. Truman and Acheson discussed a U.S. response and agreed that the United States was obligated to act, drawing a parallel between the North Korean invasion and Adolf Hitler's aggressions in the 1930s and concluding that the mistake of appeasement must not be repeated. Several U.S. industries were mobilized to supply materials, labor, capital, production facilities, and other services necessary to support the military objectives of the Korean War.Reis, M. (12 May 2014), "WWII and Korean War Industrial Mobilization: History Programs and Related Records", History Associates, retrieved 17 June 2014. However, President Truman later acknowledged that he believed fighting the invasion was essential to the American goal of the global containment of communism as outlined in the National Security Council Report 68 (NSC-68), declassified in 1975.
[Image: A G.I. comforting a grieving infantryman.]
In August 1950, the President and the Secretary of State obtained the consent of Congress to appropriate $12 billion for military action in Korea.
As an initial response, Truman called for a naval blockade of North Korea, and was shocked to learn that such a blockade could be imposed only 'on paper', since the U.S. Navy no longer had the warships with which to carry out his request. In fact, because of the extensive defense cuts and the emphasis placed on building a nuclear bomber force, none of the services were in a position to make a robust response with conventional military strength. General Omar Bradley, Chairman of the Joint Chiefs of Staff, was faced with re-organizing and deploying an American military force that was a shadow of its World War II counterpart.Hofmann, George F., Tanks and the Korean War: A case study of unpreparedness, Armor, Vol. 109 Issue 5 (Sep/Oct 2000), pp. 7–12: In 1948, the U.S. Army had to impose an 80 percent reduction in equipment requirements, deferring any equipment modernization. When the Joint Chiefs of Staff submitted a $30 billion total defense budget for FY 1948, the administration capped the DOD budget at the $14.4 billion set in 1947 and progressively reduced it in succeeding fiscal years until January 1950, when it was reduced again to $13.5 billion. The impact of the Truman administration's defense budget cutbacks was now keenly felt, as American troops fought a series of costly rearguard actions. Lacking sufficient anti-tank weapons, artillery, or armor, they were driven back down the Korean peninsula to Pusan.Dunford, J.F. (Lt. Col.) The Strategic Implications of Defensive Operations at the Pusan Perimeter July–September 1950, Carlisle, PA: U.S. Army War College (7 April 1999) pp. 6–8, 12; Zabecki, David T., Stand or Die – 1950 Defense of Korea's Pusan Perimeter, Military History (May 2009): The inability of U.S. forces to stop the 1950 North Korean summer offensive cost the Eighth Army 4,280 killed in action, 12,377 wounded, with 2,107 missing and 401 confirmed captured between 5 July and 16 September 1950. In addition, tens of thousands of South Korean soldiers and civilians lost their lives. In a postwar analysis of the unpreparedness of U.S. Army forces deployed to Korea during the summer and fall of 1950, Army Major General Floyd L. Parks stated that "Many who never lived to tell the tale had to fight the full range of ground warfare from offensive to delaying action, unit by unit, man by man ... [T]hat we were able to snatch victory from the jaws of defeat ... does not relieve us from the blame of having placed our own flesh and blood in such a predicament."Lewis, Adrian R., The American culture of war, New York: Taylor & Francis Group, ISBN 978-0-415-97975-7 (2007), p. 82
Acting on Secretary of State Acheson's recommendation, President Truman ordered General MacArthur to transfer matériel to the Army of the Republic of Korea while giving air cover to the evacuation of U.S. nationals. The President disagreed with advisers who recommended unilateral U.S. bombing of the North Korean forces, and ordered the US Seventh Fleet to protect the Republic of China (Taiwan), whose government asked to fight in Korea. The United States denied the ROC's request for combat, lest it provoke a communist Chinese retaliation. Because the United States had sent the Seventh Fleet to "neutralize" the Taiwan Strait, Chinese premier Zhou Enlai criticized both the UN and U.S. initiatives as "armed aggression on Chinese territory".
[Image: Crew of an M-24 tank along the Nakdong River front, August 1950.]
The Battle of Osan, the first significant American engagement of the Korean War, involved the 540-soldier Task Force Smith, which was a small forward element of the 24th Infantry Division which had been flown in from Japan. On 5 July 1950, Task Force Smith attacked the North Koreans at Osan but without weapons capable of destroying the North Koreans' tanks. They were unsuccessful; the result was 180 dead, wounded, or taken prisoner. The KPA progressed southwards, pushing back the US force at Pyongtaek, Chonan, and Chochiwon, forcing the 24th Division's retreat to Taejeon, which the KPA captured in the Battle of Taejon; the 24th Division suffered 3,602 dead and wounded and 2,962 captured, including the Division's Commander, Major General William F. Dean.
By August, the KPA had pushed back the ROK Army and the Eighth United States Army to the vicinity of Pusan in southeast Korea. In their southward advance, the KPA purged the Republic of Korea's intelligentsia by killing civil servants and intellectuals. On 20 August, General MacArthur warned North Korean leader Kim Il-sung that he was responsible for the KPA's atrocities. By September, the UN Command controlled the Pusan perimeter, enclosing about 10% of Korea, in a line partially defined by the Nakdong River.
Although Kim's early successes had led him to predict that he would end the war by the end of August, Chinese leaders were more pessimistic. To counter a possible U.S. deployment, Zhou Enlai secured a Soviet commitment to have the Soviet Union support Chinese forces with air cover, and deployed 260,000 soldiers along the Korean border, under the command of Gao Gang. Zhou commanded Chai Chengwen to conduct a topographical survey of Korea, and directed Lei Yingfu, Zhou's military advisor in Korea, to analyze the military situation in Korea. Lei concluded that MacArthur would most likely attempt a landing at Incheon. After conferring with Mao that this would be MacArthur's most likely strategy, Zhou briefed Soviet and North Korean advisers of Lei's findings, and issued orders to Chinese army commanders deployed on the Korean border to prepare for American naval activity in the Korea Strait.
Escalation (August – September 1950)
[Image: U.S. Air Force aircraft attacking railroads south of Wonsan on the eastern coast of North Korea.]
In the resulting Battle of Pusan Perimeter (August–September 1950), the U.S. Army withstood KPA attacks meant to capture the city at the Naktong Bulge, P'ohang-dong, and Taegu. The United States Air Force (USAF) interrupted KPA logistics with 40 daily ground support sorties that destroyed 32 bridges, halting most daytime road and rail traffic. KPA forces were forced to hide in tunnels by day and move only at night. To deny matériel to the KPA, the USAF destroyed logistics depots, petroleum refineries, and harbors, while the U.S. Navy air forces attacked transport hubs. Consequently, the over-extended KPA could not be supplied throughout the south. On 27 August, 67th Fighter Squadron aircraft mistakenly attacked facilities in Chinese territory and the Soviet Union called the UN Security Council's attention to China's complaint about the incident.493rd meeting of the UN Security Council, 31 August 1950 United Nations Security Council Official Records No. 35, p. 25 The US proposed that a commission of India and Sweden determine what the US should pay in compensation but the Soviets vetoed the US proposal.Telegram, Dean Rusk to James Webb Foreign Relations of the United States 1950 Volume VII, Korea, Document 551
Meanwhile, U.S. garrisons in Japan continually dispatched soldiers and matériel to reinforce defenders in the Pusan Perimeter. Tank battalions deployed to Korea directly from the U.S. mainland, shipping from the port of San Francisco to Pusan, the largest Korean port. By late August, the Pusan Perimeter had some 500 medium tanks battle-ready. In early September 1950, ROK Army and UN Command forces outnumbered the KPA 180,000 to 100,000 soldiers. The UN forces, once prepared, counterattacked and broke out of the Pusan Perimeter.
Battle of Inchon (September 1950)
Against the rested and re-armed Pusan Perimeter defenders and their reinforcements, the KPA were undermanned and poorly supplied; unlike the UN Command, they lacked naval and air support. To relieve the Pusan Perimeter, General MacArthur recommended an amphibious landing at Inchon (now known as Incheon), near Seoul and well behind the KPA lines. On 6 July, he ordered Major General Hobart R. Gay, Commander, 1st Cavalry Division, to plan the division's amphibious landing at Incheon; on 12–14 July, the 1st Cavalry Division embarked from Yokohama, Japan to reinforce the 24th Infantry Division inside the Pusan Perimeter.
[Image: General Douglas MacArthur, UN Command CiC (seated), observes the naval shelling of Incheon, 15 September 1950.]
Soon after the war began, General MacArthur had begun planning a landing at Incheon, but the Pentagon opposed him. When authorized, he activated a combined U.S. Army, Marine Corps, and ROK Army force. The X Corps, led by its commander, General Edward Almond, consisted of 40,000 men of the 1st Marine Division, the 7th Infantry Division and around 8,600 ROK Army soldiers. By 15 September, the amphibious assault force faced few KPA defenders at Incheon: military intelligence, psychological warfare, guerrilla reconnaissance, and protracted bombardment facilitated a relatively light battle. However, the bombardment destroyed most of the city of Incheon.
After the Incheon landing, the 1st Cavalry Division began its northward advance from the Pusan Perimeter. "Task Force Lynch" (named after Lieutenant Colonel James H. Lynch), comprising the 3rd Battalion, 7th Cavalry Regiment, and two 70th Tank Battalion units (Charlie Company and the Intelligence–Reconnaissance Platoon), effected the "Pusan Perimeter Breakout" through enemy territory to join the 7th Infantry Division at Osan. The X Corps rapidly defeated the KPA defenders around Seoul, thus threatening to trap the main KPA force in Southern Korea.
On 18 September, Stalin dispatched General H. M. Zakharov to Korea to advise Kim Il-sung to halt his offensive around the Pusan perimeter and to redeploy his forces to defend Seoul. Chinese commanders were not briefed on North Korean troop numbers or operational plans. As the overall commander of Chinese forces, Zhou Enlai suggested that the North Koreans should attempt to eliminate the enemy forces at Inchon only if they had reserves of at least 100,000 men; otherwise, he advised the North Koreans to withdraw their forces north.
On 25 September, Seoul was recaptured by South Korean forces. American air raids caused heavy damage to the KPA, destroying most of its tanks and much of its artillery. North Korean troops in the south, instead of effectively withdrawing north, rapidly disintegrated, leaving Pyongyang vulnerable. During the general retreat only 25,000 to 30,000 soldiers managed to rejoin the Northern KPA lines. On 27 September, Stalin convened an emergency session of the Politburo, in which he condemned the incompetence of the KPA command and held Soviet military advisers responsible for the defeat.
UN forces cross partition line (September – October 1950)
[Image: Combat in the streets of Seoul.]
On 27 September, MacArthur received the top secret National Security Council Memorandum 81/1 from Truman reminding him that operations north of the 38th parallel were authorized only if "at the time of such operation there was no entry into North Korea by major Soviet or Chinese Communist forces, no announcements of intended entry, nor a threat to counter our operations militarily..." On 29 September MacArthur restored the government of the Republic of Korea under Syngman Rhee. On 30 September, Defense Secretary George Marshall sent an eyes-only message to MacArthur: "We want you to feel unhampered tactically and strategically to proceed north of the 38th parallel." During October, the ROK police executed people who were suspected to be sympathetic to North Korea, and similar massacres were carried out until early 1951.
On 30 September, Zhou Enlai warned the United States that China was prepared to intervene in Korea if the United States crossed the 38th parallel. Zhou attempted to advise North Korean commanders on how to conduct a general withdrawal by using the same tactics which had allowed Chinese communist forces to successfully escape Chiang Kai-shek's Encirclement Campaigns in the 1930s, but by some accounts North Korean commanders did not utilize these tactics effectively. Historian Bruce Cumings argues, however, the KPA's rapid withdrawal was strategic, with troops melting into the mountains from where they could launch guerrilla raids on the UN forces spread out on the coasts.
By 1 October 1950, the UN Command repelled the KPA northwards past the 38th parallel; the ROK Army crossed after them, into North Korea. MacArthur made a statement demanding the KPA's unconditional surrender. Six days later, on 7 October, with UN authorization, the UN Command forces followed the ROK forces northwards. The X Corps landed at Wonsan (in southeastern North Korea) and Riwon (in northeastern North Korea), already captured by ROK forces. The Eighth U.S. Army and the ROK Army drove up western Korea and captured Pyongyang city, the North Korean capital, on 19 October 1950. The 187th Airborne Regimental Combat Team ("Rakkasans") made their first of two combat jumps during the Korean War on 20 October 1950 at Sunchon and Sukchon. The missions of the 187th were to cut the road north going to China, preventing North Korean leaders from escaping from Pyongyang; and to rescue American prisoners of war. At month's end, UN forces held 135,000 KPA prisoners of war. As they neared the Sino-Korean border, the UN forces in the west were divided from those in the east by 50–100 miles of mountainous terrain.
Taking advantage of the UN Command's strategic momentum against the communists, General MacArthur believed it necessary to extend the Korean War into China to destroy depots supplying the North Korean war effort. President Truman disagreed, and ordered caution at the Sino-Korean border.
China intervenes (October – December 1950)
[Image: Chinese forces cross the Yalu River.]
On 27 June 1950, two days after the KPA invaded and three months before the Chinese entered the war, President Truman dispatched the United States Seventh Fleet to the Taiwan Strait, to prevent hostilities between the Nationalist Republic of China (Taiwan) and the People's Republic of China (PRC). On 4 August 1950, with the PRC invasion of Taiwan aborted, Mao Zedong reported to the Politburo that he would intervene in Korea when the People's Liberation Army's (PLA) Taiwan invasion force was reorganized into the PLA North East Frontier Force. China justified its entry into the war as a response to "American aggression in the guise of the UN".
On 20 August 1950, Premier Zhou Enlai informed the UN that "Korea is China's neighbor... The Chinese people cannot but be concerned about a solution of the Korean question". Thus, through neutral-country diplomats, China warned that in safeguarding Chinese national security, they would intervene against the UN Command in Korea. President Truman interpreted the communication as "a bald attempt to blackmail the UN", and dismissed it.
[Image: Three commanders of the PVA during the Korean War. From left to right: Chen Geng (1952), Peng Dehuai (1950–1952) and Deng Hua (1952–1953).]
1 October 1950, the day that UN troops crossed the 38th parallel, was also the first anniversary of the founding of the People's Republic of China. On that day the Soviet ambassador forwarded a telegram from Stalin to Mao and Zhou requesting that China send five to six divisions into Korea, and Kim Il-sung sent frantic appeals to Mao for Chinese military intervention. At the same time, Stalin made it clear that Soviet forces themselves would not directly intervene.
In a series of emergency meetings that lasted from 2–5 October, Chinese leaders debated whether to send Chinese troops into Korea. There was considerable resistance among many leaders, including senior military leaders, to confronting the U.S. in Korea. Mao strongly supported intervention, and Zhou was one of the few Chinese leaders who firmly supported him. After Lin Biao politely refused Mao's offer to command Chinese forces in Korea (citing his upcoming medical treatment), Mao decided that Peng Dehuai would be the commander of the Chinese forces in Korea after Peng agreed to support Mao's position. Mao then asked Peng to speak in favor of intervention to the rest of the Chinese leaders. After Peng made the case that if U.S. troops conquered Korea and reached the Yalu they might cross it and invade China, the Politburo agreed to intervene in Korea. Later, the Chinese claimed that US bombers had violated PRC national airspace on three separate occasions and attacked Chinese targets before China intervened. On 8 October 1950, Mao Zedong redesignated the PLA North East Frontier Force as the Chinese People's Volunteer Army (PVA).
In order to enlist Stalin's support, Zhou and a Chinese delegation left for Moscow on 8 October, arriving there on 10 October at which point they flew to Stalin's home at the Black Sea. There they conferred with the top Soviet leadership which included Joseph Stalin as well as Vyacheslav Molotov, Lavrentiy Beria and Georgi Malenkov. Stalin initially agreed to send military equipment and ammunition, but warned Zhou that the Soviet Union's air force would need two or three months to prepare any operations. In a subsequent meeting, Stalin told Zhou that he would only provide China with equipment on a credit basis, and that the Soviet air force would only operate over Chinese airspace, and only after an undisclosed period of time. Stalin did not agree to send either military equipment or air support until March 1951. Mao did not find Soviet air support especially useful, as the fighting was going to take place on the south side of the Yalu. Soviet shipments of matériel, when they did arrive, were limited to small quantities of trucks, grenades, machine guns, and the like.
Immediately on his return to Beijing on 18 October 1950, Zhou met with Mao Zedong, Peng Dehuai, and Gao Gang, and the group ordered two hundred thousand Chinese troops to enter North Korea, which they did on 25 October. After consulting with Stalin, on 13 November, Mao appointed Zhou the overall commander and coordinator of the war effort, with Peng as field commander. Orders given by Zhou were delivered in the name of the Central Military Commission.
[Image: A Soviet-built MiG-15 in North Korean markings. The arrival of MiGs challenged UN air superiority.]
UN aerial reconnaissance had difficulty sighting PVA units in daytime, because their march and bivouac discipline minimized aerial detection. The PVA marched "dark-to-dark" (19:00–03:00), and aerial camouflage (concealing soldiers, pack animals, and equipment) was deployed by 05:30. Meanwhile, daylight advance parties scouted for the next bivouac site. During daylight activity or marching, soldiers were to remain motionless if an aircraft appeared, until it flew away; PVA officers were under orders to shoot security violators. Such battlefield discipline allowed a three-division army to march from An-tung, Manchuria, to the combat zone in some 19 days. Another division night-marched a circuitous mountain route over 18 days.
Meanwhile, on 10 October 1950, the 89th Tank Battalion was attached to the 1st Cavalry Division, increasing the armor available for the Northern Offensive. On 15 October, after moderate KPA resistance, the 7th Cavalry Regiment and Charlie Company, 70th Tank Battalion captured Namchonjam city. On 17 October, they flanked rightwards, away from the principal road to Pyongyang, to capture Hwangju. Two days later, on 19 October 1950, the 1st Cavalry Division captured Pyongyang, the North's capital city. Kim Il-sung and his government temporarily moved the capital to Sinuiju, although as UNC forces approached, the government moved again, this time to Kanggye.
On 15 October 1950, President Truman and General MacArthur met at Wake Island in the mid-Pacific Ocean. This meeting was much publicized because of the General's discourteous refusal to meet the President on the continental United States. To President Truman, MacArthur speculated there was little risk of Chinese intervention in Korea, and that the PRC's opportunity for aiding the KPA had lapsed. He believed the PRC had some 300,000 soldiers in Manchuria, and some 100,000–125,000 soldiers at the Yalu River. He further concluded that, although half of those forces might cross south, "if the Chinese tried to get down to Pyongyang, there would be the greatest slaughter" without air force protection.
After secretly crossing the Yalu River on 19 October, the PVA 13th Army Group launched the First Phase Offensive on 25 October, attacking the advancing UN forces near the Sino-Korean border. This military decision, made solely by China, changed the attitude of the Soviet Union. Twelve days after Chinese troops entered the war, Stalin allowed the Soviet Air Force to provide air cover, and supported more aid to China.Shen Zhihua, China and the Dispatch of the Soviet Air Force: The Formation of the Chinese-Soviet-Korean Alliance in the Early Stage of the Korean War, The Journal of Strategic Studies, vol. 33, no. 2, pp. 211–230 After the PVA decimated the ROK II Corps at the Battle of Onjong, the first confrontation between the Chinese and U.S. militaries occurred on 1 November 1950; deep in North Korea, thousands of soldiers from the PVA 39th Army encircled and attacked the U.S. 8th Cavalry Regiment with three-pronged assaults (from the north, northwest, and west) and overran the defensive position flanks in the Battle of Unsan. The surprise assault resulted in the UN forces retreating back to the Ch'ongch'on River, while the Chinese unexpectedly disappeared into mountain hideouts following victory. It is unclear why the Chinese did not press the attack and follow up their victory.
[Image: Soldiers from the U.S. 2nd Infantry Division in action near the Ch'ongch'on River, 20 November 1950.]
Because of the sudden Chinese withdrawal, however, the UN Command remained unconvinced that the Chinese had openly intervened. On 24 November, the Home-by-Christmas Offensive was launched, with the U.S. Eighth Army advancing in northwest Korea while the US X Corps attacked along the Korean east coast. But the Chinese were waiting in ambush with their Second Phase Offensive.
On 25 November at the Korean western front, the PVA 13th Army Group attacked and overran the ROK II Corps at the Battle of the Ch'ongch'on River, and then decimated the US 2nd Infantry Division on the UN forces' right flank. The UN Command retreated; the U.S. Eighth Army's retreat (the longest in US Army history) was made possible because of the Turkish Brigade's successful, but very costly, rear-guard delaying action near Kunuri that slowed the PVA attack for two days (27–29 November). On 27 November at the Korean eastern front, a U.S. 7th Infantry Division Regimental Combat Team (3,000 soldiers) and the U.S. 1st Marine Division (12,000–15,000 marines) were unprepared for the PVA 9th Army Group's three-pronged encirclement tactics at the Battle of Chosin Reservoir, but they managed to escape under Air Force and X Corps support fire—albeit with some 15,000 collective casualties.
[Image: F4U-5 Corsairs provide close air support to U.S. Marines fighting Chinese forces, December 1950.]
By 30 November, the PVA 13th Army Group managed to expel the U.S. Eighth Army from northwest Korea. Retreating from the north faster than they had counter-invaded, the Eighth Army crossed the 38th parallel border in mid December. UN morale hit rock bottom when commanding General Walton Walker of the U.S. Eighth Army was killed on 23 December 1950 in an automobile accident. In northeast Korea by 11 December, the U.S. X Corps managed to cripple the PVA 9th Army Group while establishing a defensive perimeter at the port city of Hungnam. The X Corps were forced to evacuate by 24 December in order to reinforce the badly depleted U.S. Eighth Army to the south.
[Map: The UN retreat in the wake of Chinese intervention.]
During the Hungnam evacuation, about 193 shiploads of UN Command forces and matériel (approximately 105,000 soldiers, 98,000 civilians, 17,500 vehicles, and 350,000 tons of supplies) were evacuated to Pusan. The SS Meredith Victory was noted for evacuating 14,000 refugees, the largest rescue operation by a single ship, even though it was designed to hold 12 passengers. Before escaping, the UN Command forces razed most of Hungnam city, especially the port facilities; and on 16 December 1950, President Truman declared a national emergency with Presidential Proclamation No. 2914, 3 C.F.R. 99 (1953), which remained in force until 14 September 1978. The next day (17 December 1950), Kim Il-sung was deprived by China of his right of command over the KPA.Jung Chang and Jon Halliday, MAO: The Unknown Story. Thereafter, the Chinese army took the leading role in the war.
Fighting around the 38th parallel (January – June 1951)
With Lieutenant-General Matthew Ridgway assuming command of the U.S. Eighth Army on 26 December, the PVA and the KPA launched their Third Phase Offensive (also known as the "Chinese New Year's Offensive") on New Year's Eve of 1950. The PVA utilized night attacks in which UN Command fighting positions were encircled and then assaulted by numerically superior troops who had the element of surprise; the attacks were accompanied by loud trumpets and gongs, which fulfilled the double purpose of facilitating tactical communication and mentally disorienting the enemy. UN forces initially had no familiarity with this tactic, and as a result some soldiers panicked, abandoning their weapons and retreating to the south. The Chinese New Year's Offensive overwhelmed UN forces, allowing the PVA and KPA to conquer Seoul for the second time on 4 January 1951.
[Image: B-26 Invaders bomb logistics depots in Wonsan, North Korea, 1951.]
These setbacks prompted General MacArthur to consider using nuclear weapons against the Chinese or North Korean interiors, with the intention that radioactive fallout zones would interrupt the Chinese supply chains. However, upon the arrival of the charismatic General Ridgway, the esprit de corps of the bloodied Eighth Army immediately began to revive.
UN forces retreated to Suwon in the west, Wonju in the center, and the territory north of Samcheok in the east, where the battlefront stabilized and held. The PVA had outrun its logistics capability and thus were unable to press on beyond Seoul as food, ammunition, and matériel were carried nightly, on foot and bicycle, from the border at the Yalu River to the three battle lines. In late January, upon finding that the PVA had abandoned their battle lines, General Ridgway ordered a reconnaissance-in-force, which became Operation Roundup (5 February 1951). A full-scale X Corps advance proceeded, which fully exploited the UN Command's air superiority, concluding with the UN reaching the Han River and recapturing Wonju.
After cease-fire negotiations failed in January, the United Nations General Assembly passed Resolution 498 on 1 February, condemning the PRC as an aggressor and calling upon its forces to withdraw from Korea.
In early February, the South Korean 11th Division ran an operation to destroy the guerrillas and citizens sympathetic to them in southern Korea. During the operation, the division and police carried out the Geochang massacre and the Sancheong-Hamyang massacre. In mid-February, the PVA counterattacked with the Fourth Phase Offensive and achieved initial victory at Hoengseong. But the offensive was soon blunted by the IX Corps positions at Chipyong-ni in the center. The U.S. 2nd Infantry "Warrior" Division's 23rd Regimental Combat Team and the French Battalion fought a short but desperate battle that broke the attack's momentum. The battle is sometimes known as the Gettysburg of the Korean War: 5,600 Korean, American, and French troops were surrounded on all sides by 25,000 Chinese. United Nations forces had previously retreated in the face of large Communist formations rather than risk being cut off, but this time they stood, fought, and won.
[Image: U.S. Marines move out over rugged mountain terrain while closing with hostile North Korean forces.]
In the last two weeks of February 1951, Operation Roundup was followed by Operation Killer, carried out by the revitalized Eighth Army. It was a full-scale, battlefront-length attack staged for maximum exploitation of firepower to kill as many KPA and PVA troops as possible. Operation Killer concluded with I Corps re-occupying the territory south of the Han River, and IX Corps capturing Hoengseong. On 7 March 1951, the Eighth Army attacked with Operation Ripper, expelling the PVA and the KPA from Seoul on 14 March 1951. This was the city's fourth conquest in a year's time, leaving it a ruin; the 1.5 million pre-war population was down to 200,000, and people were suffering from severe food shortages.
On 1 March 1951, Mao sent a cable to Stalin, in which he emphasized the difficulties faced by Chinese forces and the urgent need for air cover, especially over supply lines. Apparently impressed by the Chinese war effort, Stalin finally agreed to supply two air force divisions, three anti-aircraft divisions, and six thousand trucks. PVA troops in Korea continued to suffer severe logistical problems throughout the war. In late April Peng Dehuai sent his deputy, Hong Xuezhi, to brief Zhou Enlai in Beijing. What Chinese soldiers feared, Hong said, was not the enemy, but that they had nothing to eat, no bullets to shoot, and no trucks to transport them to the rear when they were wounded. Zhou attempted to respond to the PVA's logistical concerns by increasing Chinese production and improving methods of supply, but these efforts were never completely sufficient. At the same time, large-scale air defense training programs were carried out, and the Chinese Air Force began to participate in the war from September 1951 onward.
On 11 April 1951, Commander-in-Chief Truman relieved the controversial General MacArthur, the Supreme Commander in Korea. There were several reasons for the dismissal. MacArthur had crossed the 38th parallel in the mistaken belief that the Chinese would not enter the war, leading to major allied losses. He believed that whether or not to use nuclear weapons should be his own decision, not the President's. MacArthur threatened to destroy China unless it surrendered. While MacArthur felt total victory was the only honorable outcome, Truman was more pessimistic about his chances once involved in a land war in Asia, and felt a truce and orderly withdrawal from Korea could be a valid solution. MacArthur was the subject of congressional hearings in May and June 1951, which determined that he had defied the orders of the President and thus had violated the U.S. Constitution. A popular criticism of MacArthur was that he never spent a night in Korea, and directed the war from the safety of Tokyo.
[Image: British UN troops advance alongside a Centurion tank, March 1951.]
General Ridgway was appointed Supreme Commander, Korea; he regrouped the UN forces for successful counterattacks, while General James Van Fleet assumed command of the U.S. Eighth Army. Further attacks slowly depleted the PVA and KPA forces; Operations Courageous (23–28 March 1951) and Tomahawk (23 March 1951) were a joint ground and airborne infiltration meant to trap Chinese forces between Kaesong and Seoul. UN forces advanced to "Line Kansas", north of the 38th parallel. The 187th Airborne Regimental Combat Team's ("Rakkasans") second of two combat jumps was on Easter Sunday, 1951, at Munsan-ni, South Korea, codenamed Operation Tomahawk. The mission was to get behind Chinese forces and block their movement north. The 60th Indian Parachute Field Ambulance provided medical cover for the operations, dropping an ADS and a surgical team and treating over 400 battle casualties, in addition to the civilian casualties that formed the core of its objective, as the unit was on a humanitarian mission.
The Chinese counterattacked in April 1951, with the Fifth Phase Offensive, also known as the Chinese Spring Offensive, with three field armies (approximately 700,000 men). The offensive's first thrust fell upon I Corps, which fiercely resisted in the Battle of the Imjin River (22–25 April 1951) and the Battle of Kapyong (22–25 April 1951), blunting the impetus of the offensive, which was halted at the "No-name Line" north of Seoul. On 15 May 1951, the Chinese commenced the second impulse of the Spring Offensive and attacked the ROK Army and the U.S. X Corps in the east at the Soyang River. After initial success, they were halted by 20 May. At month's end, the U.S. Eighth Army counterattacked and regained "Line Kansas", just north of the 38th parallel. The UN's "Line Kansas" halt and subsequent offensive action stand-down began the stalemate that lasted until the armistice of 1953.
Stalemate (July 1951 – July 1953)
thumb|American M46 Patton tanks, painted with tiger heads thought to demoralise Chinese forces
thumb|ROK soldiers dump spent artillery casings
thumb|New Zealand artillery crew in action, 1952
For the remainder of the Korean War the UN Command and the PVA fought, but exchanged little territory; the stalemate held. Large-scale bombing of North Korea continued, and protracted armistice negotiations began 10 July 1951 at Kaesong. On the Chinese side, Zhou Enlai directed peace talks, and Li Kenong and Qiao Guanghua headed the negotiation team. Combat continued while the belligerents negotiated; the UN Command forces' goal was to recapture all of South Korea and to avoid losing territory. The PVA and the KPA attempted similar operations, and later effected military and psychological operations in order to test the UN Command's resolve to continue the war.
The principal battles of the stalemate include the Battle of Bloody Ridge (18 August–15 September 1951), the Battle of the Punchbowl (31 August-21 September 1951), the Battle of Heartbreak Ridge (13 September–15 October 1951), the Battle of Old Baldy (26 June–4 August 1952), the Battle of White Horse (6–15 October 1952), the Battle of Triangle Hill (14 October–25 November 1952), the Battle of Hill Eerie (21 March–21 June 1952), the sieges of Outpost Harry (10–18 June 1953), the Battle of the Hook (28–29 May 1953), the Battle of Pork Chop Hill (23 March–16 July 1953), and the Battle of Kumsong (13–27 July 1953).
Chinese troops suffered from deficient military equipment, serious logistical problems, overextended communication and supply lines, and the constant threat of UN bombers. All of these factors generally led to a rate of Chinese casualties that was far greater than that suffered by UN troops. The situation became so serious that, in November 1951, Zhou Enlai called a conference in Shenyang to discuss the PVA's logistical problems. At the meeting it was decided to accelerate the construction of railways and airfields in the area, to increase the number of trucks available to the army, and to improve air defense by any means possible. These commitments did little to directly address the problems confronting PVA troops.
In the months after the Shenyang conference Peng Dehuai went to Beijing several times to brief Mao and Zhou about the heavy casualties suffered by Chinese troops and the increasing difficulty of keeping the front lines supplied with basic necessities. Peng was convinced that the war would be protracted, and that neither side would be able to achieve victory in the near future. On 24 February 1952, the Military Commission, presided over by Zhou, discussed the PVA's logistical problems with members of various government agencies involved in the war effort. After the government representatives emphasized their inability to meet the demands of the war, Peng, in an angry outburst, shouted: "You have this and that problem... You should go to the front and see with your own eyes what food and clothing the soldiers have! Not to speak of the casualties! For what are they giving their lives? We have no aircraft. We have only a few guns. Transports are not protected. More and more soldiers are dying of starvation. Can't you overcome some of your difficulties?" The atmosphere became so tense that Zhou was forced to adjourn the conference. Zhou subsequently called a series of meetings, where it was agreed that the PVA would be divided into three groups, to be dispatched to Korea in shifts; to accelerate the training of Chinese pilots; to provide more anti-aircraft guns to the front lines; to purchase more military equipment and ammunition from the Soviet Union; to provide the army with more food and clothing; and, to transfer the responsibility of logistics to the central government.
Armistice (July 1953 – November 1954)
thumb|left|Men from the Royal Australian Regiment, June 1953.
The on-again, off-again armistice negotiations continued for two years, first at Kaesong, on the border between North and South Korea, and then at the neighbouring village of Panmunjom. A major, problematic negotiation point was prisoner of war (POW) repatriation. The PVA, KPA, and UN Command could not agree on a system of repatriation because many PVA and KPA soldiers refused to be repatriated to the north, which was unacceptable to the Chinese and North Koreans. In the final armistice agreement, signed on 27 July 1953, a Neutral Nations Repatriation Commission, chaired by Indian General K. S. Thimayya, was set up to handle the matter.
In 1952, the United States elected a new president, and on 29 November 1952, the president-elect, Dwight D. Eisenhower, went to Korea to learn what might end the Korean War. With the United Nations' acceptance of India's proposed Korean War armistice, the KPA, the PVA, and the UN Command ceased fire with the battle line approximately at the 38th parallel. Upon agreeing to the armistice, the belligerents established the Korean Demilitarized Zone (DMZ), which has since been patrolled by the KPA and ROKA, United States, and Joint UN Commands.
The Demilitarized Zone runs northeast of the 38th parallel; to the south, it travels west. The old Korean capital city of Kaesong, site of the armistice negotiations, originally was in pre-war South Korea, but now is part of North Korea. The United Nations Command (supported by the United States), the North Korean People's Army, and the Chinese People's Volunteers signed the Armistice Agreement on 27 July 1953 to end the fighting. The Armistice also called upon the governments of South Korea, North Korea, China and the United States to participate in continued peace talks. The war is considered to have ended at this point, even though there was no peace treaty. North Korea nevertheless claims that it won the Korean War.
After the war, Operation Glory was conducted from July to November 1954, to allow combatant countries to exchange their dead. The remains of 4,167 U.S. Army and U.S. Marine Corps dead were exchanged for 13,528 KPA and PVA dead, and 546 civilians dead in UN prisoner-of-war camps were delivered to the South Korean government. After Operation Glory, 416 Korean War unknown soldiers were buried in the National Memorial Cemetery of the Pacific (The Punchbowl), on the island of Oahu, Hawaii. Defense Prisoner of War/Missing Personnel Office (DPMO) records indicate that the PRC and the DPRK transmitted 1,394 names, of which 858 were correct. From 4,167 containers of returned remains, forensic examination identified 4,219 individuals. Of these, 2,944 were identified as American, and all but 416 were identified by name. From 1996 to 2006, the DPRK recovered 220 remains near the Sino-Korean border.
Division of Korea (1954–present)
thumb|left|upright=1.5|Delegates sign the Korean Armistice Agreement in P'anmunjŏm
The Korean Armistice Agreement provided for monitoring by an international commission. Since 1953, the Neutral Nations Supervisory Commission (NNSC), composed of members from the Swiss and Swedish Armed Forces, has been stationed near the DMZ.
In April 1975, South Vietnam's capital was captured by the North Vietnamese army. Encouraged by the success of Communist revolution in Indochina, Kim Il-sung saw it as an opportunity to invade the South. Kim visited China in April of that year, and met with Mao Zedong and Zhou Enlai to ask for military aid. Despite Pyongyang's expectations, however, Beijing refused to support North Korea in another war in Korea.
thumb|float|right|A U.S. Army officer confers with South Korean soldiers at Observation Post (OP) Ouellette, viewing northward, in April 2008.
thumb|float|right|The DMZ as seen from the north, 2005.
Since the armistice, there have been numerous incursions and acts of aggression by North Korea. In 1976, the axe murder incident was widely publicized. Since 1974, four incursion tunnels leading to Seoul have been uncovered. In 2010, a North Korean submarine torpedoed and sank the South Korean corvette ROKS Cheonan, resulting in the deaths of 46 sailors. Again in 2010, North Korea fired artillery shells on Yeonpyeong island, killing two military personnel and two civilians.
After a new wave of UN sanctions, on 11 March 2013, North Korea claimed that it had invalidated the 1953 armistice. On 13 March 2013, North Korea confirmed it ended the 1953 Armistice and declared North Korea "is not restrained by the North-South declaration on non-aggression". On 30 March 2013, North Korea stated that it had entered a "state of war" with South Korea and declared that "The long-standing situation of the Korean peninsula being neither at peace nor at war is finally over". Speaking on 4 April 2013, the U.S. Secretary of Defense, Chuck Hagel, informed the press that Pyongyang had "formally informed" the Pentagon that it had "ratified" the potential usage of a nuclear weapon against South Korea, Japan and the United States of America, including Guam and Hawaii. Hagel also stated that the United States would deploy the Terminal High Altitude Area Defense anti-ballistic missile system to Guam, because of a credible and realistic nuclear threat from North Korea.
In 2016, it was revealed that North Korea had approached the United States about conducting formal peace talks to formally end the war. While the White House agreed to secret peace talks, the plan was rejected because of North Korea's refusal to discuss nuclear disarmament as part of the terms of any treaty. Any possibility of talks ended on 6 January 2016, when North Korea conducted its fourth nuclear test.
Characteristics
thumb|Korean War memorials are found in every UN Command Korean War participant country; this one is in Pretoria, South Africa.
Casualties
According to the data from the U.S. Department of Defense, the United States suffered 33,686 battle deaths, along with 2,830 non-battle deaths, during the Korean War. U.S. battle deaths were 8,516 up to their first engagement with the Chinese on 1 November 1950.Defense Casualty Analysis System search Korean War Extract Data File. Accessed 21 December 2014. South Korea reported some 373,599 civilian and 137,899 military deaths. Western sources estimate the PVA suffered about 400,000 killed and 486,000 wounded, while the KPA suffered 215,000 killed and 303,000 wounded.
Data from official Chinese sources, on the other hand, reported that the PVA had suffered 114,000 battle deaths, 34,000 non-battle deaths, 340,000 wounded and 7,600 missing during the war. 7,110 Chinese POWs were repatriated to China. Chinese sources also reported that North Korea had suffered 290,000 casualties, 90,000 captured and a large number of civilian deaths.
The Chinese and North Koreans estimated that about 390,000 soldiers from the United States, 660,000 soldiers from South Korea and 29,000 other UN soldiers were "eliminated" from the battlefield.
Recent scholarship has put the full battle death toll on all sides at just over 1.2 million.Bethany Lacina and Nils Petter Gleditsch, Monitoring Trends in Global Combat: A New Dataset of Battle Deaths, European Journal of Population (2005) 21: 145–166. Also available here
Armored warfare
The initial assault by North Korean KPA forces was aided by the use of Soviet T-34-85 tanks. A North Korean tank corps equipped with about 120 T-34s spearheaded the invasion. These drove against a ROK Army with few anti-tank weapons adequate to deal with the Soviet T-34s. Additional Soviet armor was added as the offensive progressed. The North Korean tanks had a good deal of early success against South Korean infantry, elements of the 24th Infantry Division, and the United States-built M24 Chaffee light tanks that they encountered.Zaloga & Kinnear 1996:36 Interdiction by ground-attack aircraft was the only means of slowing the advancing North Korean armor. The tide turned in favour of the United Nations forces in August 1950 when the North Koreans suffered major tank losses during a series of battles in which the UN forces brought heavier equipment to bear, including M4A3 Sherman medium tanks backed by U.S. M26 heavy tanks, along with British Centurion, Churchill, and Cromwell tanks.
The U.S. landings at Inchon on 15 September cut off the North Korean supply lines, causing their armored forces and infantry to run out of fuel, ammunition, and other supplies. As a result, the North Koreans had to retreat, and many of the T-34s and heavy weapons had to be abandoned. By the time the North Koreans withdrew from the South, a total of 239 T-34s and 74 SU-76s had been lost. After November 1950, North Korean armor was rarely encountered.Zaloga & Kinnear 1996:33-4
Following the initial assault by the north, the Korean War saw limited use of the tank and featured no large-scale tank battles. The mountainous, forested terrain, especially in the Eastern Central Zone, was poor tank country, limiting their mobility. Through the last two years of the war in Korea, UN tanks served largely as infantry support and mobile artillery pieces.
Naval warfare
thumb|To disrupt North Korean communications, the battleship USS Missouri fires a salvo from its 16-inch guns at shore targets near Chongjin, North Korea, 21 October 1950
Further information: List of U.S. Navy ships sunk or damaged in action during the Korean conflict
Because neither Korea had a significant navy, the Korean War featured few naval battles. A skirmish between North Korea and the UN Command occurred on 2 July 1950; the U.S. Navy cruiser USS Juneau, the Royal Navy cruiser HMS Jamaica, and the frigate HMS Black Swan fought four North Korean torpedo boats and two mortar gunboats, and sank them.
USS Juneau later sank several ammunition ships that had been present. The last sea battle of the Korean War occurred at Inchon, days before the Battle of Incheon; the ROK ship PC-703 sank a North Korean minelayer in the Battle of Haeju Island, near Inchon. Three other supply ships were sunk by PC-703 two days later in the Yellow Sea. Thereafter, vessels from the UN nations held undisputed control of the seas around Korea. The gunships were used for shore bombardment, while the aircraft carriers provided air support to the ground forces.
During most of the war, the UN navies patrolled the west and east coasts of North Korea, sinking supply and ammunition ships and denying the North Koreans the ability to resupply from the sea. Aside from very occasional gunfire from North Korean shore batteries, the main threat to United States and UN navy ships was from magnetic mines. During the war, five U.S. Navy ships were lost to mines: two minesweepers, two minesweeper escorts, and one ocean tug. Mines and gunfire from North Korean coastal artillery damaged another 87 U.S. warships, resulting in slight to moderate damage.
Aerial warfare
The Korean War was the first war in which jet aircraft played the central role in air combat. Once-formidable fighters such as the P-51 Mustang, F4U Corsair, and Hawker Sea Fury—all piston-engined, propeller-driven, and designed during World War II—relinquished their air-superiority roles to a new generation of faster, jet-powered fighters arriving in the theater. For the initial months of the war, the P-80 Shooting Star, F9F Panther, Gloster Meteor and other jets under the UN flag dominated North Korea's prop-driven air force of Soviet Yakovlev Yak-9 and Lavochkin La-9s.
The Chinese intervention in late October 1950 bolstered the Korean People's Air Force (KPAF) of North Korea with the MiG-15, one of the world's most advanced jet fighters. The heavily armed MiGs were faster than first-generation UN jets and so could reach and destroy U.S. B-29 Superfortress bomber flights despite their fighter escorts. With increasing B-29 losses, the Air Force was forced to switch from a daylight bombing campaign to a safer but less accurate nighttime bombing of targets.
thumb|upright|left|A B-29 Superfortress bomber unloading its bombs.
The USAF countered the MiG-15 by sending over three squadrons of its most capable fighter, the F-86 Sabre. These arrived in December 1950. The MiG was designed as a bomber interceptor: it had a higher service ceiling than the Sabre and carried very heavy weaponry, one 37 mm cannon and two 23 mm cannons. The F-86 was armed with six .50 caliber (12.7 mm) machine guns, which were range-adjusted by radar gunsights. Approaching from higher altitude, the MiG pilot could choose whether or not to engage. Once in a level-flight dogfight, both swept-wing designs attained comparable maximum speeds. The MiG climbed faster, but the Sabre turned and dived better.
In summer and autumn 1951, the outnumbered Sabres of the USAF's 4th Fighter Interceptor Wing—only 44 at one point—continued seeking battle in MiG Alley, where the Yalu River marks the Chinese border, against Chinese and North Korean air forces capable of deploying some 500 aircraft. Following Colonel Harrison Thyng's communication with the Pentagon, the 51st Fighter-Interceptor Wing finally reinforced the beleaguered 4th Wing in December 1951; for the next year-and-a-half stretch of the war, aerial warfare continued.
thumb|A US Navy Sikorsky HO4S in flight
Unlike the Vietnam War, in which the Soviet Union only officially sent "advisers", in the Korean aerial war Soviet forces participated via the 64th Fighter Aviation Corps. Fearful of confronting the United States directly, the Soviet Union denied involvement of their personnel in anything other than an advisory role, but air combat quickly resulted in Soviet pilots dropping their code signals and speaking over the wireless in Russian. This known direct Soviet participation was a casus belli that the UN Command deliberately overlooked, lest the war for the Korean peninsula expand to include the Soviet Union, and potentially escalate into atomic warfare. Soviet pilots were officially credited with downing 1,106 enemy airplanes, and 52 of them achieved ace status. Because the Soviet system of confirming air kills erred on the conservative side—the pilot's account had to be corroborated, and enemy aircraft falling into the sea were not counted—the actual number might exceed 1,106.
After the war, and to the present day, the USAF reports an F-86 Sabre kill ratio in excess of 10:1, with 792 MiG-15s and 108 other aircraft shot down by Sabres, and 78 Sabres lost to enemy fire. The Soviet Air Force reported some 1,100 air-to-air victories and 335 MiG combat losses, while China's People's Liberation Army Air Force (PLAAF) reported 231 combat losses, mostly MiG-15s, and 168 other aircraft lost. The KPAF reported no data, but the UN Command estimates some 200 KPAF aircraft lost in the war's first stage, and 70 additional aircraft after the Chinese intervention. The USAF disputes Soviet and Chinese claims of 650 and 211 downed F-86s, respectively. However, one unconfirmed source claims that the U.S. Air Force has more recently cited 230 losses out of 674 F-86s deployed to Korea.
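As a rough arithmetic check using only the figures quoted above (this calculation is illustrative and not drawn from the cited sources):

\[ \frac{792}{78} \approx 10.2, \qquad \frac{792 + 108}{78} \approx 11.5, \]

either of which is consistent with the claimed ratio "in excess of 10:1".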
The Korean War marked a major milestone not only for fixed-wing aircraft, but also for rotorcraft, featuring the first large-scale deployment of helicopters for medical evacuation (medevac). In 1944–1945, during the Second World War, the YR-4 helicopter saw limited ambulance duty, but in Korea, where rough terrain often made evacuation by jeep impractical, helicopters like the Sikorsky H-19 dramatically reduced fatal casualties when combined with complementary medical innovations such as Mobile Army Surgical Hospitals. The limitations of jet aircraft for close air support highlighted the helicopter's potential in the role, leading to development of the AH-1 Cobra and other helicopter gunships used in the Vietnam War (1965–75).
Bombing North Korea
The first major U.S. strategic bombing campaign against North Korea, begun in late July 1950, was conceived much along the lines of the major offensives of World War II. On 12 August 1950, the U.S. Air Force dropped 625 tons of bombs on North Korea; two weeks later, the daily tonnage increased to some 800 tons. After the Chinese intervention in November, General MacArthur ordered the increased bombing campaign on North Korea, including incendiary attacks against their arsenals and communications centers and especially against the "Korean end" of all the bridges across the Yalu River. As with the aerial bombing campaigns over Germany and Japan in World War II, the nominal objective of the U.S. Air Force was to destroy North Korea's war infrastructure and shatter their morale. After MacArthur was removed as Supreme Commander in Korea in April 1951, his successors continued this policy and eventually extended it to all of North Korea. Overall, the U.S. dropped 635,000 tons of bombs—including 32,557 tons of napalm—on Korea, more than they did during the whole Pacific campaign of World War II.
thumb|left|A USAF Douglas B-26B Invader of the 452nd Bombardment Wing bombing a target in North Korea, 29 May 1951.
As a result, almost every substantial building in North Korea was destroyed. The war's highest-ranking American POW, U.S. Major General William F. Dean, reported that most of the North Korean cities and villages he saw were either rubble or snow-covered wastelands.William F Dean (1954) General Dean's Story, (as told to William L Worden), Viking Press, pp. 272–273. North Korean factories, schools, hospitals, and government offices were forced to move underground, and air defenses were "virtually non-existent." In November 1950, the North Korean leadership instructed their population to build dugouts and mud huts, as well as dig underground tunnels, in order to solve the acute housing problem. U.S. Air Force General Curtis LeMay commented, "we went over there and fought the war and eventually burned down every town in North Korea anyway, some way or another, and some in South Korea, too." Pyongyang, which saw 75 percent of its area destroyed, was so devastated that bombing was halted as there were no longer any worthy targets. On 28 November, Bomber Command reported on the campaign's progress: 95 percent of Manpojin was destroyed, along with 90 percent of Hoeryong, Namsi and Koindong, 85 percent of Chosan, 75 percent of both Sakchu and Huichon, and 20 percent of Uiju. According to USAF damage assessments, "eighteen of twenty-two major cities in North Korea had been at least half obliterated." By the end of the campaign, US bombers had difficulty in finding targets and were reduced to bombing footbridges or jettisoning their bombs into the sea.
As well as conventional bombing, the Communist side claimed that the U.S. had used biological weapons. These claims have been disputed; Conrad Crane asserts that while the U.S. worked towards developing chemical and biological weapons, the American military "possessed neither the ability, nor the will", to use them in combat.
U.S. threat of atomic warfare
thumb|Mark 4 bomb, seen on display, which was transferred to the 9th Bombardment Wing, Heavy
On 5 November 1950, the Joint Chiefs of Staff (JCS) issued orders for the retaliatory atomic bombing of Manchurian PRC military bases, if either their armies crossed into Korea or if PRC or KPA bombers attacked Korea from there. The President ordered the transfer of nine Mark 4 nuclear bombs "to the Air Force's Ninth Bomb Group, the designated carrier of the weapons ... [and] signed an order to use them against Chinese and Korean targets", which he never transmitted.
Many American officials viewed the deployment of nuclear-capable (but not nuclear-armed) B-29 bombers to Britain as helping to resolve the Berlin Blockade of 1948–1949. Truman and Eisenhower both had military experience and viewed nuclear weapons as potentially usable components of their military. During Truman's first meeting to discuss the war on 25 June 1950, he ordered plans be prepared for attacking Soviet forces if they entered the war. By July, Truman approved another B-29 deployment to Britain, this time with bombs (but without their cores), to remind the Soviets of American offensive ability. Deployment of a similar fleet to Guam was leaked to The New York Times. As United Nations forces retreated to Pusan, and the CIA reported that mainland China was building up forces for a possible invasion of Taiwan, the Pentagon believed that Congress and the public would demand using nuclear weapons if the situation in Korea required them.
As Chinese forces pushed back the United States forces from the Yalu River, Truman stated during a 30 November 1950 press conference that using nuclear weapons had "always been [under] active consideration", with control under the local military commander. The Indian ambassador, K. Madhava Panikkar, reports "that Truman announced that he was thinking of using the atom bomb in Korea. But the Chinese seemed totally unmoved by this threat ... The propaganda against American aggression was stepped up. The 'Aid Korea to resist America' campaign was made the slogan for increased production, greater national integration, and more rigid control over anti-national activities. One could not help feeling that Truman's threat came in very useful to the leaders of the Revolution, to enable them to keep up the tempo of their activities."
After his statement caused concern in Europe, Truman met on 4 December 1950 with UK prime minister and Commonwealth spokesman Clement Attlee, French Premier René Pleven, and Foreign Minister Robert Schuman to discuss their worries about atomic warfare and its likely continental expansion. The United States' forgoing atomic warfare was not because of "a disinclination by the Soviet Union and People's Republic of China to escalate" the Korean War, but because UN allies—notably from the UK, the Commonwealth, and France—were concerned about a geopolitical imbalance rendering NATO defenseless while the United States fought China, who then might persuade the Soviet Union to conquer Western Europe. The Joint Chiefs of Staff advised Truman to tell Attlee that the United States would use nuclear weapons only if necessary to protect an evacuation of UN troops, or to prevent a "major military disaster".
On 6 December 1950, after the Chinese intervention repelled the UN Command armies from northern North Korea, General J. Lawton Collins (Army Chief of Staff), General MacArthur, Admiral C. Turner Joy, General George E. Stratemeyer, and staff officers Major General Doyle Hickey, Major General Charles A. Willoughby, and Major General Edwin K. Wright met in Tokyo to plan strategy countering the Chinese intervention; they considered three potential atomic warfare scenarios encompassing the next weeks and months of warfare.
In the first scenario: If the PVA continued attacking in full and the UN Command was forbidden to blockade and bomb China, and without ROC reinforcements, and without an increase in U.S. forces until April 1951 (four National Guard divisions were due to arrive), then atomic bombs might be used in North Korea.
In the second scenario: If the PVA continued full attacks and the UN Command had blockaded China and had effective aerial reconnaissance and bombing of the Chinese interior, and the ROC soldiers were maximally exploited, and tactical atomic bombing was to hand, then the UN forces could hold positions deep in North Korea.
In the third scenario: if China agreed to not cross the 38th parallel border, General MacArthur recommended UN acceptance of an armistice disallowing PVA and KPA troops south of the parallel, and requiring PVA and KPA guerrillas to withdraw northwards. The U.S. Eighth Army would remain to protect the Seoul–Incheon area, while X Corps would retreat to Pusan. A UN commission should supervise implementation of the armistice.
Both the Pentagon and the State Department were nonetheless cautious about using nuclear weapons because of the risk of general war with China and the diplomatic ramifications. Truman and his senior advisors agreed, and never seriously considered using them in early December 1950 despite the poor military situation in Korea.
In 1951, the U.S. came closest to atomic warfare in Korea. Because China had deployed new armies to the Sino-Korean frontier, ground crews at the Kadena Air Base, Okinawa, assembled atomic bombs for Korean warfare, "lacking only the essential pit nuclear cores". In October 1951, the United States effected Operation Hudson Harbor to establish a nuclear weapons capability. USAF B-29 bombers practised individual bombing runs from Okinawa to North Korea (using dummy nuclear or conventional bombs), coordinated from Yokota Air Base in east-central Japan. Hudson Harbor tested "actual functioning of all activities which would be involved in an atomic strike, including weapons assembly and testing, loading, ground control of bomb aiming". The bombing run data indicated that atomic bombs would be tactically ineffective against massed infantry, because the "timely identification of large masses of enemy troops was extremely rare."
Ridgway was authorized to use nuclear weapons if a major air attack originated from outside Korea. An envoy was sent to Hong Kong to deliver a warning to China. The message likely caused Chinese leaders to be more cautious about potential American use of nuclear weapons, but whether they learned about the B-29 deployment is unclear and the failure of the two major Chinese offensives that month likely was what caused them to shift to a defensive strategy in Korea. The B-29s returned to the United States in June.
Despite the greater destructive power deploying atomic weapons would bring to the war, their effects on determining the war's outcome would have likely been minimal. Tactically, given the dispersed nature of Chinese and North Korean forces, the relatively primitive infrastructure for staging and logistics centers, and the small number of bombs available (most would have been conserved for use against the Soviets), atomic attacks would have limited effects against the ability of China to mobilize and move forces. Strategically, attacking Chinese cities to destroy civilian industry and infrastructure would cause the immediate dispersion of the leadership away from such areas and give propaganda value for the communists to galvanize the support of Chinese civilians. Since the Soviets were not expected to intervene with their few primitive atomic weapons on China or North Korea's behalf if the U.S. used theirs first, factors such as little operational value and the lowering of the "threshold" for using atomic weapons against non-nuclear states in future conflicts played more of a role in not employing them than the threat of a possible nuclear exchange.
When Eisenhower succeeded Truman in early 1953 he was similarly cautious about using nuclear weapons in Korea, including for diplomatic purposes to encourage progress in the ongoing truce discussions. The administration prepared contingency plans for using them against China, but like Truman, the new president feared that doing so would result in Soviet attacks on Japan. The war ended as it had begun, without American nuclear weapons deployed near battle.
War crimes
Civilian deaths and massacres
thumb|upright|float|right|South Korean soldiers walk among the bodies of political prisoners executed near Daejon, July 1950
thumb|upright|float|right|Civilians killed during a night battle near Yongsan, August 1950
There were numerous atrocities and massacres of civilians committed by both North and South Koreans throughout the Korean War, many of them beginning in the first days of the war. South Korean President Syngman Rhee ordered the Bodo League massacre on 28 June, which began the killing of more than 100,000 suspected leftist sympathizers and their families by South Korean officials and right-wing groups. During the massacre, the British protested to their allies and saved some citizens.
In occupied areas, North Korean Army political officers purged South Korean society of its intelligentsia by executing every educated person—academic, governmental, religious—who might lead resistance against the North; the purges continued during the KPA retreat. When the North Koreans retreated north in September 1950, they abducted tens of thousands of South Korean men. The reasons are not clear, but the intention might have been to acquire skilled professionals for the North.
In addition to conventional military operations, North Korean soldiers fought the UN forces by infiltrating guerrillas among refugees. These soldiers disguised as refugees would approach UN forces asking for food and help, then open fire and attack. U.S. troops acted under a "shoot-first-ask-questions-later" policy against any civilian refugee approaching U.S. battlefield positions, a policy that led U.S. soldiers to kill an estimated 400 civilians at No Gun Ri (26–29 July 1950) in central Korea because they believed some of the refugees to be North Korean soldiers in disguise. The South Korean Truth and Reconciliation Commission defended this policy as a "military necessity".
Beginning in 2005, the South Korean Truth and Reconciliation Commission has investigated numerous atrocities committed by the Japanese colonial government, North Korean military, U.S. military, and the authoritarian South Korean government. It has investigated atrocities before, during and after the Korean War.
The Commission has verified over 14,000 civilians were killed in the Jeju uprising (1948–49) that involved South Korean military and paramilitary units against pro-North Korean guerrillas. Although most of the fighting had subsided by 1949, fighting continued until 1950. The Commission estimates 86% of the civilians were killed by South Korean forces. The Americans on the island documented the events, but never intervened.
Prisoners of war
thumb|upright|left|float|A U.S. Marine guards North Korean prisoners of war aboard an American warship in 1951.
During the first days of the war North Korean soldiers committed the Seoul National University Hospital massacre.
The United States reported that North Korea mistreated prisoners of war: soldiers were beaten, starved, put to forced labor, marched to death, and summarily executed.
The KPA killed POWs at the battles for Hill 312, Hill 303, the Pusan Perimeter, and Daejeon; these massacres were discovered afterwards by the UN forces. Later, a U.S. Congress war crimes investigation, the United States Senate Subcommittee on Korean War Atrocities of the Permanent Subcommittee of the Investigations of the Committee on Government Operations, reported that "two-thirds of all American prisoners of war in Korea died as a result of war crimes".
Although the Chinese rarely executed prisoners like their North Korean counterparts, mass starvation and diseases swept through the Chinese-run POW camps during the winter of 1950–51. About 43 percent of all U.S. POWs died during this period. The Chinese defended their actions by stating that all Chinese soldiers during this period were suffering mass starvation and diseases due to logistical difficulties. The UN POWs said that most of the Chinese camps were located near the easily supplied Sino-Korean border, and that the Chinese withheld food to force the prisoners to accept communist indoctrination programs. According to Chinese reports, over a thousand U.S. POWs died by the end of June 1951, while a dozen British POWs died, and all Turkish POWs survived.中国人民解放军总政治部联络部编. 敌军工作史料·第6册(1949年-1955年). 1989 According to Hastings, wounded U.S. POWs died for lack of medical attention and were fed a diet of corn and millet "devoid of vegetables, almost barren of proteins, minerals, or vitamins" with only 1/3 the calories of their usual diet. Especially in early 1951, thousands of prisoners lost the will to live and "simply declined to eat the mess of sorghum and rice with which they were provided."Hastings. The Korean War. Guild Publishing London. 1987 : 290-292
thumb|upright|float|right|alt=Two men without shirts on sit surrounded by soldiers|Two Hill 303 survivors after being rescued by American units, 17 August 1950.
Chinese POWs said that the UN forces helped anti-Communist POWs to torture Chinese POWs, for example by forcibly tattooing anti-Communist slogans on their bodies so that they would be compelled to refuse repatriation to the north. Communist POWs were even killed in public to frighten the others.
The unpreparedness of U.S. POWs to resist heavy communist indoctrination during the Korean War led to the Code of the United States Fighting Force which governs how U.S. military personnel in combat should act when they must "evade capture, resist while a prisoner or escape from the enemy".The military Code of Conduct: a brief history
North Korea may have detained up to 50,000 South Korean POWs after the ceasefire. Over 88,000 South Korean soldiers were missing and the Communists themselves had claimed that they had captured 70,000 South Koreans. However, when ceasefire negotiations began in 1951, the Communists reported that they held only 8,000 South Koreans. The UN Command protested the discrepancies and alleged that the Communists were forcing South Korean POWs to join the KPA.
The Communist side denied such allegations. They claimed that their POW rosters were small because many POWs were killed in UN air raids and that they had released ROK soldiers at the front. They insisted that only volunteers were allowed to serve in the KPA. By early 1952, UN negotiators gave up trying to get back the missing South Koreans. The POW exchange proceeded without access to South Korean POWs not on the Communist rosters.
North Korea continued to claim that any South Korean POW who stayed in the North did so voluntarily. However, since 1994, South Korean POWs have been escaping North Korea on their own after decades of captivity. As of 2010, the South Korean Ministry of Unification reported that 79 ROK POWs had escaped the North. The South Korean government estimates 500 South Korean POWs continue to be detained in North Korea.
The escaped POWs have testified about their treatment and written memoirs about their lives in North Korea. They report that they were not told about the POW exchange procedures, and were assigned to work in mines in the remote northeastern regions near the Chinese and Russian border. Declassified Soviet Foreign Ministry documents corroborate such testimony.
In 1997, the Geoje POW Camp in South Korea was turned into a memorial.
Starvation
In December 1950, the South Korean National Defense Corps was founded; its soldiers were 406,000 drafted citizens.
In the winter of 1951, 50,000 to 90,000 South Korean National Defense Corps soldiers starved to death while marching southward under the Chinese offensive, because their commanding officers had embezzled funds earmarked for their food. This event is called the National Defense Corps Incident. There is no evidence that Syngman Rhee was personally involved in or benefited from the corruption.
Recreation
thumb|upright|right|float|Bob Hope entertained X Corps in Korea on 26 October 1950.
In 1950, Secretary of Defense George C. Marshall and Secretary of the Navy Francis P. Matthews called on the USO, which had been disbanded by 1947, to provide support for U.S. servicemen. By the end of the war, more than 113,000 American USO volunteers were working on the home front and abroad. Many stars came to Korea to perform for the troops. Throughout the Korean War, UN Comfort Stations were operated by South Korean officials for UN soldiers.
Aftermath
thumb|left|upright|float|The Korean Peninsula at night, shown in a 2012 composite photograph from NASA.
Postwar recovery was different in the two Koreas. South Korea stagnated in the first postwar decade. In 1953, South Korea and the United States concluded a Mutual Defense Treaty. In 1960, the April Revolution occurred: students joined an anti-Syngman Rhee demonstration, 142 of them were killed by police, and as a consequence Syngman Rhee resigned and left for exile in the United States. Park Chung-hee's May 16 coup enabled social stability. In the 1960s, prostitution and related services earned 25 percent of South Korean GNP. From 1965 to 1973, South Korea dispatched troops to Vietnam and received $235,560,000 in allowances and military procurement from the United States. GNP increased fivefold during the Vietnam War. South Korea industrialized and modernized. Contemporary North Korea remains underdeveloped.South Korea's debt-to-GDP ratio reaches 34% in 2011 – Xinhua | English.news.cn. News.xinhuanet.com (10 April 2012). Retrieved on 12 July 2013.North Korea cornered with snowballing debts-The Korea Herald. View.koreaherald.com (18 August 2010). Retrieved on 12 July 2013. South Korea had one of the world's fastest-growing economies from the early 1960s to the late 1990s. In 1957 South Korea had a lower per capita GDP than Ghana, and by 2010 it was ranked thirteenth in the world (Ghana was 86th).
Following extensive USAF bombing, North Korea "had been virtually destroyed as an industrial society." After the armistice, Kim Il-Sung requested Soviet economic and industrial assistance. In September 1953, the Soviet government agreed to "cancel or postpone repayment for all ... outstanding debts", and promised to grant North Korea one billion rubles in monetary aid, industrial equipment and consumer goods. Eastern European members of the Soviet Bloc also contributed with "logistical support, technical aid, [and] medical supplies." China cancelled North Korea's war debts, provided 800 million yuan, promised trade cooperation, and sent in thousands of troops to rebuild damaged infrastructure.
Postwar, about 100,000 North Koreans were executed in purges.Courtois, Stephane, The Black Book of Communism, Harvard University Press, 1999, pg. 564. According to Rummel, forced labor and concentration camps were responsible for over one million deaths in North Korea from 1945 to 1987; others have estimated 400,000 deaths in concentration camps alone.Omestad, Thomas, "Gulag Nation", U.S. News & World Report, 23 June 2003. Estimates based on the most recent North Korean census suggest that 240,000 to 420,000 people died as a result of the 1990s North Korean famine and that there were 600,000 to 850,000 unnatural deaths in North Korea from 1993 to 2008. The North Korean government has been accused of "crimes against humanity" for its alleged culpability in creating and prolonging the 1990s famine. A study by South Korean anthropologists of North Korean children who had defected to China found that 18-year-old males were 5 inches shorter than South Koreans their age because of malnutrition.
South Korean anti-Americanism after the war was fueled by the presence and behavior of American military personnel (USFK) and U.S. support for the authoritarian regime, a fact still evident during the country's democratic transition in the 1980s. However, anti-Americanism has declined significantly in South Korea in recent years, from 46% favorable in 2003 to 74% favorable in 2011,"Global Unease With Major World Powers". Pew Research Center. 27 June 2007. making South Korea one of the most pro-American countries in the world.Views of US Continue to Improve in 2011 BBC Country Rating Poll, 7 March 2011.
In addition, a large number of mixed-race "G.I. babies" (offspring of American and other UN soldiers and Korean women) were filling up the country's orphanages. Because Korean traditional society places significant weight on paternal family ties, bloodlines, and purity of race, children of mixed race or those without fathers are not easily accepted in South Korean society. International adoption of Korean children began in 1954. The U.S. Immigration Act of 1952 legalized the naturalization of non-whites as American citizens, and made possible the entry of military spouses and children from South Korea after the Korean War. With the passage of the Immigration Act of 1965, which substantially changed U.S. immigration policy toward non-Europeans, Koreans became one of the fastest-growing Asian groups in the United States.
Mao Zedong's decision to take on the United States in the Korean War was a direct attempt to confront what the Communist bloc viewed as the strongest anti-Communist power in the world, undertaken at a time when the Chinese Communist regime was still consolidating its own power after winning the Chinese Civil War. Mao supported intervention not to save North Korea, but because he believed that a military conflict with the United States was inevitable after the United States entered the Korean War, and also to appease the Soviet Union in order to secure military dispensation and achieve Mao's goal of making China a major world military power. Mao was equally ambitious in improving his own prestige inside the communist international community by demonstrating that his Marxist concerns were international. In his later years Mao believed that Stalin only gained a positive opinion of him after China's entrance into the Korean War. Inside Mainland China, the war improved the long-term prestige of Mao, Zhou, and Peng, allowing the Chinese Communist Party to increase its legitimacy while weakening anti-Communist dissent.
thumb|float|right|North Koreans touring the "Museum of American War Atrocities" in 2009
The Chinese government has encouraged the point of view that the war was initiated by the United States and South Korea, though Comintern documents have shown that Mao sought approval from Joseph Stalin to enter the war. In Chinese media, the Chinese war effort is presented as an example of China's engaging the strongest power in the world with an under-equipped army, forcing it to retreat, and fighting it to a military stalemate. These successes were contrasted with China's historical humiliations by Japan and by Western powers over the previous hundred years, highlighting the abilities of the People's Liberation Army and the Chinese Communist Party. The most significant negative long-term consequence of the war (for China) was that it led the United States to guarantee the safety of Chiang Kai-shek's regime in Taiwan, effectively ensuring that Taiwan would remain outside of PRC control until the present day. Mao had also discovered the usefulness of large-scale mass movements during the war, and went on to apply them in most of his ruling measures over the PRC.沈志华、李丹慧.《战后中苏关系若干问题研究》(Research into Some Issues of Sino-USSR Relationship After WWII)人民出版社,2006年:pp.115 Finally, anti-American sentiment, which was already a significant factor during the Chinese Civil War, was ingrained into Chinese culture during the Communist propaganda campaigns of the Korean War.
The Korean War affected other participant combatants. Turkey, for example, entered NATO in 1952, and the foundation was laid for bilateral diplomatic and trade relations with South Korea.
See also
1st Commonwealth Division
Australia in the Korean War
Canada in the Korean War
Historical revisionism (negationism)#North Korea and the Korean War
Joint Advisory Commission, Korea
Korean conflict
Korean DMZ Conflict (1966–1969)
Korean reunification
Korean War in popular culture
List of books about the Korean War
List of Korean War weapons
List of Korean War Medal of Honor recipients
List of military equipment used in the Korean War
List of wars and anthropogenic disasters by death toll
New Zealand in the Korean War
Operation Big Switch
Operation Little Switch
Operation Moolah
Partisans in Korean War, Partisan Movement
Philippine Expeditionary Forces to Korea
Pyongyang Sally
UNCMAC—the UN Command Military Armistice Commission operating from 1953 to the present
UNCURK—the UN Commission for the Unification and Rehabilitation of Korea, established in 1950
UNTCOK—the United Nations Temporary Commission on Korea, active 1947–1948
M*A*S*H (TV series)
MASH (film)
War memorials:
United Nations Memorial Cemetery, Busan, Republic of Korea
Korean War Veterans Memorial, Washington, D.C.
Philadelphia Korean War Memorial
National War Memorial (New Zealand)
Korean War Memorial Wall, Brampton, Ontario
War Memorial of Korea Yongsan-dong, Yongsan-gu, Seoul, South Korea
Footnotes
Citations
References
External links
Historical
Anniversary of the Korean War Armistice: Truman on Acheson's Crucial Role in Going to War Shapell Manuscript Foundation
Korean War resources, Dwight D. Eisenhower Presidential Library
North Korea International Documentation Project
Grand Valley State University Veteran's History Project digital collection
The Forgotten War, Remembered – four testimonials in The New York Times
Collection of Books and Research Materials on the Korean War an online collection of the United States Army Center of Military History
Korean War, US Army Signal Corps Photograph Collection US Army Heritage and Education Center, Carlisle, Pennsylvania
The Korean War at History.com
Korean-War.com
Koreanwar-educator.org
Media
The Korean War You Never Knew & Life in the Korean War – slideshows by Life magazine
QuickTime sequence of 27 maps adapted from the West Point Atlas of American Wars
Animation for operations in 1950
Animation for operations in 1951
US Army Korea Media Center official Korean War online image archive
Rare pictures of the Korean War from the U.S. Library of Congress and National Archives
Land of the Morning Calm Canadians in Korea – multimedia project including veteran interviews
Pathé Online newsreel archive featuring films on the war
CBC Digital Archives—Forgotten Heroes: Canada and the Korean War
Organizations
Korea Defense Veterans of America
Korean War Ex-POW Association
Korean War Veterans Association
The Center for the Study of the Korean War
Memorials
Korean Children's War Memorial
Chinese 50th Anniversary Korean War Memorial
Category:Revolution-based civil wars
Category:Civil wars involving the states and peoples of Asia
Category:History of Korea
Korea
Category:Wars involving Australia
Category:Wars involving Canada
Category:Wars involving Belgium
Category:Wars involving the People's Republic of China
Category:Wars involving Ethiopia
Category:Wars involving France
Category:Wars involving Greece
Category:Wars involving Luxembourg
Category:Wars involving New Zealand
Category:Wars involving South Africa
Category:Wars involving Thailand
Category:Wars involving Turkey
Category:Wars involving the Netherlands
Category:Wars involving the Philippines
Category:Wars involving the Soviet Union
Category:Wars involving the United Kingdom
Category:Wars involving the United States
Category:Conflicts in 1950
Category:Conflicts in 1951
Category:Conflicts in 1952
Category:Conflicts in 1953
Category:Civil wars post-1945
Category:Communism-based civil wars
Category:Aftermath of World War II
Category:Wars involving North Korea
Category:Wars involving South Korea
Category:1950s in North Korea
Category:1950s in South Korea
Category:Proxy wars | 16,772 | 2017-01 |
Biodiversity
Biodiversity, a contraction of "biological diversity," generally refers to the variety and variability of life on Earth. One of the most widely used definitions describes it in terms of the variability within species, between species and between ecosystems. It is a measure of the variety of organisms present in different ecosystems. This can refer to genetic variation, ecosystem variation, or species variation (number of species) within an area, biome, or planet. Biodiversity is not distributed evenly on Earth; it is richest in the tropics. Terrestrial biodiversity tends to be greater near the equator, which seems to be the result of the warm climate and high primary productivity. Marine biodiversity tends to be highest along coasts in the Western Pacific, where sea surface temperature is highest, and in the mid-latitudinal band in all oceans. There are latitudinal gradients in species diversity. Biodiversity generally tends to cluster in hotspots and has been increasing through time, although this increase is likely to slow in the future.
The number and variety of plants, animals and other organisms that exist is known as biodiversity. It is an essential component of nature and ensures the survival of the human species by providing food, fuel, shelter, medicines and other resources to mankind. The richness of biodiversity depends on the climatic conditions and area of the region. All species of plants taken together are known as flora, and about 300,000 species of plants are known to date. All species of animals taken together are known as fauna, which includes birds, mammals, fish, reptiles, insects, crustaceans, molluscs, etc.
Rapid environmental changes typically cause mass extinctions. More than 99 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. More recently, in May 2016, scientists reported that 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon). In July 2016, scientists reported identifying a set of 355 genes from the Last Universal Common Ancestor (LUCA) of all organisms living on Earth.
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. Early edition, published online before print. According to one of the researchers, "If life arose relatively quickly on Earth .. then it could be common in the universe."
Since life began on Earth, five major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic eon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion—a period during which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses classified as mass extinction events. In the Carboniferous, rainforest collapse led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years. The most recent, the Cretaceous–Paleogene extinction event, occurred 65 million years ago and has often attracted more attention than others because it resulted in the extinction of the dinosaurs.
The period since the emergence of humans has displayed an ongoing biodiversity reduction and an accompanying loss of genetic diversity. Named the Holocene extinction, the reduction is caused primarily by human impacts, particularly habitat destruction. Conversely, biodiversity impacts human health in a number of ways, both positively and negatively.
The United Nations designated 2011–2020 as the United Nations Decade on Biodiversity.
Etymology
The term biological diversity was first used by wildlife scientist and conservationist Raymond F. Dasmann in his 1968 lay book A Different Kind of Country, which advocated conservation. The term was widely adopted only after more than a decade, when in the 1980s it came into common usage in science and environmental policy. Thomas Lovejoy, in the foreword to the book Conservation Biology, introduced the term to the scientific community. Until then the term "natural diversity" was common, introduced by The Science Division of The Nature Conservancy in an important 1975 study, "The Preservation of Natural Diversity." By the early 1980s, TNC's Science program and its head, Robert E. Jenkins, along with Lovejoy and other leading American conservation scientists of the time, advocated the use of the term "biological diversity".
The term's contracted form biodiversity may have been coined by W. G. Rosen in 1985 while planning the 1986 National Forum on Biological Diversity organized by the National Research Council (NRC). It first appeared in a publication in 1988, when sociobiologist E. O. Wilson used it as the title of the proceedings of that forum. Annex 6, Glossary. Used as source by "Biodiversity", Glossary of terms related to the CBD, Belgian Clearing-House Mechanism. Retrieved 2006-04-26.
Since this period the term has achieved widespread use among biologists, environmentalists, political leaders and concerned citizens.
A similar term in the United States is "natural heritage." It pre-dates the others and is more accepted by the wider audience interested in conservation. Broader than biodiversity, it includes geology and landforms.
Definitions
thumb|right|A sampling of fungi collected during summer 2008 in Northern Saskatchewan mixed woods, near La Ronge, illustrating the species diversity of fungi. In this photo, there are also leaf lichens and mosses.
"Biodiversity" is most commonly used to replace the more clearly defined and long established terms, species diversity and species richness. Biologists most often define biodiversity as the "totality of genes, species and ecosystems of a region". An advantage of this definition is that it seems to describe most circumstances and presents a unified view of the traditional types of biological variety previously identified:
taxonomic diversity (usually measured at the species diversity level)
ecological diversity often viewed from the perspective of ecosystem diversity
morphological diversity which stems from genetic diversity
functional diversity which is a measure of the number of functionally disparate species within a population (e.g. different feeding mechanism, different motility, predator vs prey, etc.)
In 2003, Anthony Campbell defined a fourth level: Molecular Diversity.
This multilevel construct is consistent with Dasmann and Lovejoy. An explicit definition consistent with this interpretation was first given in a paper by Bruce A. Wilcox commissioned by the International Union for the Conservation of Nature and Natural Resources (IUCN) for the 1982 World National Parks Conference. Wilcox's definition was "Biological diversity is the variety of life forms...at all levels of biological systems (i.e., molecular, organismic, population, species and ecosystem)...".Wilcox, Bruce A. 1984. In situ conservation of genetic resources: determinants of minimum area requirements. In National Parks, Conservation and Development, Proceedings of the World Congress on National Parks, J.A. McNeely and K.R. Miller, Smithsonian Institution Press, pp. 18–30.
The 1992 United Nations Earth Summit defined "biological diversity" as "the variability among living organisms from all sources, including, 'inter alia', terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This definition is used in the United Nations Convention on Biological Diversity.
One textbook's definition is "variation of life at all levels of biological organization".
Genetically, biodiversity can be defined as the diversity of alleles, genes and organisms. Geneticists study processes such as mutation and gene transfer that drive evolution.
Measuring diversity at one level in a group of organisms may not precisely correspond to diversity at other levels. However, tetrapod (terrestrial vertebrates) taxonomic and ecological diversity shows a very close correlation.
Distribution
thumb|A conifer forest in the Swiss Alps (National Park)
Biodiversity is not evenly distributed, rather it varies greatly across the globe as well as within regions. Among other factors, the diversity of all living things (biota) depends on temperature, precipitation, altitude, soils, geography and the presence of other species. The study of the spatial distribution of organisms, species and ecosystems, is the science of biogeography.
Diversity consistently measures higher in the tropics and in other localized regions such as the Cape Floristic Region and lower in polar regions generally. Rain forests that have had wet climates for a long time, such as Yasuní National Park in Ecuador, have particularly high biodiversity.
Terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity. A recent estimate put the total number of species on Earth at 8.7 million, of which 2.1 million were estimated to live in the ocean. However, this estimate seems to under-represent the diversity of microorganisms.
Latitudinal gradients
Generally, there is an increase in biodiversity from the poles to the tropics. Thus localities at lower latitudes have more species than localities at higher latitudes. This is often referred to as the latitudinal gradient in species diversity. Several ecological mechanisms may contribute to the gradient, but the ultimate factor behind many of them is the greater mean temperature at the equator compared to that of the poles.
Even though terrestrial biodiversity declines from the equator to the poles, some studies claim that this characteristic is unverified in aquatic ecosystems, especially in marine ecosystems. The latitudinal distribution of parasites does not appear to follow this rule.
Hotspots
A biodiversity hotspot is a region with a high level of endemic species that has experienced great habitat loss. The term hotspot was introduced in 1988 by Norman Myers. While hotspots are spread all over the world, the majority are forest areas and most are located in the tropics.
Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species, 1,350 vertebrates and millions of insects, about half of which occur nowhere else. The island of Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest rate of species by area unit worldwide and the largest number of endemics (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined; Colombia has 10% of the world's mammal species, 14% of the amphibian species and 18% of the bird species of the world. Madagascar dry deciduous forests and lowland rainforests possess a high ratio of endemism. Since the island separated from mainland Africa 66 million years ago, many species and ecosystems have evolved independently. Indonesia's 17,000 islands contain 10% of the world's flowering plants, 12% of mammals and 17% of reptiles, amphibians and birds, along with nearly 240 million people. Many regions of high biodiversity and/or endemism arise from specialized habitats which require unusual adaptations, for example, alpine environments in high mountains, or Northern European peat bogs.
Accurately measuring differences in biodiversity can be difficult. Selection bias amongst researchers may contribute to biased empirical research for modern estimates of biodiversity. In 1768, Rev. Gilbert White succinctly observed of his Selborne, Hampshire: "all nature is so full, that that district produces the most variety which is the most examined."
Evolution and history
thumb|300px|Apparent marine fossil diversity during the Phanerozoic
Biodiversity is the result of 3.5 billion years of evolution. The origin of life has not been definitely established by science; however, some evidence suggests that life may already have been well established only a few hundred million years after the formation of the Earth. Until approximately 600 million years ago, all life consisted of archaea, bacteria, protozoans and similar single-celled organisms.
The history of biodiversity during the Phanerozoic (the last 540 million years) starts with rapid growth during the Cambrian explosion, a period during which nearly every phylum of multicellular organisms first appeared. Over the next 400 million years or so, invertebrate diversity showed little overall trend, while vertebrate diversity shows an overall exponential trend. This dramatic rise in diversity was marked by periodic, massive losses of diversity classified as mass extinction events. A significant loss occurred when rainforests collapsed in the Carboniferous. The worst was the Permian–Triassic extinction event, 251 million years ago. Vertebrates took 30 million years to recover from this event.
The fossil record suggests that the last few million years featured the greatest biodiversity in history. However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some scientists believe that, corrected for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago, whereas others consider the fossil record reasonably reflective of the diversification of life. Estimates of the present global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9 million, the vast majority of them arthropods. Diversity appears to increase continually in the absence of natural selection.
Evolutionary diversification
The existence of a "global carrying capacity", limiting the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity. As one author states, "Tetrapods have not yet invaded 64 per cent of potentially habitable modes and it could be that without human influence the ecological and taxonomic diversity of tetrapods would continue to increase in an exponential fashion until most or all of the available ecospace is filled."
On the other hand, changes through the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback. The hyperbolic pattern of world population growth arises from a second-order positive feedback between the population size and the rate of technological growth. The hyperbolic character of biodiversity growth can similarly be accounted for by a feedback between diversity and community structure complexity. The similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend with cyclical and stochastic dynamics.
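The three model families referred to above can be sketched in their standard textbook forms (illustrative parameterizations, not necessarily the exact equations used in the cited studies), with N standing for diversity:

```latex
\frac{dN}{dt} = rN
  % exponential: first-order positive feedback (more ancestors, more descendants)
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
  % logistic: the same feedback, damped by a carrying capacity K
\frac{dN}{dt} = aN^{2}
  % hyperbolic: second-order positive feedback; integrates to
  % N(t) = \frac{1}{a\,(t_{0} - t)}, which grows faster than any exponential as t \to t_{0}
```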
Most biologists agree however that the period since human emergence is part of a new mass extinction, named the Holocene extinction event, caused primarily by the impact humans are having on the environment.National Survey Reveals Biodiversity Crisis American Museum of Natural History It has been argued that the present rate of extinction is sufficient to eliminate most species on the planet Earth within 100 years.
New species are regularly discovered (on average between 5,000 and 10,000 new species each year, most of them insects) and many, though discovered, are not yet classified (estimates are that nearly 90% of all arthropods are not yet classified). Most of the terrestrial diversity is found in tropical forests and, in general, land has more species than the ocean; some 8.7 million species may exist on Earth, of which some 2.1 million live in the ocean.
Ecosystem services
thumb|Summer field in Belgium (Hamois). The blue flowers are Centaurea cyanus and the red are Papaver rhoeas.
The balance of evidence
"Ecosystem services are the suite of benefits that ecosystems provide to humanity."
These services come in three flavors:
Provisioning services which involve the production of renewable resources (e.g.: food, wood, fresh water)
Regulating services which are those that lessen environmental change (e.g.: climate regulation, pest/disease control)
Cultural services represent human value and enjoyment (e.g.: landscape aesthetics, cultural heritage, outdoor recreation and spiritual significance)
There have been many claims about biodiversity's effect on these ecosystem services, especially provisioning and regulating services. After an exhaustive survey through peer-reviewed literature to evaluate 36 different claims about biodiversity's effect on ecosystem services, 14 of those claims have been validated, 6 demonstrate mixed support or are unsupported, 3 are incorrect and 13 lack enough evidence to draw definitive conclusions.
Services enhanced
Provisioning services
Greater species diversity of plants increases fodder yield (synthesis of 271 experimental studies).
Greater genetic diversity of plants (i.e.: diversity within a single species) increases overall crop yield (synthesis of 575 experimental studies). Although another review of 100 experimental studies reports mixed evidence.
Greater species diversity of trees increases overall wood production (Synthesis of 53 experimental studies). However, there is not enough data to draw a conclusion about the effect of tree trait diversity on wood production.
Regulating services
Greater species diversity of fish increases the stability of fisheries yield (Synthesis of 8 observational studies)
Greater species diversity of natural pest enemies decreases herbivorous pest populations (Data from two separate reviews; Synthesis of 266 experimental and observational studies; Synthesis of 18 observational studies). Although another review of 38 experimental studies found mixed support for this claim, suggesting that in cases where mutual intraguild predation occurs, a single predatory species is often more effective.
Greater species diversity of plants decreases disease prevalence on plants (Synthesis of 107 experimental studies)
Greater species diversity of plants increases resistance to plant invasion (Data from two separate reviews; Synthesis of 105 experimental studies; Synthesis of 15 experimental studies)
Greater species diversity of plants increases carbon sequestration (but note that this finding only relates to actual uptake of carbon dioxide and not long-term storage; see below. Synthesis of 479 experimental studies)
Greater species diversity of plants increases soil nutrient remineralization (Synthesis of 103 experimental studies)
Greater species diversity of plants increases soil organic matter (Synthesis of 85 experimental studies)
Services with mixed evidence
Provisioning services
None to date
Regulating services
Greater species diversity of plants may or may not decrease herbivorous pest populations. Data from two separate reviews suggest that greater diversity decreases pest populations (Synthesis of 40 observational studies; Synthesis of 100 experimental studies). One review found mixed evidence (Synthesis of 287 experimental studies), while another found contrary evidence (Synthesis of 100 experimental studies)
Greater species diversity of animals may or may not decrease disease prevalence on those animals (Synthesis of 45 experimental and observational studies), although a 2013 study offers more support, showing that biodiversity may in fact enhance disease resistance within animal communities, at least in amphibian frog ponds. Many more studies must be published in support of diversity before the balance of evidence is such that a general rule can be drawn for this service.
Greater species and trait diversity of plants may or may not increase long term carbon storage (Synthesis of 33 observational studies)
Greater pollinator diversity may or may not increase pollination (Synthesis of 7 observational studies), but a publication from March 2013 suggests that increased native pollinator diversity enhances pollen deposition (although not necessarily fruit set; for details see the study's lengthy supplementary material).
Services hindered
Provisioning services
Greater species diversity of plants reduces primary production (Synthesis of 7 experimental studies)
Regulating services
Greater genetic and species diversity of a number of organisms reduces freshwater purification (Synthesis of 8 experimental studies; an attempt by the authors to investigate the effect of detritivore diversity on freshwater purification was unsuccessful due to a lack of available evidence, with only 1 observational study found).
Services with insufficient evidence to draw a conclusion
Provisioning services
Effect of species diversity of plants on biofuel yield (In a survey of the literature, the investigators only found 3 studies)
Effect of species diversity of fish on fishery yield (In a survey of the literature, the investigators only found 4 experimental studies and 1 observational study)
Regulating services
Effect of species diversity on the stability of biofuel yield (In a survey of the literature, the investigators did not find any studies)
Effect of species diversity of plants on the stability of fodder yield (In a survey of the literature, the investigators only found 2 studies)
Effect of species diversity of plants on the stability of crop yield (In a survey of the literature, the investigators only found 1 study)
Effect of genetic diversity of plants on the stability of crop yield (In a survey of the literature, the investigators only found 2 studies)
Effect of diversity on the stability of wood production (In a survey of the literature, the investigators could not find any studies)
Effect of species diversity of multiple taxa on erosion control (In a survey of the literature, the investigators could not find any studies – they did however find studies on the effect of species diversity and root biomass)
Effect of diversity on flood regulation (In a survey of the literature, the investigators could not find any studies)
Effect of species and trait diversity of plants on soil moisture (In a survey of the literature, the investigators only found 2 studies)
Other sources have reported somewhat conflicting results and in 1997 Robert Costanza and colleagues reported the estimated global value of ecosystem services (not captured in traditional markets) at an average of $33 trillion annually.
Since the Stone Age, species loss has accelerated above the average basal rate, driven by human activity. Estimates of species losses are at a rate 100 to 10,000 times as fast as is typical in the fossil record.
Biodiversity also affords many non-material benefits including spiritual and aesthetic values, knowledge systems and education.
Agriculture
thumb|250px|Amazon Rainforest in South America
Agricultural diversity can be divided into two categories: intraspecific diversity, which includes the genetic variety within a single species, like the potato (Solanum tuberosum) that is composed of many different forms and types (e.g.: in the U.S. we might compare russet potatoes with new potatoes or purple potatoes, all different, but all part of the same species, S. tuberosum).
The other category of agricultural diversity is called interspecific diversity and refers to the number and types of different species. Thinking about this diversity we might note that many small vegetable farmers grow many different crops like potatoes and also carrots, peppers, lettuce etc.
Agricultural diversity can also be divided by whether it is ‘planned’ diversity or ‘associated’ diversity. This is a functional classification that we impose and not an intrinsic feature of life or diversity. Planned diversity includes the crops which a farmer has encouraged, planted or raised (e.g.: crops, covers, symbionts and livestock, among others), which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g.: herbivores, weed species and pathogens, among others).
The control of associated biodiversity is one of the great agricultural challenges that farmers face. On monoculture farms, the approach is generally to eradicate associated diversity using a suite of biologically destructive pesticides, mechanized tools and transgenic engineering techniques, then to rotate crops. Although some polyculture farmers use the same techniques, they also employ integrated pest management strategies as well as strategies that are more labor-intensive, but generally less dependent on capital, biotechnology and energy.
Interspecific crop diversity is, in part, responsible for offering variety in what we eat. Intraspecific diversity, the variety of alleles within a single species, also offers us choice in our diets. If a crop fails in a monoculture, we rely on agricultural diversity to replant the land with something new. If a wheat crop is destroyed by a pest we may plant a hardier variety of wheat the next year, relying on intraspecific diversity. We may forgo wheat production in that area and plant a different species altogether, relying on interspecific diversity. Even an agricultural society which primarily grows monocultures, relies on biodiversity at some point.
The Irish potato blight of 1846 was a major factor in the deaths of one million people and the emigration of about two million. It was the result of planting only two potato varieties, both vulnerable to the blight, Phytophthora infestans, which arrived in 1845.
When rice grassy stunt virus struck rice fields from Indonesia to India in the 1970s, 6,273 varieties were tested for resistance. Only one was resistant, an Indian variety known to science only since 1966. This variety formed a hybrid with other varieties and is now widely grown.
Coffee rust attacked coffee plantations in Sri Lanka, Brazil and Central America in 1970. A resistant variety was found in Ethiopia. The diseases are themselves a form of biodiversity.
Monoculture was a contributing factor to several agricultural disasters, including the European wine industry collapse in the late 19th century and the US southern corn leaf blight epidemic of 1970.
Although about 80 percent of humans' food supply comes from just 20 kinds of plants, humans use at least 40,000 species. Many people depend on these species for food, shelter and clothing. Earth's surviving biodiversity provides resources for increasing the range of food and other products suitable for human use, although the present extinction rate shrinks that potential.
Human health
thumb|upright|The diverse forest canopy on Barro Colorado Island, Panama, yielded this display of different fruit
Biodiversity's relevance to human health is becoming an international political issue, as scientific evidence builds on the global health implications of biodiversity loss.World Health Organization and Secretariat of the Convention on Biological Diversity (2015) Connecting Global Priorities: Biodiversity and Human Health, a State of Knowledge Review . See also Website of the Secretariat of the Convention on Biological Diversity on biodiversity and health. Other relevant resources include
Reports of the 1st and 2nd International Conferences on Health and Biodiversity. See also: Website of the UN COHAB Initiative
This issue is closely linked with the issue of climate change,(2009) "Climate Change and Biological Diversity" Convention on Biological Diversity Retrieved November 5, 2009 as many of the anticipated health risks of climate change are associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity of fresh water, impacts on agricultural biodiversity and food resources etc.). This is because the species most likely to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the ones that increase disease transmission, such as that of West Nile Virus, Lyme disease and Hantavirus, according to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University.
The growing demand for and lack of drinkable water on the planet presents an additional challenge to the future of human health. Part of the problem lies in the success of water suppliers in increasing supplies and the failure of groups promoting the preservation of water resources. While the distribution of clean water increases, in some parts of the world it remains unequal. According to the 2008 World Population Data Sheet, only 62% of least developed countries are able to access clean water.Population Bulletin. Vol.63., No.3., p.8.
Some of the health issues influenced by biodiversity include dietary health and nutrition security, infectious disease, medical science and medicinal resources, social and psychological health. Biodiversity is also known to have an important role in reducing disaster risk and in post-disaster relief and recovery efforts.
Biodiversity provides critical support for drug discovery and the availability of medicinal resources.(2006) "Molecular Pharming" GMO Compass Retrieved November 5, 2009, GMOcompass.org A significant proportion of drugs are derived, directly or indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from plants, animals and micro-organisms, while about 80% of the world population depends on medicines from nature (used in either modern or traditional medical practice) for primary healthcare. Only a tiny fraction of wild species has been investigated for medical potential. Biodiversity has been critical to advances throughout the field of bionics. Evidence from market analysis and biodiversity science indicates that the decline in output from the pharmaceutical sector since the mid-1980s can be attributed to a move away from natural product exploration ("bioprospecting") in favor of genomics and synthetic chemistry; indeed, claims about the value of undiscovered pharmaceuticals may not provide enough incentive for companies in free markets to search for them because of the high cost of development. Meanwhile, natural products have a long history of supporting significant economic and health innovation. Marine ecosystems are particularly important, although inappropriate bioprospecting can increase biodiversity loss, as well as violating the laws of the communities and states from which the resources are taken.
Business and industry
thumb|right|Agriculture production, pictured is a tractor and a chaser bin
Many industrial materials derive directly from biological sources. These include building materials, fibers, dyes, rubber and oil. Biodiversity is also important to the security of resources such as water, timber, paper, fiber and food.IUCN, WRI, World Business Council for Sustainable Development, Earthwatch Inst. 2007 Business and Ecosystems: Ecosystem Challenges and Business ImplicationsMillennium Ecosystem Assessment 2005 Ecosystems and Human Well-being: Opportunities and Challenges for Business and Industry As a result, biodiversity loss is a significant risk factor in business development and a threat to long term economic sustainability.WRI Corporate Ecosystem Services Review. See also: Examples of Ecosystem-Service Based Risks, Opportunities and StrategiesCorporate Biodiversity Accounting. See also: Making the Natural Capital Declaration Accountable.
Leisure, cultural and aesthetic value
Biodiversity enriches leisure activities such as hiking, birdwatching or natural history study. Biodiversity inspires musicians, painters, sculptors, writers and other artists. Many cultures view themselves as an integral part of the natural world which requires them to respect other living organisms.
Popular activities such as gardening, fishkeeping and specimen collecting strongly depend on biodiversity. The number of species involved in such pursuits is in the tens of thousands, though the majority do not enter commerce.
The relationships between the original natural areas of these often exotic animals and plants and commercial collectors, suppliers, breeders, propagators and those who promote their understanding and enjoyment are complex and poorly understood. The general public responds well to exposure to rare and unusual organisms, reflecting their inherent value.
Philosophically it could be argued that biodiversity has intrinsic aesthetic and spiritual value to mankind in and of itself. This idea can be used as a counterweight to the notion that tropical forests and other ecological realms are only worthy of conservation because of the services they provide.
Ecological services
thumb|upright|Eagle Creek, Oregon hiking
Biodiversity supports many ecosystem services:
"There is now unequivocal evidence that biodiversity loss reduces the efficiency by which ecological communities capture biologically essential resources, produce biomass, decompose and recycle biologically essential nutrients... There is mounting evidence that biodiversity increases the stability of ecosystem functions through time... Diverse communities are more productive because they contain key species that have a large influence on productivity and differences in functional traits among organisms increase total resource capture... The impacts of diversity loss on ecological processes might be sufficiently large to rival the impacts of many other global drivers of environmental change... Maintaining multiple ecosystem processes at multiple places and times requires higher levels of biodiversity than does a single process at a single place and time."
It plays a part in regulating the chemistry of our atmosphere and water supply. Biodiversity is directly involved in water purification, recycling nutrients and providing fertile soils. Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example, insect pollination cannot be mimicked, and that activity alone was worth between $2.1 billion and $14.6 billion in 2003.
Number of species
thumb|700px|center|Discovered and predicted total number of species on land and in the oceans
According to Mora and colleagues, the total number of terrestrial species is estimated to be around 8.7 million while the number of oceanic species is much lower, estimated at 2.2 million. The authors note that these estimates are strongest for eukaryotic organisms and likely represent the lower bound of prokaryote diversity. Other estimates include:
220,000 vascular plants, estimated using the species–area relation method (see the formula after this list)
0.7-1 million marine species
10–30 million insects (of which some 0.9 million are known today)Le Monde newspaper article (in French)
5–10 million bacteria;Proceedings of the National Academy of Sciences, Census of Marine Life (CoML)
News.BBC.co.uk
1.5-3 million fungi, estimates based on data from the tropics, long-term non-tropical sites and molecular studies that have revealed cryptic speciation (some 0.075 million species of fungi had been documented by 2001)
1 million mites
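The species–area relation mentioned in the vascular plant estimate above is usually written as a power law; the form below is the standard textbook expression, given here for illustration rather than as the exact model behind that figure:

```latex
S = c\,A^{z}
% S: number of species, A: habitat area,
% c: a taxon- and region-specific constant,
% z: an empirical exponent, commonly reported around 0.2-0.3
```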
The number of microbial species is not reliably known, but the Global Ocean Sampling Expedition dramatically increased the estimates of genetic diversity by identifying an enormous number of new genes from near-surface plankton samples at various marine locations, initially over the 2004-2006 period. The findings may eventually cause a significant change in the way science defines species and other taxonomic categories.
Since the rate of extinction has increased, many extant species may become extinct before they are described. Not surprisingly, among the animals the most studied groups are birds and mammals, whereas fishes and arthropods are the least studied animal groups.
Measuring biodiversity
Species loss rates
During the last century, decreases in biodiversity have been increasingly observed. In 2007, German Federal Environment Minister Sigmar Gabriel cited estimates that up to 30% of all species will be extinct by 2050. Of these, about one eighth of known plant species are threatened with extinction. Estimates reach as high as 140,000 species per year (based on species–area theory). This figure indicates unsustainable ecological practices, because only a few species emerge each year. Almost all scientists acknowledge that the rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates. As of 2012, some studies suggest that 25% of all mammal species could be extinct in 20 years.
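A hedged illustration of how species–area theory produces such figures: if habitat area shrinks, the power-law relation given earlier implies that the fraction of species eventually lost rises with the fraction of area destroyed. The exponent value in the worked example is an assumption chosen for illustration, not the one behind the 140,000-per-year estimate.

```latex
\frac{S_{\text{new}}}{S_{0}} = \left(\frac{A_{\text{new}}}{A_{0}}\right)^{z}
\quad\Rightarrow\quad
\text{fraction of species lost} \approx 1 - \left(\frac{A_{\text{new}}}{A_{0}}\right)^{z}
% e.g. with an assumed z = 0.25, losing 90% of habitat area implies
% 1 - 0.1^{0.25} \approx 0.44, i.e. roughly 44% of species lost over time
```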
In absolute terms, the planet has lost 52% of its biodiversity since 1970, according to a 2014 study by the World Wildlife Fund. The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians and fish across the globe is, on average, about half the size it was 40 years ago". Within that overall decline, terrestrial wildlife fell by 39%, marine wildlife by 39% and freshwater wildlife by 76%. Biodiversity took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity, which was canceled out by a loss in low-income countries. This is despite the fact that high-income countries use five times the ecological resources of low-income countries, which was explained as the result of a process whereby wealthy nations outsource resource depletion to poorer nations, which suffer the greatest ecosystem losses.
Threats
In 2006 many species were formally classified as rare or endangered or threatened; moreover, scientists have estimated that millions more species are at risk which have not been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction—a total of 16,119.
Jared Diamond describes an "Evil Quartet" of habitat destruction, overkill, introduced species and secondary extinctions. Edward O. Wilson prefers the acronym HIPPO, standing for Habitat destruction, Invasive species, Pollution, human over-Population and Over-harvesting. The most authoritative classification in use today is IUCN's Classification of Direct Threats which has been adopted by major international conservation organizations such as the US Nature Conservancy, the World Wildlife Fund, Conservation International and BirdLife International.
Habitat destruction
thumb|Deforestation and increased road-building in the Amazon Rainforest are a significant concern because of increased human encroachment upon wild areas, increased resource extraction and further threats to biodiversity.
Habitat destruction has played a key role in extinctions, especially related to tropical forest destruction. Factors contributing to habitat loss are: overconsumption, overpopulation, land use change, deforestation,C.Michael Hogan. 2010. Deforestation Encyclopedia of Earth. ed. C.Cleveland. NCSE. Washington DC pollution (air pollution, water pollution, soil contamination) and global warming or climate change.
Habitat size and numbers of species are systematically related. Physically larger species and those living at lower latitudes or in forests or oceans are more sensitive to reduction in habitat area. Conversion to "trivial" standardized ecosystems (e.g., monoculture following deforestation) effectively destroys habitat for the more diverse species that preceded the conversion. In some countries lack of property rights or lax law/regulatory enforcement necessarily leads to biodiversity loss (degradation costs having to be supported by the community).
A 2007 study conducted by the National Science Foundation found that biodiversity and genetic diversity are codependent—that diversity among species requires diversity within a species and vice versa. "If any one type is removed from the system, the cycle can break down and the community becomes dominated by a single species."
At present, the most threatened ecosystems are found in fresh water, according to the Millennium Ecosystem Assessment 2005, which was confirmed by the "Freshwater Animal Diversity Assessment", organised by the biodiversity platform and the French Institut de recherche pour le développement (MNHNP).Science Connection 22 (July 2008)
Co-extinctions are a form of habitat destruction. Co-extinction occurs when the extinction or decline in one accompanies the other, such as in plants and beetles.
Introduced and invasive species
thumb|right|Male Lophura nycthemera (silver pheasant), a native of East Asia that has been introduced into parts of Europe for ornamental reasons
Barriers such as large rivers, seas, oceans, mountains and deserts encourage diversity by enabling independent evolution on either side of the barrier, via the process of allopatric speciation. The term invasive species is applied to species that breach the natural barriers that would normally keep them constrained. Without barriers, such species occupy new territory, often supplanting native species by occupying their niches, or by using resources that would normally sustain native species.
The number of species invasions has been on the rise at least since the beginning of the 1900s. Species are increasingly being moved by humans (on purpose and accidentally). In some cases the invaders are causing drastic changes and damage to their new habitats (e.g.: zebra mussels and the emerald ash borer in the Great Lakes region and the lionfish along the North American Atlantic coast). Some evidence suggests that invasive species are competitive in their new habitats because they are subject to less pathogen disturbance. Others report confounding evidence that occasionally suggests that species-rich communities harbor many native and exotic species simultaneously, while some say that diverse ecosystems are more resilient and resist invasive plants and animals. An important question is, "do invasive species cause extinctions?" Many studies cite effects of invasive species on natives, but not extinctions. Invasive species seem to increase local diversity (i.e.: alpha diversity), which decreases turnover of diversity (i.e.: beta diversity). Overall gamma diversity may be lowered because species are going extinct because of other causes, but even some of the most insidious invaders (e.g.: Dutch elm disease, emerald ash borer, chestnut blight in North America) have not caused their host species to become extinct. Extirpation, population decline and homogenization of regional biodiversity are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers, by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than historically have been required for a species to extend its range.
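For readers unfamiliar with the alpha/beta/gamma terminology used above, the sketch below applies Whittaker's multiplicative definitions to two hypothetical site inventories (the site data and species names are made up purely for illustration, and species richness is used as the diversity measure):

```python
# Whittaker's diversity components, using species richness as the measure.
# The site inventories below are hypothetical, for illustration only.
sites = {
    "site_A": {"oak", "maple", "fern", "zebra_mussel"},   # includes a widespread invader
    "site_B": {"oak", "birch", "zebra_mussel"},
}

alpha = sum(len(s) for s in sites.values()) / len(sites)   # mean local (within-site) richness
gamma = len(set.union(*sites.values()))                    # regional (pooled) richness
beta = gamma / alpha                                       # between-site turnover (Whittaker's beta)

print(f"alpha = {alpha:.1f}, beta = {beta:.2f}, gamma = {gamma}")
# An invader that establishes at every site raises alpha but lowers beta (less turnover);
# gamma only falls if some species disappears from the whole region.
```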
Not all introduced species are invasive, nor are all invasive species deliberately introduced. In cases such as the zebra mussel, invasion of US waterways was unintentional. In other cases, such as mongooses in Hawaii, the introduction is deliberate but ineffective (nocturnal rats were not vulnerable to the diurnal mongoose). In other cases, such as oil palms in Indonesia and Malaysia, the introduction produces substantial economic benefits, but the benefits are accompanied by costly unintended consequences.
Finally, an introduced species may unintentionally injure a species that depends on the species it replaces. In Belgium, Prunus spinosa from Eastern Europe leafs much sooner than its West European counterparts, disrupting the feeding habits of the Thecla betulae butterfly (which feeds on the leaves). Introducing new species often leaves endemic and other local species unable to compete with the exotic species and unable to survive. The exotic organisms may be predators, parasites, or may simply outcompete indigenous species for nutrients, water and light.
At present, several countries have already imported so many exotic species, particularly agricultural and ornamental plants, that their own indigenous fauna/flora may be outnumbered. For example, the introduction of kudzu from Southeast Asia to Canada and the United States has threatened biodiversity in certain areas.
Genetic pollution
Endemic species can be threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as a result of either a numerical and/or fitness advantage of an introduced species.
Hybridization and introgression are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow is normal adaptation and not all gene and genotype constellations can be preserved. However, hybridization with or without introgression may, nevertheless, threaten a rare species' existence.RIRDC.gov.au RIRDC Publication No 01/114; RIRDC Project No CPF - 3A; Australian Government, Rural Industrial Research and Development Corporation
Overexploitation
Overexploitation occurs when a resource is consumed at an unsustainable rate. This occurs on land in the form of overhunting, excessive logging, poor soil conservation in agriculture and the illegal wildlife trade.
About 25% of world fisheries are now overfished to the point where their current biomass is less than the level that maximizes their sustainable yield.
The overkill hypothesis, a pattern of large animal extinctions connected with human migration patterns, can be used to explain why megafaunal extinctions can occur within a relatively short time period.
Hybridization, genetic pollution/erosion and food security
right|thumb|The Yecoro wheat cultivar (right) is sensitive to salinity; plants resulting from a hybrid cross with cultivar W4910 (left) show greater tolerance to high salinity
In agriculture and animal husbandry, the Green Revolution popularized the use of conventional hybridization to increase yield. Often hybridized breeds originated in developed countries and were further hybridized with local varieties in the developing world to create high yield strains resistant to local climate and diseases. Local governments and industry have been pushing hybridization. Formerly huge gene pools of various wild and indigenous breeds have collapsed causing widespread genetic erosion and genetic pollution. This has resulted in loss of genetic diversity and biodiversity as a whole."Genetic Pollution: The Great Genetic Scandal";
Genetically modified organisms (GM organisms) have genetic material altered by genetic engineering procedures such as recombinant DNA technology. GM crops have become a common source for genetic pollution, not only of wild varieties but also of domesticated varieties derived from classical hybridization. Reviewed in "Genetic pollution: Uncontrolled escape of genetic information (frequently referring to products of genetic engineering) into the genomes of organisms in the environment where those genes never existed before." Searchable Biotechnology Dictionary, University of Minnesota, Boku.ac.at
Genetic erosion coupled with genetic pollution may be destroying unique genotypes, thereby creating a hidden crisis which could result in a severe threat to our food security. Diverse genetic material could cease to exist which would impact our ability to further hybridize food crops and livestock against more resistant diseases and climatic changes.
Climate change
thumb|right|Polar bears on the sea ice of the Arctic Ocean, near the North Pole. Climate change has started affecting bear populations.
Global warming is also considered to be a major potential threat to global biodiversity in the future. For example, coral reefs - which are biodiversity hotspots - will be lost within the century if global warming continues at the current trend.
Many claims have been made about climate change's potential to affect biodiversity, but the evidence supporting them is tenuous. Increasing atmospheric carbon dioxide certainly affects plant morphology and is acidifying oceans, and temperature affects species ranges, phenology and weather, but the major impacts that have been predicted are still just potential impacts. We have not documented major extinctions yet, even as climate change drastically alters the biology of many species.
In 2004, an international collaborative study on four continents estimated that 10 percent of species would become extinct by 2050 because of global warming. "We need to limit climate change or we wind up with a lot of species in trouble, possibly extinct," said Dr. Lee Hannah, a co-author of the paper and chief climate change biologist at the Center for Applied Biodiversity Science at Conservation International.
A recent study predicts that up to 35% of the world terrestrial carnivores and ungulates will be at higher risk of extinction by 2050 because of the joint effects of predicted climate and land-use change under business-as-usual human development scenarios.
Human overpopulation
From 1950 to 2011, world population increased from 2.5 billion to 7 billion and is forecast to reach a plateau of more than 9 billion during the 21st century."World Population Growth, 1950–2050". Population Reference Bureau. Some recent forecasts place the possible number of people on the planet at 11 billion or 15 billion by 2100.World population to keep growing this century, hit 11 billion by 2100. UWToday. September 18, 2014. Sir David King, former chief scientific adviser to the UK government, told a parliamentary inquiry: "It is self-evident that the massive growth in the human population through the 20th century has had more impact on biodiversity than any other single factor.""Citizens arrest". The Guardian. July 11, 2007."Population Bomb Author's Fix For Next Extinction: Educate Women". Scientific American. August 12, 2008. At least until the middle of the 21st century, worldwide losses of pristine biodiverse land will probably depend much on the worldwide human birth rate. Biologists such as Paul R. Ehrlich and Stuart Pimm have noted that human population growth is one of the main drivers of species extinction.
According to a 2014 study by the World Wildlife Fund, the global human population already exceeds the planet's biocapacity - it would take the equivalent of 1.5 Earths of biocapacity to meet our current demands. The report further points out that if everyone on the planet had the footprint of the average resident of Qatar, we would need 4.8 Earths, and if we lived the lifestyle of a typical resident of the USA, we would need 3.9 Earths.
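The "number of Earths" framing reduces to a simple ratio of per-capita ecological footprint to per-capita biocapacity. The sketch below uses illustrative round numbers of the kind such reports work with, not figures taken from the WWF study itself:

```python
# "Earths needed" = per-capita ecological footprint / per-capita biocapacity.
# All values below (global hectares per person) are illustrative assumptions,
# not figures quoted from the Living Planet Report.
GLOBAL_BIOCAPACITY_PER_CAPITA = 1.7   # assumed gha per person

footprints = {
    "world average": 2.6,              # assumed
    "high-consumption country": 8.0,   # assumed
}

for label, footprint in footprints.items():
    earths = footprint / GLOBAL_BIOCAPACITY_PER_CAPITA
    print(f"{label}: about {earths:.1f} Earths")
# world average: about 1.5 Earths; high-consumption country: about 4.7 Earths
```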
The Holocene extinction
Rates of decline in biodiversity in this sixth mass extinction match or exceed rates of loss in the five previous mass extinction events in the fossil record. Loss of biodiversity results in the loss of natural capital that supplies ecosystem goods and services. From the perspective of the method known as Natural Economy, the economic value of 17 ecosystem services for Earth's biosphere, calculated in 1997, was estimated at US$33 trillion (3.3 × 10^13) per year.
Conservation
thumb|A schematic image illustrating the relationship between biodiversity, ecosystem services, human well-being and poverty.Millennium Ecosystem Assessment (2005). World Resources Institute, Washington, DC. Ecosystems and Human Well-being: Biodiversity Synthesis The illustration shows where conservation action, strategies and plans can influence the drivers of the current biodiversity crisis at local, regional, to global scales.
thumb|right|300px|The retreat of Aletsch Glacier in the Swiss Alps (situation in 1979, 1991 and 2002), due to global warming.
Conservation biology matured in the mid-20th century as ecologists, naturalists and other scientists began to research and address issues pertaining to global biodiversity declines.
The conservation ethic advocates management of natural resources for the purpose of sustaining biodiversity in species, ecosystems, the evolutionary process and human culture and society.
Conservation biology is reforming around strategic plans to protect biodiversity. Preserving global biodiversity is a priority in strategic conservation plans that are designed to engage public policy and concerns affecting local, regional and global scales of communities, ecosystems and cultures.Example: Gascon, C., Collins, J. P., Moore, R. D., Church, D. R., McKay, J. E. and Mendelson, J. R. III (eds) (2007). Amphibian Conservation Action Plan. IUCN/SSC Amphibian Specialist Group. Gland, Switzerland and Cambridge, UK. 64pp. Amphibians.org, see also Millenniumassessment.org, Europa.eu Action plans identify ways of sustaining human well-being, employing natural capital, market capital and ecosystem services.Millenniumassessment.org
In the EU Directive 1999/22/EC, zoos are described as having a role in the preservation of the biodiversity of wild animals by conducting research or participating in breeding programs.
Protection and restoration techniques
Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with Digital Automated Identification SYstem (DAISY), using the barcode of life).Eradication of exotic animals (camels) in Australia Removal is practical only given large groups of individuals due to the economic cost.
As sustainable populations of the remaining native species in an area become assured, "missing" species that are candidates for reintroduction can be identified using databases such as the Encyclopedia of Life and the Global Biodiversity Information Facility.
Biodiversity banking places a monetary value on biodiversity. One example is the Australian Native Vegetation Management Framework.
Gene banks are collections of specimens and genetic material. Some banks intend to reintroduce banked species to the ecosystem (e.g., via tree nurseries).
Reduction of and better targeting of pesticides allows more species to survive in agricultural and urbanized areas.
Location-specific approaches may be less useful for protecting migratory species. One approach is to create wildlife corridors that correspond to the animals' movements. National and other boundaries can complicate corridor creation.
Protected areas
Protected areas are meant to afford protection to wild animals and their habitat; they also include forest reserves and biosphere reserves. Protected areas have been set up all over the world with the specific aim of protecting and conserving plants and animals.
National parks
National parks and nature reserves are areas selected by governments or private organizations for special protection against damage or degradation, with the objective of biodiversity and landscape conservation. National parks are usually owned and managed by national or state governments. A limit is placed on the number of visitors permitted to enter certain fragile areas. Designated trails or roads are created. The visitors are allowed to enter only for study, cultural and recreation purposes. Forestry operations, grazing of animals and hunting of animals are regulated. Exploitation of habitat or wildlife is banned.
Wildlife sanctuary
Wildlife sanctuaries aim only at the conservation of species and have the following features:
The boundaries of the sanctuaries are not limited by state legislation.
The killing, hunting or capturing of any species is prohibited except by or under the control of the highest authority in the department which is responsible for the management of the sanctuary.
Private ownership may be allowed.
Forestry and other usages can also be permitted.
Forest reserves
Forests play a vital role in harbouring more than 45,000 floral and 81,000 faunal species, of which 5,150 floral and 1,837 faunal species are endemic. Plant and animal species confined to a specific geographical area are called endemic species. In reserved forests, rights to activities like hunting and grazing are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or wholly from forest resources or products. The unclassed forests cover 6.4 percent of the total forest area and are marked by the following characteristics:
They are large inaccessible forests.
Many of these are unoccupied.
They are ecologically and economically less important.
Steps to conserve the forest cover
An extensive reforestation/afforestation program should be followed.
Alternative, environment-friendly sources of fuel energy other than wood, such as biogas, should be used.
Loss of biodiversity due to forest fire is a major problem; immediate steps to prevent forest fires need to be taken.
Overgrazing by cattle can damage a forest seriously. Therefore, certain steps should be taken to prevent overgrazing by cattle.
Hunting and poaching should be banned.
Zoological parks
In zoological parks or zoos, live animals are kept for public recreation, education and conservation purposes. Modern zoos offer veterinary facilities, provide opportunities for threatened species to breed in captivity and usually build environments that simulate the native habitats of the animals in their care. Zoos play a major role in creating awareness among common people about the need to conserve nature.
Botanical gardens
A botanical garden is a garden in which plants are grown and displayed primarily for scientific and educational purposes. It consists of a collection of living plants, grown outdoors or under glass in greenhouses and conservatories. In addition, it includes a collection of dried plants, or herbarium, and such facilities as lecture rooms, laboratories, libraries, museums and experimental or research plantings.
Resource allocation
Focusing on limited areas of higher potential biodiversity promises greater immediate return on investment than spreading resources evenly or focusing on areas of little diversity but greater interest in biodiversity.Conservationists Use Triage to Determine which Species to Save and Not; Like battlefield medics, conservationists are being forced to explicitly apply triage to determine which creatures to save and which to let go July 23, 2012 Scientific American.
A second strategy focuses on areas that retain most of their original diversity, which typically require little or no restoration. These are typically non-urbanized, non-agricultural areas. Tropical areas often fit both criteria, given their natively high diversity and relative lack of development.
Legal status
thumb|right|A great deal of work is occurring to preserve the natural characteristics of Hopetoun Falls, Australia while continuing to allow visitor access.
International
United Nations Convention on Biological Diversity (1992) and Cartagena Protocol on Biosafety;
Convention on International Trade in Endangered Species (CITES);
Ramsar Convention (Wetlands);
Bonn Convention on Migratory Species;
World Heritage Convention (indirectly by protecting biodiversity habitats)
Regional Conventions such as the Apia Convention
Bilateral agreements such as the Japan-Australia Migratory Bird Agreement.
Global agreements such as the Convention on Biological Diversity give "sovereign national rights over biological resources" (not property). The agreements commit countries to "conserve biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse countries that allow bioprospecting or collection of natural products expect a share of the benefits rather than allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting can become a type of biopiracy when such principles are not respected.
Sovereignty principles can rely upon what is better known as Access and Benefit Sharing Agreements (ABAs). The Convention on Biodiversity implies informed consent between the source country and the collector, to establish which resource will be used and for what and to settle on a fair agreement on benefit sharing.
National level laws
Biodiversity is taken into account in some political and judicial decisions:
The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to private and public property rights. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing and hunting rights).
Law regarding species is more recent. It defines species that must be protected because they may be threatened by extinction. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue.
Laws regarding gene pools are only about a century old. Domestication and plant breeding methods are not new, but advances in genetic engineering have led to tighter laws covering distribution of genetically modified organisms, gene patents and process patents. Governments struggle to decide whether to focus on, for example, genes, genomes, or organisms and species.
Uniform approval for use of biodiversity as a legal standard has not been achieved, however. Bosselman argues that biodiversity should not be used as a legal standard, claiming that the remaining areas of scientific uncertainty cause unacceptable administrative waste and increase litigation without promoting preservation goals.
India passed the Biological Diversity Act in 2002 for the conservation of biological diversity in India. The Act also provides mechanisms for equitable sharing of benefits from the use of traditional biological resources and knowledge.
Analytical limits
Taxonomic and size relationships
Less than 1% of all species that have been described have been studied beyond simply noting their existence. The vast majority of Earth's species are microbial, yet the contemporary study of biodiversity remains "firmly fixated on the visible [macroscopic] world". For example, microbial life is metabolically and environmentally more diverse than multicellular life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs." The inverse relationship of size and population recurs higher on the evolutionary ladder: "to a first approximation, all multicellular species on Earth are insects". Insect extinction rates are high, supporting the Holocene extinction hypothesis.
See also
Ecological indicator
Global biodiversity
Index of biodiversity articles
Measurement of biodiversity
Megadiverse countries
Deforestation and climate change
References
Further reading
D+C interview with Achim Steiner, UNEP: "Our generation's responsibility"
External links
NatureServe: This site serves as a portal for accessing several types of publicly available biodiversity data
Biodiversity Factsheet by the University of Michigan's Center for Sustainable Systems
Color-coded images of vertebrate biodiversity hotspots
Documents
Biodiversity Synthesis Report (PDF) by the Millennium Ecosystem Assessment (MA, 2005)
Conservation International hotspot map
Zhuravlev, Yu. N., ed. (2000) Стратегия сохранения биоразнообразия Сихотэ-Алиня = A Biodiversity Conservation Strategy for the Sikhote-Alin' Vladivostok: Russian Academy of Sciences, Far Eastern Branch
Tools
GLOBIO, an ongoing programme to map the past, current and future impacts of human activities on biodiversity
World Map of Biodiversity an interactive map from the United Nations Environment Programme World Conservation Monitoring Centre
Biodiversity Information Serving Our Nation (BISON) (bison.usgs.ornl.gov) – provides a United States gateway for serving, searching, mapping and downloading integrated species occurrence records from multiple data sources
Resources
Biodiversity Heritage Library – Open access digital library of taxonomic literature
Mapping of biodiversity
Encyclopedia of Life – Documenting all species of life on earth
Category:Conservation biology
Category:Population genetics
Category:Species
Category:Ecology | 45,086 | 2017-01 |
Brigham Young University | Brigham Young University (often referred to as BYU or, colloquially, The Y) is a private research university in Provo, Utah, United States. It is owned and operated by The Church of Jesus Christ of Latter-day Saints (LDS Church) and, excluding online students, is the largest religious university and the third-largest private university in the United States, with 29,672 on-campus students. Approximately 99 percent of the students are members of the LDS Church, and one-third of its US students are from Utah.
Students attending BYU are required to follow an honor code, which mandates behavior in line with LDS teachings such as academic honesty, adherence to dress and grooming standards, and abstinence from extramarital sex and from the consumption of drugs and alcohol. Many students (88 percent of men, 39 percent of women) either delay enrollment or take a hiatus from their studies to serve as Mormon missionaries. An education at BYU is also less expensive than at similar private universities, since "a significant portion" of the cost of operating the university is subsidized by the church's tithing funds.
BYU offers a variety of academic programs, including liberal arts, engineering, agriculture, management, physical and mathematical sciences, nursing, and law. The university is broadly organized into 11 colleges or schools at its main Provo campus, with certain colleges and divisions defining their own admission standards. The university also administers two satellite campuses, one in Jerusalem and one in Salt Lake City, while its parent organization, the Church Educational System (CES), sponsors sister schools in Hawaii and Idaho. The university's primary focus is on undergraduate education, but it also has 68 master's and 25 doctoral degree programs.
BYU's athletic teams compete in Division I of the NCAA and are collectively known as the BYU Cougars. Their college football team is an NCAA Division I Independent, while their other sports teams compete in either the West Coast Conference or Mountain Pacific Sports Federation. BYU's sports teams have won a total of fourteen national championships.
History
[Image: Brigham Young, the school's eponym.]
Early days
Brigham Young University's origin can be traced back to 1862 when a man named Warren Dusenberry started a Provo school in Cluff Hall, a prominent adobe building in the northeast corner of 200 East and 200 North. On October 16, 1875, Brigham Young, then president of the LDS Church, personally purchased the Lewis Building after previously hinting that a school would be built in Draper, Utah, in 1867. Hence, October 16, 1875, is commonly held as BYU's founding date. Said Young about his vision: "I hope to see an Academy established in Provo... at which the children of the Latter-day Saints can receive a good education unmixed with the pernicious atheistic influences that are found in so many of the higher schools of the country."
[Image: The Brigham Young Academy building circa 1900.]
The school broke off from the University of Deseret and became Brigham Young Academy, with classes commencing on January 3, 1876. Warren Dusenberry served as interim principal of the school for several months until April 1876 when Brigham Young's choice for principal arrived—a German immigrant named Karl Maeser. Under Maeser's direction the school educated many luminaries including future U.S. Supreme Court Justice George Sutherland and future U.S. Senator Reed Smoot among others. The school, however, did not become a university until the end of Benjamin Cluff's term at the helm of the institution. At that time, the school was also still privately supported by members of the community and was not absorbed and sponsored officially by the LDS Church until July 18, 1896. A series of odd managerial decisions by Cluff led to his demotion; however, in his last official act, he proposed to the Board that the Academy be named "Brigham Young University". The suggestion received a large amount of opposition, with many members of the Board saying that the school wasn't large enough to be a university, but the decision ultimately passed. One opponent to the decision, Anthon H. Lund, later said, "I hope their head will grow big enough for their hat."
In 1903 Brigham Young Academy was dissolved, and was replaced by two institutions: Brigham Young High School, and Brigham Young University. (The BY High School class of 1907 was ultimately responsible for the famous giant "Y" that is to this day embedded on a mountain near campus.) The Board elected George H. Brimhall as the new President of BYU. He had not received a high school education until he was forty. Nevertheless, he was an excellent orator and organizer. Under his tenure in 1904 the new Brigham Young University bought a tract of land from Provo called "Temple Hill". After some controversy among locals over BYU's purchase of this property, construction began in 1909 on the first building on the current campus, the Karl G. Maeser Memorial. Brimhall also presided over the University during a brief crisis involving the theory of evolution. The religious nature of the school seemed at the time to collide with this scientific theory. Joseph F. Smith, LDS Church president, settled the question for a time by asking that evolution not be taught at the school. A few have described the school at this time as nothing more than a "religious seminary". However, many of its graduates at this time would go on to great success and become well renowned in their fields.
Expansion
[Image: The Abraham O. Smoot Administration Building.]
Franklin S. Harris was appointed the university's president in 1921. He was the first BYU president to have a doctoral degree. Harris made several important changes to the school, reorganizing it into a true university, whereas before, its organization had remnants of the Academy days. At the beginning of his tenure, the school was not officially recognized as a university by any accreditation organization. By the end of his term, the school was accredited under all major accrediting organizations at the time. He was eventually replaced by Howard S. McDonald, who received his doctorate from the University of California. When he first received the position, the Second World War had just ended, and thousands of students were flooding into BYU. By the end of his stay, the school had grown nearly five times to an enrollment of 5,440 students. The university did not have the facilities to handle such a large influx, so he bought part of an Air Force Base in Ogden, Utah and rebuilt it to house some of the students. The next president, Ernest L. Wilkinson, also oversaw a period of intense growth, as the school adopted an accelerated building program. Wilkinson was responsible for the building of over eighty structures on the campus, many of which still stand. During his tenure, the student body increased six-fold, making BYU the largest private school at the time. The quality of the students also increased, leading to higher educational standards at the school. Finally, Wilkinson reorganized the LDS Church units on campus, with ten stakes and over 100 wards being added during his administration.
[Image: Overlooking North Campus.]
Dallin H. Oaks replaced Wilkinson as president in 1971. Oaks continued the expansion of his predecessor, adding a law school and proposing plans for a new School of Management. During his administration, a new library was also added, doubling the library space on campus. Jeffrey R. Holland followed as president in 1980, encouraging a combination of educational excellence and religious faith at the university. He believed that one of the school's greatest strengths was its religious nature and that this should be taken advantage of rather than hidden. During his administration, the university added a campus in Jerusalem, now called the BYU Jerusalem Center. In 1989, Holland was replaced by Rex E. Lee. Lee was responsible for the Benson Science Building and the Museum of Art on campus. A cancer victim, Lee is memorialized annually at BYU during a cancer fundraiser called the Rex Lee Run. Shortly before his death, Lee was replaced in 1995 by Merrill J. Bateman.
Bateman was responsible for the building of 36 new buildings for the university both on and off campus, including the expansion of the Harold B. Lee Library. He was also one of several key college leaders who brought about the creation of the Mountain West Conference, which BYU's athletics program joined — BYU previously participated in the Western Athletic Conference. A BYU satellite TV network also opened in 2000 under his leadership. Bateman was also president during the September 11th attacks in 2001. The planes crashed on a Tuesday, hours before the weekly devotional normally held at BYU. Previous plans for the devotional were altered, as Bateman led the student body in a prayer for peace. Bateman was followed by Cecil O. Samuelson in 2003. Samuelson was succeeded by Kevin J Worthen in 2014.
Campus
[Image: BYU campus with Y Mountain and Squaw Peak in the background.]
The main campus in Provo, Utah, United States sits at the base of the Wasatch Mountains and includes 295 buildings. The buildings feature a wide variety of architectural styles, each built in the style of its time. The campus grounds of grass, trees, and flower beds are carefully maintained, and views of the Wasatch Mountains (including Mount Timpanogos) can be seen from the campus. BYU's Harold B. Lee Library (also known as "HBLL"), which The Princeton Review ranked as the No. 1 "Great College Library" in 2004, has approximately 8½ million items in its collections and can seat 4,600 people. The Spencer W. Kimball Tower, shortened to SWKT and pronounced "Swicket" by many students, is home to several of the university's departments and programs and is the tallest building in Provo, Utah. BYU's Marriott Center, used as a basketball arena, can seat over 22,000 and is one of the largest on-campus arenas in the nation.Knupke, Gene. Profiles of American / Canadian Sports Stadiums and Arenas. S.L.: Xlibris Corporation, 2006. pg. 301 ISBN 1-4134-9823-X Notably, the campus of this church-owned university has no chapel building. LDS Church services for students are nevertheless conducted on campus each Sunday; because of the large number of students attending, nearly all of the buildings and possible meeting spaces on campus are used, and many students attend services off campus in LDS chapels in the surrounding communities.
Museums
[Image: Museum of Art north entrance.]
The campus is home to several museums containing exhibits from many different fields of study. BYU's Museum of Art, for example, is one of the largest and most attended art museums in the Mountain West. This Museum aids in academic pursuits of students at BYU via research and study of the artworks in its collection. The Museum is also open to the general public and provides educational programming. The Museum of Peoples and Cultures is a museum of archaeology and ethnology. It focuses on native cultures and artifacts of the Great Basin, American Southwest, Mesoamerica, Peru, and Polynesia. Home to more than 40,000 artifacts and 50,000 photographs, it documents BYU's archaeological research. The BYU Museum of Paleontology was built in 1976 to display the many fossils found by BYU's James A. Jensen. It holds many vertebrate fossils from the Jurassic and Cretaceous periods, and is one of the top five vertebrate fossil collections in the world from the Jurassic. The museum receives about 25,000 visitors every year. The Monte L. Bean Life Science Museum was formed in 1978. It features several forms of plant and animal life on display and available for research by students and scholars.
The campus also houses several performing arts facilities. The de Jong Concert Hall seats 1,282 people and is named for Gerrit de Jong Jr. The Pardoe Theatre is named for T. Earl and Kathryn Pardoe. Students use its stage in a variety of theatre experiments, as well as for Pardoe Series performances; it seats 500 people and has a large proscenium stage. The Margetts Theatre was named for Philip N. Margetts, a prominent Utah theatre figure. A smaller, black-box theater, it allows a variety of seating and staging formats and seats 125. The Nelke Theatre, named for one of BYU's first drama teachers, is used largely for instruction in experimental theater. It seats 280.
Student housing
[Image: Foreign Language Student Residence, where students commit to speak only their language of study.]
Single students have four options for on-campus housing: Heritage Halls, Helaman Halls, Wyview Park, and the FLSR. Married students can live in Wymount Terrace or Wyview Park.
Heritage Halls is a twenty-four-building housing complex on campus which offers apartment-style living. The halls house both male and female students, divided by gender into separate buildings. Each building has ten to fourteen units capable of housing six people each.
Helaman Halls is a slightly newer complex which underwent a 12-year renovation between 1991 and 2004. Helaman Halls is a nine-building (ninth opened in the summer of 2010), dormitory-style living area. Residents share a room (larger than Heritage Halls) with one other resident, but do not have their own kitchen and use shared bathrooms. Residents are required to have a meal plan, and eat at the newly remodeled Commons at the Cannon Center.BYU.edu
Wyview Park was originally built for families in 1996, but this changed in 2006 when the complex began housing single students in order to counteract loss of singles' housing in other areas. Wyview Park has 30 buildings that offer apartment-style living for students, along with the option for shared or single rooms.
The Foreign Language Student Residence complex has twenty-five apartments where students speak exclusively in a selected foreign language. The immersion experience is available in nine languages, and students are accompanied by a native resident throughout the year to enhance the experience.
Married students can live in Wymount Terrace, which contains a total of 462 apartments in 24 buildings.
Branches of the BYU Creamery provide basic food and general grocery products for students living in Heritage Halls, Helaman, Wymount, Wyview, and the FLSR. Helaman Halls is also served by a central cafeteria called the Cannon Center. The creamery, begun in 1949, has become a BYU tradition and is also frequented by visitors to the university and members of the community. It was the first on-campus full-service grocery store in the country.
Sustainability
BYU has designated energy conservation, products and materials, recycling, site planning and building design, student involvement, transportation, water conservation, and zero waste events as top priority categories in which to further its efforts to be an environmentally sustainable campus. The university has stated that "we have a responsibility to be wise stewards of the earth and its resources." BYU is working to increase the energy efficiency of its buildings by installing variable-speed drives on all pumps and fans, replacing incandescent lighting with fluorescent lighting, retrofitting campus buildings with low-E reflective glass, and upgrading roof insulation to prevent heat loss. The student groups BYU Recycles, Eco-Response, and BYU Earth educate students, faculty, staff, and administrators about how the campus can decrease its environmental impact. BYU Recycles spearheaded the recent campaign to begin recycling plastics, which the university did after a year of student campaigning.
Organization and administration
College/school founding:
Business (Marriott): 1891
Education (McKay): 1913
Engineering and Technology (Fulton): 1953
Family, Home, and Social Sciences: 1969
Fine Arts and Communications: 1925
Humanities: 1965
Law (Clark): 1973
Life Sciences: 1954
Nursing: 1953
Physical and Mathematical Sciences: 1949
Religious Education: 1959
Brigham Young University is a part of the LDS Church Educational System. It is organized under a Board of Trustees, with the President of the Church (currently Thomas S. Monson) as chairman. This board consists of the same people as the Church Board of Education, a pattern that has been in place since 1939. Prior to 1939, BYU had a separate board of trustees that was subordinate to the Church Board of Education.Wilkinson, Ernest L., Brigham Young University: The First 100 Years. (Provo: BYU Press, 1975) Vol. 2 The President of BYU, currently Kevin J Worthen, reports to the Board, through the Commissioner of Education.
The university operates under 11 colleges or schools, which collectively offer 194 bachelor's degree programs, 68 master's degree programs, 25 PhD programs, and a Juris Doctor program. BYU also manages some courses and majors through the David M. Kennedy Center for International Studies and "miscellaneous" college departments, including Undergraduate Education, Graduate Studies, Independent Study, Continuing Education, and the Honors Program.Index, honors.byu.edu BYU's Winter semester ends in April, earlier than at most universities, since there is no Spring break, allowing students to pursue internships and other summer activities earlier. A typical academic year is broken up into two semesters: Fall (September–December) and Winter (January–April), as well as two shorter terms during the summer months: Spring (May–June) and Summer (July–August).
Academics
Admissions and demographics
BYU accepted 49 percent of the 11,423 people who applied for admission in the summer term and fall semester of 2013. The average GPA for these admitted students was 3.82. U.S. News and World Report describes BYU's selectivity as being "more selective" and compares it with such universities as the University of Texas at Austin and The Ohio State University. In the case of University of Texas-Austin ("UT"), BYU appears to be more selective in some regards, with 27 percent of admitted freshmen having ACT scores over 30, as compared with 23 percent for UT. In addition, BYU is ranked 26th in colleges with the most freshman Merit Scholars, with 88 in 2006.The Chronicle of Higher Education, August 31, 2007. BYU has one of the highest percentages of accepted applicants who go on to enroll (78 percent in 2010).
[Image: The Harold B. Lee Library, consistently ranked among the top ten in the nation, with a No. 1 ranking in 2004 by The Princeton Review.]
Students from every state in the U.S. and from many foreign countries attend BYU. (In the 2005–06 academic year, there were 2,396 foreign students, or 8 percent of enrollment.) Slightly more than 98 percent of these students are active members of the LDS Church. In 2006, 12.6 percent of the student body reported themselves as ethnic minorities, mostly Asians, Pacific Islanders and Hispanics.
Graduation honors
Undergraduate students may qualify for graduation honors. University Honors is the highest distinction BYU awards its graduates. Administered by the Honors Program, the distinction requires students to complete an honors curriculum requirement, a Great Questions requirement, an Experiential Learning requirement, an honors thesis requirement, and a graduation portfolio that summarizes the student's honors experiences.
The university also awards Latin scholastic distinctions separately from the Honors Program: summa cum laude (top 1 percent), magna cum laude (top 5 percent), and cum laude (top 10 percent). The university additionally recognizes Phi Kappa Phi graduation honors.
Rankings
For 2017, U.S. News & World Report ranked BYU as tied for 68th among national universities in the United States. A 2013 peer-reviewed study in the Quarterly Journal of Economics of where the nation's top high school students choose to enroll ranked BYU No. 21. The Princeton Review has ranked BYU the best value for college in 2007, and its library is consistently ranked in the nation's top ten — No. 1 in 2004 and No. 4 in 2007. BYU is also ranked No. 19 in the U.S. News and World Report's "Great Schools, Great Prices" lineup, and No. 12 in lowest student-incurred debt. Due in part to the school's emphasis on undergraduate research, in rankings for 2008-2009, BYU was ranked No. 10 nationally for the number of students who go on to earn PhDs, No. 1 nationally for students who go on to dental school, No. 6 nationally for students who go on to law school, and No. 10 nationally for students who go on to medical school. BYU is designated as a research university with high research activity by the Carnegie Foundation for the Advancement of Teaching. Forbes magazine ranked it as the No. 1 "Top University to Work For in 2014" and as the best college in Utah.
In 2009 the university's Marriott School of Management received a No. 5 ranking by BusinessWeek for its undergraduate programs, and its MBA program was ranked by several sources: No. 22 ranking by BusinessWeek, No. 16 by Forbes, and No. 29 by U.S. News & World Report. Among regional schools the MBA program was ranked No. 1 by The Wall Street Journal's most recent ranking (2007), and it was ranked No. 92 among business schools worldwide in 2009 by Financial Times. For 2009, the university's School of Accountancy, which is housed within the Marriott School, received two No. 3 rankings for its undergraduate program—one by Public Accounting Report and the other by U.S. News & World Report. The same two reporting agencies also ranked the school's MAcc program No. 3 and No. 8 in the nation, respectively. In 2010 an article in the Wall Street Journal listing institutions whose graduates were the top-rated by recruiters ranked BYU No. 11. Using 2010 fiscal year data, the Association of University Technology Managers ranked BYU No. 3 in an evaluation of universities creating the most startup companies through campus research.
Notable research and awards
[Image: The N. Eldon Tanner Building, home of the Marriott School of Management.]
Scientists associated with BYU have created some notable inventions. Philo T. Farnsworth, inventor of the electronic television, received his education at BYU, and later returned to do fusion research, receiving an honorary degree from the university. Harvey Fletcher, also an alumnus of BYU, inventor of stereophonic sound, went on to carry out the now famous oil-drop experiment with Robert Millikan, and was later Founding Dean of the BYU College of Engineering. H. Tracy Hall, inventor of the man-made diamond, left General Electric in 1955 and became a full professor of chemistry and Director of Research at BYU. While there, he invented a new type of diamond press, the tetrahedral press. In student achievements, BYU Ad Lab teams won both the 2007 and 2008 L'Oréal National Brandstorm Competition, and students developed the Magnetic Lasso algorithm found in Adobe Photoshop. In prestigious scholarships, BYU has produced 10 Rhodes Scholars, four Gates Scholars in the last six years, and in the last decade has claimed 41 Fulbright scholars and 3 Jack Kent Cooke scholars.
International focus
[Image: The Eyring Science Center houses a planetarium, an anechoic chamber and a Foucault pendulum.]
Over three quarters of the student body has some proficiency in a second language (numbering 107 languages in total). This is partly because 45 percent of the student body at BYU has served as missionaries for the LDS Church, and many of them learned a foreign language as part of their mission assignment. During any given semester, about one-third of the student body is enrolled in foreign language classes, a rate nearly four times the national average. BYU offers courses in over 60 different languages, many with advanced courses that are seldom offered elsewhere. Several of its language programs are the largest of their kind in the nation, the Russian program being one example. The university was selected by the United States Department of Education as the location of the national Middle East Language Resource Center, making the school a hub for experts on that region. It was also selected as a Center for International Business Education Research, a function of which is to train business employees in international languages and relations.
Beyond this, BYU also runs a very large study abroad program, with satellite centers in London, Jerusalem, and Paris, as well as more than 20 other sites. Nearly 2,000 students take advantage of these programs yearly. In 2001 the Institute of International Education ranked BYU as the number one university in the U.S. to offer students study abroad opportunities. The BYU Jerusalem Center, which was closed in 2000 due to student security concerns related to the Second Intifada and, more recently, the 2006 Israel-Lebanon conflict, was reopened to students in the Winter 2007 semester.
[Image: The Maeser Building, built in 1911, houses BYU's Honors Program.]
A few special additions enhance the language-learning experience. For example, BYU's International Cinema, featuring films in several languages, is the largest and longest-running university-run foreign film program in the country. As already noted, BYU also offers an intensive foreign language living experience, the Foreign Language Student Residence. This is an on-campus apartment complex where students commit to speak only their chosen foreign language while in their apartments. Each apartment has at least one native speaker to ensure correct language usage.
Academic freedom issues
In 1992 the university drafted a new Statement on Academic Freedom, specifying that limitations may be placed upon "expression with students or in public that: (1) contradicts or opposes, rather than analyzes or discusses, fundamental Church doctrine or policy; (2) deliberately attacks or derides the Church or its general leaders; or (3) violates the Honor Code because the expression is dishonest, illegal, unchaste, profane, or unduly disrespectful of others." These restrictions have caused some controversy as several professors have been disciplined according to the then-new rule. The American Association of University Professors has claimed that "infringements on academic freedom are distressingly common and that the climate for academic freedom is distressingly poor."
The newer rules have not affected BYU's accreditation, as the university's chosen accrediting body allows "religious colleges and universities to place limitations on academic freedom so long as they publish those limitations candidly", according to associate academic vice president Jim Gordon. The AAUP's concern was not with restrictions on the faculty member's religious expression but with a failure, as alleged by the faculty member and AAUP, that the restrictions had not been adequately specified in advance by BYU: "The AAUP requires that any doctrinal limitations on academic freedom be laid out clearly in writing. We [AAUP] concluded that BYU had failed to do so adequately."Cary Nelson (AAUP President), "Praying to the Wrong God" (Subject of massmail message), AAUP Online, 2008 September 23.
Performing arts
[Image: The BYU Centennial Carillon stands at the north end of campus.]
Dance
The BYU Ballroom Dance Company is known as one of the best formation ballroom dance teams in the world, having won the U.S. National Formation Dance Championship every year since 1982. BYU's ballroom dance team has won first place in Latin or Standard (or both) many times when competing at the Blackpool Dance Festival, and it was the first U.S. team to win the formation championships at the famed British Championships in Blackpool, England, in 1972. The NDCA National DanceSport Championships have been held at BYU for several years, and BYU holds dozens of ballroom dance classes each semester, making it the largest collegiate ballroom dance program in the world. In addition, BYU has a number of other notable dance teams and programs, including the Theatre Ballet, Contemporary Dance Theatre, Living Legends, and International Folk Dance Ensemble. The Living Legends perform Latin, Native American, and Polynesian dancing. BYU has one of the largest dance departments in the nation, and many students from majors across campus participate in dance classes each semester.
Music
The Young Ambassadors are a song and dance performing group with a 50-year history at BYU. Prior to 1970 the group was known as Curtain Time USA. In the 1960s their world tour stops included Lebanon, Jordan, and Iraq. The group first performed as the Young Ambassadors at Expo '70 in Japan, and has since performed in over 56 nations. The royalty of Thailand and Jordan, along with persons of high office in countries such as India, have been among their audiences.
[Image: The Concert Choir in performance.]
The BYU Opera Workshop gave the first North American performance of the Ralph Vaughan Williams opera The Pilgrim's Progress in April 1968, directed by Max C. Golightly.Stephen Connock, The Pilgrim's Progress in Performance, ENO London 2012
BYU's Wind Symphony and Chamber Orchestra have toured many countries including Denmark, Hong Kong, Russia, the British Isles, and Central Europe. The Symphonic Band is also an ensemble dedicated to developing the musician, but with a less strenuous focus on performance. Additionally, BYU has a marching band program called the Cougar Marching Band.
BYU has a choral program with over 500 members. The four BYU auditioned choirs include the 40-member BYU Singers, the 90-member BYU Concert Choir, the 200-member BYU Men's Chorus (the largest male collegiate choir in the U.S.), and the 190-member BYU Women's Chorus. Both the BYU Men's Chorus and BYU Singers have toured across the United States and around the globe. Each of the four groups has recorded several times under BYU's label Tantara Records.
BYU also has a Balinese gamelan ensemble, Gamelan Bintang Wahyu.
Athletics
[Image: The school's first football team, which won the regional championship in 1896.]
BYU has 21 NCAA varsity teams.Athletic Department fact sheet Nineteen of these teams played mainly in the Mountain West Conference from its inception in 1999 until the school left that conference in 2011. Prior to that time BYU teams competed in the Western Athletic Conference. All teams are named the "Cougars", and Cosmo the Cougar has been the school's mascot since 1953. The school's fight song is the Cougar Fight Song. Because many of its players serve on full-time missions for two years (men when they're 18, women when 19), BYU athletes are often older on average than other schools' players. The NCAA allows students to serve missions for two years without subtracting that time from their eligibility period. This has caused minor controversy, but is largely recognized as not lending the school any significant advantage, since players receive no athletic and little physical training during their missions. BYU has also received attention from sports networks for refusal to play games on Sunday, as well as expelling players due to honor code violations. Beginning in the 2011 season, BYU football competes in college football as an independent. In addition, most other sports now compete in the West Coast Conference. Teams in swimming and diving and indoor track and field for both men and women joined the men's volleyball program in the Mountain Pacific Sports Federation. For outdoor track and field, the Cougars became an Independent. Softball returned to the Western Athletic Conference, but spent only one season in the WAC; the team moved to the Pacific Coast Softball Conference after the 2012 season. The softball program may move again after the 2013 season; the July 2013 return of Pacific to the WCC will enable that conference to add softball as an official sport.
As of 2016, BYU has had several recent standout basketball players. These included Jimmer Fredette, who in 2011 was named the NCAA basketball player of the year and led the nation in scoring; Tyler Haws, who in the 2014-15 season was a finalist for the Jerry West Award and scored the most points in the nation; and Kyle Collinsworth, who set the NCAA single-season record for triple-doubles with six (in both the 2014–15 and 2015–16 seasons) and holds the NCAA career triple-double record of twelve.
Extramural and Recognized sports
BYU sponsors extramural competition in six sports under Student Life. These sports are racquetball, men's lacrosse, women's lacrosse, men's rugby, women's rugby, and men's soccer.BYU Athletics, Extramural Sports at BYU, http://byucougars.com/athletics/extramural-sports-byu Men's hockey is not an "extramural sport" but is given "recognized sport" status.
Student life
LDS atmosphere
"The mission of [BYU] is to assist individuals in their quest for perfection and eternal life. That assistance should provide a period of intensive learning in a stimulating setting where a commitment to excellence is expected and the full realization of human potential is pursued...." — BYU Mission Statement
BYU's stated mission "is to assist individuals in their quest for perfection and eternal life." BYU is thus considered by its leaders to be at heart a religious institution, wherein, ideally, religious and secular education are interwoven in a way that encourages the highest standards in both areas. This weaving of the secular and the religious aspects of a religious university goes back as far as Brigham Young himself, who told Karl G. Maeser when the Church purchased the school: "I want you to remember that you ought not to teach even the alphabet or the multiplication tables without the Spirit of God."
[Image: The BYU Bell Tower with the Provo LDS temple in the background.]
BYU has been considered by some Latter-day Saints, as well as some university and church leaders, to be "The Lord's university". This phrase is used in reference to the school's mission as an "ambassador" to the world for the LDS Church and thus, for Jesus Christ. In the past, some students and faculty have expressed dissatisfaction with this nickname, stating that it gives students the idea that university authorities are always divinely inspired and never to be contradicted. Leaders of the school, however, acknowledge that the nickname represents more a goal that the university strives for and not its current state of being. Leaders encourage students and faculty to help fulfill the goal by following the teachings of their religion, adhering to the school's honor code, and serving others with the knowledge they gain while attending.
BYU mandates that its students who are members of the LDS Church be religiously active. Both LDS and Non-LDS students are required to provide an endorsement from an ecclesiastic leader with their application for admittance. Over 900 rooms on BYU campus are used for the purposes of LDS Church congregations. More than 150 congregations meet on BYU campus each Sunday. "BYU's campus becomes one of the busiest and largest centers of worship in the world" with about 24,000 persons attending church services on campus.
Some 97 percent of male BYU graduates and 32 percent of female graduates took a hiatus from their undergraduate studies at one point to serve as LDS missionaries. In October 2012, the LDS Church announced at its general conference that young men could serve a mission after they turn 18 and have graduated from high school, rather than after age 19 under the old policy. Many young men would often attend a semester or two of higher education prior to beginning missionary service. This policy change will likely impact what has been the traditional incoming freshman class at BYU. Female students may now begin their missionary service anytime after turning 19, rather than age 21 under the previous policy. For males, a full-time mission is two years in length, and for females it lasts 18 months.
Honor code
"As a matter of personal commitment, faculty, administration, staff, and students of Brigham Young University, Brigham Young University—Hawaii, Brigham Young University—Idaho, and LDS Business College seek to demonstrate in daily living on and off campus those moral virtues encompassed in the gospel of Jesus Christ, and will
Be honest
Live a chaste and virtuous life
Obey the law and all campus policies
Use clean language
Respect others
Abstain from alcoholic beverages, tobacco, tea, coffee, and substance abuse
Participate regularly in church services
Observe the Dress and Grooming Standards
Encourage others in their commitment to comply with the Honor Code" — Church Educational System Honor Code Statement
All students and faculty, regardless of religion, are required to agree to adhere to an honor code. Early forms of the Church Educational System Honor Code are found as far back as the days of the Brigham Young Academy and early school President Karl G. Maeser. Maeser created the "Domestic Organization", a group of teachers who would visit students at their homes to ensure they were following the school's moral rules prohibiting obscenity, profanity, smoking, and alcohol consumption. The Honor Code was not created until about 1940, and was used mainly for cases of cheating and academic dishonesty.
President Wilkinson expanded the Honor Code in 1957 to include other school standards. This led to what the Honor Code represents today: rules regarding chastity, dress, grooming, drugs, and alcohol. A signed commitment to live the honor code is part of the application process and must be adhered to by all students, faculty, and staff. Students and faculty found in violation of standards are warned or called to meet with representatives of the Honor Council. In certain cases, students and faculty can be expelled or lose tenure. Both LDS and non-LDS students are required to meet annually with a Church leader to receive an ecclesiastical endorsement for both acceptance and continuance.
Controversy has grown since 2014 surrounding the school's honor code, especially around its policies towards LGBTQ students, as evidenced by growing attention paid by national news media and the American Bar Association. Various LGBT advocacy groups have protested the honor code and criticized it as being anti-gay."BYU Continues the Legacy of Anti-Gay Policies", HeartStrong."Brigham Young University Pages", Affirmation: Gay and Lesbian Mormons."The 2006 Equality Ride Route: Brigham Young University", Soulforce. From 1962 to 1973 there was a total ban on admitting or retaining any student for whom there was "convincing evidence" of being homosexual. After 1973 the ban only continued for "overt and active homosexuals", while students who had "repented of" homosexual acts and "forsaken" them for a "lengthy period of time" could be admitted or remain as students. From the late 1950s until at least the late 1970s BYU had an electroshock and vomit aversion therapy program dedicated to "curing" homosexual students reported by bishops and BYU administrators, administering electrical shocks or vomit-inducing drugs while showing "nude" pictures to the patient in an attempt to associate pain with homosexual visual stimulation. According to the Standards Office director from 1971 to 1981, all homosexual BYU students who were reported to the Standards Office were either expelled or, for "less serious" offenses, were required to undergo therapy in order to remain at the university; in "special cases" this treatment included "electroshock and vomiting aversion therapies". In the fall of 2016 BYU faced national criticism when many called its LGBT policies discriminatory while the university was being considered as an addition to the Big 12 Conference. The Princeton Review has regularly ranked BYU among the most LGBT-unfriendly schools in the United States.
Other criticism has focused on the environment that the policies and their enforcement create for survivors of sexual assault. Beginning in 2014 and continuing through 2016, some students reported that, after being sexually assaulted or raped, they were told they would face discipline because of honor code violations that came to light during the investigation of the assaults. Criticism has been leveled that this atmosphere may prevent other students from reporting sexual assault crimes to police, a situation that local law enforcement have publicly criticized. In response, the Victim Services Coordinator of the Provo Police Department called for an amnesty clause to be added to the Honor Code, which would not punish sexual assault survivors for past honor code violations discovered during the investigation. BYU launched a review of the practice, which concluded in October 2016. BYU announced several changes to how it would handle sexual assault reports, including adding an amnesty clause and ensuring under most circumstances that information is not shared between the Title IX Office and Honor Code Office without the victim's consent.
Culture and activities
BYU's social and cultural atmosphere is unique. The high rate of enrollment at the university by members of The Church of Jesus Christ of Latter-day Saints (more than 98 percent) results in an amplification of LDS cultural norms; BYU was ranked by The Princeton Review in 2008 as 14th in the nation for having the happiest students and highest quality of life. However, the quirkiness and sometimes "too nice" culture is often caricatured, for example, in terms of marrying early and being very conservative.
One of the characteristics of BYU most often pointed out is its reputation for emphasizing a "marriage culture". Members of The Church of Jesus Christ of Latter-day Saints highly value marriage and family, especially marriage within the faith. Approximately 51 percent of the graduates in BYU's class of 2005 were married. This is compared to a national marriage average among college graduates of 11 percent. BYU students on average marry at the age of 22, according to a 2005 study, while the national average age is 29 years for men and 27 years for women.https://www.census.gov/hhes/families/files/graphics/MS-2.pdf
Brigham Young University's Honor Code, which all BYU students must agree to follow as a condition of studying at BYU, prohibits the consumption of alcoholic beverages, tobacco, etc. As mentioned earlier, The Princeton Review has rated BYU the "#1 stone cold sober school" in the nation for several years running, an honor which the late LDS Church president Gordon B. Hinckley had commented on with pride. BYU's 2014 "#1 stone cold" sober rating marked the 17th year in a row the school had earned that rating. BYU has used this and other honors awarded to the school to advertise itself to prospective students, showing that BYU is proud of the rating. According to the Uniform Crime Reports, incidents of crime in Provo are lower than the national average. Murder is rare, and robberies are about 1/10 the national average. Business Insider rated BYU as the #1 safest college campus in the nation.
Many on-campus student activities and clubs are organized by BYUSA, the university's official student association. A popular comedy club is Divine Comedy.
BYU sponsors a question-answering service known as the "100 Hour Board". Previously a bulletin board in the Wilkinson Student Center, it is now hosted online. Anyone with an account may ask a question, with topics ranging from academic questions to questions about relationships or church doctrine. The questions are answered in 100 hours by pseudo-anonymous BYU students. It has been affiliated with The Universe since 2006.
Media
[Image: The BYU Broadcasting building under construction in 2010.]
The BYU Broadcasting Technical Operations Center is an HD production and distribution facility that is home to local PBS affiliate KBYU-TV, local classical music station KBYU-FM Classical 89, BYU Radio, BYU Radio Instrumental, BYU Radio International, BYUtv and BYU Television International with content in Spanish and Portuguese (both available via terrestrial, satellite, and internet signals). BYUtv is also available via cable throughout some areas of the United States. The BYU Broadcasting Technical Operations Center is home to three television production studios, two television control rooms, radio studios, radio performance space, and master control operations.
The university produces a weekly newspaper called The Universe (it was published daily until 2012), maintains an online news site that is regularly updated called The Digital Universe and has a daily news program broadcast via KBYU-TV. The university also has a recording label called Tantara Records which is run by the BYU School of Music and promotes the works of student ensembles and faculty.
Alumni
[Image: Gordon B. Hinckley Alumni and Visitors Center.]
As of November 2007, BYU has approximately 362,000 living alumni. Alumni relations are coordinated and activities are held at the new Gordon B. Hinckley Alumni and Visitors Center.
Over 21 BYU graduates have served in the U.S. Senate and U.S. House of Representatives, such as former Dean of the U.S. Senate Reed Smoot (class of 1876). Cabinet members of American presidents include Ezra Taft Benson '26, Secretary of Agriculture under President Dwight D. Eisenhower, and Rex E. Lee '60, who was United States Solicitor General under President Ronald Reagan. Mitt Romney, former Governor of Massachusetts and the 2012 Republican presidential nominee, graduated in the class of 1971.
BYU alumni in academia include former Dean of the Harvard Business School Kim B. Clark, two time world's most influential business thinker Clayton M. Christensen, Michael K. Young '73, current president of the University of Washington, Matthew S. Holland, current president of Utah Valley University, Stan L. Albrecht, current president of Utah State University, Teppo Felin, Professor at the University of Oxford, and Stephen D. Nadauld, previous president of Dixie State University. The University also graduated Nobel Prize winner Paul D. Boyer, as well as Philo Farnsworth (inventor of the electronic television) and Harvey Fletcher (inventor of the hearing aid). Four of BYU's thirteen presidents were alumni of the University. Additionally, alumni of BYU who have served as business leaders include Citigroup CFO Gary Crittenden '76, former Dell CEO Kevin Rollins '84, Deseret Book CEO Sheri L. Dew, and Matthew K. McCauley, CEO of children's clothing company Gymboree.
In literature and journalism, BYU has produced several best-selling authors, including Orson Scott Card '75, Brandon Sanderson '00 & '05, Ben English '98, and Stephenie Meyer '95. BYU also graduated American activist and contributor for ABC News Elizabeth Smart-Gilmour. Other media personalities include former CBS News correspondent Art Rascon, award-winning ESPN sportscaster and former Miss America Sharlene Wells Hawkes '86 and former co-host of CBS's The Early Show Jane Clayson Johnson '90. In entertainment and television, BYU is represented by Jon Heder '02 (best known for his role as Napoleon Dynamite), writer-director Daryn Tufts '98, Golden Globe-nominated Aaron Eckhart '94, animator and filmmaker Don Bluth '54, Jeopardy! all-time runner-up Ken Jennings '00, Academy Award winning filmmaker Kieth Merrill '67, and Richard Dutcher, the "Father of Mormon Cinema." In the music industry BYU is represented by lead singer of the Grammy Award winning band Imagine Dragons Dan Reynolds, multi-platinum selling drummer Elaine Bradley from the band Neon Trees, crossover dubstep violinist Lindsey Stirling, former American Idol contestant Carmen Rasmusen, and Mormon Tabernacle Choir director Mack Wilberg.
BYU has also produced many religious leaders. Among the alumni are several LDS Church general authorities, including two church presidents (Ezra Taft Benson '26 and Thomas S. Monson '74), six apostles (Neil L. Andersen, D. Todd Christofferson '69, David A. Bednar '76, Jeffrey R. Holland '65 & '66, Dallin H. Oaks '54, and Reed Smoot 1876), and two general presidents of the Relief Society (Julie B. Beck '73 and Belle Spafford '20).
A number of BYU alumni have found success in professional sports, representing the University in 7 MLB World Series, 5 NBA Finals, and 25 NFL Super Bowls. In baseball, BYU alumni include All-Stars Rick Aguilera '83, Wally Joyner '84, and Jack Morris '76. Professional basketball players include three-time NBA champion Danny Ainge '81, 1952 NBA Rookie of the Year and 4-time NBA All-Star Mel Hutchins '51, three-time Olympic medalist and Hall of Famer Krešimir Ćosić '73, and consensus 2011 national college player of the year Jimmer Fredette '11, currently with the New York Knicks organization. BYU also claims notable professional football players including two-time NFL MVP and Super Bowl MVP and Pro Football Hall of Fame quarterback Steve Young '84 & J.D. '96, Heisman Trophy winner Ty Detmer '90, and two-time Super Bowl winner Jim McMahon. In golf, BYU alumni include two major championship winners: Johnny Miller ('69) at the 1973 U.S. Open and 1976 British Open and Mike Weir ('92) at the 2003 Masters.
See also
List of colleges and universities in Utah
Provo City Library
References
External links
BYU Athletics website
Category:Academic language institutions
Category:Educational institutions established in 1875
Category:Private universities and colleges in Utah
Category:Buildings and structures in Provo, Utah
Category:Significant places in Mormonism
Category:The Church of Jesus Christ of Latter-day Saints in Utah
Category:Universities and colleges accredited by the Northwest Commission on Colleges and Universities
Category:Universities and colleges in Utah
Category:Universities and colleges in Utah County, Utah
Category:Tourist attractions in Provo, Utah
Category:1875 establishments in Utah Territory | 82,058 | 2017-01 |
Oklahoma | Oklahoma (Cherokee: Asgaya gigageyi / ᎠᏍᎦᏯ ᎩᎦᎨᏱ, or transliterated from English as ᎣᎦᎳᎰᎹ (òɡàlàhoma); Pawnee: Uukuhuúwa; Cayuga: Gahnawiyoˀgeh) is a state located in the South Central United States. Oklahoma is the 20th-most extensive and the 28th-most populous of the 50 United States. The state's name is derived from the Choctaw words okla and humma, meaning "red people". It is also known informally by its nickname, The Sooner State, in reference to the non-Native settlers who staked their claims on the choicest pieces of land before the official opening date, and the Indian Appropriations Act of 1889, which opened the door for white settlement in America's Indian Territory. The name was settled upon at statehood, when Oklahoma Territory and Indian Territory were merged and "Indian" was dropped from the name. On November 16, 1907, Oklahoma became the 46th state to enter the union. Its residents are known as Oklahomans, or informally "Okies", and its capital and largest city is Oklahoma City.
A major producer of natural gas, oil, and agricultural products, Oklahoma relies on an economic base of aviation, energy, telecommunications, and biotechnology. In 2007, it had one of the fastest-growing economies in the United States, ranking among the top states in per capita income growth and gross domestic product growth. Oklahoma City and Tulsa serve as Oklahoma's primary economic anchors, with nearly two-thirds of Oklahomans living within their metropolitan statistical areas.
With small mountain ranges, prairie, mesas, and eastern forests, most of Oklahoma lies in the Great Plains, Cross Timbers and the U.S. Interior Highlands, a region especially prone to severe weather. In addition to a prevalence of English, German, Scottish, Scotch-Irish, and Native American ancestry, Oklahoma has more than 25 Native American languages spoken within its borders, second only to California.
Oklahoma is located on a confluence of three major American cultural regions and historically served as a route for cattle drives, a destination for southern settlers, and a government-sanctioned territory for Native Americans.
Etymology
The name Oklahoma comes from the Choctaw phrase okla humma, literally meaning red people. Choctaw Chief Allen Wright suggested the name in 1866 during treaty negotiations with the federal government regarding the use of Indian Territory, in which he envisioned an all-Indian state controlled by the United States Superintendent of Indian Affairs. Equivalent to the English word Indian, okla humma was a phrase in the Choctaw language used to describe Native American people as a whole. Oklahoma later became the de facto name for Oklahoma Territory, and it was officially approved in 1890, two years after the area was opened to white settlers.
Geography
[Image: Köppen climate types of Oklahoma.]
[Image: State rock (rose rock) specimens from Cleveland County, with a US quarter for size reference.]
[Image: The state's high plains stretch behind a greeting sign in the Oklahoma Panhandle.]
[Image: A view of Mount Scott.]
Oklahoma is the 20th-largest state in the United States, covering an area of 69,898 square miles (181,035 km2), with 68,667 square miles (177,847 km2) of land and 1,281 square miles (3,188 km2) of water. It is one of six states on the Frontier Strip and lies partly in the Great Plains near the geographical center of the 48 contiguous states. It is bounded on the east by Arkansas and Missouri, on the north by Kansas, on the northwest by Colorado, on the far west by New Mexico, and on the south and near-west by Texas. Much of its border with Texas lies along the Red River.
The Oklahoma Panhandle's western edge is out of alignment with its Texas border: the Oklahoma/New Mexico border lies 2.1 to 2.2 miles east of the Texas line. The border between Texas and New Mexico was set first, by a Spanish survey in 1819, and was intended to follow the 103rd meridian. When Oklahoma was formally surveyed in the 1890s with more accurate equipment and techniques, it was discovered that the Texas line did not follow the 103rd meridian; the less precise 1819 survey had placed it roughly 2.2 miles west of the true meridian. It was easier to leave the mistake in place than to have Texas cede land to New Mexico to correct it, so the Oklahoma/New Mexico border was drawn on the true 103rd meridian.
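The size of this offset can be checked with basic spherical geometry: at the panhandle's latitude, one degree of longitude spans a shorter east-west ground distance than at the equator. The following Python sketch is purely illustrative (it assumes a spherical Earth and an approximate panhandle latitude of about 36.7°N, neither of which is stated in the text) and shows that the 2.2-mile offset corresponds to only a few hundredths of a degree of longitude.

import math

def longitude_offset_miles(delta_degrees: float, latitude_degrees: float) -> float:
    # East-west ground distance (in miles) spanned by a longitude offset at a
    # given latitude, assuming a spherical Earth of mean radius 3,959 miles.
    earth_radius_miles = 3959.0
    miles_per_degree_at_equator = 2 * math.pi * earth_radius_miles / 360.0  # about 69.1 miles
    return delta_degrees * miles_per_degree_at_equator * math.cos(math.radians(latitude_degrees))

# Assumed latitude for the Oklahoma Panhandle (roughly 36.5 to 37 degrees north).
panhandle_latitude = 36.7
offset_miles = 2.2  # the discrepancy described above

offset_degrees = offset_miles / longitude_offset_miles(1.0, panhandle_latitude)
print(f"{offset_degrees:.3f} degrees of longitude, about {offset_degrees * 60:.1f} arcminutes")
# Prints roughly 0.040 degrees (about 2.4 arcminutes) -- a tiny angular error
# for an 1819 survey, but a visible jog on the map.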
Cimarron County in Oklahoma's panhandle is the only county in the United States that touches four other states: New Mexico, Texas, Colorado and Kansas.
Topography
Oklahoma is between the Great Plains and the Ozark Plateau in the Gulf of Mexico watershed, generally sloping from the high plains of its western boundary to the low wetlands of its southeastern boundary. Its highest and lowest points follow this trend, with its highest peak, Black Mesa, at 4,973 feet (1,516 m) above sea level, situated near its far northwest corner in the Oklahoma Panhandle. The state's lowest point is on the Little River near its far southeastern boundary near the town of Idabel, Oklahoma, which dips to 289 feet (88 m) above sea level.
thumb|right|The lower dam on Medicine Creek in Medicine Park, below Lake Lawtonka, built c. 1901 to serve the nearby city of Lawton. Medicine Park was one of the first resort communities established in the Wichita Mountains.
thumb|A river carves a canyon in the Wichita Mountains.
Among the most geographically diverse states, Oklahoma is one of four to harbor more than 10 distinct ecological regions, with 11 in its borders – more per square mile than in any other state. Its western and eastern halves, however, are marked by extreme differences in geographical diversity: Eastern Oklahoma touches eight ecological regions and its western half contains three. Although it has fewer ecological regions, western Oklahoma contains many rare, relict species.
thumb|The Ouachita Mountains cover much of southeastern Oklahoma.
thumb|Grave Creek in McIntosh County, Oklahoma
Oklahoma has four primary mountain ranges: the Ouachita Mountains, the Arbuckle Mountains, the Wichita Mountains, and the Ozark Mountains. Contained within the U.S. Interior Highlands region, the Ozark and Ouachita Mountains mark the only major mountainous region between the Rocky Mountains and the Appalachians. A portion of the Flint Hills stretches into north-central Oklahoma, and near the state's eastern border, Cavanal Hill is regarded by the Oklahoma Tourism & Recreation Department as the world's tallest hill; at 1,999 feet (609 m), it fails their definition of a mountain by one foot.
The semi-arid high plains in the state's northwestern corner harbor few natural forests; the region has a rolling to flat landscape with intermittent canyons and mesa ranges like the Glass Mountains. Partial plains interrupted by small, sky island mountain ranges like the Antelope Hills and the Wichita Mountains dot southwestern Oklahoma; transitional prairie and oak savannahs cover the central portion of the state. The Ozark and Ouachita Mountains rise from west to east over the state's eastern third, gradually increasing in elevation in an eastward direction.
thumb|Turner Falls
More than 500 named creeks and rivers make up Oklahoma's waterways, and with 200 lakes created by dams, it holds the nation's highest number of artificial reservoirs. Most of the state lies in two primary drainage basins belonging to the Red and Arkansas rivers, though the Lee and Little rivers also contain significant drainage basins.
Flora and fauna
thumb|Populations of American bison inhabit the state's prairie ecosystems.
Due to Oklahoma's location at the confluence of many geographic regions, the state's climatic regions have a high rate of biodiversity for their size. Forests cover 24 percent of Oklahoma and prairie grasslands composed of shortgrass, mixed-grass, and tallgrass prairie, harbor expansive ecosystems in the state's central and western portions, although cropland has largely replaced native grasses. Where rainfall is sparse in the state's western regions, shortgrass prairie and shrublands are the most prominent ecosystems, though pinyon pines, red cedar (junipers), and ponderosa pines grow near rivers and creek beds in the panhandle's far western reaches. Southwestern Oklahoma contains many rare, disjunct species including sugar maple, bigtooth maple, nolina and southern live oak.
Marshlands, cypress forests and mixtures of shortleaf pine, loblolly pine, blue palmetto, and deciduous forests dominate the state's southeastern quarter, while mixtures of largely post oak, elm, red cedar (Juniperus virginiana) and pine forests cover northeastern Oklahoma.
The state holds populations of white-tailed deer, mule deer, antelope, coyotes, mountain lions, bobcats, elk, and birds such as quail, doves, cardinals, bald eagles, red-tailed hawks, and pheasants. In prairie ecosystems, American bison, greater prairie chickens, badgers, and armadillo are common, and some of the nation's largest prairie dog towns inhabit shortgrass prairie in the state's panhandle. The Cross Timbers, a region transitioning from prairie to woodlands in Central Oklahoma, harbors 351 vertebrate species. The Ouachita Mountains are home to black bear, red fox, grey fox, and river otter populations, which coexist with a total of 328 vertebrate species in southeastern Oklahoma. Also, in southeastern Oklahoma lives the American alligator.
Protected lands
thumb|Mesas rise above one of Oklahoma's state parks.
Oklahoma has 50 state parks, six national parks or protected regions, two national protected forests or grasslands, and a network of wildlife preserves and conservation areas. Six percent of the state's 10 million acres (40,000 km2) of forest is public land, including the western portions of the Ouachita National Forest, the largest and oldest national forest in the Southern United States.
With 39,000 acres (158 km2), the Tallgrass Prairie Preserve in north-central Oklahoma is the largest protected area of tallgrass prairie in the world and is part of an ecosystem that encompasses only 10 percent of its former land area, once covering 14 states. In addition, the Black Kettle National Grassland covers 31,300 acres (127 km2) of prairie in southwestern Oklahoma. The Wichita Mountains Wildlife Refuge is the oldest and largest of nine national wildlife refuges in the state and was founded in 1901, encompassing 59,020 acres (238.8 km2).
Of Oklahoma's federally protected park and recreational sites, the Chickasaw National Recreation Area is the largest, with 9,898.63 acres (40 km2). Other sites include the Santa Fe and Trail of Tears national historic trails, the Fort Smith and Washita Battlefield national historic sites, and the Oklahoma City National Memorial.
Climate
thumb|Oklahoma's climate is prime for the generation of thunderstorms
thumb|Winter at the Oklahoma Baptist University campus
Oklahoma lies in a transition zone between the humid continental climate to the north, the semi-arid climate to the west, and the humid subtropical climate that covers the central, southern, and eastern portions of the state. Most of the state lies in an area known as Tornado Alley, characterized by frequent interaction between cold, dry air from Canada; warm to hot, dry air from Mexico and the Southwestern U.S.; and warm, moist air from the Gulf of Mexico. The interactions between these three contrasting air currents produce severe weather (severe thunderstorms, damaging thunderstorm winds, large hail and tornadoes) with a frequency virtually unseen anywhere else on Earth. An average of 62 tornadoes strike the state per year—one of the highest rates in the world.
Because of Oklahoma's position between zones of differing prevailing temperature and winds, weather patterns within the state can vary widely over relatively short distances and can change drastically in a short time. As an example, on November 11, 1911, the temperature at Oklahoma City reached 83 °F (28 °C) in the afternoon (the record high for that date), then an Arctic cold front of unprecedented intensity slammed across the state, causing the temperature to crash 66 degrees, down to 17 °F (−8 °C) at midnight (the record low for that date); thus, both the record high and record low for November 11 were set on the same date. This type of phenomenon is also responsible for many of the tornadoes in the area, such as the 1912 Oklahoma tornado outbreak, when a warm front traveled along a stalled cold front, resulting in an average of about one tornado per hour over the course of a day.
The humid subtropical climate (Köppen Cfa) of central, southern and eastern Oklahoma is influenced heavily by southerly winds bringing moisture from the Gulf of Mexico. Traveling westward, the climate transitions progressively toward a semi-arid zone (Köppen BSk) in the high plains of the Panhandle and other western areas from about Lawton westward, less frequently touched by southern moisture. Precipitation and temperatures decline from east to west accordingly, with areas in the southeast averaging an annual temperature of 62 °F (17 °C) and an annual rainfall of generally over 40 inches (1,020 mm) and up to 56 inches (1,420 mm), while areas of the (higher-elevation) panhandle average 58 °F (14 °C), with an annual rainfall under 17 inches (430 mm) (Oklahoma Water Resources Board).
Over almost all of Oklahoma, winter is the driest season. Average monthly precipitation increases dramatically in the spring to a peak in May, the wettest month over most of the state, with its frequent and not uncommonly severe thunderstorm activity. Early June can still be wet, but most years see a marked decrease in rainfall during June and early July. Mid-summer (July and August) represents a secondary dry season over much of Oklahoma, with long stretches of hot weather with only sporadic thunderstorm activity not uncommon many years. Severe drought is common in the hottest summers, such as those of 1934, 1954, 1980 and 2011, all of which featured weeks on end of virtual rainlessness and high temperatures well over . Average precipitation rises again from September to mid-October, representing a secondary wetter season, then declines from late October through December.
All of the state frequently experiences temperatures above 100 °F (38 °C) or below 0 °F (−18 °C), though below-zero temperatures are rare in south-central and southeastern Oklahoma. Snowfall ranges from an average of less than 4 inches (10 cm) in the south to just over 20 inches (51 cm) on the border of Colorado in the panhandle. The state is home to the Storm Prediction Center, the National Severe Storms Laboratory, and the Warning Decision Training Branch, all part of the National Weather Service and located in Norman. Oklahoma's highest recorded temperature of 120 °F (49 °C) was recorded at Tipton on June 27, 1994, and the lowest recorded temperature of −31 °F (−35 °C) was recorded at Nowata on February 10, 2011.
Average high/low temperatures (°F) for Oklahoma's largest cities
City            Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec
Oklahoma City   50/29  55/33  63/41  73/50  80/60  88/68  94/72  93/71  85/63  73/52  62/40  51/31
Tulsa           48/27  53/31  62/40  72/49  79/59  88/68  93/73  93/71  84/62  73/51  61/40  49/30
Lawton          50/26  56/31  65/40  73/49  82/59  90/68  96/73  95/71  86/63  76/51  62/39  52/30
History
thumb|Map of Indian Territory (Oklahoma) 1889. Britannica 9th ed.
Evidence exists that native peoples traveled through Oklahoma as early as the last ice age. Ancestors of the Wichita and Caddo lived in what is now Oklahoma. The Panhandle culture peoples were precontact residents of the panhandle region. The westernmost center of the Mississippian culture was Spiro Mounds, in what is now Spiro, Oklahoma, which flourished between AD 850 and 1450. Spaniard Francisco Vásquez de Coronado traveled through the state in 1541, but French explorers claimed the area in the 1700s and it remained under French rule until 1803, when all the French territory west of the Mississippi River was purchased by the United States in the Louisiana Purchase.
The territory now known as Oklahoma was first a part of the Arkansas Territory from 1819 until 1828.
During the 19th century, thousands of Native Americans were expelled from their ancestral homelands from across North America and transported to the area including and surrounding present-day Oklahoma. The Choctaw was the first of the Five Civilized Tribes to be removed from the southeastern United States. The phrase "Trail of Tears" originated from a description of the removal of the Choctaw Nation in 1831, although the term is usually used for the Cherokee removal.
A total of 17,000 Cherokees and 2,000 of their black slaves were deported (Carter III, Samuel (1976). Cherokee Sunset: A Nation Betrayed: A Narrative of Travail and Triumph, Persecution and Exile. New York: Doubleday, p. 232). The area, already occupied by the Osage and Quapaw tribes, was set aside for the Choctaw Nation until revised Native American and later American policy redefined the boundaries to include other Native Americans. By 1890, more than 30 Native American nations and tribes had been concentrated on land within Indian Territory or "Indian Country".
All Five Civilized Tribes supported and signed treaties with the Confederate military during the American Civil War. The Cherokee Nation had an internal civil war. Slavery in Indian Territory was not abolished until 1866.
In the period between 1866 and 1899, cattle ranches in Texas strove to meet the demands for food in eastern cities and railroads in Kansas promised to deliver in a timely manner. Cattle trails and cattle ranches developed as cowboys either drove their product north or settled illegally in Indian Territory. In 1881, four of five major cattle trails on the western frontier traveled through Indian Territory.
Increased presence of white settlers in Indian Territory prompted the United States Government to establish the Dawes Act in 1887, which divided the lands of individual tribes into allotments for individual families, encouraging farming and private land ownership among Native Americans but expropriating land to the federal government. In the process, railroad companies took nearly half of Indian-held land within the territory for outside settlers and for purchase.
thumb|250px|The Dust Bowl sent thousands of farmers into poverty during the 1930s.
Major land runs, including the Land Run of 1889, were held for settlers in which certain territories were opened to settlement starting at a precise time, usually on a first-come, first-served basis. Those who broke the rules by crossing the border into the territory before the official opening time were said to have crossed the border sooner, leading to the term sooners, which eventually became the state's official nickname.
Deliberations to make the territory into a state began near the end of the 19th century, when the Curtis Act continued the allotment of Indian tribal land.
20th and 21st centuries
Attempts to create an all-Indian state named Oklahoma and a later attempt to create an all-Indian state named Sequoyah failed but the Sequoyah Statehood Convention of 1905 eventually laid the groundwork for the Oklahoma Statehood Convention, which took place two years later. On November 16, 1907, Oklahoma was established as the 46th state in the Union.
thumb|250px|The bombing of the Alfred P. Murrah Federal Building in Oklahoma City was one of the deadliest acts of terrorism in American history.
The new state became a focal point for the emerging oil industry, as discoveries of oil pools prompted towns to grow rapidly in population and wealth. Tulsa eventually became known as the "Oil Capital of the World" for most of the 20th century and oil investments fueled much of the state's early economy. In 1927, Oklahoman businessman Cyrus Avery, known as the "Father of Route 66", began the campaign to create U.S. Route 66. Using a stretch of highway from Amarillo, Texas to Tulsa, Oklahoma to form the original portion of Highway 66, Avery spearheaded the creation of the U.S. Highway 66 Association to oversee the planning of Route 66, based in his hometown of Tulsa.
Oklahoma also has a rich African American history. There were many black towns that thrived in the early 20th century because of black settlers moving from neighboring states, especially Kansas. The politician Edward P. McCabe encouraged black settlers to come to what was then Indian Territory. He discussed with President Theodore Roosevelt the possibility of making Oklahoma a majority-black state.
By the early 20th century, the Greenwood neighborhood of Tulsa was one of the most prosperous African-American communities in the United States. Jim Crow laws had established racial segregation since before the start of the 20th century, but black residents had created a thriving area.
Social tensions were exacerbated by the revival of the Ku Klux Klan after 1915. The Tulsa Race Riot broke out in 1921, with whites attacking blacks. In one of the costliest episodes of racial violence in American history, sixteen hours of rioting resulted in 35 city blocks destroyed, $1.8 million in property damage, and a death toll estimated to be as high as 300 people. By the late 1920s, the Ku Klux Klan had declined to negligible influence within the state.
During the 1930s, parts of the state began suffering the consequences of poor farming practices, extended drought and high winds. Known as the Dust Bowl, areas of Kansas, Texas, New Mexico and northwestern Oklahoma were hampered by long periods of little rainfall and abnormally high temperatures, sending thousands of farmers into poverty and forcing them to relocate to more fertile areas of the western United States. Over a twenty-year period ending in 1950, the state saw its only historical decline in population, dropping 6.9 percent as impoverished families migrated out of the state after the Dust Bowl.
Soil and water conservation projects markedly changed practices in the state and led to the construction of massive flood control systems and dams; they built hundreds of reservoirs and man-made lakes to supply water for domestic needs and agricultural irrigation. By the 1960s, Oklahoma had created more than 200 lakes, the most in the nation.
In 1995, Oklahoma City was the site of one of the most destructive acts of domestic terrorism in American history. The Oklahoma City bombing of April 19, 1995, in which Timothy McVeigh detonated a large, crude explosive device outside the Alfred P. Murrah Federal Building, killed 168 people, including 19 children. For his crime, McVeigh was executed by the federal government on June 11, 2001. His accomplice, Terry Nichols, is serving life in prison without parole for helping plan the attack and prepare the explosive.
On May 31, 2016, several cities experienced record-setting flooding.
Demographics
thumb|450px|Oklahoma population density map.
The United States Census Bureau estimates that the population of Oklahoma was 3,923,561 on July 1, 2016, a 4.6% increase since the 2010 United States Census.
At the 2010 Census, 68.7% of the population was non-Hispanic White, down from 88% in 1970, 7.3% non-Hispanic Black or African American, 8.2% non-Hispanic American Indian and Alaska Native, 1.7% non-Hispanic Asian, 0.1% non-Hispanic Native Hawaiian and Other Pacific Islander, 0.1% from some other race (non-Hispanic) and 5.1% of two or more races (non-Hispanic). 8.9% of Oklahoma's population was of Hispanic, Latino, or Spanish origin (they may be of any race).
Oklahoma racial breakdown of population (%)
Racial composition                            1970    1990    2000    2010
White                                         89.1    82.1    76.2    72.0
Native                                        3.8     8.0     7.9     8.7
Black                                         6.7     7.4     7.6     7.4
Asian                                         0.1     1.1     1.4     1.7
Native Hawaiian and other Pacific Islander    –       –       0.1     0.1
Other race                                    0.2     1.3     2.4     4.1
Two or more races                             –       –       4.5     6.0
An estimated 47.3% of Oklahoma's population younger than age 1 were minorities, meaning that they had at least one parent who was not non-Hispanic white.
In 2005, Oklahoma had a population of 3,642,361, with an estimated ancestral makeup of 14.5% German, 13.1% American, 11.8% Irish, 9.6% English, 8.1% African American, and 11.4% Native American (including 7.9% Cherokee), though the percentage of people claiming American Indian as their only race was 8.1%. Most people from Oklahoma who self-identify as having American ancestry are overwhelmingly of English ancestry, with significant Scottish and Welsh admixture as well (David Hackett Fischer, Albion's Seed: Four British Folkways in America, New York: Oxford University Press, 1989, pp. 602–645; Dominic J. Pulera, Sharing the Dream: White Males in a Multicultural America).
The state had the second-highest number of Native Americans in 2002, estimated at 395,219, as well as the second-highest percentage among all states. An estimated 4.7% of Oklahoma's residents were foreign born, compared to 12.4% for the nation. The center of population of Oklahoma is located in Lincoln County near the town of Sparks.
The state's 2006 per capita personal income ranked 37th at $32,210, though it has the third fastest-growing per capita income in the nation and consistently ranks among the states with the lowest cost of living. The Oklahoma City suburb Nichols Hills ranks first among Oklahoma locations in per capita income at $73,661, though Tulsa County holds the highest average. In 2011, 7.0% of Oklahomans were under the age of 5, 24.7% under 18, and 13.7% were 65 or older. Females made up 50.5% of the population.
Cities and towns
The state is located in the Southern United States. According to the 2010 United States Census, Oklahoma is the 28th most populous state, with 3,751,351 inhabitants, and the 19th largest by land area, spanning 68,667 square miles (177,847 km2) of land. Oklahoma is divided into 77 counties and contains 597 incorporated municipalities consisting of cities and towns.
In Oklahoma, cities are incorporated communities with populations of 1,000 or more that have incorporated as cities. Towns are limited to the town-board type of municipal government. Cities may choose among aldermanic, mayoral, council-manager, and home-rule charter types of government. Cities may also petition to incorporate as towns.
Language
thumb|right|Recording of a Cherokee language stomp dance ceremony in Oklahoma.
thumb|right|150px|Bilingual stop sign in English and the Cherokee syllabary, Tahlequah, Oklahoma
The English language has been official in the state of Oklahoma since 2010. The variety of North American English spoken is called Oklahoma English, and this dialect is quite diverse, with its uneven blending of features of North Midland, South Midland, and Southern dialects. In 2000, 2,977,187 Oklahomans—92.6% of the resident population five years or older—spoke only English at home, a decrease from 95% in 1990. 238,732 Oklahoma residents reported speaking a language other than English in the 2000 census, about 7.4% of the total population of the state. Spanish is the second-most commonly spoken language in the state, with 141,060 speakers counted in 2000. The most commonly spoken native North American language is Cherokee, with 10,000 speakers living within the Cherokee Nation tribal jurisdiction area of eastern Oklahoma. Cherokee is an official language in the Cherokee Nation tribal jurisdiction area and in the United Keetoowah Band of Cherokee Indians.
Top 10 non-English languages spoken in Oklahoma
Language                                      Percentage of population
Spanish                                       4.4%
Native North American languages               0.6%
German and Vietnamese (tied)                  0.4%
French                                        0.3%
Chinese                                       0.2%
Korean, Arabic, Tagalog, Japanese (tied)      0.1%
German has 13,444 speakers representing about 0.4% of the total state population, and Vietnamese is spoken by 11,330 people, or about 0.4% of the population, many of whom live in the Asia District of Oklahoma City. Other languages include French with 8,258 speakers (0.3%), Chinese with 6,413 (0.2%), Korean with 3,948 (0.1%), Arabic with 3,265 (0.1%), other Asian languages with 3,134 (0.1%), Tagalog with 2,888 (0.1%), Japanese with 2,546 (0.1%), and African languages with 2,546 (0.1%). In addition to Cherokee, more than 25 Native American languages are spoken in Oklahoma, second only to California (though only Cherokee exhibits language vitality at present).
Religion
Oklahoma is part of a geographical region characterized by conservative and Evangelical Christianity known as the "Bible Belt". Spanning the southern and eastern parts of the United States, the area is known for politically and socially conservative views, with the Republican Party having the greater number of voters registered between the two parties.https://www.ok.gov/elections/documents/20161101%20-%20Registration%20By%20County%20%28vr2420%29.pdf Tulsa, the state's second-largest city, home to Oral Roberts University, is sometimes called the "buckle of the Bible Belt". According to the Pew Research Center, the majority of Oklahoma's religious adherents are Christian, accounting for about 80 percent of the population. The percentage of Oklahomans affiliated with Catholicism is half of the national average, while the percentage affiliated with Evangelical Protestantism is more than twice the national average – tied with Arkansas for the largest percentage of any state.
thumb|right|The Boston Avenue Methodist Church in Tulsa is a National Historic Landmark.
In 2010, the state's largest church memberships were in the Southern Baptist Convention (886,394 members), the United Methodist Church (282,347), the Roman Catholic Church (178,430), and the Assemblies of God (85,926). Other religions represented in the state include Buddhism, Hinduism, and Islam.
In 2000, there were about 5,000 Jews and 6,000 Muslims, with 10 congregations to each group.
Oklahoma religious makeup:
Evangelical Protestant – 53%
Mainline Protestant – 16%
Roman Catholic – 13%
Other – 6%
Unaffiliated – 12%
Economy
thumb|250px|The BOK Tower of Tulsa, Oklahoma's second tallest building, serves as the world headquarters for Williams Companies.
Oklahoma is host to a diverse range of sectors including aviation, energy, transportation equipment, food processing, electronics, and telecommunications. Oklahoma is an important producer of natural gas, aircraft, and food. The state ranks third in the nation for production of natural gas, is the 27th-most agriculturally productive state, and also ranks 5th in production of wheat. Four Fortune 500 companies and six Fortune 1000 companies are headquartered in Oklahoma, and it has been rated one of the most business-friendly states in the nation, with the 7th-lowest tax burden in 2007.
In 2010, Oklahoma City-based Love's Travel Stops & Country Stores ranked 18th on the Forbes list of largest private companies, Tulsa-based QuikTrip ranked 37th, and Oklahoma City-based Hobby Lobby ranked 198th in 2010 report. Oklahoma's gross domestic product grew from $131.9 billion in 2006 to $147.5 billion in 2010, a jump of 10.6 percent. Oklahoma's gross domestic product per capita was $35,480 in 2010, which was ranked 40th among the states.
Though oil has historically dominated the state's economy, a collapse in the energy industry during the 1980s led to the loss of nearly 90,000 energy-related jobs between 1980 and 2000, severely damaging the local economy. Oil accounted for 35 billion dollars in Oklahoma's economy in 2007, and employment in the state's oil industry was outpaced by five other industries in 2007. The state's unemployment rate is 4.4% (Bls.gov, Local Area Unemployment Statistics).
Industry
In mid-2011, Oklahoma had a civilian labor force of 1.7 million and total non-farm employment fluctuated around 1.5 million. The government sector provides the most jobs, with 339,300 in 2011, followed by the transportation and utilities sector, providing 279,500 jobs, and the sectors of education, business, and manufacturing, providing 207,800, 177,400, and 132,700 jobs, respectively. Among the state's largest industries, the aerospace sector generates $11 billion annually.
Tulsa is home to the largest airline maintenance base in the world, which serves as the global maintenance and engineering headquarters for American Airlines. In total, aerospace accounts for more than 10 percent of Oklahoma's industrial output, and it is one of the top 10 states in aerospace engine manufacturing. Because of its position in the center of the United States, Oklahoma is also among the top states for logistic centers, and a major contributor to weather-related research.
The state is the top manufacturer of tires in North America and contains one of the fastest-growing biotechnology industries in the nation. In 2005, international exports from Oklahoma's manufacturing industry totaled $4.3 billion, accounting for 3.6 percent of its economic impact. Tire manufacturing, meat processing, oil and gas equipment manufacturing, and air conditioner manufacturing are the state's largest manufacturing industries.
Energy
thumb|250px|A major oil producing state, Oklahoma is the fifth-largest producer of crude oil in the United States.
Oklahoma is the nation's third-largest producer of natural gas, fifth-largest producer of crude oil, and has the second-greatest number of active drilling rigs, and ranks fifth in crude oil reserves. While the state ranked eighth for installed wind energy capacity in 2011, it is at the bottom of states in usage of renewable energy, with 94 percent of its electricity being generated by non-renewable sources in 2009, including 25 percent from coal and 46 percent from natural gas. Oklahoma has no nuclear power. Ranking 13th for total energy consumption per capita in 2009, Oklahoma's energy costs were 8th lowest in the nation.
As a whole, the oil energy industry contributes $35 billion to Oklahoma's gross domestic product, and employees of Oklahoma oil-related companies earn an average of twice the state's typical yearly income. In 2009, the state had 83,700 commercial oil wells producing crude oil. Eight and a half percent of the nation's natural gas supply is held in Oklahoma.
According to Forbes magazine, Oklahoma City-based Devon Energy Corporation, Chesapeake Energy Corporation, and SandRidge Energy Corporation are the largest private oil-related companies in the nation, and all of Oklahoma's Fortune 500 companies are energy-related. Tulsa's ONEOK and Williams Companies are the state's largest and second-largest companies respectively, also ranking as the nation's second- and third-largest companies in the field of energy, according to Fortune magazine. The magazine also placed Devon Energy as the second-largest company in the mining and crude oil-producing industry in the nation, while Chesapeake Energy ranks seventh in that sector and Oklahoma Gas & Electric ranks as the 25th-largest gas and electric utility company.
Oklahoma Gas & Electric, commonly referred to as OG&E (NYSE: OGE), operates four base electric power plants in Oklahoma. Two of them are coal-fired power plants: one in Muskogee, and the other in Redrock. Two are gas-fired power plants: one in Harrah and the other in Konawa. OG&E was the first electric company in Oklahoma to generate electricity from wind farms in 2003.
Wind generation
Oklahoma wind generation (GWh, million kWh)
Year   Total   Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
2009   2,698   183   182   233   233   159   175   140   172   152   253   269   308
2010   3,808   252   187   389   400   305   360   265   260   311   299   408   375
2011   5,369   319   446   519   531   510   513   329   335   337   487   574   469
2012   –       632   555   744   634   726   639   570   453   516   100   –     –
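As a small illustration of how the figures above can be aggregated, the following Python sketch (illustrative only; the totals are copied from the table, and the growth calculation is not part of the source) computes year-over-year growth in generation from the listed annual totals.

# Annual Oklahoma wind generation totals (GWh), as listed in the table above.
annual_totals_gwh = {2009: 2698, 2010: 3808, 2011: 5369}

for year in sorted(annual_totals_gwh)[1:]:
    previous = annual_totals_gwh[year - 1]
    current = annual_totals_gwh[year]
    growth_pct = (current - previous) / previous * 100
    print(f"{year}: {current:,} GWh ({growth_pct:+.1f}% vs {year - 1})")

# Output:
# 2010: 3,808 GWh (+41.1% vs 2009)
# 2011: 5,369 GWh (+41.0% vs 2010)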
Agriculture
The 27th-most agriculturally productive state, Oklahoma is fifth in cattle production and fifth in production of wheat. Approximately 5.5 percent of American beef comes from Oklahoma, while the state produces 6.1 percent of American wheat, 4.2 percent of American pig products, and 2.2 percent of dairy products.
The state had 85,500 farms in 2012, collectively producing $4.3 billion in animal products and fewer than one billion dollars in crop output with more than $6.1 billion added to the state's gross domestic product. Poultry and swine are its second and third-largest agricultural industries.
Culture
thumb|right|upright|Oklahoma's heritage as a pioneer state is depicted with the Pioneer Woman statue in Ponca City.
Oklahoma is placed in the South by the United States Census Bureau, but lies fully or partially in the Midwest, Southwest, and southern cultural regions by varying definitions, and partially in the Upland South and Great Plains by definitions of abstract geographical-cultural regions. Oklahomans have a high rate of English, Scotch-Irish, German, and Native American ancestry, with 25 different native languages spoken.
Because many Native Americans were forced to move to Oklahoma when White settlement in North America increased, Oklahoma has much linguistic diversity. Mary Linn, an associate professor of anthropology at the University of Oklahoma and the associate curator of Native American languages at the Sam Noble Museum, notes that Oklahoma also has high levels of language endangerment.Smith, Diane. "Universities partner to save dying languages". Associated Press. June 2011. Retrieved on October 23, 2015.
Six governments have claimed the area now known as Oklahoma at different times, and 67 Native American tribes are represented in Oklahoma, including 39 federally recognized tribes, who are headquartered and have tribal jurisdictional areas in the state. Western ranchers, Native American tribes, southern settlers, and eastern oil barons have shaped the state's cultural predisposition, and its largest cities have been named among the most underrated cultural destinations in the United States.
Residents of Oklahoma are associated with traits of southern hospitality – the 2006 Catalogue for Philanthropy (with data from 2004) ranks Oklahomans 7th in the nation for overall generosity. The state has also been associated with a negative cultural stereotype first popularized by John Steinbeck's novel The Grapes of Wrath, which described the plight of uneducated, poverty-stricken Dust Bowl-era farmers deemed "Okies". However, the term is often used in a positive manner by Oklahomans.
Arts and theater
thumb|250px|Philbrook Museum is one of the top 50 fine art museums in the United States.
In the state's largest urban areas, pockets of jazz culture flourish, and Native American, Mexican American, and Asian American communities produce music and art of their respective cultures. The Oklahoma Mozart Festival in Bartlesville is one of the largest classical music festivals on the southern plains, and Oklahoma City's Festival of the Arts has been named one of the top fine arts festivals in the nation.
The state has a rich history in ballet with five Native American ballerinas attaining worldwide fame. These were Yvonne Chouteau, sisters Marjorie and Maria Tallchief, Rosella Hightower and Moscelyne Larkin, known collectively as the Five Moons. The New York Times rates the Tulsa Ballet as one of the top ballet companies in the United States. The Oklahoma City Ballet and University of Oklahoma's dance program were formed by ballerina Yvonne Chouteau and husband Miguel Terekhov. The University program was founded in 1962 and was the first fully accredited program of its kind in the United States.
In Sand Springs, an outdoor amphitheater called "Discoveryland!" is the official performance headquarters for the musical Oklahoma! Ridge Bond, native of McAlester, Oklahoma, starred in the Broadway and International touring productions of Oklahoma!, playing the role of "Curly McClain" in more than 2,600 performances. In 1953 he was featured along with the Oklahoma! cast on a CBS Omnibus television broadcast. Bond was instrumental in the title song becoming the Oklahoma state song and is also featured on the U.S. postage stamp commemorating the musical's 50th anniversary. Historically, the state has produced musical styles such as The Tulsa Sound and western swing, which was popularized at Cain's Ballroom in Tulsa. The building, known as the "Carnegie Hall of Western Swing", served as the performance headquarters of Bob Wills and the Texas Playboys during the 1930s. Stillwater is known as the epicenter of Red Dirt music, the best-known proponent of which is the late Bob Childers.
Prominent theatre companies in Oklahoma include, in the capital city, Oklahoma City Theatre Company, Carpenter Square Theatre, Oklahoma Shakespeare in the Park, and CityRep. CityRep is a professional company affording equity points to those performers and technical theatre professionals. In Tulsa, Oklahoma's oldest resident professional company is American Theatre Company, and Theatre Tulsa is the oldest community theatre company west of the Mississippi. Other companies in Tulsa include Heller Theatre and Tulsa Spotlight Theater. The cities of Norman, Lawton, and Stillwater, among others, also host well-reviewed community theatre companies.
Oklahoma is in the nation's middle percentile in per capita spending on the arts, ranking 17th, and contains more than 300 museums. The Philbrook Museum of Tulsa is considered one of the top 50 fine art museums in the United States, and the Sam Noble Oklahoma Museum of Natural History in Norman, one of the largest university-based art and history museums in the country, documents the natural history of the region. The collections of Thomas Gilcrease are housed in the Gilcrease Museum of Tulsa, which also holds the world's largest, most comprehensive collection of art and artifacts of the American West.
The Egyptian art collection at the Mabee-Gerrer Museum of Art in Shawnee is considered to be the finest Egyptian collection between Chicago and Los Angeles. The Oklahoma City Museum of Art contains the most comprehensive collection of glass sculptures by artist Dale Chihuly in the world, and Oklahoma City's National Cowboy and Western Heritage Museum documents the heritage of the American Western frontier. With remnants of the Holocaust and artifacts relevant to Judaism, the Sherwin Miller Museum of Jewish Art of Tulsa preserves the largest collection of Jewish art in the Southwest United States.
Festivals and events
thumb|200px|National Powwow dancer of the Cherokee of Oklahoma, 2007.
Oklahoma's centennial celebration was named the top event in the United States for 2007 by the American Bus Association, and consisted of multiple celebrations culminating with the 100th anniversary of statehood on November 16, 2007. Annual ethnic festivals and events take place throughout the state, such as Native American powwows and ceremonial events, and include festivals (as examples) in Scottish, Irish, German, Italian, Vietnamese, Chinese, Czech, Jewish, Arab, Mexican and African-American communities depicting cultural heritage or traditions.
During a 10-day run in Oklahoma City, the State Fair of Oklahoma attracts roughly one million people along with the annual Festival of the Arts. Large national pow-wows, various Latin and Asian heritage festivals, and cultural festivals such as the Juneteenth celebrations are held in Oklahoma City each year. The Tulsa State Fair attracts over one million people during its 10-day run, and the city's Mayfest festival entertained more than 375,000 people in four days during 2007. In 2006, Tulsa's Oktoberfest was named one of the top 10 in the world by USA Today and one of the top German food festivals in the nation by Bon Appetit magazine.
Norman plays host to the Norman Music Festival, a festival that highlights native Oklahoma bands and musicians. Norman is also host to the Medieval Fair of Norman, which has been held annually since 1976 and was Oklahoma's first medieval fair. The fair was first held on the south oval of the University of Oklahoma campus, moved to the Duck Pond in Norman in its third year, and moved again to Reaves Park in 2003 after it outgrew that site. The Medieval Fair of Norman is Oklahoma's largest weekend event and the third-largest event in the state, and was selected by Events Media Network as one of the top 100 events in the nation.
Education
thumb|250px|Oklahoma's system of public regional universities includes Northeastern State University in Tahlequah.
With an educational system made up of public school districts and independent private institutions, Oklahoma had 638,817 students enrolled in 1,845 public primary, secondary, and vocational schools in 533 school districts. Oklahoma has the highest enrollment of Native American students in the nation, with 126,078 students in the 2009–10 school year. Ranked near the bottom of states in expenditures per student, Oklahoma spent $7,755 for each student in 2008, 47th in the nation, though its growth of total education expenditures between 1992 and 2002 ranked 22nd.
The state is among the best in pre-kindergarten education, and the National Institute for Early Education Research rated it first in the United States with regard to standards, quality, and access to pre-kindergarten education in 2004, calling it a model for early childhood schooling. High school dropout rate decreased from 3.1 to 2.5 percent between 2007 and 2008 with Oklahoma ranked among 18 other states with 3 percent or less dropout rate. In 2004, the state ranked 36th in the nation for the relative number of adults with high school diplomas, though at 85.2 percent, it had the highest rate among southern states.
Oklahoma State University, the University of Oklahoma, the University of Central Oklahoma, and Northeastern State University are the largest public institutions of higher education in Oklahoma, operating through one primary campus and satellite campuses throughout the state. The two state universities, along with Oklahoma City University and the University of Tulsa, rank among the country's best in undergraduate business programs.
Oklahoma City University School of Law, University of Oklahoma College of Law, and University of Tulsa College of Law are the state's only ABA accredited institutions. Both University of Oklahoma and University of Tulsa are Tier 1 institutions, with the University of Oklahoma ranked 68th and the University of Tulsa ranked 86th in the nation.
Oklahoma holds eleven public regional universities, including Northeastern State University, the second-oldest institution of higher education west of the Mississippi River, also containing the only College of Optometry in Oklahoma and the largest enrollment of Native American students in the nation by percentage and amount. Langston University is Oklahoma's only historically black college. Six of the state's universities were placed in the Princeton Review's list of best 122 regional colleges in 2007, and three made the list of top colleges for best value. The state has 55 post-secondary technical institutions operated by Oklahoma's CareerTech program for training in specific fields of industry or trade.
In the 2007–2008 school year, there were 181,973 undergraduate students, 20,014 graduate students, and 4,395 first-professional degree students enrolled in Oklahoma colleges. Of these students, 18,892 received a bachelor's degree, 5,386 received a master's degree, and 462 received a first-professional degree. Overall, the state of Oklahoma produces an average of 38,278 degree-holders per completions component (i.e. July 1, 2007 – June 30, 2008). The national average is 68,322 total degrees awarded per completions component.
Non-English Education
thumb|Oklahoma Cherokee language immersion school student writing in the Cherokee syllabary.
The Cherokee Nation instigated a 10-year language preservation plan that involved growing new fluent speakers of the Cherokee language from childhood on up through school immersion programs as well as a collaborative community effort to continue to use the language at home. This plan was part of an ambitious goal that in 50 years, 80% or more of the Cherokee people will be fluent in the language. The Cherokee Preservation Foundation has invested $3 million into opening schools, training teachers, and developing curricula for language education, as well as initiating community gatherings where the language can be actively used.
There is a Cherokee language immersion school in Tahlequah, Oklahoma that educates students from pre-school through eighth grade. Graduates are fluent speakers of the language. Several universities offer Cherokee as a second language, including the University of Oklahoma and Northeastern State University.
Sports
Oklahoma has teams in basketball, football, arena football, baseball, soccer, hockey, and wrestling located in Oklahoma City, Tulsa, Enid, Norman, and Lawton. The Oklahoma City Thunder of the National Basketball Association (NBA) is the state's only major league sports franchise. The state had a team in the Women's National Basketball Association, the Tulsa Shock, from 2010 through 2015, but the team relocated to Dallas–Fort Worth after that season and became the Dallas Wings. Oklahoma supports teams in several minor leagues, including Minor League Baseball at the AAA and AA levels (Oklahoma City Dodgers and Tulsa Drillers, respectively), hockey's ECHL with the Tulsa Oilers, and a number of indoor football leagues. In the last-named sport, the state's most notable team was the Tulsa Talons, which played in the Arena Football League until 2012, when the team was moved to San Antonio. The Oklahoma Defenders replaced the Talons as Tulsa's only professional arena football team, playing in the CPIFL. The Oklahoma City Blue of the NBA Development League relocated to Oklahoma City from Tulsa in 2014; the team was formerly known as the Tulsa 66ers. Tulsa is the base for the Tulsa Revolution, which plays in the American Indoor Soccer League. Enid and Lawton host professional basketball teams in the USBL and the CBA.
thumb|The Oklahoma City Thunder moved to the state in 2008, becoming its first permanent major league team in any sport
The NBA's New Orleans Hornets became the first major league sports franchise based in Oklahoma when the team was forced to relocate to Oklahoma City's Ford Center, now known as Chesapeake Energy Arena, for two seasons following Hurricane Katrina in 2005. In July 2008, the Seattle SuperSonics, a franchise owned by the Professional Basketball Club LLC, a group of Oklahoma City businessmen led by Clay Bennett, relocated to Oklahoma City and announced that play would begin at the Ford Center as the Oklahoma City Thunder for the 2008–09 season, becoming the state's first permanent major league franchise.
Collegiate athletics are a popular draw in the state. The state has four schools that compete at the highest level of college sports, NCAA Division I. The most prominent are the state's two members of the Big 12 Conference, one of the so-called Power Five conferences of the top tier of college football, Division I FBS. The University of Oklahoma and Oklahoma State University average well over 50,000 fans attending their football games, and Oklahoma's football program ranked 12th in attendance among American colleges in 2010, with an average of 84,738 people attending its home games. The two universities meet several times each year in rivalry matches known as the Bedlam Series, which are some of the greatest sporting draws to the state. Sports Illustrated magazine rates Oklahoma and Oklahoma State among the top colleges for athletics in the nation. Two private institutions in Tulsa, the University of Tulsa and Oral Roberts University, are also Division I members. Tulsa competes in FBS football and other sports in the American Athletic Conference, while Oral Roberts, which does not sponsor football, is a member of The Summit League. In addition, 12 of the state's smaller colleges and universities compete in NCAA Division II as members of four different conferences, and eight other Oklahoma institutions participate in the NAIA, mostly within the Sooner Athletic Conference.
Regular LPGA tournaments are held at Cedar Ridge Country Club in Tulsa, and major championships for the PGA or LPGA have been played at Southern Hills Country Club in Tulsa, Oak Tree Country Club in Oklahoma City, and Cedar Ridge Country Club in Tulsa. Rated one of the top golf courses in the nation, Southern Hills has hosted four PGA Championships, including one in 2007, and three U.S. Opens, the most recent in 2001. Rodeos are popular throughout the state, and Guymon, in the state's panhandle, hosts one of the largest in the nation.
Current teams
Basketball
Club                          Type              League             Venue                         City           Area (Metro/Region)
Oklahoma City Thunder         Men's Basketball  NBA                Chesapeake Energy Arena       Oklahoma City  OKC Metro
Oklahoma City Blue            Men's Basketball  NBADL              Cox Convention Center         Oklahoma City  OKC Metro

Baseball
Club                          Type              League             Venue                         City           Area (Metro/Region)
Oklahoma City Dodgers         Baseball          PCL (AAA)          Chickasaw Bricktown Ballpark  Oklahoma City  OKC Metro
Tulsa Drillers                Baseball          Texas League (AA)  ONEOK Field                   Tulsa          Tulsa Metro

Hockey
Club                          Type              League             Venue                         City           Area (Metro/Region)
Tulsa Oilers                  Hockey            ECHL               BOK Center                    Tulsa          Tulsa Metro

Football
Club                          Type              League             Venue                         City           Area (Metro/Region)
Oklahoma Defenders            Indoor Football   CPIFL              Tulsa Convention Center       Tulsa          Tulsa Metro
Oklahoma Thunder              Football          GDFL               Bixby High School             Bixby          Tulsa Metro
Oklahoma City Bounty Hunters  Football          GDFL               Putnam City Stadium           Warr Acres     OKC Metro

Soccer
Club                          Type              League             Venue                         City           Area (Metro/Region)
Tulsa Spirit                  Women's Soccer    WPSL               Union 8th                     Broken Arrow   Tulsa Metro
Oklahoma City FC              Women's Soccer    WPSL               Miller Stadium                Oklahoma City  OKC Metro
Oklahoma City Energy          Men's Soccer      USL                Taft Stadium                  Oklahoma City  OKC Metro
Tulsa Roughnecks              Men's Soccer      USL                ONEOK Field                   Tulsa          Tulsa Metro
Tulsa Athletics               Men's Soccer      NPSL               Drillers Stadium              Tulsa          Tulsa Metro
Health
225px|thumb| INTEGRIS Cancer Institute of Oklahoma, in Oklahoma City
thumb|left|Cancer Treatment Centers of America at Southwestern Regional Medical Center is located in Tulsa.
Oklahoma was the 21st-largest recipient of medical funding from the federal government in 2005, with health-related federal expenditures in the state totaling $75,801,364; immunizations, bioterrorism preparedness, and health education were the top three most funded medical items. Instances of major diseases are near the national average in Oklahoma, and the state ranks at or slightly above the rest of the country in percentage of people with asthma, diabetes, cancer, and hypertension.
In 2000, Oklahoma ranked 45th in physicians per capita and slightly below the national average in nurses per capita, but was slightly over the national average in hospital beds per 100,000 people and above the national average in net growth of health services over a 12-year period. One of the worst states for percentage of insured people, nearly 25 percent of Oklahomans between the age of 18 and 64 did not have health insurance in 2005, the fifth-highest rate in the nation.
Oklahomans are in the upper half of Americans in terms of obesity prevalence, and the state is the 5th most obese in the nation, with 30.3 percent of its population at or near obesity. Oklahoma ranked last among the 50 states in a 2007 study by the Commonwealth Fund on health care performance.
The OU Medical Center, Oklahoma's largest collection of hospitals, is the only hospital in the state designated a Level I trauma center by the American College of Surgeons. OU Medical Center is located on the grounds of the Oklahoma Health Center in Oklahoma City, the state's largest concentration of medical research facilities.
The Cancer Treatment Centers of America at Southwestern Regional Medical Center in Tulsa is one of four such regional facilities nationwide, offering cancer treatment to the entire southwestern United States, and is one of the largest cancer treatment hospitals in the country. The largest osteopathic teaching facility in the nation, Oklahoma State University Medical Center at Tulsa, also rates as one of the largest facilities in the field of neuroscience.
Media
thumb|right|upright|The second-largest newspaper in Oklahoma, the Tulsa World has a circulation of 189,789.
Oklahoma City and Tulsa are the 45th and 61st-largest media markets in the United States as ranked by Nielsen Media Research. The state's third-largest media market, Lawton-Wichita Falls, Texas, is ranked 149th nationally by the agency. Broadcast television in Oklahoma began in 1949 when KFOR-TV (then WKY-TV) in Oklahoma City and KOTV-TV in Tulsa began broadcasting a few months apart. Currently, all major American broadcast networks have affiliated television stations in the state.
The state has two primary newspapers. The Oklahoman, based in Oklahoma City, is the largest newspaper in the state and 54th-largest in the nation by circulation, with a weekday readership of 138,493 and a Sunday readership of 202,690. The Tulsa World, the second-most widely circulated newspaper in Oklahoma and 79th in the nation, holds a Sunday circulation of 132,969 and a weekday readership of 93,558. Oklahoma's first newspaper was established in 1844, called the Cherokee Advocate, and was written in both Cherokee and English. In 2006, there were more than 220 newspapers located in the state, including 177 with weekly publications and 48 with daily publications.
The state's first radio station, WKY in Oklahoma City, signed on in 1920, followed by KRFU in Bristow, which later on moved to Tulsa and became KVOO in 1927. In 2006, there were more than 500 radio stations in Oklahoma broadcasting with various local or nationally owned networks. Five universities in Oklahoma operate non-commercial, public radio stations/networks.
Oklahoma has a few ethnic-oriented TV stations broadcasting in Spanish and Asian languages, and some carry Native American programming. TBN, a Christian religious television network, has a studio in Tulsa and built its first entirely TBN-owned affiliate in Oklahoma City in 1980.
Transportation
thumb|250px|One of ten major toll highways in Oklahoma, the Will Rogers Turnpike extends northeast from Tulsa.
thumb|250px|A map of Oklahoma showing major roads and thoroughfares.
Transportation in Oklahoma is generated by an anchor system of Interstate Highways, intercity rail lines, airports, inland ports, and mass transit networks. Situated along an integral point in the United States Interstate network, Oklahoma contains three interstate highways and four auxiliary Interstate Highways. In Oklahoma City, Interstate 35 intersects with Interstate 44 and Interstate 40, forming one of the most important intersections along the United States highway system.
The state's major highway skeleton includes state-operated highways, ten turnpikes or major toll roads, and the longest drivable stretch of Route 66 in the nation. In 2008, Interstate 44 in Oklahoma City was Oklahoma's busiest highway, with a daily traffic volume of 123,300 cars. In 2010, the state had the nation's third-highest number of bridges classified as structurally deficient, with some 5,212 bridges in disrepair, including 235 National Highway System bridges.
Oklahoma's largest commercial airport is Will Rogers World Airport in Oklahoma City, averaging a yearly passenger count of more than 3.5 million (1.7 million boardings) in 2010. Tulsa International Airport, the state's second-largest commercial airport, served more than 1.3 million boardings in 2010. Between the two, six airlines operate in Oklahoma. In terms of traffic, R. L. Jones Jr. (Riverside) Airport in Tulsa is the state's busiest airport, with 335,826 takeoffs and landings in 2008. In total, Oklahoma has over 150 public-use airports.
Oklahoma is connected to the nation's rail network via Amtrak's Heartland Flyer, its only regional passenger rail line. It currently stretches from Oklahoma City to Fort Worth, Texas, though lawmakers began seeking funding in early 2007 to connect the Heartland Flyer to Tulsa.
Two inland ports on rivers serve Oklahoma: the Port of Muskogee and the Tulsa Port of Catoosa. The only port handling international cargo in the state, the Tulsa Port of Catoosa is the most inland ocean-going port in the nation and ships over two million tons of cargo each year. Both ports are located on the McClellan-Kerr Arkansas River Navigation System, which connects barge traffic from Tulsa and Muskogee to the Mississippi River via the Verdigris and Arkansas rivers, contributing to one of the busiest waterways in the world.
Law and government
The Oklahoma State Capitol, located in Oklahoma City.
Oklahoma is a constitutional republic with a government modeled after the Federal Government of the United States, with executive, legislative, and judicial branches. The state has 77 counties with jurisdiction over most local government functions within each respective domain, five congressional districts, and a voting base that held a plurality of registered Democrats until 2015. State officials are elected by plurality voting.
Oklahoma is one of 32 states with capital punishment as a legal sentence, and from 1976 through mid-2011 the state had the highest per capita execution rate in the US.
State government
The Legislature of Oklahoma consists of the Senate and the House of Representatives. As the lawmaking branch of the state government, it is responsible for raising and distributing the money necessary to run the government. The Senate has 48 members serving four-year terms, while the House has 101 members with two-year terms. The state has a term limit for its legislature that restricts any one person to a total of twelve cumulative years of service between the two legislative chambers.
Oklahoma's judicial branch consists of the Oklahoma Supreme Court, the Oklahoma Court of Criminal Appeals, and 77 District Courts that each serves one county. The Oklahoma judiciary also contains two independent courts: a Court of Impeachment and the Oklahoma Court on the Judiciary. Oklahoma has two courts of last resort: the state Supreme Court hears civil cases, and the state Court of Criminal Appeals hears criminal cases (this split system exists only in Oklahoma and neighboring Texas). Judges of those two courts, as well as the Court of Civil Appeals are appointed by the Governor upon the recommendation of the state Judicial Nominating Commission, and are subject to a non-partisan retention vote on a six-year rotating schedule.
The five congressional districts in Oklahoma.
The executive branch consists of the Governor, their staff, and other elected officials. The principal head of government, the Governor is the chief executive of the Oklahoma executive branch, serving as the ex officio Commander-in-Chief of the Oklahoma National Guard when not called into Federal use and reserving the power to veto bills passed through the Legislature. The responsibilities of the Executive branch include submitting the budget, ensuring that state laws are enforced, and ensuring peace within the state is preserved.
Local government
The state is divided into 77 counties that govern locally, each headed by a three-member council of elected commissioners, a tax assessor, clerk, court clerk, treasurer, and sheriff. While each municipality operates as a separate and independent local government with executive, legislative and judicial power, county governments maintain jurisdiction over both incorporated cities and non-incorporated areas within their boundaries, but have only executive power, with no legislative or judicial power. Both county and municipal governments collect taxes, employ a separate police force, hold elections, and operate emergency response services within their jurisdiction. Other local government units include school districts, technology center districts, community college districts, rural fire departments, rural water districts, and other special use districts.
Thirty-nine Native American tribal governments are based in Oklahoma, each holding limited powers within designated areas. While Indian reservations typical in most of the United States are not present in Oklahoma, tribal governments hold land granted during the Indian Territory era, but with limited jurisdiction and no control over state governing bodies such as municipalities and counties. Tribal governments are recognized by the United States as quasi-sovereign entities with executive, judicial, and legislative powers over tribal members and functions, but are subject to the authority of the United States Congress to revoke or withhold certain powers. The tribal governments are required to submit a constitution and any subsequent amendments to the United States Congress for approval.
Oklahoma has 11 substate districts, including two large Councils of Governments: INCOG (Indian Nations Council of Governments) in Tulsa and ACOG (Association of Central Oklahoma Governments). A complete list is maintained by the Oklahoma Association of Regional Councils.
National politics
Presidential election results
Year   Republicans            Democrats
2016   65.32% (949,136)       28.93% (420,375)
2012   66.77% (891,325)       33.23% (443,547)
2008   65.65% (960,165)       34.35% (502,496)
2004   65.57% (959,792)       34.43% (503,966)
2000   60.31% (744,337)       38.43% (474,276)
1996   48.26% (582,315)       40.45% (488,105)
1992   42.65% (592,929)       34.02% (473,066)
1988   57.93% (678,367)       41.28% (483,423)
1984   68.61% (861,530)       30.67% (385,080)
1980   60.50% (695,570)       34.97% (402,026)
1976   49.96% (545,708)       48.75% (532,442)
1972   73.70% (759,025)       24.00% (247,147)
1968   47.68% (449,697)       31.99% (301,658)
1964   44.25% (412,665)       55.75% (519,834)
1960   59.02% (533,039)       40.98% (370,111)
Oklahoma has been politically conservative for much of its history, especially in recent decades. During the first half century of statehood, it was considered a Democratic stronghold, being carried by the Republican Party in only two presidential elections (1920 and 1928). During this time, it was also carried by every winning Democratic candidate up to Harry Truman. However, Oklahoma Democrats were generally considered to be more conservative than Democrats in other states.
After the 1948 election, the state turned firmly Republican. Although registered Republicans were a minority in the state until 2015, starting in 1952, Oklahoma has been carried by Republican presidential candidates in all but one election (1964). This is not to say that every election has been a landslide for Republicans: Jimmy Carter lost the state by less than 1.5% in 1976, while Michael Dukakis and Bill Clinton both won 40% or more of the state's popular vote in 1988 and 1996 respectively. Al Gore in 2000, though, was the last Democrat to win any counties in the state. Oklahoma was one of two states, the other being Utah, in which Barack Obama failed to carry a single county in 2012.
Generally, Republicans are strongest in the suburbs of Oklahoma City and Tulsa, as well as the Panhandle. Democrats are strongest in the eastern part of the state and Little Dixie, as well as the most heavily African American and inner parts of Oklahoma City and Tulsa. Native Americans make up 8.6% of the state's population, and most Native American precincts vote Democratic by margins exceeded only by those of African Americans.Dems woo Native American vote. Politico. Published 5/29/08.
Following the 2000 census, the Oklahoma delegation to the U.S. House of Representatives was reduced from six to five representatives, each serving one congressional district. For the 112th Congress (2011–2013), there were no changes in party strength, and the delegation included four Republicans and one Democrat. In the 112th Congress, Oklahoma's U.S. senators were Republicans Jim Inhofe and Tom Coburn, and its U.S. Representatives were John Sullivan (R-OK-1), Dan Boren (D-OK-2), Frank D. Lucas (R-OK-3), Tom Cole (R-OK-4), and James Lankford (R-OK-5).
In 2012, Dan Boren (D-OK-2) retired from Congress, leaving the seat open. The district, which covers most of Little Dixie, is the Democrats' strongest region of the state and had been represented by a Democrat for a dozen years. Nevertheless, Republican Markwayne Mullin won the election, making the state's congressional delegation entirely Republican.
Voter registration and party enrollment
Party          Number of voters   Percentage
Republican     983,932            45.61%
Democratic     856,717            39.71%
Unaffiliated   316,801            14.68%
Total          2,157,450          100%
Military
Cities and towns
Major cities
Most populous cities (2012 state estimate)
1. Oklahoma City   599,199
2. Tulsa           393,987
3. Norman          115,562
4. Broken Arrow    102,019
5. Lawton          98,376
6. Edmond          84,885
7. Moore           57,810
8. Midwest City    56,080
9. Enid            49,854
10. Stillwater     46,560
11. Muskogee       38,981
12. Bartlesville   36,245
Oklahoma City is the state's capital and largest city.
Oklahoma had 598 incorporated places in 2010, including four cities over 100,000 in population and 43 over 10,000. Two of the fifty-largest cities in the United States are located in Oklahoma, Oklahoma City and Tulsa, and 65 percent of Oklahomans live within their metropolitan areas, or spheres of economic and social influence defined by the United States Census Bureau as a metropolitan statistical area. Oklahoma City, the state's capital and largest city, had the largest metropolitan area in the state in 2010, with 1,252,987 people, and the metropolitan area of Tulsa had 937,478 residents. Between 2000 and 2010, the cities that led the state in population growth were Blanchard (172.4%), Elgin (78.2%), Jenks (77.0%), Piedmont (56.7%), Bixby (56.6%), and Owasso (56.3%).
Tulsa is the state's second-largest city by population and land area.
In descending order of population, Oklahoma's largest cities in 2010 were: Oklahoma City (579,999, +14.6%), Tulsa (391,906, −0.3%), Norman (110,925, +15.9%), Broken Arrow (98,850, +32.0%), Lawton (96,867, +4.4%), Edmond (81,405, +19.2%), Moore (55,081, +33.9%), Midwest City (54,371, +0.5%), Enid (49,379, +5.0%), and Stillwater (45,688, +17.0%). Of the state's ten largest cities, three are outside the metropolitan areas of Oklahoma City and Tulsa, and only Lawton has a metropolitan statistical area of its own as designated by the United States Census Bureau, though the metropolitan statistical area of Fort Smith, Arkansas extends into the state.
Under Oklahoma law, municipalities are divided into two categories: cities, defined as having more than 1,000 residents, and towns, with under 1,000 residents. Both have legislative, judicial, and public power within their boundaries, but cities can choose between a mayor-council, council-manager, or strong mayor form of government, while towns operate through an elected officer system.
State symbols
The American bison, Oklahoma's state mammal.
Oklahoma's quarter, released in 2008 as part of the state quarters series, depicts Oklahoma's state bird flying above its state wildflower.
State law codifies Oklahoma's state emblems and honorary positions; the Oklahoma Senate or House of Representatives may adopt resolutions designating others for special events and to benefit organizations. Currently the State Senate is waiting to vote on a change to the state's motto. The House passed HCR 1024, which would change the state motto from "Labor Omnia Vincit" to "Oklahoma-In God We Trust!" The author of the resolution stated that a constituent had researched the Oklahoma Constitution and found no "official" vote adopting "Labor Omnia Vincit", thereby opening the door for an entirely new motto.
State symbols:
State cartoon: Gusty, created by Don Woods, Oklahoma's first professional meteorologist, and used on KTUL-TV from 1954 to 1989.Oklahoma Statutes, §25-98.8
State bird: Scissor-tailed flycatcher
State tree: Eastern redbud
State mammal: American bison
State beverage: Milk
State fruit: Strawberry
State vegetable: Watermelon
State game bird: Wild turkey
State fish: Sand bass
State floral emblem: Mistletoe
State flower: Oklahoma rose
State wildflower: Indian blanket (Gaillardia pulchella)
State grass: Indiangrass (Sorghastrum nutans)
State fossil: Saurophaganax maximus
State rock: Rose rock
State insect: Honeybee
State soil: Port Silt Loam
State reptile: Collared lizard
State amphibian: Bullfrog
State meal: Fried okra, squash, cornbread, barbecue pork, biscuits, sausage and gravy, grits, corn, strawberries, chicken fried steak, pecan pie, and black-eyed peas.
State folk dance: Square dance
State percussive instrument: Drum
State waltz: "Oklahoma Wind"
State butterfly: Black swallowtail
State song: "Oklahoma!"
State language: English; Cherokee and other Native American languages
State gospel song: "Swing Low, Sweet Chariot"
State rock song: "Do You Realize??" by The Flaming Lips
See also
Index of Oklahoma-related articles
Outline of Oklahoma – organized list of topics about Oklahoma
LGBT rights in Oklahoma
Notes
A. Determined by a survey by the Pew Research Center in 2008. Percentages represent claimed religious beliefs, not necessarily membership in any particular congregation. Figures have a ±5 percent margin of error.
B. Buddhism, Islam, Hinduism, Judaism, and other faiths each account for less than 1 percent. Jehovah's Witnesses, Mormons, Orthodox Christians, and other Christian traditions each compose less than 0.5 percent. 1% refused to answer the Pew Research Center's survey.
References
Further reading
complete text online; 900 pages of scholarly articles
External links
General
Festival and Fairs
The Castle of Muskogee
Red Earth
Woody Guthrie Folk Festival
Government
Oklahoma's official web site
Oklahoma Legislative Branch
Oklahoma State Constitution
Oklahoma Department of Commerce
Oklahoma Department of Human Services
Oklahoma Department of Transportation
Tourism and recreation
Oklahoma Tourism Board
Official Oklahoma Tourism Info
Oklahoma State Parks
Oklahoma City Convention and Visitors Bureau
Norman Convention and Visitors Bureau
Tulsa Convention and Visitors Bureau
Culture and history
Oklahoma State Guide from the Library of Congress
Oklahoma Arts Council
Oklahoma Theatre Association
Oklahoma City History
Tulsa Historical Society
Oklahoma Oral History Research Program
Encyclopedia of Oklahoma History and Culture
Newspapers
The Oklahoman
The Tulsa World
Maps and demographics
Oklahoma QuickFacts Geographic and Demographic information
State Facts from USDA
State highway maps
Oklahoma Genealogical Society
Realtime USGS geographic, weather, and geologic information
Oklahoma Digital Maps: Digital Collections of Oklahoma and Indian Territory
Category:States of the United States
Category:States and territories established in 1907
Category:Southern United States
Category:Cherokee-speaking countries and territories
Category:1907 establishments in the United States
Eton College | Eton College is an English independent boarding school for boys in Eton, Berkshire, near Windsor. It educates more than 1,300 pupils, aged 13 to 18 years. It was founded in 1440 by King Henry VI as "The King's College of Our Lady of Eton besides Wyndsor",Nevill, p.3 ff. making it the 18th oldest Headmasters' and Headmistresses' Conference (HMC) school.
Eton is one of ten English HMC schools, commonly referred to as "public schools", regulated by the Public Schools Act of 1868. Following the public school tradition, Eton is a full boarding school, which means all pupils live at the school, and it is one of four such remaining single-sex boys' public schools in the United Kingdom (the others being Harrow, Radley, and Winchester) to continue this practice. Eton has educated 19 British prime ministers and generations of the aristocracy and has been referred to as the chief nurse of England's statesmen. Charging up to £12,354 per term (there are three terms per academic year) in 2016/17, Eton was noted as being the sixth most expensive HMC boarding school in the UK in 2013/14.
Background
Eton has a long list of distinguished former pupils. David Cameron was the 19th British prime minister to have attended the school, and recommended that Eton set up a school in the state sector to help drive up standards. Eton now co-sponsors a state sixth-form college in Newham, a deprived area of East London, called the London Academy of Excellence, opened in 2012, which is free of charge and aims to get all its students into higher education. In September 2014, Eton opened, and became the sole educational sponsor for, a new purpose-built co-educational state boarding and day school for around 500 pupils, Holyport College, in Maidenhead in Berkshire, with construction costing around £15 million. A fifth of places for day pupils will be set aside for children from poor homes, 21 boarding places will go to youngsters on the verge of being taken into care, and a further 28 boarders will be funded or part-funded through bursaries.
16th–17th century coat of arms produced from the masonry of the Eton College building.
About 20% of pupils at Eton receive financial support, through a range of bursaries and scholarships. The recent Head Master, Tony Little, said that Eton is developing plans to allow any boy to attend the school whatever his parents' income and, in 2011, said that around 250 boys received "significant" financial help from the school. In early 2014, this figure had risen to 263 pupils receiving the equivalent of around 60% of school fee assistance, whilst a further 63 received their education free of charge. Little said that, in the short term, he wanted to ensure that around 320 pupils per year receive bursaries, and that 70 were educated free of charge, with the intention that the number of pupils receiving financial assistance would continue to increase. These comparatively new developments will run alongside long-established courses that Eton has provided for pupils from state schools, most of them in the summer holidays (July and August). Launched in 1982, the Universities Summer School is an intensive residential course open to boys and girls throughout the UK who attend state schools, are at the end of their first year in the Sixth Form, and are about to begin their final year of schooling. The Brent-Eton Summer School, started in 1994, offers 40–50 young people from the London Borough of Brent, an area of inner-city deprivation, an intensive one-week residential course, free of charge, designed to help bridge the gap between GCSE and A-level. In 2008, Eton helped found the Eton, Slough, Windsor and Hounslow Independent and State School Partnership (ISSP), with six local state schools. The ISSP's aims are "to raise pupil achievement, improve pupil self-esteem, raise pupil aspirations and improve professional practice across the schools". Eton also runs a number of choral and English language courses during the summer months.
In the run-up to the London 2012 Summer Olympic Games and London 2012 Summer Paralympic Games, Eton's purpose-built Dorney Lake, a permanent, eight-lane, 2,200 metre course (about 1.4 miles) in a 400-acre park, officially known throughout the Games as Eton Dorney, provided training facilities for Olympic and Paralympic competitors, and during the Games, hosted the Olympic and Paralympic Rowing competitions as well as the Olympic Canoe Sprint event, attracting over 400,000 visitors during the Games period (around 30,000 per day), and voted the best 2012 Olympic venue by spectators. Access to the parkland around the Lake is provided to members of the public, free of charge, almost all the year round.
Eton has educated generations of British and foreign aristocracy and, for the first time, members of the Royal Family: Prince William and his brother Prince Harry both attended, in contrast to the Royal tradition of male education at naval college, at Gordonstoun, or by Palace tutors. Registration at birth has been consigned to the past, and by the mid-1990s Eton ranked among Britain's top three schools in getting its pupils into Oxford and Cambridge.
Eton has traditionally been referred to as "the chief nurse of England's statesmen", and has been described as the most famous public school in the world. Early in the 20th century, a historian of Eton wrote, "No other school can claim to have sent forth such a cohort of distinguished figures to make their mark on the world."Nevill, p.1.
The Good Schools Guide called the school "the number one boys' public school", adding that "The teaching and facilities are second to none." The school is a member of the G20 Schools Group.
Overview
The school is headed by a Provost and Fellows (Board of Governors), who appoint the Head Master. It contains 25 boys' houses, each headed by a housemaster, selected from the more senior members of the teaching staff, which numbers some 155. Almost all of the school's pupils go on to universities, about a third of them to Oxford or Cambridge.
The Head Master is a member of the Headmasters' and Headmistresses' Conference and the school is a member of the Eton Group of independent schools in the United Kingdom.
Eton today is a larger school than it has been for much of its history. In 1678, there were 207 boys. In the late 18th century, there were about 300, while today, the total has risen to over 1,300.Nevill, pp.15, 23.
History
Statue of the founder Henry VI in School Yard.
Eton College in 1690, in an engraving by David Loggan.
Eton College was founded by King Henry VI as a charity school to provide free education to 70 poor boys who would then go on to King's College, Cambridge, founded by the same King in 1441. Henry took Winchester College as his model, visiting on many occasions, borrowing its Statutes and removing its Headmaster and some of the Scholars to start his new school.
When Henry VI founded the school, he granted it a large number of endowments, including much valuable land. The feoffees appointed by the king to receive forfeited lands of the Alien Priories for the endowment of Eton were as follows:Watts, John, Henry VI and the Politics of Kingship, pp.169–70, quoting Calendar of Patent Rolls 1436–41 pp.454, 471
Archbishop Chichele
Bishop Stafford
Bishop Lowe
Bishop Ayscough
William de la Pole, 1st Marquess of Suffolk (1396–1450) (later Duke of Suffolk)
John Somerset (d. 1454), Chancellor of the Exchequer and the king's doctor
Thomas Beckington (c. 1390–1465), Archdeacon of Buckingham, the king's secretary and later Keeper of the Privy Seal
Richard Andrew (d. 1477), first Warden of All Souls College, Oxford, later the king's secretary
Adam Moleyns (d. 1450), Clerk of the Council
John Hampton (d. 1472) of Kniver, Staffordshire, an Esquire of the Body
James Fiennes, another member of the Royal Household
William Tresham, another member of the Royal Household
It was intended to have formidable buildings (Henry intended the nave of the College Chapel to be the longest in Europe) and several religious relics, supposedly including a part of the True Cross and the Crown of Thorns. He persuaded the then Pope, Eugene IV, to grant him a privilege unparalleled anywhere in England: the right to grant indulgences to penitents on the Feast of the Assumption. The school also came into possession of one of England's Apocalypse manuscripts.
Eton College Chapel.
However, when Henry was deposed by King Edward IV in 1461, the new King annulled all grants to the school and removed most of its assets and treasures to St George's Chapel, Windsor, on the other side of the River Thames. Legend has it that Edward's mistress, Jane Shore, intervened on the school's behalf. She was able to save a good part of the school,Nevill. p.5. although the royal bequest and the number of staff were much reduced.
Construction of the chapel, originally intended to be slightly over twice as long,Nevill, p.5. with eighteen—or possibly seventeen—bays (there are eight today) was stopped when Henry VI was deposed. Only the Quire of the intended building was completed. Eton's first Headmaster, William Waynflete, founder of Magdalen College, Oxford and previously Head Master of Winchester College,Nevill, p.4. built the ante-chapel that finishes the Chapel today. The important wall paintings in the Chapel and the brick north range of the present School Yard also date from the 1480s; the lower storeys of the cloister, including College Hall, had been built between 1441 and 1460.Nikolaus Pevsner, Buildings of England – Buckinghamshire
As the school suffered reduced income while still under construction, the completion and further development of the school has since depended to some extent on wealthy benefactors. Building resumed when Roger Lupton was Provost, around 1517. His name is borne by the big gate-house in the west range of the cloisters, fronting School Yard, perhaps the most famous image of the school. This range includes the important interiors of the Parlour, Election Hall, and Election Chamber, where most of the 18th century "leaving portraits" are kept.
"After Lupton's time nothing important was built until about 1670, when Provost Allestree gave a range to close the west side of School Yard between Lower School and Chapel".Nikolaus Pevsner, op. cit. p.119. This was remodelled later and completed 1694 by Matthew Bankes, Master Carpenter of the Royal Works. The last important addition to the central college buildings was the College Library, in the south range of the cloister, 1725–29, by Thomas Rowland. It has a very important collection of books and manuscripts.
In the 19th century, the architect John Shaw Jr (1803–1870) became surveyor to Eton. He designed New Buildings (1844–46),Nikolaus Pevsner, op. cit. Provost Francis Hodgson's addition to provide better accommodation for Collegers, who until then had mostly lived in Long Chamber, a long first floor room where conditions were inhumane.
Following complaints about the finances, buildings and management of Eton, the Clarendon Commission was set up in 1861 as a Royal Commission to investigate the state of nine schools in England, including Eton.J. Stuart Maclure, Educational Documents: England and Wales, 1816 to present day (Methuen Young Books, 1973, ISBN 978-0-416-78290-5), p.83
Questioned by the Commission in 1862, head master Edward Balston came under attack for his view that in the classroom little time could be spared for subjects other than classical studies.Report of Her Majesty's Commissioners appointed to inquire into the Revenues and Management of certain Colleges and Schools, and the Studies pursued and Instruction given therein; with an Appendix and Evidence, vol. III (evidence) (Her Majesty's Stationery Office, 1864), pp.114–116
An Eton College classroom in the 19th century.
The Duke of Wellington is often incorrectly quoted as saying that "The Battle of Waterloo was won on the playing-fields of Eton". Wellington was at Eton from 1781 to 1784 and was to send his sons there. According to Nevill (citing the historian Sir Edward Creasy), what Wellington said, while passing an Eton cricket match many decades later, was, "There grows the stuff that won Waterloo",Nevill, p.125. a remark Nevill construes as a reference to "the manly character induced by games and sport" among English youth generally, not a comment about Eton specifically. In 1889, Sir William Fraser conflated this uncorroborated remark with the one attributed to him by Count Charles de Montalembert: "C'est ici qu'a été gagné la bataille de Waterloo" ("It is here that the Battle of Waterloo was won").
As with other public schools,The Boy's Own Paper Nov 1915 to September 1919 a scheme was devised towards the end of the 19th century to familiarize privileged schoolboys with social conditions in deprived areas.Arthur C. Benson, Hugh, Memoirs of a Brother, chapter eight The project of establishing an 'Eton Mission' in the crowded district of Hackney Wick in east London was started at the beginning of 1880, and lasted until 1971 when it was decided that a more local project (at Dorney) would be more realistic. Over the years, however, much money was raised for the Eton Mission, a fine church by G. F. Bodley was erected, and many Etonians visited; the Mission stimulated, among other things, the Eton Manor Boys' Club, a notable rowing club which has survived the Mission itself, and the 59 Club for motorcyclists.
Students at Eton dressed for the Fourth of June celebrations in 1932.
The very large and ornate School Hall and School Library (by L. K. Hall) were erected in 1906–08 across the road from Upper School as the school's memorial to the Etonians who had died in the Boer War. Many tablets in the cloisters and chapel commemorate the large number of dead Etonians of the Great War. A bomb destroyed part of Upper School in World War II and blew out many windows in the Chapel. The college commissioned replacements by Evie Hone (1949–52) and by John Piper and Patrick Reyntiens (1959 onward).
Among headmasters of the 20th century were Cyril Alington, Robert Birley and Anthony Chenevix-Trench. M. R. James was a provost.
In 1959, the College constructed a nuclear bunker to house the College's Provost and Fellows. The facility is now used for storage.
In 2005, the School was one of fifty of the country's leading independent schools found to have breached the Competition Act (see below under "Controversy").
In 2011, plans to attack Eton were found on the body of a senior al-Qaeda leader shot dead in Somalia.
In the past, people at Eton have occasionally been guilty of antisemitism. For a time, new admissions were called 'Jews' by their fellow Collegers. In 1945, the school introduced a nationality statute conditioning entry on the applicant's father being British by birth. The statute was removed after the intervention of Prime Minister Harold Macmillan in the 1960s after it came to the attention of Oxford's Wykeham Professor of Logic, A. J. Ayer, himself Jewish and an Old Etonian, who "suspected a whiff of anti-semitism".
School terms
There are three academic terms (known as halves)McConnell, p.30 in the year:
The Michaelmas Half, from early September to mid December. New boys are now admitted only at the start of the Michaelmas Half, unless in exceptional circumstances.
The Lent Half, from mid-January to late March.
The Summer Half, from late April to late June or early July.
They are called halves because the school year was once split into two halves, between which the boys went home.
Boys' houses
King's Scholars
One boarding house, College, is reserved for 70 King's Scholars, who attend Eton on scholarships provided by the original foundation and awarded by examination each year; King's Scholars pay up to 90 percent of full fees, depending on their means. Of the other pupils, up to a third receive some kind of bursary or scholarship. The name "King's Scholars" is because the school was founded by King Henry VI in 1440. The original School consisted of the 70 Scholars (together with some Commensals) and the Scholars were educated and boarded at the foundation's expense.
King's Scholars are entitled to use the letters "KS" after their name and they can be identified by a black gown worn over the top of their tailcoats, giving them the nickname tugs (Latin: togati, wearers of gowns); and occasionally by a surplice in Chapel. The house is looked after by the Master in College.
Oppidans
As the School grew, more students were allowed to attend provided that they paid their own fees and lived in the town, outside the College's original buildings. These students became known as Oppidans, from the Latin word oppidum, meaning town.McConnell, pp.19–20 The Houses developed over time as a means of providing residence for the Oppidans in a more congenial manner, and during the 18th and 19th centuries were mostly run by women known as "dames". They typically contain about fifty boys. Although classes are organised on a School basis, most boys spend a large proportion of their time in their House. Each House has a formal name, mainly used for post and people outside the Eton community. It is generally known by the boys by the initials or surname of the House Master, the teacher who lives in the house and manages the pupils in it.
Not all boys who pass the College election examination choose to become King's Scholars. If they choose instead to belong to one of the 24 Oppidan Houses, they are known as Oppidan Scholars.McConnell, p.177 Oppidan scholarships may also be awarded for consistently performing with distinction in School and external examinations. To gain an Oppidan Scholarship, a boy must have either three distinctions in a row or four throughout his career. Within the school, an Oppidan Scholar is entitled to use the letters OS after his name.
The Oppidan Houses are named Godolphin House, Jourdelay's, (both built as such c. 1720),Pevsner op. cit. Hawtrey House, Durnford House, (the first two built as such by the Provost and Fellows, 1845, when the school was increasing in numbers and needed more centralised control), The Hopgarden, South Lawn, Waynflete, Evans's, Keate House, Warre House, Villiers House, Common Lane House, Penn House, Walpole House, Cotton Hall, Wotton House, Holland House, Mustians, Angelo's, Manor House, Farrer House, Baldwin's Bec, The Timbralls, and Westbury.
House structure
Front of Eton College.
In addition to the House Master, each house has a House Captain and a House Captain of Games. Some Houses have more than one. House prefects were once elected from the oldest year, but this no longer happens. The old term, Library, survives in the name of the room set aside for the oldest year's use, where boys have their own kitchen. Similarly, boys in their penultimate year have a room known as Debate.
There are entire house gatherings every evening, usually around 8:05–8:30 p.m. These are known as Prayers, due to their original nature. The House Master and boys have an opportunity to make announcements, and sometimes the boys provide light entertainment.
For much of Eton's history, junior boys had to act as "fags", or servants, to older boys. Their duties included cleaning, cooking, and running errands. A Library member was entitled to yell at any time and without notice, "Boy, Up!" or "Boy, Queue!", and all first-year boys had to come running. The last boy to arrive was given the task. These practices, known as fagging, were partially phased out of most houses in the 1970s. Captains of House and Games still sometimes give tasks to first-year boys, such as collecting the mail from School Office.
There are many inter-house competitions, mostly in sports.
Head Masters: 1442–present
Uniform
Prince Henry, Duke of Gloucester in Eton dress in 1914. Top hats and cropped jackets are no longer worn.
The School is known for its traditions, including a uniform of black tailcoat (or morning coat) and waistcoat, false-collar and pinstriped trousers. Most pupils wear a white tie that is effectively a strip of cloth folded over into a starched, detachable collar, but some senior boys are entitled to wear a white bow tie and winged collar ("Stick-Ups"). There are some variations in the school dress worn by boys in authority, see School Prefects and King's Scholars sections.
The long-standing claim that the present uniform was first worn as mourning for the death of George IIINevill, p.33. is unfounded. "Eton dress" has undergone significant changes since its standardisation in the 19th century. Originally (along with a top-hat and walking-cane), Etonian dress was reserved for formal occasions, but boys wear it today for classes, which are referred to as "divisions", or "divs". As stated above, King's Scholars wear a black gown over the top of their tailcoats, and occasionally a surplice in Chapel. Members of the teaching staff (known as Beaks) are required to wear a form of school dress when teaching.
From 1820Nevill, p.34. until 1967, boys under the height of 5'4" (1.63 m) were required to wear the 'Eton suit', which replaced the tailcoat with the cropped 'Eton jacket' (known colloquially as a "bum-freezer") and included an 'Eton collar', a large, stiff-starched, white collar. The Eton suit was copied by other schools and has remained in use in some, particularly choir schools.The Eton Suit at British Schoolboy Uniforms.
Tutors and teaching
The pupil to teacher ratio is 8:1, which is low by general school standards. Class sizes start at around twenty to twenty-five in the first year and are often below ten by the final year.
The original curriculum concentrated on prayers, Latin and devotion, and "as late as 1530 no Greek was taught".Nevill, p.6.
Later the emphasis was on classical studies, dominated by Latin and Ancient History, and, for boys with sufficient ability, Classical Greek. From the latter part of the 19th century this curriculum has changed and broadened:See e.g. B. J. W. Hill, A Portrait of Eton, 1958, and Tim Card, Eton Renewed: A History of Eton College from 1860 to the Present Day, 1994 for example, there are now more than 100 students of Chinese, which is a non-curriculum course. In the 1970s, there was just one school computer, in a small room attached to the science buildings. It used paper tape to store programs. Today, all boys must have laptop computers, and the school fibre-optic network connects all classrooms and all boys' bedrooms to the internet.
The primary responsibility for a boy's studies lies with his House Master, but he is assisted by an additional director of studies, known as a tutor.McConnell, pp.70–76 Classes, colloquially known as "divs" (divisions), are organised on a School basis; the classrooms are separate from the houses. New school buildings have appeared for teaching purposes every decade or so since New Schools, designed by Henry Woodyer and built 1861–63.The Buildings of England – Buckinghamshire, Nikolaus Pevsner, 1960 Despite the introduction of modern technology, the external appearance and locations of many of the classrooms have remained unchanged for a long time.
Every evening, about an hour and a quarter, known as Quiet Hour, is set aside, during which boys are expected to study or prepare work for their teachers if not otherwise engaged. Some Houses, at the discretion of the House Master, may observe a second Quiet Hour after prayers in the evening. This is less formal, with boys being allowed to visit each other's rooms to socialise if neither boy has work outstanding.
The Independent Schools Inspectorate's latest report says, "The achievement of pupils is exceptional. Progress and abilities of all pupils are at a high level. Pupils are highly successful in public examinations, and the record of entrance to universities with demanding entry requirements in the United Kingdom and overseas is strong."
Societies
At Eton, there are dozens of organisations known as 'societies', in many of which pupils come together to discuss a particular topic, presided over by a master, and often including a guest speaker. At any one time there are about fifty societies and clubs in existence, catering for a wide range of interests and largely run by boys.
Societies tend to come and go, depending on the special enthusiasms of the masters and boys in the school at the time, but some have been in existence many years. Those in existence at present include: Aeronautical, African, Alexander Cozens (Art), Amnesty, Archeological, Architectural, Astronomy, Banks (conservation), Caledonian, Cheese, Classical, Comedy, Cosmopolitan, Debating, Design, Entrepreneurship, Francophone, Geographical, Geopolitical, Henry Fielding, Hispanic, History, Keynes (economics), Law, Literary, Mathematical, Medical, Middle Eastern, Model United Nations, Modern Languages, Oriental, Orwell (left-wing), Simeon (Christian), Parry (music), Photographic, Political, Praed (poetry), Rock (music), Rous (equestrian), Salisbury (diplomatic), Savile (Rare Books and Manuscripts), Shelley, Scientific, Sports, Tech Club, Theatre, Wellington (military), Wine and Wotton’s (philosophy).
Among past guest speakers are Andrew Lloyd Webber, J. K. Rowling, Vivienne Westwood, Ian McKellen, Kevin Warwick, Boris Johnson, Rowan Atkinson, Ralph Fiennes, Terry Wogan, King Constantine II of Greece, Katie Price, Zoe Wanamaker, Boris Berezovsky and Kit Hesketh-Harvey.
Grants and prizes
Prizes are awarded on the results of trials (internal exams), GCSE and AS-levels. In addition, many subjects and activities have specially endowed prizes, several of which are awarded by visiting experts. The most prestigious is the Newcastle Scholarship, awarded on the strength of an examination, consisting of two papers in philosophical theology, moral theory and applied ethics. Also of note are the Gladstone Memorial Prize and the Coutts Prize, awarded on the results of trials and AS-level examinations in C; and the Huxley Prize, awarded for a project on a scientific subject. Other specialist prizes include the Newcastle Classical Prize; the Rosebery Exhibition for History; the Queen's Prizes for French and German; the Duke of Newcastle's Russian Prize; the Beddington Spanish Prize; the Strafford and Bowman Shakespeare Prizes; the Tomline and Russell Prizes in Mathematics; the Sotheby Prize for History of Art; the Waddington Prize for Theology and Philosophy; the Birley Prize for History; the Rorie Mackenzie Prize for Modern Languages; The Lower Boy Rosebery Prize and the Wilder Prize for Theology. Prizes are awarded too for excellence in such activities as painting, sculpture, ceramics, playing musical instruments, musical composition, declamation, silverwork, and design.
Various benefactions make it possible to give grants each year to boys who wish, for educational or cultural reasons, to work or travel abroad. These include the Busk Fund, which supports individual ventures that show particular initiative; the C. M. Wells Memorial Trust Fund, for the promotion of visits to classical lands; the Sadler Fund, which supports, among others, those intending to enter the Foreign Service; and the Marsden Fund, for travel in countries where the principal language is not English.
Incentives and sanctions
Eton has a well-established system for encouraging boys to produce high-standard work. An excellent piece of work may be rewarded with a "Show Up", to be shown to the boy's tutors as evidence of progress.McConnell, p.84 If, in any particular term, a pupil makes a particularly good effort in any subject, he may be "Commended for Good Effort" to the Head Master (or Lower Master).
If any boy produces an outstanding piece of work, it may be "Sent Up For Good", storing the effort in the College Archives for posterity. This award has been around since the 18th century. As Sending Up For Good is fairly infrequent, the process is rather mysterious to many of Eton's boys. First, the master wishing to Send Up For Good must gain the permission of the relevant Head of Department. Upon receiving his or her approval, the piece of work will be marked with Sent Up For Good and the student will receive a card to be signed by House Master, tutor and division master.
The opposite of a Show Up is a "Rip".McConnell, pp.82–83 This is for sub-standard work, which is sometimes torn at the top of the page/sheet and must be submitted to the boy's housemaster for signature. Boys who accumulate rips are liable to be given a "White Ticket", which must be signed by all his teachers and may be accompanied by other punishments, usually involving doing domestic chores or writing lines. In recent times, a milder form of the rip, 'sign for information', colloquially known as an "info", has been introduced, which must also be signed by the boy's housemaster and tutor.
Internal examinations are held at the end of the Michaelmas term for all pupils, and in the Summer term for those in the first, second and fourth years. These internal examinations are called "Trials".McConnell, pp.85–89
A boy who is late for any division or other appointment may be required to sign "Tardy Book", a register kept in the School Office, between 7:35am and 7:45am, every morning for the duration of his sentence (typically three days).McConnell, p.42 Tardy Book may also be issued for late work. For more serious misdeeds, a boy is summoned from his lessons to the Head Master, or Lower Master if the boy is in the lower two years, to talk personally about his misdeeds. This is known as the "Bill".McConnell, pp.83–84 The most serious misdeeds may result in expulsion, or rustication (suspension). Conversely, should a master be more than 15 minutes late for a class, traditionally the pupils might claim it as a "run" and absent themselves for the rest of its duration.
A traditional form of punishment took the form of being made to copy, by hand, Latin hexameters. Miscreants were frequently set 100 hexameters by Library members, or, for more serious offences, Georgics (more than 500 hexameters) by their House Masters or the Head Master. The giving of a Georgic is now extremely rare, but still occasionally occurs.
Corporal punishment
Eton used to be renowned for its use of corporal punishment, generally known as "beating". In the 16th century, Friday was set aside as "flogging day".Nevill, p.9.
Beating was phased out in the 1980s. The film director Sebastian Doggart claims to have been the last boy caned at Eton, in 1984. Until 1964, offending boys could be summoned to the Head Master or the Lower Master, as appropriate, to receive a birching on the bare posterior, in a semi-public ceremony held in the Library, where there was a special wooden birching block over which the offender was held.
John Keate, Head Master from 1809 to 1834, took over at a time when discipline was poor. Anthony Chenevix-Trench, Head Master from 1964 to 1970, abolished the birch and replaced it with caning, also applied to the bare buttocks, which he administered privately in his office.Onyeama, Dillibe (1972). Nigger at Eton. London: Leslie Frewin. p.100. ISBN 978-0-85632-003-3 Chenevix-Trench also abolished corporal punishment administered by senior boys. Previously, House Captains were permitted to cane miscreants over the seat of the trousers. This was a routine occurrence, carried out privately with the boy bending over with his head under the edge of a table. Less common but more severe were the canings administered by Pop (see Eton Society below) in the form of a "Pop-Tanning", in which a large number of hard strokes were inflicted by the President of Pop in the presence of all Pop members (or, in earlier times, each member of Pop took it in turns to inflict a stroke). The culprit was summoned to appear in a pair of old trousers, as the caning would cut the cloth to shreds. This was the most severe form of physical punishment at Eton.Cheetham, Anthony; Parfit, Derek (1964). Eton Microcosm. London: Sidgwick & Jackson.
Chenevix-Trench's successor from 1970, Michael McCrum, retained private corporal punishment by masters, but ended the practice of requiring boys to take their trousers and underwear down when bending over to be caned by the Head Master. By the mid-1970s, the only people allowed to administer caning were the Head Master and the Lower Master.
Prefects
In addition to the masters, the following three categories of senior boys are entitled to exercise School discipline. Boys who belong to any of these categories, in addition to a limited number of other boy office holders, are entitled to wear winged collars with bow ties.
Eton Society: commonly known as Pop.McConnell, pp.57–58 Over the years its power and privileges have grown. Pop is the oldest self-electing society at Eton. The rules were altered in 1987 and again in 2005 so that the new intake are not elected solely by the existing year and a committee of masters. Members of Pop are entitled to wear checked spongebag trousers, and a waistcoat designed as they wish. Historically, only members of Pop were entitled to furl their umbrellasNevill, p.35. or sit on the wall on the Long Walk, in front of the main building. However, this tradition has died out. They perform roles at many of the routine events of the school year, including School Plays, parents' evenings and other official events. Notable ex-members of Pop include Prince William, Duke of Cambridge, Eddie Redmayne and Boris Johnson.
Sixth Form Select: an academically selected prefectorial group consisting, by custom, of the 10 senior King's Scholars and the 10 senior Oppidan Scholars.McConnell, pp.57, 129–137 Members of Sixth Form Select are entitled to wear silver buttons on their waistcoats. They also act as Praepostors: they enter classrooms and ask, "Is (family name) in this division?" followed by "He's to see the Head Master at (time)" (the Bill, see above). Members of Sixth Form Select also perform "Speeches", a formal event held five times a year.
House Captains: The captains of each of the 25 boys' houses (see above) have disciplinary powers at school level.McConnell, pp.59–62 House Captains are entitled to wear a mottled-grey waistcoat.
It is possible to belong to the Eton Society and Sixth Form Select at the same time.
In the era of Queen Elizabeth I there were two praepostors in every form, who noted down the names of absentees. Until the late 19th century, there was a praepostor for every division of the school.
Sports
Sport is a feature of Eton; there is an extensive network of playing fields. Their names include Agar's Plough, Dutchman's, Upper Club, Lower Club, Sixpenny/The Field, and Mesopotamia (situated between two streams and often shortened to "Mespots").
During the Michaelmas Half, the sport curriculum is dominated by football (called Association) and rugby union, with some rowing for a smaller number of boys.
During the Lent Half it is dominated by the field game, a code of football, but this is unique to Eton and cannot be played against other schools. During this half, Collegers also play the Eton wall game; this game received national publicity when it was taken up by Prince Harry. Aided by AstroTurf facilities on Masters' field, field hockey has become a major Lent Half sport along with Rugby 7's. Elite rowers prepare for the Schools' Head of the River Race in late March.
During the Summer Half, sporting boys divide into dry bobs, who play cricket, tennis or athletics, and wet bobs, who row on the River Thames and the rowing lake in preparation for The National Schools Regatta and the Princess Elizabeth Challenge Cup at Henley Royal Regatta.
The rowing lake at Dorney was developed and is owned by the College. It was the venue for the rowing and canoeing events at the 2012 Summer Olympics and the World Junior Rowing Championships.
The annual cricket match against Harrow at Lord's Cricket Ground is the oldest fixture of the cricketing calendar, having been played there since 1805. The match has been a staple of the London society calendar since the 1800s; in 1914 its importance was such that over 38,000 people attended the two days' play, and in 1910 the match made national headlines. But interest has since declined considerably, and the match is now a one-day limited overs contest.
There is a running track at the Thames Valley Athletics Centre and an annual steeplechase.
Among the other sports played at Eton is Eton Fives.
In 1815, Eton College documented its football rules, the first football code to be written down anywhere in the world.
Music and drama
Music
The current "Precentor" (Head of Music) is Tim Johnson, and the School boasts eight organs and an entire building for music (performance spaces include the School Hall, the Farrer Theatre and two halls dedicated to music, the Parry Hall and the Concert Hall). Many instruments are taught, including obscure ones such as the didgeridoo. The School participates in many national competitions; many pupils are part of the National Youth Orchestra, and the School gives scholarships for dedicated and talented musicians. A former Precentor of the college, Ralph Allwood set up and organised Eton Choral Courses, which run at the School every summer.
In 2009, the School's musical protégés came to wider notice when featured in a TV documentary A Boy Called Alex. The film followed an Etonian, Alex Stobbs, a musician with cystic fibrosis, as he worked toward conducting the difficult Magnificat by Johann Sebastian Bach.
Drama
The exterior of Eton's main theatre, the Farrer.
Numerous plays are put on every year at Eton College; there is one main theatre, called the Farrer (seating 400) and 2 Studio theatres, called the Caccia Studio and Empty Space (seating 90 and 80 respectively). There are about 8 or 9 house productions each year, around 3 or 4 "independent" plays (not confined solely to one house, produced, directed and funded by Etonians) and three school plays, one specifically for boys in the first two years, and two open to all years. The School Plays have such good reputations that they are normally fully booked every night. Productions also take place in varying locations around the School, varying from the sports fields to more historic buildings such as Upper School and College Chapel.
In recent years, the School has put on a musical version of The Bacchae (October 2009) as well as productions of A Funny Thing Happened on the Way to the Forum (May 2010), The Cherry Orchard (February 2011), Joseph K (October 2011), Cyrano de Bergerac (May 2012), Macbeth (October 2012), London Assurance (May 2013), Jerusalem (October 2013), and A Midsummer Night's Dream (May 2014). Often girls from surrounding schools, such as St George's, Ascot, St Mary's School Ascot, Windsor Girls' School and Heathfield St Mary's School, are cast in female roles. Boys from the School are also responsible for the lighting, sound and stage management of all the productions, under the guidance of several professional full-time theatre staff.
Every year, Eton employs a 'Director-in-Residence', an external professional director on a one-year contract who normally directs one house play and the Lower Boy play (a school play open solely to the first two-year groups), as well as teaching Drama and Theatre Studies to most year groups.
The drama department is headed by Hailz-Emily Osborne and several other teachers; Simon Dormandy was on the staff until late 2012. The School offers GCSE drama as well as A-level "English with Theatre Studies."
Celebrations
Eton's best-known holiday takes place on the so-called "Fourth of June", a celebration of the birthday of King George III, Eton's greatest patron. This day is celebrated with the Procession of Boats, in which the top rowing crews from the top four years row past in vintage wooden rowing boats. Similar to the Queen's Official Birthday, the "Fourth of June" is no longer celebrated on 4 June, but on the Wednesday before the first weekend of June. Eton also observes St. Andrew's Day, on which the Eton wall game is played.
School magazines
The Junior Chronicle and The Chronicle are the official School magazines, the latter having been founded in 1863.Nevill, p.25. Both are edited by boys at the School. Although liable to censorship, the latter has a tradition of satirising and attacking School policies, as well as documenting recent events. The Oppidan, founded in 1828, was published once a Half; it covered all sport in Eton and some professional events as well, but no longer exists.
Other School magazines, including The Spectrum (the Academic Yearbook), The Arts Review, and The Eton Zeitgeist have been published, as well as publications produced by individual departments such as The Cave (Philosophy), Etonomics (Economics), Scientific Etonian (Science), Timeline (History), Praed (Poetry and Song), The Mayflower (English), and The Lexicon (Modern Languages).
Charitable status and fees
Until 18 December 2010, Eton College was an exempt charity under English law (Charities Act 1993, Schedule 2). Under the provisions of the Charities Act 2006, it is now an excepted charity, and fully registered with the Charities Commission,Eton College Registration with Charity Commission. 18 December 2010. Retrieved 21 December 2011. and is now one of the 100 largest charities in the UK.Ranked by total annual income averaged over three years. As a charity, it benefits from substantial tax breaks. It was calculated by the late David Jewell, former Master of Haileybury, that in 1992 such tax breaks saved the School about £1,945 per pupil per year, although he had no direct connection with the School. This subsidy has declined since the 2001 abolition by the Labour Government of state-funded scholarships (formerly known as "assisted places") to independent schools. However, no child attended Eton on this scheme, meaning that the actual level of state assistance to the School has always been lower. Eton's retiring Head Master, Tony Little, has claimed that the benefits that Eton provides to the local community free of charge (use of its facilities, etc.) have a higher value than the tax breaks it receives as a result of its charitable status. The fee for the academic year 2010–2011 was £29,862 (approximately US$48,600 or €35,100 as of March 2011), although the sum is considerably lower for those pupils on bursaries and scholarships.
Controversy
Lottery grant (1995)
In 1995 the National Lottery granted money for a £4.6m sports complex, to add to Eton's existing facilities of two swimming pools, 30 cricket squares, 24 football, rugby and hockey pitches and a gym. The College paid £200,000 and contributed 4.5 hectares of land in return for exclusive use of the facilities during the daytime only. The UK Sports Council defended the deal on the grounds that the whole community would benefit, while the bursar claimed that Windsor, Slough and Eton Athletic Club was "deprived" because local people (who were not pupils at the College) did not have a world-class running track and facilities to train with. Steve Osborn, director of the Safe Neighbourhoods Unit, described the decision as "staggering" given the background of a substantial reduction in youth services by councils across the country, a matter over which, however, neither the College nor the UK Sports Council had any control. The facility, which became the Thames Valley Athletics Centre, opened in April 1999.
Unfair dismissal of an art teacher (2004)
In October 2004, Sarah Forsyth claimed that she had been dismissed unfairly by Eton College and had been bullied by senior staff. She also claimed she was instructed to do some of Prince Harry's coursework to enable him to pass AS Art. As evidence, Forsyth provided secretly recorded conversations with both Prince Harry and her Head of Department, Ian Burke. An employment tribunal in July 2005 found that she had been unfairly dismissed and criticised Burke for bullying her and for repeatedly changing his story. It also criticised the school for failing to produce its capability procedures and criticised the Head Master for not reviewing the case independently.
It criticised Forsyth's decision to record a conversation with Harry as an abuse of teacher–student confidentiality and said "It is clear whichever version of the evidence is accepted that Mr Burke did ask the claimant to assist Prince Harry with text for his expressive art project. ... It is not part of this tribunal's function to determine whether or not it was legitimate." In response to the tribunal's ruling concerning the allegations about Prince Harry, the School issued a statement, saying Forsyth's claims "were dismissed for what they always have been—unfounded and irrelevant." A spokesperson from Clarence House said, "We are delighted that Harry has been totally cleared of cheating."
School fees cartel (2005)
In 2005, the Office of Fair Trading found fifty independent schools, including Eton, to have breached the Competition Act by "regularly and systematically" exchanging information about planned increases in school fees, which was collated and distributed among the schools by the bursar at Sevenoaks School. Following the investigation by the OFT, each school was required to pay around £70,000, totalling around £3.5 million, significantly less than the maximum possible fine. In addition, the schools together agreed to contribute another £3m to a new charitable educational fund. The incident raised concerns over whether the charitable status of independent schools such as Eton should be reconsidered, and perhaps revoked. However, Jean Scott, the head of the Independent Schools Council, said the schools were following a long-established procedure in sharing the information with each other because independent schools were previously exempt from anti-cartel rules applied to business and that they were unaware of the change to the law (on which they had not been consulted). She wrote to John Vickers, the OFT director-general, saying, "They are not a group of businessmen meeting behind closed doors to fix the price of their products to the disadvantage of the consumer. They are schools that have quite openly continued to follow a long-established practice because they were unaware that the law had changed."
Farming subsidies (2005)
A Freedom of Information request in 2005 revealed that Eton had received £2,652 in farming subsidies in 2004 under the Common Agricultural Policy. Asked to explain on what grounds it was eligible to receive farming subsidies, Eton admitted that it was 'a bit of a mystery'. Panorama revealed in March 2012 that farming subsidies were granted to Eton for 'environmental improvements', in effect 'being paid without having to do any farming at all'.
University admissions (2010, 2011)
Figures obtained by The Daily Telegraph revealed that, in 2010, 37 applicants from Eton were accepted by Oxford, whilst state schools had difficulty obtaining entry even for pupils with the country's most impressive exam results. According to The Economist, Oxford and Cambridge admit more Etonians each year than applicants from the whole country who qualify for free school meals. In April 2011 the Labour MP David Lammy described as unfair and 'indefensible' the fact that Oxford University had organised nine 'outreach events' at Eton in 2010, although he admitted that it had, in fact, held fewer such events for Eton than for another independent school, Wellington College.
Scholarship exam question about killing protesters (2011)
In May 2013, Eton College was criticised in several editorials for asking prospective 2011 scholarship students how, if they were Prime Minister, they might defend the use of lethal force by the Army after two days of violent protests in which several policemen had been killed.
Mistaken acceptance emails (2015)
In July 2015, Eton accidentally sent emails to 400 prospective students, offering them conditional entrance to the school in September 2017. The email was intended for nine students, but an IT glitch caused it to be sent to a further 400 families, who did not necessarily have a place. In response, the school issued the following statement: "This error was discovered within minutes and each family was immediately contacted to notify them that it should be disregarded and to apologise. We take this type of incident very seriously indeed and so a thorough investigation, overseen by the headmaster Tony Little and led by the tutor for admissions, is being carried out to find out exactly what went wrong and ensure it cannot happen again. Eton College offers its sincere apologies to those boys concerned and their families. We deeply regret the confusion and upset this must have caused."
Historical relations with other schools
Eton College has links with some private schools in India today, maintained from the days of the British Raj, such as The Doon School and Mayo College. Eton College is also a member of the G20 Schools Group, a collection of college preparatory boarding schools from around the world, including Turkey's Robert College, the United States' Phillips Academy and Phillips Exeter Academy, Australia's Scotch College, Melbourne Grammar School and Launceston Church Grammar School, Singapore's Raffles Institution, and Switzerland's International School of Geneva. Eton has recently fostered a relationship with the Roxbury Latin School, a traditional all-boys private school in Boston, US. Former Eton headmaster and provost Sir Eric Anderson shares a close friendship with Roxbury Latin Headmaster emeritus F. Washington Jarvis; Anderson has visited Roxbury Latin on numerous occasions, while Jarvis briefly taught theology at Eton after retiring from his headmaster post at Roxbury Latin. The headmasters' close friendship spawned the Hennessy Scholarship, an annual prize established in 2005 and awarded to a graduating RL senior for a year of study at Eton. Hennessy Scholars generally reside in Wotton house.
The Doon School, India
The Doon School, founded in 1935, was the first all-boys' public school in India modelled along the lines of Eton. The School's first headmaster was an Englishman, Arthur E. Foot, who had spent nine years as a science master at Eton College before joining Doon. This led to similar slang being introduced at Doon which is still in use today, such as trials, dame, fagging, schools (as opposed to 'periods') and tuck shop.
In Doon's early years, masters from Eton travelled to India to fill academic posts; Peter Lawrence was one of the first to go to Doon.Peter Lawrence (teacher) In February 2013, Eton's Head Master Tony Little visited the Doon School in India to hold talks with Peter McLaughlin, headmaster of Doon, on further collaboration between the two schools. Both schools participate in an exchange programme which sees boys from either school visiting the other for one academic term.
Although the School has often been described as the 'Eton of India' by media outlets such as the BBC, The Guardian, the Financial Times, The Economist, The Daily Telegraph and Forbes, it strongly eschews the label.
Holyport College
In September 2014 Eton College helped establish Holyport College, a state-funded free school with boarding facilities. The school is located in Holyport, Berkshire, and Eton College acts as its main educational sponsor.
Old Etonians
[Image: Old Etonian tie, black with turquoise stripes.]
Former pupils of Eton College are known as Old Etonians.
Eton has produced nineteen British Prime Ministers, including Sir Robert Walpole, William Pitt the Elder, the first Duke of Wellington, William Ewart Gladstone, the fifth Lord Rosebery, Arthur James Balfour, Anthony Eden, Harold Macmillan, Alec Douglas-Home, and David Cameron.
A rising number of pupils come to Eton from overseas, including members of royal families from Europe, Africa and Asia, some of whom have been sending their sons to Eton for generations. One of them, King Prajadhipok or Rama VII (1893–1941) of Siam, donated a garden to Eton.
The former Prime Minister of Thailand, Abhisit Vejjajiva, who governed from 2008 to 2011, was also educated at Eton. King Leopold III of Belgium was sent to Eton during the First World War.
Besides Prince William and Prince Harry, members of the extended British Royal Family who have attended Eton include Prince Richard, Duke of Gloucester and his son Alexander Windsor, Earl of Ulster; Prince Edward, Duke of Kent, his eldest son George Windsor, Earl of St Andrews and grandson Edward Windsor, Lord Downpatrick and his youngest son Lord Nicholas Windsor; Prince Michael of Kent and his son Lord Frederick Windsor; James Ogilvy, son of Princess Alexandra and the Right Honourable Angus Ogilvy, himself an Eton alumnus. Prince William of Gloucester (1942–1972) also attended Eton, as did George Lascelles, 7th Earl of Harewood, son of Princess Mary, Princess Royal.
The former Mayor of London, Boris Johnson, elected in 2008 and 2012, was educated at Eton, as was Justin Welby, the current Archbishop of Canterbury.
Old Etonians who have been writers include Henry Fielding, Thomas Gray, Horace Walpole, Aldous Huxley, Percy Bysshe Shelley, Robert Bridges, Gilbert Frankau, Eric Blair (aka George Orwell), Anthony Powell, Cyril Connolly and Ian Fleming. The mediaevalist and ghost story writer M. R. James was provost of Eton from 1918 until his death in 1936.
Other notable Old Etonians include scientists Robert Boyle, John Maynard Smith, J. B. S. Haldane, Stephen Wolfram and the 2012 Nobel Prize in Physiology or Medicine winner, John Gurdon; Beau Brummell; economists John Maynard Keynes and Richard Layard; Antarctic explorer Lawrence Oates; politician Alan Clark; entrepreneur, charity organiser and partner of Adele, Simon Konecki; cricket commentator Henry Blofeld; explorer Sir Ranulph Fiennes; adventurer Bear Grylls; composers Thomas Arne, George Butterworth, Roger Quilter, Frederick Septimus Kelly, Donald Tovey, Thomas Dunhill, Lord Berners, Victor Hely-Hutchinson, and Peter Warlock (Philip Heseltine); Hubert Parry, who wrote the song Jerusalem and the coronation anthem I was glad; and musicians Frank Turner and Humphrey Lyttelton.
Notable Old Etonians in the media include the former Political Editor of both ITN and The Times, Julian Haviland; the current BBC Deputy Political Editor, James Landale, and the BBC Science Editor, David Shukman; the current President of Conde Nast International and Managing Director of Conde Nast UK, Nicholas Coleridge; the former ITN newscaster and BBC Panorama presenter, Ludovic Kennedy; current BBC World News and BBC Rough Justice current affairs presenter David Jessel; former chief ITV and Channel 4 racing commentator John Oaksey; 1950s BBC newsreader and 1960s ITN newscaster Timothy Brinton; 1960s BBC newsreader Corbet Woodall; the former Editor of The Daily Telegraph, Charles Moore; the former Editor of The Spectator, Ferdinand Mount; and the current Editor of The Mail on Sunday, Geordie Greig.
Notable Old Etonian film and television actors include Eddie Redmayne, Damian Lewis, Christopher Cazenove, Dominic West, Jeremy Clyde, actor and comedian Michael Bentine, Sebastian Armesto, Julian Ovenden, Henry Faber, Jeremy Brett, Hugh Laurie, Tom Hiddleston, Ian Ogilvy, John Standing, Harry Hadden-Paton, Moray Watson, Jeremy Child, Harry Lloyd, Patrick Macnee and Nyasha Hatendi.
Actor Dominic West has been unenthusiastic about the career benefits of being an Old Etonian, saying it "is a stigma that is slightly above 'paedophile' in the media in a gallery of infamy", but asked whether he would consider sending his own children there, said "Yes, I would. It's an extraordinary place. ... It has the facilities and the excellence of teaching and it will find what you’re good at and nurture it",Farndale, Nigel (6 November 2011)."Dominic West: 'Old Etonian? That was a lifetime ago". The Daily Telegraph (London). Retrieved 5 March 2014. while the actor Tom Hiddleston says there are widespread misconceptions about Eton, and that "People think it's just full of braying toffs. ... It isn't true... It's actually one of the most broadminded places I've ever been. The reason it's a good school is that it encourages people to find the thing they love and to go for it. They champion the talent of the individual and that's what's special about it".
Thirty-seven Old Etonians have been awarded the Victoria Cross, the largest number among the alumni of any school.
Fictional Old Etonians
Many fictional characters have been described as Old Etonians. These include:
Bertie Wooster and Psmith in books by P. G. Wodehouse
Captain Hook, pirate leader in J. M. Barrie's play Peter Pan. Hook's dying words are "Floreat Etona". Barrie, J. M. Peter Pan; or, the Boy Who Wouldn't Grow Up (1904). Barrie later gave a talk at the school entitled "Captain Hook at Eton".
Lord Peter Wimsey, detective in books by Dorothy L. Sayers
George Hysteron-Proteron, fanatic game shot in books by J. K. Stanford.Stanford, J.K., The Twelfth and After (1964), p.113.
Lord Sebastian Flyte in Evelyn Waugh's novel Brideshead Revisited.Mason, Richard. 'Foreword to the Tenth Anniversary Edition' in The Drowning People (London: Hachette, 2011 edition)
MI6 Agent James Bond was expelled from Eton after some "trouble" with a maid (Bond author Ian Fleming also attended the school)
Francis Urquhart in the House of Cards trilogy by Michael Dobbs
Sir Arnold Robinson in the 1980s British sitcom Yes Minister and its sequel Yes, Prime Minister
Mark Darcy in the Bridget Jones films, who says when confronted with the possibility of having a baby that he will "visit him at Eton on St Andrew's Day. The Darcy men have been going to Eton for five generations."
Jack Gurney, the 14th Earl of Gurney, in Peter Barnes's play The Ruling Class and in its film adaptation, a role that garnered Peter O'Toole an Oscar nomination
Rudolph Rassendyll in The Prisoner of Zenda
Jonathan Higgins, played by John Hillerman, in the American crime drama series Magnum, P.I.
Captain Arthur Hastings in books by Agatha Christie
Rawdon Crawley, the husband of Becky Sharp, in William Makepeace Thackeray's novel Vanity Fair
Merlyn in some editions of T. H. White's The Once and Future King had attended Eton and received a medal for achievement in an unspecified subject (in others he had a medal for being the best scholar at Winchester)
Allan Quatermain in books by H. Rider Haggard
Lord Grantham, played by Hugh Bonneville, in the ITV series Downton Abbey
Inspector Thomas "Tommy" Lynley, 8th Earl of Asherton, in The Inspector Lynley Mysteries, a BBC series based on novels by Elizabeth George
Dr. Donald "Ducky" Mallard, played by David McCallum, chief medical examiner at NCIS in the American crime drama series NCIS
Will Bailey, played by Joshua Malina, in the American serial drama The West Wing
Maxwell Sheffield, played by Charles Shaughnessy, in the American sitcom The Nanny
Stephen Dene, from the book series "The Shades of London," written by Maureen Johnson
Partially filmed at Eton
The following films and television productions were partially filmed at Eton.
Henry VIII and His Six Wives (1972)
Aces High (1976)
Chariots of Fire (1981)
Young Sherlock Holmes (1985)
The Fourth Protocol (1987)
Inspector Morse: Absolute Conviction (1992 TV episode)
Lovejoy: "Friends in High Places" (1992 TV episode)
The Secret Garden (1993)
The Madness of King George (1994)
A Dance to the Music of Time (1997 TV mini-series)
Shakespeare in Love (1998)
Mansfield Park (1999)
A History of Britain (2000 TV series documentary)
My Week With Marilyn (2010)
Public Eye (1965–1975), a British TV series whose opening scenes show Alfred Burke as the dour private eye Frank Marker, whose office is in Eton.
See also
Eton and Castle – the electoral ward comprising the College
Eton Boating Song – the Eton school song
Eton College Collections
Eton Fives
Eton mess
Eton Montem
Eton Racing Boats
The Eton Rifles – a 1979 top ten hit for the Jam about class struggle, the lyrics of which reflect contemporary attitudes toward Eton as a wellspring of the establishment
List of the oldest schools in the world
List of Provosts of Eton College
List of head masters of Eton College
List of Victoria Crosses by school
Newcastle Scholarship
Notes
References
Nevill, Ralph (1911). Floreat Etona: Anecdotes and Memories of Eton College. London: Macmillan.
McConnell, J.D.R. (1967). Eton: How It Works. London: Faber and Faber.
Further reading
Card, Tim, Eton Established: A History From 1440 to 1860 (London, John Murray, 2001, ISBN 978-0-7195-6052-1)
Fraser, Nick, The Importance of Being Eton (London, Short Books, June 2006, ISBN 978-1-904977-53-7)
Osborne, Richard, Music and Musicians of Eton: 1440 to the present (London, Cygnet Press, 2012, ISBN 978-0-907435-19-8)
Parker, Eric, Playing Fields: School Days at Eton (London, Philip Allan, 1922)
External links
Independent Schools Inspectorate – Eton College
Mohamad at Eton – documentary about Palestinian refugee attending Eton
Category:1440 establishments in England
Category:Boarding schools in Berkshire
Category:Boys' schools in Berkshire
Category:Charities based in Berkshire
Category:Church of England independent schools in the Diocese of Oxford
Category:Educational institutions established in the 15th century
Category:Exempt charities
Category:Grade I listed buildings in Berkshire
Category:Grade I listed educational buildings
Category:Independent schools in Windsor and Maidenhead
Category:Member schools of the Headmasters' and Headmistresses' Conference
Category:Racquets venues
Category:Schools with a Royal Charter
Category:Eton, Berkshire
Category:Schools cricket | 53,228 | 2017-01 |
Alfred North Whitehead | Alfred North Whitehead (15 February 1861 – 30 December 1947) was an English mathematician and philosopher. He is best known as the defining figure of the philosophical school known as process philosophy,David Ray Griffin, Reenchantment Without Supernaturalism: A Process Philosophy of Religion (Ithaca: Cornell University Press, 2001), vii. which today has found application to a wide variety of disciplines, including ecology, theology, education, physics, biology, economics, and psychology, among other areas.
In his early career Whitehead wrote primarily on mathematics, logic, and physics. His most notable work in these fields is the three-volume Principia Mathematica (1910–13), which he wrote with former student Bertrand Russell. Principia Mathematica is considered one of the twentieth century's most important works in mathematical logic, and placed 23rd in a list of the top 100 English-language nonfiction books of the twentieth century by Modern Library."The Modern Library's Top 100 Nonfiction Books of the Century", last modified April 30, 1999, New York Times, accessed November 21, 2013, http://www.nytimes.com/library/books/042999best-nonfiction-list.html.
Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science, and finally to metaphysics. He developed a comprehensive metaphysical system which radically departed from most of western philosophy. Whitehead argued that reality consists of processes rather than material objects, and that processes are best defined by their relations with other processes, thus rejecting the theory that reality is fundamentally constructed by bits of matter that exist independently of one another.C. Robert Mesle, Process-Relational Philosophy: An Introduction to Alfred North Whitehead (West Conshohocken: Templeton Foundation Press, 2009), 9. Today Whitehead's philosophical works – particularly Process and Reality – are regarded as the foundational texts of process philosophy.
Whitehead's process philosophy argues that "there is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us." For this reason, one of the most promising applications of Whitehead's thought in recent years has been in the area of ecological civilization and environmental ethics pioneered by John B. Cobb, Jr.Philip Rose, On Whitehead (Belmont: Wadsworth, 2002), preface.
Life
[Image: Whewell's Court north range at Trinity College, Cambridge. Whitehead spent thirty years at Trinity, five as a student and twenty-five as a senior lecturer.]
Alfred North Whitehead was born in Ramsgate, Kent, England, in 1861.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 2. His father, Alfred Whitehead, was a minister and schoolmaster of Chatham House Academy, a school for boys established by Thomas Whitehead, Alfred North's grandfather.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 13. Whitehead himself recalled both of them as very successful schoolmasters, but regarded his grandfather as the more extraordinary man. Whitehead's mother was Maria Sarah Whitehead, formerly Maria Sarah Buckmaster. Whitehead was apparently not particularly close with his mother, as he never mentioned her in any of his writings, and there is evidence that Whitehead's wife, Evelyn, had a low opinion of her.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 27.
Whitehead was educated at Sherborne School, Dorset, then considered one of the best public schools in the country.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 44. His childhood was described as over-protected,Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 32–33. but when at school he excelled in sports and mathematicsVictor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 54–60. and was head prefect of his class.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 63.
In 1880, Whitehead began attending Trinity College, Cambridge, and studied mathematics.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 72. His academic advisor was Edward John Routh. He earned his BA from Trinity in 1884, and graduated as fourth wrangler.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 103. Elected a fellow of Trinity in 1884, Whitehead would teach and write on mathematics and physics at the college until 1910, spending the 1890s writing his Treatise on Universal Algebra (1898), and the 1900s collaborating with his former pupil, Bertrand Russell, on the first edition of Principia Mathematica.On Whitehead the mathematician and logician, see Ivor Grattan-Guinness, The Search for Mathematical Roots 1870–1940: Logics, Set Theories, and the Foundations of Mathematics from Cantor through Russell to Gödel (Princeton: Princeton University Press, 2000), and Quine's chapter in Paul Schilpp, The Philosophy of Alfred North Whitehead (New York: Tudor Publishing Company, 1941), 125–163. He was a Cambridge Apostle.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 112.
In 1890, Whitehead married Evelyn Wade, an Irish woman raised in France; they had a daughter, Jessie Whitehead, and two sons, Thomas North Whitehead and Eric Whitehead. Eric Whitehead died in action while serving in the Royal Flying Corps during World War I at the age of 19.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 34.
[Image: Bertrand Russell in 1907. Russell was a student of Whitehead's at Trinity College, and a longtime collaborator and friend.]
In 1910, Whitehead resigned his Senior Lectureship in Mathematics at Trinity and moved to London without first lining up another job.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 2. After being unemployed for a year, Whitehead accepted a position as Lecturer in Applied Mathematics and Mechanics at University College London, but was passed over a year later for the Goldsmid Chair of Applied Mathematics and Mechanics, a position for which he had hoped to be seriously considered.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 6-8.
In 1914 Whitehead accepted a position as Professor of Applied Mathematics at the newly chartered Imperial College London, where his old friend Andrew Forsyth had recently been appointed Chief Professor of Mathematics.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 26-27.
In 1918 Whitehead's academic responsibilities began to seriously expand as he accepted a number of high administrative positions within the University of London system, of which Imperial College London was a member at the time. He was elected Dean of the Faculty of Science at the University of London in late 1918 (a post he held for four years), a member of the University of London's Senate in 1919, and chairman of the Senate's Academic (leadership) Council in 1920, a post which he held until he departed for America in 1924. Whitehead was able to exert his newfound influence to successfully lobby for a new history of science department, help establish a Bachelor of Science degree (previously only Bachelor of Arts degrees had been offered), and make the school more accessible to less wealthy students.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 72-74.
Toward the end of his time in England, Whitehead turned his attention to philosophy. Though he had no advanced training in philosophy, his philosophical work soon became highly regarded. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 127. In 1924, Henry Osborn Taylor invited the 63-year-old Whitehead to join the faculty at Harvard University as a professor of philosophy.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 132.
During his time at Harvard, Whitehead produced his most important philosophical contributions. In 1925, he wrote Science and the Modern World, which was immediately hailed as an alternative to the Cartesian dualism that plagued popular science.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 3–4. A few years later, he published his seminal work Process and Reality, which has been compared (both in importance and difficulty) to Kant's Critique of Pure Reason.
The Whiteheads spent the rest of their lives in the United States. Alfred North retired from Harvard in 1937 and remained in Cambridge, Massachusetts until his death on 30 December 1947.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 262.
The two-volume biography of Whitehead by Victor LoweVictor Lowe, Alfred North Whitehead: The Man and his Work, Vols I & II (Baltimore: The Johns Hopkins Press, 1985 & 1990). is the most definitive presentation of his life. However, many details of Whitehead's life remain obscure because he left no Nachlass; his family carried out his instructions that all of his papers be destroyed after his death.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 7. Additionally, Whitehead was known for his "almost fanatical belief in the right to privacy", and for writing very few personal letters of the kind that would help to gain insight on his life. This led Lowe himself to remark on the first page of Whitehead's biography, "No professional biographer in his right mind would touch him."
The Whitehead Research Project of the Center for Process Studies is currently working on a critical edition of Whitehead's writings."Critical Edition of Whitehead", last modified July 16, 2013, Whitehead Research Project, accessed November 21, 2013, http://whiteheadresearch.org/research/cew/press-release.shtml.
Mathematics and logic
In addition to numerous articles on mathematics, Whitehead wrote three major books on the subject: A Treatise on Universal Algebra (1898), Principia Mathematica (co-written with Bertrand Russell and published in three volumes between 1910 and 1913), and An Introduction to Mathematics (1911). The first two books were aimed exclusively at professional mathematicians, while the last was intended for a larger audience, covering the history of mathematics and its philosophical foundations.Christoph Wassermann, "The Relevance of An Introduction to Mathematics to Whitehead's Philosophy", Process Studies 17 (1988): 181. Available online at http://www.religion-online.org/showarticle.asp?title=2753 Principia Mathematica in particular is regarded as one of the most important works in mathematical logic of the 20th century.
In addition to his legacy as a co-writer of Principia Mathematica, Whitehead's theory of "extensive abstraction" is considered foundational for the branch of ontology and computer science known as "mereotopology", a theory describing spatial relations among wholes, parts, parts of parts, and the boundaries between parts."Whitehead, Alfred North", last modified May 8, 2007, Gary L. Herstein, Internet Encyclopedia of Philosophy, accessed November 21, 2013, http://www.iep.utm.edu/whitehed/.
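The flavour of mereotopology can be conveyed with a small formal sketch. The axioms below are a standard modern "ground mereotopology" formulation in the tradition that Whitehead's work helped inspire (for example, Clarke's connection-based calculus), not Whitehead's own notation of extensive abstraction; the predicate names C and P are illustrative.

```latex
% A minimal ground-mereotopology sketch (a standard modern formulation, not Whitehead's own notation).
% C(x,y) reads "x is connected with y"; P(x,y) reads "x is a part of y".
\begin{align*}
& \forall x\; C(x,x) && \text{connection is reflexive}\\
& \forall x \forall y\; \bigl(C(x,y) \rightarrow C(y,x)\bigr) && \text{connection is symmetric}\\
& \forall x \forall y\; \bigl(P(x,y) \rightarrow \forall z\,(C(z,x) \rightarrow C(z,y))\bigr) && \text{whatever touches a part touches the whole}
\end{align*}
% In connection-based systems descended from this approach, parthood is often
% defined from connection rather than taken as primitive:
% P(x,y) := \forall z\,(C(z,x) \rightarrow C(z,y)).
```

Computational ontologies such as the Region Connection Calculus build richer relations (overlap, external connection, tangential parthood) on top of primitives of this kind.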
A Treatise on Universal Algebra
In A Treatise on Universal Algebra (1898) the term "universal algebra" had essentially the same meaning that it has today: the study of algebraic structures themselves, rather than examples ("models") of algebraic structures.George Grätzer, Universal Algebra (Princeton: Van Nostrand Co., Inc., 1968), v. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.Cf. Michel Weber and Will Desmond (eds.). Handbook of Whiteheadian Process Thought (Frankfurt / Lancaster, Ontos Verlag, Process Thought X1 & X2, 2008) and Ronny Desmet & Michel Weber (edited by), Whitehead. The Algebra of Metaphysics. Applied Process Metaphysics Summer Institute Memorandum, Louvain-la-Neuve, Les Éditions Chromatika, 2010.
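To illustrate the modern sense of the term with a standard textbook example (not one drawn from Whitehead's treatise), an algebraic structure can be presented as a signature of operation symbols together with equations, prior to choosing any particular model:

```latex
% A standard modern illustration (not from Whitehead's treatise): the equational presentation of groups.
% Signature: one binary operation, one unary inverse, and one constant (identity element).
\Sigma \;=\; \{\, \cdot\ (\text{binary}),\ {}^{-1}\ (\text{unary}),\ e\ (\text{constant}) \,\}
\begin{align*}
(x \cdot y) \cdot z &= x \cdot (y \cdot z)\\
x \cdot e \;=\; e \cdot x &= x\\
x \cdot x^{-1} \;=\; x^{-1} \cdot x &= e
\end{align*}
```

Universal algebra in this sense studies what holds for every algebra satisfying such a presentation, rather than the properties of any one model, such as the integers under addition.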
At the time, structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand the study of algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures."Alexander Macfarlane, "Review of A Treatise on Universal Algebra", Science 9 (1899): 325. In a separate review, G. B. Mathews wrote, "It possesses a unity of design which is really remarkable, considering the variety of its themes."G. B. Mathews, review of A Treatise on Universal Algebra, Nature 58 (1898): 385–387 (#1504).
A Treatise on Universal Algebra sought to examine Hermann Grassmann's theory of extension ("Ausdehnungslehre"), Boole's algebra of logic, and Hamilton's quaternions (this last number system was to be taken up in Volume II, which was never finished due to Whitehead's work on Principia Mathematica).Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 190–191. Whitehead wrote in the preface:
"Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular ... The idea of a generalized conception of space has been made prominent, in the belief that the properties and operations involved in it can be made to form a uniform method of interpretation of the various algebras."Alfred North Whitehead, A Treatise on Universal Algebra (Cambridge: Cambridge University Press, 1898), v. Available online at http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?handle=euclid.chmm/1263316510&view=body&content-type=pdf_1
Whitehead, however, had no results of a general nature. His hope of "form[ing] a uniform method of interpretation of the various algebras" presumably would have been developed in Volume II, had Whitehead completed it. Further work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras.Barron Brainerd, "Review of Universal Algebra by P. M. Cohn", American Mathematical Monthly, 74 (1967): 878–880.
Principia Mathematica
[Image: The title page of the shortened version of the Principia Mathematica to *56.]
Principia Mathematica (1910–1913) is Whitehead's most famous mathematical work. Co-written with former student Bertrand Russell, Principia Mathematica is considered one of the twentieth century's most important works in mathematics, and placed 23rd in a list of the top 100 English-language nonfiction books of the twentieth century by Modern Library.
Principia Mathematica's purpose was to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. Whitehead and Russell were working on such a foundational level of mathematics and logic that it took them until page 86 of Volume II to prove that 1+1=2, a proof humorously accompanied by the comment, "The above proposition is occasionally useful."Alfred North Whitehead, Principia Mathematica Volume 2, Second Edition (Cambridge: Cambridge University Press, 1950), 83.
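For comparison, the same fact has a short worked derivation in ordinary Peano arithmetic, sketched below. This is not Whitehead and Russell's proof, which instead constructs numbers, addition, and equality from purely logical notions of classes, but it shows the kind of step-by-step deduction from definitions that Principia Mathematica carries out at far greater length.

```latex
% A sketch in Peano arithmetic (not Whitehead and Russell's derivation in Principia Mathematica).
% Definitions: 1 := S(0), 2 := S(S(0)); addition defined by a + 0 = a and a + S(b) = S(a + b).
1 + 1 \;=\; 1 + S(0) \;=\; S(1 + 0) \;=\; S(1) \;=\; S(S(0)) \;=\; 2
```

The length of the Principia proof reflects the fact that its authors first had to build arithmetic itself from logical foundations rather than assuming such axioms as given.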
Whitehead and Russell had thought originally that Principia Mathematica would take a year to complete; it ended up taking them ten years.Hal Hellman, Great Feuds in Mathematics: Ten of the Liveliest Disputes Ever (Hoboken: John Wiley & Sons, 2006). Available online at https://books.google.com/books?id=ft8bEGf_OOcC&pg=PT12&lpg=PT12#v=onepage&q&f=false To add insult to injury, when it came time for publication, the three-volume work was so long (more than 2,000 pages) and its audience so narrow (professional mathematicians) that it was initially published at a loss of 600 pounds, 300 of which was paid by Cambridge University Press, 200 by the Royal Society of London, and 50 apiece by Whitehead and Russell themselves. Despite the initial loss, today there is likely no major academic library in the world which does not hold a copy of Principia Mathematica."Principia Mathematica", last modified December 3, 2013, Andrew David Irvine, ed. Edward N. Zalta, The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), accessed December 5, 2013, http://plato.stanford.edu/entries/principia-mathematica/#HOPM.
The ultimate substantive legacy of Principia Mathematica is mixed. It is generally accepted that Kurt Gödel's incompleteness theorem of 1931 definitively demonstrated that for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them, and hence that Principia Mathematica could never achieve its aims.Stephen Cole Kleene, Mathematical Logic (New York: Wiley, 1967), 250. However, Gödel could not have come to this conclusion without Whitehead and Russell's book. In this way, Principia Mathematica's legacy might be described as its key role in disproving the possibility of achieving its own stated goals."'Principia Mathematica' Celebrates 100 Years", last modified December 22, 2010, NPR, accessed November 21, 2013, http://www.npr.org/2010/12/22/132265870/Principia-Mathematica-Celebrates-100-Years But beyond this somewhat ironic legacy, the book popularized modern mathematical logic and drew important connections between logic, epistemology, and metaphysics."Principia Mathematica", last modified December 3, 2013, Andrew David Irvine, ed. Edward N. Zalta, The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), accessed December 5, 2013, http://plato.stanford.edu/entries/principia-mathematica/#SOPM.
An Introduction to Mathematics
Unlike Whitehead's previous two books on mathematics, An Introduction to Mathematics (1911) was not aimed exclusively at professional mathematicians, but was intended for a larger audience. The book covered the nature of mathematics, its unity and internal structure, and its applicability to nature. Whitehead wrote in the opening chapter:
"The object of the following Chapters is not to teach mathematics, but to enable students from the very beginning of their course to know what the science is about, and why it is necessarily the foundation of exact thought as applied to natural phenomena."Alfred North Whitehead, An Introduction to Mathematics, (New York: Henry Holt and Company, 1911), 8.
The book can be seen as an attempt to understand the growth in unity and interconnection of mathematics as a whole, as well as an examination of the mutual influence of mathematics and philosophy, language, and physics.Christoph Wassermann, "The Relevance of An Introduction to Mathematics to Whitehead's Philosophy", Process Studies 17 (1988): 181–182. Available online at http://www.religion-online.org/showarticle.asp?title=2753 Although the book is little-read, in some ways it prefigures certain points of Whitehead's later work in philosophy and metaphysics.Christoph Wassermann, "The Relevance of An Introduction to Mathematics to Whitehead's Philosophy", Process Studies 17 (1988): 182. Available online at http://www.religion-online.org/showarticle.asp?title=2753
Views on education
Whitehead showed a deep concern for educational reform at all levels. In addition to his numerous individually written works on the subject, Whitehead was appointed by Britain's Prime Minister David Lloyd George as part of a 20-person committee to investigate the educational systems and practices of the UK in 1921 and recommend reform.Committee To Inquire Into the Position of Classics in the Educational System of the United Kingdom, Report of the Committee Appointed by the Prime Minister to Inquire into the Position of Classics in the Educational System of the United Kingdom, (London: His Majesty's Stationery Office, 1921), 1, 282. Available online at https://archive.org/details/reportofcommitt00grea.
Whitehead's most complete work on education is the 1929 book The Aims of Education and Other Essays, which collected numerous essays and addresses by Whitehead on the subject published between 1912 and 1927. The essay from which Aims of Education derived its name was delivered as an address in 1916 when Whitehead was president of the London Branch of the Mathematical Association. In it, he cautioned against the teaching of what he called "inert ideas" – ideas that are disconnected scraps of information, with no application to real life or culture. He opined that "education with inert ideas is not only useless: it is, above all things, harmful."Alfred North Whitehead, The Aims of Education and Other Essays (New York: The Free Press, 1967), 1–2.
Rather than teach small parts of a large number of subjects, Whitehead advocated teaching a relatively few important concepts that the student could organically link to many different areas of knowledge, discovering their application in actual life.Alfred North Whitehead, The Aims of Education and Other Essays (New York: The Free Press, 1967), 2. For Whitehead, education should be the exact opposite of the multidisciplinary, value-free school model – it should be transdisciplinary, and laden with values and general principles that provide students with a bedrock of wisdom and help them to make connections between areas of knowledge that are usually regarded as separate.
In order to make this sort of teaching a reality, however, Whitehead pointed to the need to minimize the importance of (or radically alter) standard examinations for school entrance. Whitehead writes:
"Every school is bound on pain of extinction to train its boys for a small set of definite examinations. No headmaster has a free hand to develop his general education or his specialist studies in accordance with the opportunities of his school, which are created by its staff, its environment, its class of boys, and its endowments. I suggest that no system of external tests which aims primarily at examining individual scholars can result in anything but educational waste."Alfred North Whitehead, The Aims of Education and Other Essays (New York: The Free Press, 1967), 13.
Whitehead argued that curriculum should be developed specifically for its own students by its own staff, or else risk total stagnation, interrupted only by occasional movements from one group of inert ideas to another.
Above all else in his educational writings, Whitehead emphasized the importance of imagination and the free play of ideas. In his essay "Universities and Their Function", Whitehead writes provocatively on imagination:
"Imagination is not to be divorced from the facts: it is a way of illuminating the facts. It works by eliciting the general principles which apply to the facts, as they exist, and then by an intellectual survey of alternative possibilities which are consistent with those principles. It enables men to construct an intellectual vision of a new world."Alfred North Whitehead, The Aims of Education and Other Essays (New York: The Free Press, 1967), 93.
Whitehead's philosophy of education might adequately be summarized in his statement that "knowledge does not keep any better than fish."Alfred North Whitehead, The Aims of Education and Other Essays (New York: The Free Press, 1967), 98. In other words, bits of disconnected knowledge are meaningless; all knowledge must find some imaginative application to the students' own lives, or else it becomes so much useless trivia, and the students themselves become good at parroting facts but not thinking for themselves.
Philosophy and metaphysics
[Image: Richard Rummell's 1906 watercolor landscape view of Harvard University, facing northeast."An Iconic College View: Harvard University, circa 1900. Richard Rummell (1848–1924)", last modified July 6, 2011, Graham Arader, accessed December 5, 2013, http://grahamarader.blogspot.com/2011/07/iconic-college-view-harvard-university.html. Whitehead taught at Harvard from 1924 to 1937.]
Whitehead did not begin his career as a philosopher. In fact, he never had any formal training in philosophy beyond his undergraduate education. Early in his life he showed great interest in and respect for philosophy and metaphysics, but it is evident that he considered himself a rank amateur. In one letter to his friend and former student Bertrand Russell, after discussing whether science aimed to be explanatory or merely descriptive, he wrote: "This further question lands us in the ocean of metaphysic, onto which my profound ignorance of that science forbids me to enter."Alfred North Whitehead to Bertrand Russell, February 13, 1895, Bertrand Russell Archives, Archives and Research Collections, McMaster Library, McMaster University, Hamilton, Ontario, Canada. Ironically, in later life Whitehead would become one of the 20th century's foremost metaphysicians.
However, interest in metaphysics – the philosophical investigation of the nature of the universe and existence – had become unfashionable by the time Whitehead began writing in earnest about it in the 1920s. The ever-more impressive accomplishments of empirical science had led to a general consensus in academia that the development of comprehensive metaphysical systems was a waste of time because they were not subject to empirical testing.A. J. Ayer, Language, Truth and Logic, (New York: Penguin, 1971), 22.
Whitehead was unimpressed by this objection. In the notes of one of his students for a 1927 class, Whitehead was quoted as saying: "Every scientific man in order to preserve his reputation has to say he dislikes metaphysics. What he means is he dislikes having his metaphysics criticized."George P. Conger, "Whitehead lecture notes: Seminary in Logic: Logical and Metaphysical Problems", 1927, Manuscripts and Archives, Yale University Library, Yale University, New Haven, Connecticut. In Whitehead's view, scientists and philosophers make metaphysical assumptions about how the universe works all the time, but such assumptions are not easily seen precisely because they remain unexamined and unquestioned. While Whitehead acknowledged that "philosophers can never hope finally to formulate these metaphysical first principles,"Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 4. he argued that people need to continually re-imagine their basic assumptions about how the universe works if philosophy and science are to make any real progress, even if that progress remains permanently asymptotic. For this reason Whitehead regarded metaphysical investigations as essential to both good science and good philosophy.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 11.
Perhaps foremost among what Whitehead considered faulty metaphysical assumptions was the Cartesian idea that reality is fundamentally constructed of bits of matter that exist totally independently of one another, which he rejected in favor of an event-based or "process" ontology in which events are primary and are fundamentally interrelated and dependent on one another.Alfred North Whitehead, Science and the Modern World (New York: The Free Press, 1967), 17. He also argued that the most basic elements of reality can all be regarded as experiential, indeed that everything is constituted by its experience. He used the term "experience" very broadly, so that even inanimate processes such as electron collisions are said to manifest some degree of experience. In this, he went against Descartes' separation of two different kinds of real existence, either exclusively material or else exclusively mental.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 18. Whitehead referred to his metaphysical system as "philosophy of organism", but it would become known more widely as "process philosophy."
Whitehead's philosophy was highly original, and soon garnered interest in philosophical circles. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923, and Henri Bergson was quoted as saying that Whitehead was "the best philosopher writing in English."Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 127, 133. So impressive and different was Whitehead's philosophy that in 1924 he was invited to join the faculty at Harvard University as a professor of philosophy at 63 years of age.
[Image: Eckhart Hall at the University of Chicago. Beginning with the arrival of Henry Nelson Wieman in 1927, Chicago's Divinity School became closely associated with Whitehead's thought for about thirty years.Gary Dorrien, The Making of American Liberal Theology: Crisis, Irony, and Postmodernity, 1950–2005 (Louisville: Westminster John Knox Press, 2006), 123–124.]
This is not to say that Whitehead's thought was widely accepted or even well-understood. His philosophical work is generally considered to be among the most difficult to understand in all of the western canon. Even professional philosophers struggled to follow Whitehead's writings. One famous story illustrating the difficulty of Whitehead's philosophy concerns the delivery of his Gifford lectures in 1927–28 – following Arthur Eddington's lectures of the previous year – which Whitehead would later publish as Process and Reality:
Eddington was a marvellous popular lecturer who had enthralled an audience of 600 for his entire course. The same audience turned up to Whitehead's first lecture but it was completely unintelligible, not merely to the world at large but to the elect. My father remarked to me afterwards that if he had not known Whitehead well he would have suspected that it was an imposter making it up as he went along ... The audience at subsequent lectures was only about half a dozen in all.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol II (Baltimore: The Johns Hopkins Press, 1990), 250.
Indeed, it may not be inappropriate to speculate that some fair portion of the respect generally shown to Whitehead by his philosophical peers at the time arose from their sheer bafflement. Distinguished University of Chicago Divinity School theologian Shailer Mathews once remarked of Whitehead's 1926 book Religion in the Making: "It is infuriating, and I must say embarrassing as well, to read page after page of relatively familiar words without understanding a single sentence."Gary Dorrien, "The Lure and Necessity of Process Theology", CrossCurrents 58 (2008): 320.
However, Mathews' frustration with Whitehead's books did not negatively affect his interest. In fact, there were numerous philosophers and theologians at Chicago's Divinity School who perceived the importance of what Whitehead was doing without fully grasping all of the details and implications. In 1927 they invited one of America's only Whitehead experts – Henry Nelson Wieman – to Chicago to give a lecture explaining Whitehead's thought. Wieman's lecture was so brilliant that he was promptly hired onto the faculty and taught there for twenty years, and for at least thirty years afterward Chicago's Divinity School was closely associated with Whitehead's thought.
Shortly after Whitehead's book Process and Reality appeared in 1929, Wieman famously wrote in his 1930 review:
"Not many people will read Whitehead's recent book in this generation; not many will read it in any generation. But its influence will radiate through concentric circles of popularization until the common man will think and work in the light of it, not knowing whence the light came. After a few decades of discussion and analysis one will be able to understand it more readily than can now be done."Henry Nelson Wieman, "A Philosophy of Religion", The Journal of Religion 10 (1930): 137.
Wieman's words proved prophetic. Though Process and Reality has been called "arguably the most impressive single metaphysical text of the twentieth century,"Peter Simons, "Metaphysical systematics: A lesson from Whitehead", Erkenntnis 48 (1998), 378. it has been little-read and little-understood, partly because it demands – as Isabelle Stengers puts it – "that its readers accept the adventure of the questions that will separate them from every consensus."Isabelle Stengers, Thinking with Whitehead: A Free and Wild Creation of Concepts, trans. Michael Chase (Cambridge, Massachusetts: Harvard University Press, 2011), 6. Whitehead questioned western philosophy's most dearly held assumptions about how the universe works, but in doing so he managed to anticipate a number of 21st century scientific and philosophical problems and provide novel solutions.David Ray Griffin, Whitehead's Radically Different Postmodern Philosophy: An Argument for Its Contemporary Relevance (Albany: State University of New York Press, 2007), viii–ix.
Whitehead's conception of reality
Whitehead was convinced that the scientific notion of matter was misleading as a way of describing the ultimate nature of things. In his 1925 book Science and the Modern World, he wrote that
"There persists ... [a] fixed scientific cosmology which presupposes the ultimate fact of an irreducible brute matter, or material, spread through space in a flux of configurations. In itself such a material is senseless, valueless, purposeless. It just does what it does do, following a fixed routine imposed by external relations which do not spring from the nature of its being. It is this assumption that I call 'scientific materialism.' Also it is an assumption which I shall challenge as being entirely unsuited to the scientific situation at which we have now arrived."
In Whitehead's view, there are a number of problems with this notion of "irreducible brute matter." First, it obscures and minimizes the importance of change. By thinking of any material thing (like a rock, or a person) as being fundamentally the same thing throughout time, with any changes to it being secondary to its "nature", scientific materialism hides the fact that nothing ever stays the same. For Whitehead, change is fundamental and inescapable; he emphasizes that "all things flow."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 208.
In Whitehead's view, then, concepts such as "quality", "matter", and "form" are problematic. These "classical" concepts fail to adequately account for change, and overlook the active and experiential nature of the most basic elements of the world. They are useful abstractions, but are not the world's basic building blocks.Alfred North Whitehead, Science and the Modern World (New York: The Free Press, 1967), 52–55. What is ordinarily conceived of as a single person, for instance, is philosophically described as a continuum of overlapping events.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 34–35. After all, people change all the time, if only because they have aged by another second and had some further experience. These occasions of experience are logically distinct, but are progressively connected in what Whitehead calls a "society" of events.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 34. By assuming that enduring objects are the most real and fundamental things in the universe, materialists have mistaken the abstract for the concrete (what Whitehead calls the "fallacy of misplaced concreteness").Alfred North Whitehead, Science and the Modern World (New York: The Free Press, 1967), 54–55.
To put it another way, a thing or person is often seen as having a "defining essence" or a "core identity" that is unchanging, and describes what the thing or person really is. In this way of thinking, things and people are seen as fundamentally the same through time, with any changes being qualitative and secondary to their core identity (e.g. "Mark's hair has turned gray as he has gotten older, but he is still the same person"). But in Whitehead's cosmology, the only fundamentally existent things are discrete "occasions of experience" that overlap one another in time and space, and jointly make up the enduring person or thing. On the other hand, what ordinary thinking often regards as "the essence of a thing" or "the identity/core of a person" is an abstract generalization of what is regarded as that person or thing's most important or salient features across time. Identities do not define people, people define identities. Everything changes from moment to moment, and to think of anything as having an "enduring essence" misses the fact that "all things flow", though it is often a useful way of speaking.
Whitehead pointed to the limitations of language as one of the main culprits in maintaining a materialistic way of thinking, and acknowledged that it may be difficult to ever wholly move past such ideas in everyday speech.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 183. After all, each moment of each person's life can hardly be given a different proper name, and it is easy and convenient to think of people and objects as remaining fundamentally the same things, rather than constantly keeping in mind that each thing is a different thing from what it was a moment ago. Yet the limitations of everyday living and everyday speech should not prevent people from realizing that "material substances" or "essences" are a convenient generalized description of a continuum of particular, concrete processes. No one questions that a ten-year-old person is quite different by the time he or she turns thirty years old, and in many ways is not the same person at all; Whitehead points out that it is not philosophically or ontologically sound to think that a person is the same from one second to the next.
John Locke was one of Whitehead's primary influences. In the preface to Process and Reality, Whitehead wrote: "The writer who most fully anticipated the main positions of the philosophy of organism is John Locke in his Essay."
A second problem with materialism is that it obscures the importance of relations. It sees every object as distinct and discrete from all other objects. Each object is simply an inert clump of matter that is only externally related to other things. The idea of matter as primary makes people think of objects as being fundamentally separate in time and space, and not necessarily related to anything. But in Whitehead's view, relations take a primary role, perhaps even more important than the relata themselves.Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 38–39. A student taking notes in one of Whitehead's fall 1924 classes wrote that:
"Reality applies to connections, and only relatively to the things connected. (A) is real for (B), and (B) is real for (A), but [they are] not absolutely real independent of each other."Louise R. Heath, "Notes on Whitehead's Philosophy 3b: Philosophical Presuppositions of Science", September 27, 1924, Whitehead Research Project, Center for Process Studies, Claremont, California.
In fact, Whitehead describes any entity as in some sense nothing more and nothing less than the sum of its relations to other entities – its synthesis of and reaction to the world around it.Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 26. A real thing is just that which forces the rest of the universe to in some way conform to it; that is to say, if theoretically a thing made strictly no difference to any other entity (i.e. it was not related to any other entity), it could not be said to really exist.Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 39. Relations are not secondary to what a thing is, they are what the thing is.
It must be emphasized, however, that an entity is not merely a sum of its relations, but also a valuation of them and reaction to them.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 19. For Whitehead, creativity is the absolute principle of existence, and every entity (whether it is a human being, a tree, or an electron) has some degree of novelty in how it responds to other entities, and is not fully determined by causal or mechanistic laws.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 21. Of course, most entities do not have consciousness.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 23. Just as a human being's actions cannot always be predicted, the same can be said of where a tree's roots will grow, or how an electron will move, or whether it will rain tomorrow. Moreover, the inability to predict an electron's movement (for instance) is not due to faulty understanding or inadequate technology; rather, the fundamental creativity/freedom of all entities means that there will always remain phenomena that are unpredictable.Charles Hartshorne, "Freedom Requires Indeterminism and Universal Causality", The Journal of Philosophy 55 (1958): 794.
The other side of creativity/freedom as the absolute principle is that every entity is constrained by the social structure of existence (i.e., its relations) – each actual entity must conform to the settled conditions of the world around it. Freedom always exists within limits. But an entity's uniqueness and individuality arise from its own self-determination as to just how it will take account of the world within the limits that have been set for it.John B. Cobb, A Christian Natural Theology (Louisville: Westminster John Knox Press, 1978), 52.
In summary, Whitehead rejects the idea of separate and unchanging bits of matter as the most basic building blocks of reality, in favor of the idea of reality as interrelated events in process. He conceives of reality as composed of processes of dynamic "becoming" rather than static "being", emphasizing that all physical things change and evolve, and that changeless "essences" such as matter are mere abstractions from the interrelated events that are the final real things that make up the world.
Theory of perception
Since Whitehead's metaphysics described a universe in which all entities experience, he needed a new way of describing perception that was not limited to living, self-conscious beings. The term he coined was "prehension", which comes from the Latin prehensio, meaning "to seize."David Ray Griffin, Reenchantment Without Supernaturalism: A Process Philosophy of Religion (Ithaca: Cornell University Press, 2001), 79. The term is meant to indicate a kind of perception that can be conscious or unconscious, applying to people as well as electrons. It is also intended to make clear Whitehead's rejection of the theory of representative perception, in which the mind only has private ideas about other entities. For Whitehead, the term "prehension" indicates that the perceiver actually incorporates aspects of the perceived thing into itself. In this way, entities are constituted by their perceptions and relations, rather than being independent of them. Further, Whitehead regards perception as occurring in two modes, causal efficacy (or "physical prehension") and presentational immediacy (or "conceptual prehension").
Whitehead describes causal efficacy as "the experience dominating the primitive living organisms, which have a sense for the fate from which they have emerged, and the fate towards which they go."Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 44. It is, in other words, the sense of causal relations between entities, a feeling of being influenced and affected by the surrounding environment, unmediated by the senses. Presentational immediacy, on the other hand, is what is usually referred to as "pure sense perception", unmediated by any causal or symbolic interpretation, even unconscious interpretation. In other words, it is pure appearance, which may or may not be delusive (e.g. mistaking an image in a mirror for "the real thing").Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 24.
In higher organisms (like people), these two modes of perception combine into what Whitehead terms "symbolic reference", which links appearance with causation in a process that is so automatic that both people and animals have difficulty refraining from it. By way of illustration, Whitehead uses the example of a person's encounter with a chair. An ordinary person looks up, sees a colored shape, and immediately infers that it is a chair. However, an artist, Whitehead supposes, "might not have jumped to the notion of a chair", but instead "might have stopped at the mere contemplation of a beautiful color and a beautiful shape."Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 3. This is not the normal human reaction; most people place objects in categories by habit and instinct, without even thinking about it. Moreover, animals do the same thing. Using the same example, Whitehead points out that a dog "would have acted immediately on the hypothesis of a chair and would have jumped onto it by way of using it as such."Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 4. In this way symbolic reference is a fusion of pure sense perceptions on the one hand and causal relations on the other; it is in fact the causal relationships that dominate the more basic mentality (as the dog illustrates), while it is the sense perceptions which indicate a higher-grade mentality (as the artist illustrates).Alfred North Whitehead, Symbolism: Its Meaning and Effect (New York: Fordham University Press, 1985), 49.
Evolution and value
Whitehead believed that when asking questions about the basic facts of existence, questions about value and purpose can never be fully escaped. This is borne out in his thoughts on abiogenesis, or the hypothetical natural process by which life arises from simple organic compounds.
Whitehead makes the startling observation that "life is comparatively deficient in survival value."Alfred North Whitehead, The Function of Reason (Boston: Beacon Press, 1958), 4. If humans can only exist for about a hundred years, and rocks for eight hundred million, then one is forced to ask why complex organisms ever evolved in the first place; as Whitehead humorously notes, "they certainly did not appear because they were better at that game than the rocks around them."Alfred North Whitehead, The Function of Reason (Boston: Beacon Press, 1958), 4–5. He then observes that the mark of higher forms of life is that they are actively engaged in modifying their environment, an activity which he theorizes is directed toward the three-fold goal of living, living well, and living better.Alfred North Whitehead, The Function of Reason (Boston: Beacon Press, 1958), 8. In other words, Whitehead sees life as directed toward the purpose of increasing its own satisfaction. Without such a goal, he sees the rise of life as totally unintelligible.
For Whitehead, there is no such thing as wholly inert matter. Instead, all things have some measure of freedom or creativity, however small, which allows them to be at least partly self-directed. Process philosopher David Ray Griffin coined the term "panexperientialism" (the idea that all entities experience) to describe Whitehead's view, and to distinguish it from panpsychism (the idea that all matter has consciousness).David Ray Griffin, Reenchantment Without Supernaturalism: A Process Philosophy of Religion (Ithaca: Cornell University Press, 2001), 97.
God
Whitehead's idea of God differs from traditional monotheistic notions.Roland Faber, God as Poet of the World: Exploring Process Theologies (Louisville: Westminster John Knox Press, 2008), chapters 4–5. Perhaps his most famous and pointed criticism of the Christian conception of God is that "the Church gave unto God the attributes which belonged exclusively to Caesar."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 342. Here Whitehead is criticizing Christianity for defining God as primarily a divine king who imposes his will on the world, and whose most important attribute is power. As opposed to the most widely accepted forms of Christianity, Whitehead emphasized an idea of God that he called "the brief Galilean vision of humility":
"It does not emphasize the ruling Caesar, or the ruthless moralist, or the unmoved mover. It dwells upon the tender elements in the world, which slowly and in quietness operates by love; and it finds purpose in the present immediacy of a kingdom not of this world. Love neither rules, nor is it unmoved; also it is a little oblivious as to morals. It does not look to the future; for it finds its own reward in the immediate present."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 343.
It should be emphasized, however, that for Whitehead God is not necessarily tied to religion.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 207. Rather than springing primarily from religious faith, Whitehead saw God as necessary for his metaphysical system. His system required that an order exist among possibilities, an order that allowed for novelty in the world and provided an aim to all entities. Whitehead posited that these ordered potentials exist in what he called the primordial nature of God. However, Whitehead was also interested in religious experience. This led him to reflect more intensively on what he saw as the second nature of God, the consequent nature. Whitehead's conception of God as a "dipolar"Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 345. entity has called for fresh theological thinking.
The primordial nature he described as "the unlimited conceptual realization of the absolute wealth of potentiality," i.e., the unlimited possibility of the universe. This primordial nature is eternal and unchanging, providing entities in the universe with possibilities for realization. Whitehead also calls this primordial aspect "the lure for feeling, the eternal urge of desire,"Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 344. pulling the entities in the universe toward as-yet unrealized possibilities.
God's consequent nature, on the other hand, is anything but unchanging – it is God's reception of the world's activity. As Whitehead puts it, "[God] saves the world as it passes into the immediacy of his own life. It is the judgment of a tenderness which loses nothing that can be saved."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 346. In other words, God saves and cherishes all experiences forever, and those experiences go on to change the way God interacts with the world. In this way, God is really changed by what happens in the world and the wider universe, lending the actions of finite creatures an eternal significance.
Whitehead thus sees God and the world as fulfilling one another. He sees entities in the world as fluent and changing things that yearn for a permanence which only God can provide by taking them into God's self, thereafter changing God and affecting the rest of the universe throughout time. On the other hand, he sees God as permanent but as deficient in actuality and change: alone, God is merely eternally unrealized possibilities, and requires the world to actualize them. God gives creatures permanence, while the creatures give God actuality and change. Here it is worthwhile to quote Whitehead at length:
"In this way God is completed by the individual, fluent satisfactions of finite fact, and the temporal occasions are completed by their everlasting union with their transformed selves, purged into conformation with the eternal order which is the final absolute 'wisdom.' The final summary can only be expressed in terms of a group of antitheses, whose apparent self-contradictions depend on neglect of the diverse categories of existence. In each antithesis there is a shift of meaning which converts the opposition into a contrast.
"It is as true to say that God is permanent and the World fluent, as that the World is permanent and God is fluent.
"It is as true to say that God is one and the World many, as that the World is one and God many.
"It is as true to say that, in comparison with the World, God is actual eminently, as that, in comparison with God, the World is actual eminently.
"It is as true to say that the World is immanent in God, as that God is immanent in the World.
"It is as true to say that God transcends the World, as that the World transcends God.
"It is as true to say that God creates the World, as that the World creates God ...
"What is done in the world is transformed into a reality in heaven, and the reality in heaven passes back into the world ... In this sense, God is the great companion – the fellow-sufferer who understands."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 347–348, 351.
The above is some of Whitehead's most evocative writing about God, and was powerful enough to inspire the movement known as process theology, a vibrant theological school of thought that continues to thrive today.Bruce G. Epperly, Process Theology: A Guide for the Perplexed (New York: T&T Clark, 2011), 12.Roland Faber, God as Poet of the World: Exploring Process Theologies (Louisville: Westminster John Knox Press, 2008), chapter 1.
Religion
For Whitehead the core of religion was individual. While he acknowledged that individuals cannot ever be fully separated from their society, he argued that life is an internal fact for its own sake before it is an external fact relating to others.Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 15–16. His most famous remark on religion is that "religion is what the individual does with his own solitariness ... and if you are never solitary, you are never religious."Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 16–17. Whitehead saw religion as a system of general truths that transformed a person's character.Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 15. He took special care to note that while religion is often a good influence, it is not necessarily good – an idea which he called a "dangerous delusion" (e.g., a religion might encourage the violent extermination of a rival religion's adherents).Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 18.
However, while Whitehead saw religion as beginning in solitariness, he also saw religion as necessarily expanding beyond the individual. In keeping with his process metaphysics in which relations are primary, he wrote that religion necessitates the realization of "the value of the objective world which is a community derivative from the interrelations of its component individuals."Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 59. In other words, the universe is a community which makes itself whole through the relatedness of each individual entity to all the others – meaning and value do not exist for the individual alone, but only in the context of the universal community. Whitehead writes further that each entity "can find no such value till it has merged its individual claim with that of the objective universe. Religion is world-loyalty. The spirit at once surrenders itself to this universal claim and appropriates it for itself."Alfred North Whitehead, Religion in the Making (New York: Fordham University Press, 1996), 60. In this way the individual and universal/social aspects of religion are mutually dependent.
Whitehead also described religion more technically as "an ultimate craving to infuse into the insistent particularity of emotion that non-temporal generality which primarily belongs to conceptual thought alone."Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 16. In other words, religion takes deeply felt emotions and contextualizes them within a system of general truths about the world, helping people to identify their wider meaning and significance. For Whitehead, religion served as a kind of bridge between philosophy and the emotions and purposes of a particular society.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 15. It is the task of religion to make philosophy applicable to the everyday lives of ordinary people.
Influence and legacy
Isabelle Stengers wrote that "Whiteheadians are recruited among both philosophers and theologians, and the palette has been enriched by practitioners from the most diverse horizons, from ecology to feminism, practices that unite political struggle and spirituality with the sciences of education." Indeed, in recent decades attention to Whitehead's work has become more widespread, with interest extending to intellectuals in Europe and China, and coming from such diverse fields as ecology, physics, biology, education, economics, and psychology. One of the first theologians to attempt to interact with Whitehead's thought was the future Archbishop of Canterbury, William Temple. In Temple's Gifford Lectures of 1932–1934 (subsequently published as "Nature, Man and God"), Whitehead is one of several philosophers of the emergent evolution approach with whom Temple engages.George Garin, "Theistic Evolution in a Sacramental Universe: The Theology of William Temple Against the Background of Process Thinkers (Whitehead, Alexander, Etc.)," (Protestant University Press, Kinshasa, The Congo, 1991). However, it was not until the 1970s and 1980s that Whitehead's thought drew much attention outside of a small group of philosophers and theologians, primarily Americans, and even today he is not considered especially influential outside of relatively specialized circles.
Early followers of Whitehead were found primarily at the University of Chicago's Divinity School, where Henry Nelson Wieman initiated an interest in Whitehead's work that would last for about thirty years. Professors such as Wieman, Charles Hartshorne, Bernard Loomer, Bernard Meland, and Daniel Day Williams made Whitehead's philosophy arguably the most important intellectual thread running through the Divinity School.Gary Dorrien, "The Lure and Necessity of Process Theology", CrossCurrents 58 (2008): 321–322. They taught generations of Whitehead scholars, the most notable of which is John B. Cobb, Jr.
Although interest in Whitehead has since faded at Chicago's Divinity School, Cobb effectively grabbed the torch and planted it firmly in Claremont, California, where he began teaching at Claremont School of Theology in 1958 and founded the Center for Process Studies with David Ray Griffin in 1973.David Ray Griffin, "John B. Cobb, Jr.: A Theological Biography", in Theology and the University: Essays in Honor of John B. Cobb, Jr., ed. David Ray Griffin and Joseph C. Hough, Jr. (Albany: State University of New York Press, 1991), 229. Largely due to Cobb's influence, today Claremont remains strongly identified with Whitehead's process thought.Gary Dorrien, "The Lure and Necessity of Process Theology", CrossCurrents 58 (2008): 334.Victor Lowe, Alfred North Whitehead: The Man and his Work, Vol I (Baltimore: The Johns Hopkins Press, 1985), 5.
But while Claremont remains the most concentrated hub of Whiteheadian activity, the place where Whitehead's thought currently seems to be growing the most quickly is in China. In order to address the challenges of modernization and industrialization, China has begun to blend traditions of Taoism, Buddhism, and Confucianism with Whitehead's "constructive post-modern" philosophy in order to create an "ecological civilization.""China embraces Alfred North Whitehead", last modified December 10, 2008, Douglas Todd, The Vancouver Sun, accessed December 5, 2013, http://blogs.vancouversun.com/2008/12/10/china-embraces-alfred-north-whitehead/. To date, the Chinese government has encouraged the building of twenty-three university-based centers for the study of Whitehead's philosophy, and books by process philosophers John Cobb and David Ray Griffin are becoming required reading for Chinese graduate students. Cobb has attributed China's interest in process philosophy partly to Whitehead's stress on the mutual interdependence of humanity and nature, as well as his emphasis on an educational system that includes the teaching of values rather than simply bare facts.
Overall, however, Whitehead's influence is very difficult to characterize. In English-speaking countries, his primary works are little-studied outside of Claremont and a select number of liberal graduate-level theology and philosophy programs. Outside of these circles his influence is relatively small and diffuse, and has tended to come chiefly through the work of his students and admirers rather than Whitehead himself."Whitehead, Alfred North", last modified May 8, 2007, Gary L. Herstein, Internet Encyclopedia of Philosophy, accessed July 20, 2015, http://www.iep.utm.edu/whitehed/. For instance, Whitehead was a teacher and long-time friend and collaborator of Bertrand Russell, and he also taught and supervised the dissertation of Willard Van Orman Quine,"Quine Biography", last modified October 2003, John J. O'Connor and Edmund F. Robertson, MacTutor History of Mathematics archive, University of St Andrews, accessed December 5, 2013, http://www-history.mcs.st-andrews.ac.uk/Biographies/Quine.html. both of whom are important figures in analytic philosophy – the dominant strain of philosophy in English-speaking countries in the 20th century.John Searle, "Contemporary Philosophy in the United States", in N. Bunnin and E.P. Tsui-James, eds., The Blackwell Companion to Philosophy, 2nd ed., (Oxford: Blackwell, 2003), 1. Whitehead has also had high-profile admirers in the continental tradition, such as French post-structuralist philosopher Gilles Deleuze, who once dryly remarked of Whitehead that "he stands provisionally as the last great Anglo-American philosopher before Wittgenstein's disciples spread their misty confusion, sufficiency, and terror."Gilles Deleuze, The Fold: Leibniz and the Baroque, trans. Tom Conley (Minneapolis: University of Minnesota Press, 1993), 76. French sociologist and anthropologist Bruno Latour even went so far as to call Whitehead "the greatest philosopher of the 20th century."Bruno Latour, preface to Thinking with Whitehead: A Free and Wild Creation of Concepts, by Isabelle Stengers, trans. Michael Chase (Cambridge, Massachusetts: Harvard University Press, 2011), x.
Deleuze's and Latour's opinions, however, are minority ones, as Whitehead has not been recognized as particularly influential within the most dominant philosophical schools."Alfred North Whitehead", last modified March 10, 2015, Andrew David Irvine, ed. Edward N. Zalta, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), accessed July 20, 2015, http://plato.stanford.edu/entries/whitehead/#WI It is impossible to say exactly why Whitehead's influence has not been more widespread, but it may be partly due to his metaphysical ideas seeming somewhat counter-intuitive (such as his assertion that matter is an abstraction), or his inclusion of theistic elements in his philosophy,"Alfred North Whitehead", last modified October 1, 2013, Andrew David Irvine, ed. Edward N. Zalta, The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), accessed November 21, 2013, http://plato.stanford.edu/entries/whitehead/#WI or the perception of metaphysics itself as passé, or simply the sheer difficulty and density of his prose.
Process philosophy and theology
Philosopher Nicholas Rescher. Rescher is a proponent of both Whiteheadian process philosophy and American pragmatism.
Historically Whitehead's work has been most influential in the field of American progressive theology. The most important early proponent of Whitehead's thought in a theological context was Charles Hartshorne, who spent a semester at Harvard as Whitehead's teaching assistant in 1925, and is widely credited with developing Whitehead's process philosophy into a full-blown process theology.Charles Hartshorne, A Christian Natural Theology, 2nd edition (Louisville, Westminster John Knox Press, 2007), 112. Other notable process theologians include John B. Cobb, Jr., David Ray Griffin, Marjorie Hewitt Suchocki, C. Robert Mesle, Roland Faber, and Catherine Keller.
Process theology typically stresses God's relational nature. Rather than seeing God as impassive or emotionless, process theologians view God as "the fellow sufferer who understands", and as the being who is supremely affected by temporal events.Alfred North Whitehead, Process and Reality (New York: The Free Press, 1978), 351. Hartshorne points out that people would not praise a human ruler who was unaffected by either the joys or sorrows of his followers – so why would this be a praise-worthy quality in God?Charles Hartshorne, The Divine Relativity: A Social Conception of God (New Haven: Yale University Press, 1964), 42–43. Instead, as the being who is most affected by the world, God is the being who can most appropriately respond to the world. However, process theology has been formulated in a wide variety of ways. C. Robert Mesle, for instance, advocates a "process naturalism", i.e. a process theology without God.See part IV of Mesle's Process Theology: A Basic Introduction (St. Louis: Chalice Press, 1993).
In fact, process theology is difficult to define because process theologians are so diverse and transdisciplinary in their views and interests. John B. Cobb, Jr. is a process theologian who has also written books on biology and economics. Roland Faber and Catherine Keller integrate Whitehead with poststructuralist, postcolonialist, and feminist theory. Charles Birch was both a theologian and a geneticist. Franklin I. Gamwell writes on theology and political theory. In Syntheism - Creating God in The Internet Age, futurologists Alexander Bard and Jan Söderqvist repeatedly credit Whitehead for the process theology they see rising out of the participatory culture expected to dominate the digital era.
Process philosophy is even more difficult to pin down than process theology. In practice, the two fields cannot be neatly separated. The 32-volume State University of New York series in constructive postmodern thought edited by process philosopher and theologian David Ray Griffin displays the range of areas in which different process philosophers work, including physics, ecology, medicine, public policy, nonviolence, politics, and psychology."Search Results For: SUNY series in Constructive Postmodern Thought", Sunypress.edu, accessed December 5, 2013, http://www.sunypress.edu/Searchadv.aspx?IsSubmit=true&CategoryID=6899.
One philosophical school which has historically had a close relationship with process philosophy is American pragmatism. Whitehead himself thought highly of William James and John Dewey, and acknowledged his indebtedness to them in the preface to Process and Reality. Charles Hartshorne (along with Paul Weiss) edited the collected papers of Charles Sanders Peirce, one of the founders of pragmatism. Noted neopragmatist Richard Rorty was in turn a student of Hartshorne."Richard Rorty", last modified June 16, 2007, Bjørn Ramberg, ed. Edward N. Zalta, The Stanford Encyclopedia of Philosophy (Spring 2009 Edition), accessed December 5, 2013, http://plato.stanford.edu/archives/spr2009/entries/rorty/. Today, Nicholas Rescher is one example of a philosopher who advocates both process philosophy and pragmatism.
In addition, while they might not properly be called process philosophers, Whitehead has been influential in the philosophy of Gilles Deleuze, Milič Čapek, Isabelle Stengers, Bruno Latour, Susanne Langer, and Maurice Merleau-Ponty.
Science
Theoretical physicist David Bohm. Bohm is one example of a scientist influenced by Whitehead's philosophy.See David Ray Griffin, Physics and the Ultimate Significance of Time (Albany: State University of New York Press, 1986).
In recent years, Whiteheadian thought has become a stimulating influence in scientific research. Timothy E. Eastman and Hank Keeton's Physics and Whitehead (2004)Timothy E. Eastman and Hank Keeton, eds., Physics and Whitehead: Quantum, Process, and Experience (Albany: State University of New York Press, 2004). and Michael Epperson's Quantum Mechanics and the Philosophy of Alfred North Whitehead (2004) aim to offer Whiteheadian approaches to physics, while Brian G. Henning, Adam Scarfe, and Dorion Sagan's Beyond Mechanism (2013) and Rupert Sheldrake's Science Set Free (2012) are recent examples of Whiteheadian approaches to biology.
In physics, Whitehead's thought has had some influence. He articulated a view that might perhaps be regarded as dual to Einstein's general relativity (see Whitehead's theory of gravitation); it has been severely criticized.Chandrasekhar, S. (1979). Einstein and general relativity, Am. J. Phys. 47: 212–217.Will, C.M. (1981/1993). Theory and Experiment in Gravitational Physics, revised edition, Cambridge University Press, Cambridge UK, ISBN 978-0-521-43973-2, p. 139. Yutaka Tanaka, who suggests that the gravitational constant disagrees with experimental findings, proposes that Einstein's work does not actually refute Whitehead's formulation.Yutaka Tanaka, "The Comparison between Whitehead's and Einstein's Theories of Relativity", Historia Scientiarum 32 (1987). Whitehead's view has now been rendered obsolete by the discovery of gravitational waves, phenomena observed locally that largely violate the kind of local flatness of space that Whitehead assumes. Consequently, Whitehead's cosmology must be regarded as a local approximation, and his assumption of a uniform spatio-temporal geometry, Minkowskian in particular, as an often-locally-adequate approximation. An exact replacement of Whitehead's cosmology would need to admit a Riemannian geometry. Also, although Whitehead himself gave only secondary consideration to quantum theory, his metaphysics of processes has proved attractive to some physicists in that field. Henry Stapp and David Bohm are among those whose work has been influenced by Whitehead.
Other scientists for whom Whitehead's work has been influential include physical chemist Ilya Prigogine, biologist Conrad Hal Waddington, and geneticists Charles Birch and Sewall Wright.Charles Birch, "Why Aren't We Zombies? Neo-Darwinism and Process Thought", in Back to Darwin: A Richer Account of Evolution, ed. John B. Cobb, Jr., (Grand Rapids: William B. Eerdmans Publishing Company, 2008), 252.
Ecology, economy, and sustainability
Theologian, philosopher, and environmentalist John B. Cobb, Jr. founded the Center for Process Studies in Claremont, California with David Ray Griffin in 1973, and is often regarded as the preeminent scholar in the field of process philosophy and process theology.Roland Faber, God as Poet of the World: Exploring Process Theologies (Louisville: Westminster John Knox Press, 2008), 35.C. Robert Mesle, Process Theology (St. Louis: Chalice Press, 1993), 126.Gary Dorrien, "The Lure and Necessity of Process Theology", CrossCurrents 58 (2008): 316.Monica Coleman, Nancy R. Howell, and Helene Tallon Russell, Creating Women's Theology: A Movement Engaging Process Thought (Wipf and Stock, 2011), 13.
One of the most promising applications of Whitehead's thought in recent years has been in the area of ecological civilization, sustainability, and environmental ethics.
"Because Whitehead's holistic metaphysics of value lends itself so readily to an ecological point of view, many see his work as a promising alternative to the traditional mechanistic worldview, providing a detailed metaphysical picture of a world constituted by a web of interdependent relations."
This work has been pioneered by John B. Cobb, Jr., whose book Is It Too Late? A Theology of Ecology (1971) was the first single-authored book in environmental ethics."History of Environmental Ethics for the Novice", last modified March 15, 2011, The Center for Environmental Philosophy, accessed November 21, 2013, http://www.cep.unt.edu/novice.html. Cobb also co-authored a book with leading ecological economist and steady-state theorist Herman Daly entitled For the Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future (1989), which applied Whitehead's thought to economics, and received the Grawemeyer Award for Ideas Improving World Order. Cobb followed this with a second book, Sustaining the Common Good: A Christian Perspective on the Global Economy (1994), which aimed to challenge "economists' zealous faith in the great god of growth."John B. Cobb, Jr., Sustaining the Common Good: A Christian Perspective on the Global Economy (Cleveland: The Pilgrim Press, 1994), back cover.
Education
Whitehead is widely known for his influence in education theory. His philosophy inspired the formation of the Association for Process Philosophy of Education (APPE), which published eleven volumes of a journal titled Process Papers on process philosophy and education from 1996 to 2008.See Process Papers, a publication of the Association for Process Philosophy of Education. Volume 1 published in 1996, Volume 11 (final volume) published in 2008. Whitehead's theories on education also led to the formation of new modes of learning and new models of teaching.
One such model is the ANISA model developed by Daniel C. Jordan, which sought to address a lack of understanding of the nature of people in current education systems. As Jordan and Raymond P. Shepard put it: "Because it has not defined the nature of man, education is in the untenable position of having to devote its energies to the development of curricula without any coherent ideas about the nature of the creature for whom they are intended."Daniel C. Jordan and Raymond P. Shepard, "The Philosophy of the ANISA Model", Process Papers 6, 38–39.
Another model is the FEELS model developed by Xie Bangxiu and deployed successfully in China. "FEELS" stands for five things in curriculum and education: Flexible-goals, Engaged-learner, Embodied-knowledge, Learning-through-interactions, and Supportive-teacher."FEELS: A Constructive Postmodern Approach To Curriculum and Education", Xie Bangxiu, JesusJazzBuddhism.org, accessed December 5, 2013, http://www.jesusjazzbuddhism.org/feels.html. It is used for understanding and evaluating educational curriculum under the assumption that the purpose of education is to "help a person become whole." This work is in part the product of cooperation between Chinese government organizations and the Institute for the Postmodern Development of China.
Whitehead's philosophy of education has also found institutional support in Canada, where the University of Saskatchewan created a Process Philosophy Research Unit and sponsored several conferences on process philosophy and education."International Conferences – University of Saskatchewan", University of Saskatchewan, accessed December 5, 2013, http://www.usask.ca/usppru/international-conferences.php. Dr. Howard Woodhouse at the University of Saskatchewan remains a strong proponent of Whiteheadian education."Dr. Howard Woodhouse", University of Saskatchewan, accessed December 5, 2013
Two recent books which further develop Whitehead's philosophy of education include: Modes of Learning: Whitehead's Metaphysics and the Stages of Education (2012) by George Allan; and The Adventure of Education: Process Philosophers on Learning, Teaching, and Research (2009) by Adam Scarfe.
Business administration
Whitehead has had some influence on philosophy of business administration and organizational theory. This has led in part to a focus on identifying and investigating the effect of temporal events (as opposed to static things) within organizations through an “organization studies” discourse that accommodates a variety of 'weak' and 'strong' process perspectives from a number of philosophers.Tor Hernes, A Process Theory of Organization (Oxford University Press, 2014) One of the leading figures having an explicitly Whiteheadian and panexperientialist stance towards management is Mark Dibben,Mark R. Dibben and John B. Cobb, Jr., "Special Focus: Process Thought and Organization Studies," in Process Studies 32 (2003). who works in what he calls "applied process thought" to articulate a philosophy of management and business administration as part of a wider examination of the social sciences through the lens of process metaphysics. For Dibben, this allows "a comprehensive exploration of life as perpetually active experiencing, as opposed to occasional – and thoroughly passive – happening.""Mark Dibben – School of Management – University of Tasmania, Australia", last modified July 16, 2013, University of Tasmania, accessed November 21, 2013, http://www.utas.edu.au/business-and-economics/people/profiles/accounting/Mark-Dibben. Dibben has published two books on applied process thought, Applied Process Thought I: Initial Explorations in Theory and Research (2008), and Applied Process Thought II: Following a Trail Ablaze (2009), as well as other papers in this vein in the fields of philosophy of management and business ethics.Mark Dibben, "Exploring the Processual Nature of Trust and Cooperation in Organisations: A Whiteheadian Analysis," in Philosophy of Management 4 (2004): 25-39; Mark Dibben, "Organisations and Organising: Understanding and Applying Whitehead’s Processual Account," in Philosophy of Management 7 (2009); Cristina Neesham and Mark Dibben, "The Social Value of Business: Lessons from Political Economy and Process Philosophy," in Applied Ethics: Remembering Patrick Primeaux (Research in Ethical Issues in Organizations, Volume 8), ed. Michael Schwartz and Howard Harris (Emerald Group Publishing Limited, 2012): 63-83.
Margaret Stout and Carrie M. Staton have also written recently on the mutual influence of Whitehead and Mary Parker Follett, a pioneer in the fields of organizational theory and organizational behavior. Stout and Staton see both Whitehead and Follett as sharing an ontology that "understands becoming as a relational process; difference as being related, yet unique; and the purpose of becoming as harmonizing difference."Margaret Stout & Carrie M. Staton, "The Ontology of Process Philosophy in Follett's Administrative Theory", Administrative Theory & Praxis 33 (2011): 268. This connection is further analyzed by Stout and Jeannine M. Love in Integrative Process: Follettian Thinking from Ontology to Administration.Margaret Stout & Jeannine M. Love, Integrative Process: Follettian Thinking from Ontology to Administration (Anoka, MN: Process Century Press, 2015).
Political views
Whitehead's political views sometimes appear to be Libertarian without the label. He wrote:
On the other hand, many Whitehead scholars read his work as providing a philosophical foundation for the social liberalism of the New Liberal movement that was prominent throughout Whitehead's adult life. Morris wrote that "...there is good reason for claiming that Whitehead shared the social and political ideals of the new liberals."Morris, Randall C., Journal of the History of Ideas 51: 75-92. p. 92.
Primary works
Books written by Whitehead, listed by date of publication.
A Treatise on Universal Algebra. Cambridge: Cambridge University Press, 1898. ISBN 1-4297-0032-7. Available online at http://projecteuclid.org/euclid.chmm/1263316509.
The Axioms of Descriptive Geometry. Cambridge: Cambridge University Press, 1907.F.W. Owens, "Review: The Axioms of Descriptive Geometry by A. N. Whitehead", Bulletin of the American Mathematical Society 15 (1909): 465–466. Available online at http://www.ams.org/journals/bull/1909-15-09/S0002-9904-1909-01815-4/S0002-9904-1909-01815-4.pdf. Available online at http://quod.lib.umich.edu/u/umhistmath/ABN2643.0001.001.
with Bertrand Russell. Principia Mathematica, Volume I. Cambridge: Cambridge University Press, 1910. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0001.001. Vol. 1 to *56 is available as a CUP paperback.James Byrnie Shaw, "Review: Principia Mathematica by A. N. Whitehead and B. Russell, Vol. I, 1910", Bulletin of the American Mathematical Society 18 (1912): 386–411. Available online at http://www.ams.org/journals/bull/1912-18-08/S0002-9904-1912-02233-4/S0002-9904-1912-02233-4.pdf.Benjamin Abram Bernstein, "Review: Principia Mathematica by A. N. Whitehead and B. Russell, Vol. I, Second Edition, 1925", Bulletin of the American Mathematical Society 32 (1926): 711–713. Available online at http://www.ams.org/journals/bull/1926-32-06/S0002-9904-1926-04306-8/S0002-9904-1926-04306-8.pdf.Alonzo Church, "Review: Principia Mathematica by A. N. Whitehead and B. Russell, Volumes II and III, Second Edition, 1927", Bulletin of the American Mathematical Society 34 (1928): 237–240. Available online at http://www.ams.org/journals/bull/1928-34-02/S0002-9904-1928-04525-1/S0002-9904-1928-04525-1.pdf.
An Introduction to Mathematics. Cambridge: Cambridge University Press, 1911. Available online at http://quod.lib.umich.edu/u/umhistmath/AAW5995.0001.001. Vol. 56 of the Great Books of the Western World series.
with Bertrand Russell. Principia Mathematica, Volume II. Cambridge: Cambridge University Press, 1912. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0002.001.
with Bertrand Russell. Principia Mathematica, Volume III. Cambridge: Cambridge University Press, 1913. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0003.001.
The Organization of Thought Educational and Scientific. London: Williams & Norgate, 1917. Available online at https://archive.org/details/organisationofth00whit.
An Enquiry Concerning the Principles of Natural Knowledge. Cambridge: Cambridge University Press, 1919. Available online at https://archive.org/details/enquiryconcernpr00whitrich.
The Concept of Nature. Cambridge: Cambridge University Press, 1920. Based on the November 1919 Tarner Lectures delivered at Trinity College. Available online at https://archive.org/details/cu31924012068593.
The Principle of Relativity with Applications to Physical Science. Cambridge: Cambridge University Press, 1922. Available online at https://archive.org/details/theprincipleofre00whituoft.
Science and the Modern World. New York: Macmillan Company, 1925. Vol. 55 of the Great Books of the Western World series.
Religion in the Making. New York: Macmillan Company, 1926. Based on the 1926 Lowell Lectures.
Symbolism, Its Meaning and Effect. New York: Macmillan Co., 1927. Based on the 1927 Barbour-Page Lectures delivered at the University of Virginia.
Process and Reality: An Essay in Cosmology. New York: Macmillan Company, 1929. Based on the 1927–28 Gifford Lectures delivered at the University of Edinburgh. The 1978 Free Press "corrected edition" edited by David Ray Griffin and Donald W. Sherburne corrects many errors in both the British and American editions, and also provides a comprehensive index.
The Aims of Education and Other Essays. New York: Macmillan Company, 1929.
The Function of Reason. Princeton: Princeton University Press, 1929. Based on the March 1929 Louis Clark Vanuxem Foundation Lectures delivered at Princeton University.
Adventures of Ideas. New York: Macmillan Company, 1933. Also published by Cambridge: Cambridge University Press, 1933.
Nature and Life. Chicago: University of Chicago Press, 1934.
Modes of Thought. New York: MacMillan Company, 1938.
"Mathematics and the Good." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 666–681. Evanston and Chicago: Northwestern University Press, 1941.
"Immortality." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 682–700. Evanston and Chicago: Northwestern University Press, 1941.
Essays in Science and Philosophy. London: Philosophical Library, 1947.
with Allison Heartz Johnson, ed. The Wit and Wisdom of Whitehead. Boston: Beacon Press, 1948.
In addition, the Whitehead Research Project of the Center for Process Studies is currently working on a critical edition of Whitehead's writings, which is set to include notes taken by Whitehead's students during his Harvard classes, correspondence, and corrected editions of his books.
Paul A. Bogaard and Jason Bell, eds. The Harvard Lectures of Alfred North Whitehead, 1924-1925: Philosophical Presuppositions of Science. Cambridge: Cambridge University Press, 2017.
See also
Relationalism
References
Further reading
For the most comprehensive list of resources related to Whitehead, see the thematic bibliography of the Center for Process Studies.
Casati, Roberto, and Achille C. Varzi. Parts and Places: The Structures of Spatial Representation. Cambridge, Massachusetts: The MIT Press, 1999.
Ford, Lewis. Emergence of Whitehead's Metaphysics, 1925–1929. Albany: State University of New York Press, 1985.
Hartshorne, Charles. Whitehead's Philosophy: Selected Essays, 1935–1970. Lincoln and London: University of Nebraska Press, 1972.
Henning, Brian G. The Ethics of Creativity: Beauty, Morality, and Nature in a Processive Cosmos. Pittsburgh: University of Pittsburgh Press, 2005.
Holtz, Harald and Ernest Wolf-Gazo, eds. Whitehead und der Prozeßbegriff / Whitehead and The Idea of Process. Proceedings of The First International Whitehead-Symposion. Verlag Karl Alber, Freiburg i. B. / München, 1984. ISBN 3-495-47517-6
Jones, Judith A. Intensity: An Essay in Whiteheadian Ontology. Nashville: Vanderbilt University Press, 1998.
Kraus, Elizabeth M. The Metaphysics of Experience. New York: Fordham University Press, 1979.
McDaniel, Jay. What is Process Thought?: Seven Answers to Seven Questions. Claremont: P&F Press, 2008.
McHenry, Leemon. The Event Universe: The Revisionary Metaphysics of Alfred North Whitehead. Edinburgh: Edinburgh University Press, 2015.
Nobo, Jorge L. Whitehead's Metaphysics of Extension and Solidarity. Albany: State University of New York Press, 1986.
Price, Lucien. Dialogues of Alfred North Whitehead. New York: Mentor Books, 1956.
Quine, Willard Van Orman. "Whitehead and the rise of modern logic." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 125–163. Evanston and Chicago: Northwestern University Press, 1941.
Rapp, Friedrich and Reiner Wiehl, eds. Whiteheads Metaphysik der Kreativität. Internationales Whitehead-Symposium Bad Homburg 1983. Verlag Karl Alber, Freiburg i. B. / München, 1986. ISBN 3-495-47612-1
Rescher, Nicholas. Process Metaphysics. Albany: State University of New York Press, 1995.
Rescher, Nicholas. Process Philosophy: A Survey of Basic Issues. Pittsburgh: University of Pittsburgh Press, 2001.
Schilpp, Paul Arthur, ed. The Philosophy of Alfred North Whitehead. Evanston and Chicago: Northwestern University Press, 1941. Part of the Library of Living Philosophers series.
Siebers, Johan. The Method of Speculative Philosophy: An Essay on the Foundations of Whitehead's Metaphysics. Kassel: Kassel University Press GmbH, 2002. ISBN 3-933146-79-8
Smith, Olav Bryant. Myths of the Self: Narrative Identity and Postmodern Metaphysics. Lanham: Lexington Books, 2004. ISBN 0-7391-0843-3
– Contains a section called "Alfred North Whitehead: Toward a More Fundamental Ontology" that is an overview of Whitehead's metaphysics.
Weber, Michel. Whitehead's Pancreativism — The Basics. Frankfurt: Ontos Verlag, 2006.
Weber, Michel. Whitehead’s Pancreativism — Jamesian Applications, Frankfurt / Paris: Ontos Verlag, 2011.
Weber, Michel and Will Desmond (eds.). Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster: Ontos Verlag, 2008.
Alan Van Wyk and Michel Weber (eds.). Creativity and Its Discontents. The Response to Whitehead's Process and Reality, Frankfurt / Lancaster: Ontos Verlag, 2009.
Will, Clifford. Theory and Experiment in Gravitational Physics. Cambridge: Cambridge University Press, 1993.
External links
The Philosophy of Organism in Philosophy Now magazine. An accessible summary of Alfred North Whitehead's philosophy.
Center for Process Studies in Claremont, California. A faculty research center of Claremont School of Theology, in association with Claremont Graduate University. The Center organizes conferences and events and publishes materials pertaining to Whitehead and process thought. It also maintains extensive Whitehead-related bibliographies.
Summary of Whitehead's Philosophy A Brief Introduction to Whitehead's Metaphysics
Society for the Study of Process Philosophies, a scholarly society that holds periodic meetings in conjunction with each of the divisional meetings of the American Philosophical Association, as well as at the annual meeting of the Society for the Advancement of American Philosophy.
"Alfred North Whitehead" in the MacTutor History of Mathematics archive'', by John J. O'Connor and Edmund F. Robertson.
"Alfred North Whitehead: New World Philosopher" at the Harvard Square Library.
Jesus, Jazz, and Buddhism: Process Thinking for a More Hospitable World
"What is Process Thought?" an introductory video series to process thought by Jay McDaniel.
Centre de philosophie pratique « Chromatiques whiteheadiennes »
"Whitehead's Principle of Relativity" by John Lighton Synge on arXiv.org
Whitehead at Monoskop.org, with extensive bibliography.
Category:1861 births
Category:1947 deaths
Category:19th-century English writers
Category:20th-century English writers
Category:20th-century philosophers
Category:20th-century English theologians
Category:Academics of Imperial College London
Category:Academics of University College London
Category:Alumni of Trinity College, Cambridge
Category:Cambridge University Moral Sciences Club
Category:English mathematicians
Category:English philosophers
Category:English theologians
Category:Fellows of the Royal Society
Category:Former atheists and agnostics
Category:Harvard University faculty
Category:English logicians
Category:Mathematics popularizers
Category:Metaphysicians
Category:Mystics
Category:People educated at Sherborne School
Category:Ontologists
Category:People from Ramsgate
Category:Philosophers of science
Category:20th-century English mathematicians
Category:American philosophers
Category:American theologians
Category:Process philosophy
Category:Presidents of the Aristotelian Society
Russian language | Russian (русский язык, russkiy yazyk, pronounced [ˈruskʲɪj jɪˈzɨk]) is an East Slavic language and an official language in Russia, Belarus, Kazakhstan, Kyrgyzstan and many minor or unrecognised territories. It is an unofficial but widely spoken language in Ukraine and Latvia, and to a lesser extent in the other countries that were once constituent republics of the Soviet Union and former participants of the Eastern Bloc. Russian belongs to the family of Indo-European languages and is one of the four living members of the East Slavic languages. Written examples of Old East Slavonic are attested from the 10th century onward.
It is the most geographically widespread language of Eurasia and the most widely spoken of the Slavic languages. It is also the largest native language in Europe, with 144 million native speakers in Russia, Ukraine and Belarus. Russian is the eighth most spoken language in the world by number of native speakers and the seventh by total number of speakers. The language is one of the six official languages of the United Nations.
Russian distinguishes between consonant phonemes with palatal secondary articulation and those without, the so-called soft and hard sounds. This distinction is found between pairs of almost all consonants and is one of the most distinctive features of the language. Another important aspect is the reduction of unstressed vowels. Stress, which is unpredictable, is not normally indicated orthographically, though an optional acute accent (´) may be used to mark stress, such as to distinguish between homographic words, for example замо́к (zamok, meaning a lock) and за́мок (zamok, meaning a castle), or to indicate the proper pronunciation of uncommon words or names.
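As a concrete illustration of how this optional stress mark is handled in digital text, the short Python sketch below (an illustrative example only, not drawn from any standard reference on Russian orthography; the function name and index values are hypothetical choices) inserts the Unicode combining acute accent, U+0301, after the stressed vowel. This is the mechanism by which the homographs замо́к and за́мок above are distinguished in writing.

```python
# Illustrative sketch: marking Russian word stress in Unicode text by
# inserting the COMBINING ACUTE ACCENT (U+0301) after the stressed vowel.
STRESS_MARK = "\u0301"  # combining acute accent

def mark_stress(word: str, vowel_index: int) -> str:
    """Return `word` with the combining acute accent placed after the
    character at `vowel_index` (the stressed vowel)."""
    return word[:vowel_index + 1] + STRESS_MARK + word[vowel_index + 1:]

if __name__ == "__main__":
    word = "замок"                 # letters: з а м о к (indices 0 to 4)
    print(mark_stress(word, 3))    # замо́к, stress on "о": "a lock"
    print(mark_stress(word, 1))    # за́мок, stress on "а": "a castle"
```

Correct display of the combining mark depends on font support; ordinary running text, as noted above, simply omits the accent.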
Classification
Russian is a Slavic language of the Indo-European family. It is a lineal descendant of the language used in Kievan Rus'. From the point of view of the spoken language, its closest relatives are Ukrainian, Belarusian, and Rusyn, the other three languages in the East Slavic group. In many places in eastern and southern Ukraine and throughout Belarus, these languages are spoken interchangeably, and in certain areas traditional bilingualism resulted in language mixtures such as Surzhyk in eastern Ukraine and Trasianka in Belarus. The East Slavic Old Novgorod dialect, although it vanished during the 15th or 16th century, is sometimes considered to have played a significant role in the formation of modern Russian. Russian also has notable lexical similarities with Bulgarian, owing to a common Church Slavonic influence on both languages as well as to later interaction in the 19th and 20th centuries, although Bulgarian grammar differs markedly from Russian. In the 19th century, the language was often called "Great Russian" to distinguish it from Belarusian, then called "White Russian", and Ukrainian, then called "Little Russian".
The vocabulary (mainly abstract and literary words), principles of word formations, and, to some extent, inflections and literary style of Russian have been also influenced by Church Slavonic, a developed and partly russified form of the South Slavic Old Church Slavonic language used by the Russian Orthodox Church. However, the East Slavic forms have tended to be used exclusively in the various dialects that are experiencing a rapid decline. In some cases, both the East Slavic and the Church Slavonic forms are in use, with many different meanings. For details, see Russian phonology and History of the Russian language.
Over the course of centuries, the vocabulary and literary style of Russian have also been influenced by Western and Central European languages such as Greek, Latin, Polish, Dutch, German, French, Italian and English, and to a lesser extent the languages to the south and the east: Uralic, Turkic, Persian, Arabic, as well as Hebrew.Colin Baker,Sylvia Prys Jones Encyclopedia of Bilingualism and Bilingual Education pp 219 Multilingual Matters, 1998 ISBN 1-85359-362-1
According to the Defense Language Institute in Monterey, California, Russian is classified as a level III language in terms of learning difficulty for native English speakers, requiring approximately 1,100 hours of immersion instruction to achieve intermediate fluency. It is also regarded by the United States Intelligence Community as a "hard target" language, due to both its difficulty to master for English speakers and its critical role in American world policy.
Standard Russian
The standard form of Russian is generally regarded as the modern Russian literary language (). It arose in the beginning of the 18th century with the modernization reforms of the Russian state under the rule of Peter the Great, and developed from the Moscow (Middle or Central Russian) dialect substratum under the influence of some of the previous century's Russian chancellery language.
Mikhail Lomonosov first compiled a normalizing grammar book in 1755; in 1783 the Russian Academy's first explanatory Russian dictionary appeared. During the end of the 18th and 19th centuries, a period known as the "Golden Age", the grammar, vocabulary and pronunciation of the Russian language was stabilized and standardized, and it became the nationwide literary language; meanwhile, Russia's world-famous literature flourished.
Until the 20th century, the language's spoken form was the language of only the upper noble classes and urban population, as Russian peasants from the countryside continued to speak in their own dialects. By the mid-20th century, such dialects were forced out with the introduction of the compulsory education system that was established by the Soviet government. Despite the formalization of Standard Russian, some nonstandard dialectal features (such as fricative in Southern Russian dialects) are still observed in colloquial speech.
Geographic distribution
Map: Competence of Russian in the countries of the former USSR, 2004
In 2010, there were 259.8 million speakers of Russian in the world: in Russia – 137.5, in the CIS and Baltic countries – 93.7, in Eastern Europe and the Balkans – 12.9, Western Europe – 7.3, Asia – 2.7, Middle East and North Africa – 1.3, Sub-Saharan Africa – 0.1, Latin America – 0.2, USA, Canada, Australia and New Zealand – 4.1. Thus, the Russian language is the 6th largest in the world by number of speakers, after English, Mandarin, Hindi/Urdu, Spanish and Arabic.
Russian is one of the six official languages of the United Nations. Education in Russian is still a popular choice for both Russian as a second language (RSL) and native speakers in Russia as well as many of the former Soviet republics. Russian is still seen as an important language for children to learn in most of the former Soviet republics.Russia's Language Could Be Ticket in for Migrants Gallup Retrieved on May 26, 2010 Samuel P. Huntington wrote in the Clash of Civilizations, "During the heyday of the Soviet Union, Russian was the lingua franca from Prague to Hanoi."
Europe
In Belarus, Russian is co-official alongside Belarusian per the Constitution of Belarus. 77% of the population was fluent in Russian in 2006, and 67% used it as the main language with family, friends or at work.http://demoscope.ru/weekly/2008/0329/tema03.php
In Estonia, Russian is officially considered a foreign language. Russian is spoken by 29.6% of the population according to a 2011 estimate from the World Factbook.
Despite large Russian-speaking minorities in Latvia (26.9% ethnic Russians, 2011; Population Census 2011 – Key Indicators), Russian is officially considered a foreign language. 55% of the population was fluent in Russian in 2006, and 26% used it as the main language with family, friends or at work.
In Lithuania Russian is not official, but it still retains the function of lingua franca. In contrast to the other two Baltic states, Lithuania has a relatively small Russian-speaking minority (5.0% as of 2008).Ethnic and Language Policy of the Republic of Lithuania: Basis and Practice, Jan Andrlík
In Moldova, Russian is considered to be the language of inter-ethnic communication under a Soviet-era law. 50% of the population was fluent in Russian in 2006, and 19% used it as the main language with family, friends or at work.
According to the 2010 census in Russia, Russian language skills were indicated by 138 million people (99.4% of the population), while according to the 2002 census – 142.6 million people (99.2% of the population).
In Ukraine, Russian is seen as a language of inter-ethnic communication, and a minority language, under the 1996 Constitution of Ukraine. According to estimates from Demoskop Weekly, in 2004 there were 14,400,000 native speakers of Russian in the country, and 29 million active speakers.http://demoscope.ru/weekly/2006/0251/tema01.php 65% of the population was fluent in Russian in 2006, and 38% used it as the main language with family, friends or at work.
In the 20th century, Russian was a mandatory language taught in the schools of the members of the old Warsaw Pact and in other countries that used to be satellites of the USSR. According to the Eurobarometer 2005 survey, fluency in Russian remains fairly high (20–40%) in some countries, in particular those where the people speak a Slavic language and thereby have an edge in learning Russian (namely, Poland, Czech Republic, Slovakia, and Bulgaria).
Significant Russian-speaking groups also exist in Western Europe. These have been fed by several waves of immigrants since the beginning of the 20th century, each with its own flavor of language. The United Kingdom, Germany, Spain, Portugal, France, Italy, Belgium, Greece, Norway, and Austria have significant Russian-speaking communities.
Asia
In Armenia Russian has no official status, but it is recognized as a minority language under the Framework Convention for the Protection of National Minorities. 30% of the population was fluent in Russian in 2006, and 2% used it as the main language with family, friends or at work.
In Azerbaijan Russian has no official status, but is a lingua franca of the country.http://www.fundeh.org/files/publications/90/vedenie_obshchee_sostoyanie_russkogo_yazyka.pdf 26% of the population was fluent in Russian in 2006, and 5% used it as the main language with family, friends or at work.
In Georgia Russian has no official status, but it is recognized as a minority language under the Framework Convention for the Protection of National Minorities. Russian is the language of 9% of the population according to the World Factbook. Ethnologue cites Russian as the country's de facto working language.http://www.ethnologue.com/language/rus
In Kazakhstan Russian is not a state language, but according to article 7 of the Constitution of Kazakhstan its usage enjoys equal status to that of the Kazakh language in state and local administration. The 2009 census reported that 10,309,500 people, or 84.8% of the population aged 15 and above, could read and write well in Russian, as well as understand the spoken language.
In Kyrgyzstan Russian is an official language per article 5 of the Constitution of Kyrgyzstan. The 2009 census states that 482,200 people speak Russian as a native language, or 8.99% of the population. Additionally, 1,854,700 residents of Kyrgyzstan aged 15 and above fluently speak Russian as a second language, or 49.6% of the population in the age group.
In Tajikistan Russian is the language of inter-ethnic communication under the Constitution of Tajikistan and is permitted in official documentation. 28% of the population was fluent in Russian in 2006, and 7% used it as the main language with family, friends or at work. The World Factbook notes that Russian is widely used in government and business.
In Turkmenistan Russian lost its status as the official lingua franca in 1996. Russian is spoken by 12% of the population according to an undated estimate from the World Factbook.
In Uzbekistan Russian has some official roles, being permitted in official documentation and is the lingua franca of the country and the language of the élite. Russian is spoken by 14.2% of the population according to an undated estimate from the World Factbook.
In 2005, Russian was the most widely taught foreign language in Mongolia, and was compulsory in Year 7 onward as a second foreign language in 2006.
Russian is also spoken in Israel by at least 1,000,000 ethnic Jewish immigrants from the former Soviet Union, according to the 1999 census. The Israeli press and websites regularly publish material in Russian. See also Russian language in Israel.
Russian is also spoken as a second language by a small number of people in Afghanistan.Awde and Sarwan, 2003
North America
The language was first introduced in North America when Russian explorers voyaged into Alaska and claimed it for Russia during the 1700s. Although most Russian colonists left after the United States bought the land in 1867, a handful stayed and preserved the Russian language in this region to this day, although only a few elderly speakers of this unique dialect are left. Sizable Russian-speaking communities also exist in North America, especially in large urban centers of the U.S. and Canada, such as New York City, Philadelphia, Boston, Los Angeles, Nashville, San Francisco, Seattle, Spokane, Toronto, Baltimore, Miami, Chicago, Denver and Cleveland. In a number of locations they issue their own newspapers, and live in ethnic enclaves (especially the generation of immigrants who started arriving in the early 1960s). Only about 25% of them are ethnic Russians, however. Before the dissolution of the Soviet Union, the overwhelming majority of Russophones in Brighton Beach, Brooklyn in New York City were Russian-speaking Jews. Afterward, the influx from the countries of the former Soviet Union changed the statistics somewhat, with ethnic Russians and Ukrainians immigrating along with some more Russian Jews and Central Asians. According to the United States Census, in 2007 Russian was the primary language spoken in the homes of over 850,000 individuals living in the United States.
Australia
Australian cities Melbourne and Sydney have Russian-speaking populations, with the most Russians living in southeast Melbourne, particularly the suburbs of Carnegie and Caulfield. Two-thirds of them are actually Russian-speaking descendants of Germans, Greeks, Jews, Azerbaijanis, Armenians or Ukrainians, who either repatriated after the USSR collapsed, or are just looking for temporary employment.
Russian as an international language
Russian is one of the official languages (or has similar status and interpretation must be provided into Russian) of the United Nations, International Atomic Energy Agency, World Health Organization, International Civil Aviation Organization, UNESCO, World Intellectual Property Organization, International Telecommunication Union, World Meteorological Organization, Food and Agriculture Organization, International Fund for Agricultural Development, International Criminal Court, International Monetary Fund, International Olympic Committee, Universal Postal Union, World Bank, Commonwealth of Independent States, Organization for Security and Co-operation in Europe, Shanghai Cooperation Organisation, Eurasian Economic Community, Collective Security Treaty Organization, Antarctic Treaty Secretariat, International Organization for Standardization, GUAM Organization for Democracy and Economic Development, International Mathematical Olympiad. The Russian language is also one of two official languages aboard the International Space Station – NASA astronauts who serve alongside Russian cosmonauts usually take Russian language courses. This practice goes back to the Apollo-Soyuz mission, which first flew in 1975.
In March 2013 it was announced that Russian is now the second-most used language on the Internet after English. People use the Russian language on 5.9% of all websites, slightly ahead of German and far behind English (54.7%). Russian is used not only on 89.8% of .ru sites, but also on 88.7% of sites with the former Soviet Union domain .su. The websites of former Soviet Union nations also use high levels of Russian: 79.0% in Ukraine, 86.9% in Belarus, 84.0% in Kazakhstan, 79.6% in Uzbekistan, 75.9% in Kyrgyzstan and 81.8% in Tajikistan. However, Russian is the sixth-most used language on the top 1,000 sites, behind English, Chinese, French, German and Japanese.
Dialects
Map: Russian dialects in 1915 (Northern dialects, Central dialects, Southern dialects, other)
Russian is a rather homogeneous language, in terms of dialectal variation, due to the early political centralization under the Moscow rule, compulsory education, mass migration from rural to urban areas in the 20th century, as well as other factors. The standard language is used in written and spoken form almost everywhere in the country, from Kaliningrad and Saint Petersburg in the West to Vladivostok and Petropavlovsk-Kamchatsky in the East, notwithstanding the enormous distance in between.
Despite leveling after 1900, especially in matters of vocabulary and phonetics, a number of dialects still exist in Russia. Some linguists divide the dialects of Russian into two primary regional groupings, "Northern" and "Southern", with Moscow lying on the zone of transition between the two. Others divide the language into three groupings, Northern, Central (or Middle) and Southern, with Moscow lying in the Central region.David Dalby. 1999–2000. The Linguasphere Register of the World's Languages and Speech Communities. Linguasphere Press. Pg. 442. All dialects are also divided into two main chronological categories: the dialects of primary formation (the territory of Eastern Rus' or Muscovy, roughly consisting of the modern Central and Northwestern Federal districts) and those of secondary formation (other territories). Dialectology within Russia recognizes dozens of smaller-scale variants. The dialects often show distinct and non-standard features of pronunciation and intonation, vocabulary and grammar. Some of these are relics of ancient usage now completely discarded by the standard language.
The Northern Russian dialects and those spoken along the Volga River typically pronounce unstressed clearly, a phenomenon called okanye (). Besides the absence of vowel reduction, some dialects have high or diphthongal in the place of and in stressed closed syllables (as in Ukrainian) instead of Standard Russian and . An interesting morphological feature is a post-posed definite article -to, -ta, -te similarly to that existing in Bulgarian and Macedonian.
In the Southern Russian dialects, instances of unstressed and following palatalized consonants and preceding a stressed syllable are not reduced to (as occurs in the Moscow dialect), being instead pronounced in such positions (e.g. is pronounced , not ) – this is called yakanye ().
Consonants include a fricative , a semivowel and , whereas the Standard and Northern dialects have the consonants , , and final and , respectively.
The morphology features a palatalized final in 3rd person forms of verbs (this is unpalatalized in the Standard and Northern dialects). Some of these features such as akanye and yakanye, a debuccalized or lenited , a semivowel and palatalized final in 3rd person forms of verbs are also present in modern Belarusian and some dialects of Ukrainian (Eastern Polesian), indicating a linguistic continuum.
The city of Veliky Novgorod has historically displayed a feature called chokanye or tsokanye ( or ), in which and were switched or merged. So, ('heron') has been recorded as . Also, the second palatalization of velars did not occur there, so the so-called ě² (from the Proto-Slavic diphthong *ai) did not cause to shift to ; therefore, where Standard Russian has ('chain'), the form is attested in earlier texts.
Among the first to study Russian dialects was Lomonosov in the 18th century. In the 19th, Vladimir Dal compiled the first dictionary that included dialectal vocabulary. Detailed mapping of Russian dialects began at the turn of the 20th century. In modern times, the monumental Dialectological Atlas of the Russian Language ( ), was published in three folio volumes 1986–1989, after four decades of preparatory work.
Derived languages
Balachka, a dialect spoken in the Krasnodar region and the Don, Kuban and Terek areas, brought by Cossacks relocated there in 1793 and based on the south-western Ukrainian dialect. During the russification of these regions from the 1920s to the 1950s it was forcibly displaced by Russian, but it is still sometimes used, even in the media.
Fenya, a criminal argot of ancient origin, with Russian grammar, but with distinct vocabulary
Medny Aleut language, a nearly extinct mixed language spoken on Bering Island that is characterized by its Aleut nouns and Russian verbs
Padonkaffsky jargon, a slang language developed by padonki of Runet
Quelia, a macaronic language with Russian-derived basic structure and part of the lexicon (mainly nouns and verbs) borrowed from German
Runglish, a Russian-English pidgin. This word is also used by English speakers to describe the way in which Russians attempt to speak English using Russian morphology and/or syntax.
Russenorsk, an extinct pidgin language with mostly Russian vocabulary and mostly Norwegian grammar, used for communication between Russians and Norwegian traders in the Pomor trade in Finnmark and the Kola Peninsula
Trasianka, a heavily russified variety of Belarusian used by a large portion of the rural population in Belarus
Taimyr Pidgin Russian, spoken by the Nganasan on the Taimyr Peninsula
Alphabet
A page from Azbuka (Alphabet book), the first Russian printed textbook, printed by Ivan Fyodorov in 1574. This page features the Cyrillic script.
Russian is written using a Cyrillic alphabet. The Russian alphabet consists of 33 letters. The following table gives their upper case forms, along with IPA values for each letter's typical sound:
Older letters of the Russian alphabet include , which merged to ( or ); and , which both merged to (); , which merged to (); , which merged to (); , which merged to ( or ); and and , which later were graphically reshaped into and merged phonetically to or . While these older letters have been abandoned at one time or another, they may be used in this and related articles. The yers and originally indicated the pronunciation of ultra-short or reduced , .
Transliteration
Because of many technical restrictions in computing and also because of the unavailability of Cyrillic keyboards abroad, Russian is often transliterated using the Latin alphabet. For example, ('frost') is transliterated moroz, and ('mouse'), mysh or myš. Once commonly used by the majority of those living outside Russia, transliteration is being used less frequently by Russian-speaking typists in favor of the extension of Unicode character encoding, which fully incorporates the Russian alphabet. Free programs leveraging this Unicode extension are available which allow users to type Russian characters, even on Western 'QWERTY' keyboards.
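By way of illustration, the short Python sketch below performs such a character-by-character romanization; the mapping is a simplified, informal one assumed for this example (not GOST, ISO 9 or any other official standard), and the function name is made up for the sketch.

```python
# Simplified Cyrillic-to-Latin mapping (an assumed, informal romanization,
# not an official standard); uppercase input is lowered for brevity.
TRANSLIT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e", "ё": "yo",
    "ж": "zh", "з": "z", "и": "i", "й": "y", "к": "k", "л": "l", "м": "m",
    "н": "n", "о": "o", "п": "p", "р": "r", "с": "s", "т": "t", "у": "u",
    "ф": "f", "х": "kh", "ц": "ts", "ч": "ch", "ш": "sh", "щ": "shch",
    "ъ": "", "ы": "y", "ь": "", "э": "e", "ю": "yu", "я": "ya",
}

def transliterate(text: str) -> str:
    # Characters outside the mapping (Latin letters, punctuation) pass through unchanged.
    return "".join(TRANSLIT.get(ch, ch) for ch in text.lower())

print(transliterate("мороз"))  # moroz
print(transliterate("мышь"))   # mysh
```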
Computing
The Russian alphabet has many systems of character encoding. KOI8-R was designed by the Soviet government and was intended to serve as the standard encoding. This encoding was and still is widely used in UNIX-like operating systems. Nevertheless, the spread of MS-DOS and OS/2 (IBM866), traditional Macintosh (ISO/IEC 8859-5) and Microsoft Windows (CP1251) created chaos and ended up establishing different encodings as de facto standards, with Windows-1251 becoming a de facto standard in Russian Internet and e-mail communication during the period of roughly 1995–2005.
The obsolete 8-bit encodings are now rarely used in communication protocols and text-exchange data formats, having been mostly replaced with UTF-8. A number of encoding conversion applications have been developed; "iconv" is an example that is supported by most versions of Linux, Macintosh and some other operating systems, but converters are rarely needed unless accessing texts created more than a few years ago.
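As a hedged illustration of such a conversion, the Python sketch below decodes a byte string (simulated here by encoding a known word in KOI8-R) by trying UTF-8 and then the common legacy codecs, and re-encodes the result as UTF-8; the try-in-order fallback is only a heuristic, since single-byte codecs such as CP1251 will accept almost any input.

```python
# Hypothetical legacy data: simulate a KOI8-R file by encoding a known word.
legacy_bytes = "мороз".encode("koi8_r")

text = None
# Try UTF-8 first, then the common legacy encodings; this order is a heuristic,
# because single-byte codecs such as cp1251 rarely raise a decoding error.
for codec in ("utf_8", "koi8_r", "cp1251"):
    try:
        text = legacy_bytes.decode(codec)
        print(f"decoded with {codec}: {text}")
        break
    except UnicodeDecodeError:
        continue

# Re-encode the recovered text as UTF-8, which has largely replaced the 8-bit encodings.
if text is not None:
    utf8_bytes = text.encode("utf_8")
```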
In addition to the modern Russian alphabet, Unicode (and thus UTF-8) encodes the Early Cyrillic alphabet (which is very similar to the Greek alphabet), as well as all other Slavic and non-Slavic but Cyrillic-based alphabets.
Orthography
Russian spelling is reasonably phonemic in practice. It is in fact a balance among phonemics, morphology, etymology, and grammar; and, like that of most living languages, has its share of inconsistencies and controversial points. A number of rigid spelling rules introduced between the 1880s and 1910s have been responsible for the former whilst trying to eliminate the latter.
The current spelling follows the major reform of 1918, and the final codification of 1956. An update proposed in the late 1990s has met a hostile reception, and has not been formally adopted. The punctuation, originally based on Byzantine Greek, was in the 17th and 18th centuries reformulated on the French and German models.
According to the Institute of Russian Language of the Russian Academy of Sciences, an optional acute accent () may, and sometimes should, be used to mark stress. For example, it is used to distinguish between otherwise identical words, especially when context does not make it obvious: замо́к – за́мок ("lock" – "castle"), – ("worthwhile" – "standing"), – ("this is odd" – "this is marvelous"), – ("attaboy" – "fine young man"), – ("I shall learn it" – "I recognize it"), – ("to be cutting" – "to have cut"); to indicate the proper pronunciation of uncommon words, especially personal and family names (, , , , ), and to show which is the stressed word in a sentence ( "Was it you who ate the cookie? – Did you eat the cookie? – Was it the cookie that you ate?"). Stress marks are mandatory in lexical dictionaries and books for children or Russian learners.
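In Unicode text this optional stress mark is the combining acute accent (U+0301) placed after the stressed vowel, so it can also be added programmatically; the Python sketch below uses a hypothetical helper (not a standard library function) to produce the two readings of замок mentioned above.

```python
import unicodedata

ACUTE = "\u0301"  # combining acute accent, used here to mark word stress

def mark_stress(word: str, stressed_vowel: int) -> str:
    # Insert the combining acute after the nth vowel (0-based count of vowels only).
    vowels = set("аеёиоуыэюяАЕЁИОУЫЭЮЯ")
    out, seen = [], 0
    for ch in word:
        out.append(ch)
        if ch in vowels:
            if seen == stressed_vowel:
                out.append(ACUTE)
            seen += 1
    return unicodedata.normalize("NFC", "".join(out))

print(mark_stress("замок", 1))  # замо́к, "lock"
print(mark_stress("замок", 0))  # за́мок, "castle"
```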
Phonology
The phonological system of Russian is inherited from Common Slavonic; it underwent considerable modification in the early historical period before being largely settled around the year 1400.
The language possesses five vowels (or six, under the St. Petersburg Phonological School), which are written with different letters depending on whether or not the preceding consonant is palatalized. The consonants typically come in plain vs. palatalized pairs, which are traditionally called hard and soft. (The hard consonants are often velarized, especially before front vowels, as in Irish). The standard language, based on the Moscow dialect, possesses heavy stress and moderate variation in pitch. Stressed vowels are somewhat lengthened, while unstressed vowels tend to be reduced to near-close vowels or an unclear schwa. (See also: vowel reduction in Russian.)
The Russian syllable structure can be quite complex with both initial and final consonant clusters of up to 4 consecutive sounds. Using a formula with V standing for the nucleus (vowel) and C for each consonant the structure can be described as follows:
(C)(C)(C)(C)V(C)(C)(C)(C)
Clusters of four consonants are not very common, however, especially within a morpheme. Examples: (, 'glance'), (, 'state'), (, 'construction').
Consonants
Table: consonant phonemes, arranged by place of articulation (labial, alveolar/dental, post-alveolar, palatal, velar), each with plain and palatalized series, and by manner (nasal, stop, affricate, fricative, approximant/lateral, trill).
Russian is notable for its distinction based on palatalization of most of the consonants. While do have palatalized allophones , only might be considered a phoneme, though it is marginal and generally not considered distinctive. The only native minimal pair that argues for being a separate phoneme is (, 'it weaves') – (, 'this cat'). Palatalization means that the center of the tongue is raised during and after the articulation of the consonant. In the case of and , the tongue is raised enough to produce slight frication (affricate sounds). The sounds are dental, that is pronounced with the tip of the tongue against the teeth rather than against the alveolar ridge.
Grammar
Russian has preserved an Indo-European synthetic-inflectional structure, although considerable levelling has taken place.
Russian grammar encompasses:
a highly fusional morphology;
a syntax that, for the literary language, is the conscious fusion of three elements:
a polished vernacular foundation;
a Church Slavonic inheritance;
a Western European style.
The spoken language has been influenced by the literary one but continues to preserve characteristic forms. The dialects show various non-standard grammatical features, some of which are archaisms or descendants of old forms since discarded by the literary language.
The Church Slavonic language was introduced to Muscovy in the late 15th century and was adopted as the official language for correspondence as a matter of convenience: first with the newly conquered south-western regions of the former Kievan Rus' and the Grand Duchy of Lithuania, and later, when Muscovy cut its ties with the Golden Horde, for communication between all the newly consolidated regions of Muscovy.
Vocabulary
This page from an "ABC" book printed in Moscow in 1694 shows the letter П.
See History of the Russian language for an account of the successive foreign influences on Russian.
The number of listed words or entries in some of the major dictionaries published during the past two centuries, and the total vocabulary of Alexander Pushkin (who is credited with greatly augmenting and codifying literary Russian), are as follows (see "What types of dictionaries exist?" at www.gramota.ru and A catalogue of Russian explanatory dictionaries):
Work | Year | Words | Notes
Academic dictionary, I Ed. | 1789–1794 | 43,257 | Russian and Church Slavonic with some Old Russian vocabulary.
Academic dictionary, II Ed. | 1806–1822 | 51,388 | Russian and Church Slavonic with some Old Russian vocabulary.
Dictionary of Pushkin's language | 1810–1837 | >21,000 | The dictionary of virtually all words from his works was published in 1956–1961. Some consider his works to contain 101,105.
Academic dictionary, III Ed. | 1847 | 114,749 | Russian and Church Slavonic with Old Russian vocabulary.
Explanatory Dictionary of the Living Great Russian Language (Dahl's) | 1880–1882 | 195,844 | 44,000 entries lexically grouped; attempt to catalogue the full vernacular language. Contains many dialectal, local and obsolete words.
Explanatory Dictionary of the Russian Language (Ushakov's) | 1934–1940 | 85,289 | Current language with some archaisms.
Academic Dictionary of the Russian Language (Ozhegov's) | 1950–1965; 1991 (2nd ed.) | 120,480 | "Full" 17-volumed dictionary of the contemporary language. The second 20-volumed edition was begun in 1991, but not all volumes have been finished.
Lopatin's dictionary | 1999–2013 | ≈200,000 | Orthographic, current language, several editions.
Great Explanatory Dictionary of the Russian Language | 1998–2009 | ≈130,000 | Current language, the dictionary has many subsequent editions from the first one of 1998.
History and examples
The history of Russian language may be divided into the following periods.
Kievan period and feudal breakup
The Moscow period (15th–17th centuries)
Empire (18th–19th centuries)
Soviet period and beyond (20th century)
Judging by the historical records, by approximately 1000 AD the predominant ethnic group over much of modern European Russia, Ukraine and Belarus was the Eastern branch of the Slavs, speaking a closely related group of dialects. The political unification of this region into Kievan Rus' in about 880, from which modern Russia, Ukraine and Belarus trace their origins, established Old East Slavic as a literary and commercial language. It was soon followed by the adoption of Christianity in 988 and the introduction of the South Slavic Old Church Slavonic as the liturgical and official language. Borrowings and calques from Byzantine Greek began to enter the Old East Slavic and spoken dialects at this time, which in their turn modified the Old Church Slavonic as well.
The Ostromir Gospels of 1056 is the second oldest East Slavic book known, one of many medieval illuminated manuscripts preserved in the Russian National Library.
Dialectal differentiation accelerated after the breakup of Kievan Rus' in approximately 1100. On the territories of modern Belarus and Ukraine, Ruthenian emerged, while medieval Russian emerged on the territory of modern Russia. The two became distinct from the 13th century, following the division of that land between the Grand Duchy of Lithuania, Poland and Hungary in the west and the independent Novgorod and Pskov feudal republics plus numerous small duchies (which came to be vassals of the Tatars) in the east.
The official language in Moscow and Novgorod, and later, in the growing Muscovy, was Church Slavonic, which evolved from Old Church Slavonic and remained the literary language for centuries, until the Petrine age, when its usage became limited to biblical and liturgical texts. Russian developed under a strong influence of Church Slavonic until the close of the 17th century; afterward the influence reversed, leading to corruption of liturgical texts.
The political reforms of Peter the Great (Пётр Вели́кий, Pyótr Velíkiy) were accompanied by a reform of the alphabet, and achieved their goal of secularization and Westernization. Blocks of specialized vocabulary were adopted from the languages of Western Europe. By 1800, a significant portion of the gentry spoke French daily, and German sometimes. Many Russian novels of the 19th century, e.g. Leo Tolstoy's (Лев Толсто́й) War and Peace, contain entire paragraphs and even pages in French with no translation given, with an assumption that educated readers would not need one.
The modern literary language is usually considered to date from the time of Alexander Pushkin () in the first third of the 19th century. Pushkin revolutionized Russian literature by rejecting archaic grammar and vocabulary (so-called — "high style") in favor of grammar and vocabulary found in the spoken language of the time. Even modern readers of younger age may only experience slight difficulties understanding some words in Pushkin's texts, since relatively few words used by Pushkin have become archaic or changed meaning. In fact, many expressions used by Russian writers of the early 19th century, in particular Pushkin, Mikhail Lermontov (), Nikolai Gogol (), Aleksander Griboyedov (), became proverbs or sayings which can be frequently found even in modern Russian colloquial speech.
The political upheavals of the early 20th century and the wholesale changes of political ideology gave written Russian its modern appearance after the spelling reform of 1918. Political circumstances and Soviet accomplishments in military, scientific and technological matters (especially cosmonautics), gave Russian a worldwide prestige, especially during the mid-20th century.
During the Soviet period, the policy toward the languages of the various other ethnic groups fluctuated in practice. Though each of the constituent republics had its own official language, the unifying role and superior status was reserved for Russian, although it was declared the official language only in 1990."Закон СССР от 24 April 1990 О языках народов СССР" (The 1990 USSR Law about the Languages of the USSR) Following the break-up of the USSR in 1991, several of the newly independent states have encouraged their native languages, which has partly reversed the privileged status of Russian, though its role as the language of post-Soviet national discourse throughout the region has continued.
The worldwide use of the Russian language is declining owing to the decrease in the number of Russians in the world and the diminution of the total population of Russia (where Russian is an official language). The collapse of the Soviet Union and the reduction in Russia's influence have also diminished the popularity of the Russian language in the rest of the world.
Recent estimates of the total number of speakers of Russian:
Source | Native speakers | Native rank | Total speakers | Total rank
G. Weber, "Top Languages", Language Monthly, 3: 12–18, 1997, ISSN 1369-9733 | 160,000,000 | 8 | 285,000,000 | 5
World Almanac (1999) | 145,000,000 | 8 | 275,000,000 (2005) | 5
SIL (2000 WCD) | 145,000,000 | 8 | 255,000,000 | 5–6 (tied with Arabic)
CIA World Factbook (2005) | 160,000,000 | 8 | – | –
According to figures published in 2006 in the journal "Demoskop Weekly" by A. L. Arefyev, research deputy director of the Research Center for Sociological Research of the Ministry of Education and Science (Russia), the Russian language is gradually losing its position in the world in general, and in Russia in particular. In 2012, A. L. Arefyev published a new study, "Russian language at the turn of the 20th-21st centuries", in which he confirmed his conclusion about the trend of further weakening of the Russian language in all regions of the world (findings published in 2013 in the journal "Demoskop Weekly").Русский язык на рубеже XX-ХХI веков — М.: Центр социального прогнозирования и маркетинга, 2012. — 482 стр. In the countries of the former Soviet Union the Russian language is gradually being replaced by local languages. Currently, the number of Russian speakers in the world depends on the number of Russians in the world (as the main source of the language's distribution) and on the total population of Russia (where Russian is an official language).
The changing proportion of Russian speakers in the world (assessment Aref'eva 2012):
Year | Worldwide population, million | Population of the Russian Empire, Soviet Union and Russian Federation, million | Share of world population, % | Total number of speakers of Russian, million | Share of world population, %
1900 | 1,650 | 138.0 | 8.4 | 105 | 6.4
1914 | 1,782 | 182.2 | 10.2 | 140 | 7.9
1940 | 2,342 | 205.0 | 8.8 | 200 | 7.6
1980 | 4,434 | 265.0 | 6.0 | 280 | 6.3
1990 | 5,263 | 286.0 | 5.4 | 312 | 5.9
2004 | 6,400 | 146.0 | 2.3 | 278 | 4.3
2010 | 6,820 | 142.7 | 2.1 | 260 | 3.8
See also
Computer Russification
List of English words of Russian origin
List of Russian language topics
Non-native pronunciations of English
Russian humour
Slavic Voice of America
Volapuk encoding
References
Bibliography
In English
In Russian
журнал «Демоскоп Weekly» № 571 – 572 14 – 31 октября 2013. А. Арефьев. Тема номера: сжимающееся русскоязычие. Демографические изменения - не на пользу русскому языку
Русский язык на рубеже XX-ХХI веков — М.: Центр социального прогнозирования и маркетинга, 2012. — 482 стр. Аннотация книги в РУССКИЙ ЯЗЫК НА РУБЕЖЕ XX-ХХI ВЕКОВ
журнал «Демоскоп Weekly» № 329 – 330 14 – 27 апреля 2008. К. Гаврилов. Е. Козиевская. Е. Яценко. Тема номера: русский язык на постсоветских просторах. Где есть потребность в изучении русского языка
журнал «Демоскоп Weekly» № 251 – 252 19 июня - 20 августа 2006. А. Арефьев. Тема номера: сколько людей говорят и будут говорить по-русски? Будет ли русский в числе мировых языков в будущем?
Жуковская Л. П. (отв. ред.) Древнерусский литературный язык и его отношение к старославянскому. — М.: «Наука», 1987.
Иванов В. В. Историческая грамматика русского языка. — М.: «Просвещение», 1990.
Новиков Л. А. Современный русский язык: для высшей школы. -— М.: Лань, 2003.
Филин Ф. П. О словарном составе языка Великорусского народа. // Вопросы языкознания. — М., 1982, № 5. — С. 18—28
External links
Oxford Dictionaries Russian Dictionary
USA Foreign Service Institute Russian basic course
Free English to Russian Translation
Russian – YouTube: playlist of (mostly half-hour-long) video lessons from Dallas Schools Television
Free Online Russian Language WikiTranslate Video Course
Национальный корпус русского языка National Corpus of the Russian Language
Russian Language Institute Language regulator of the Russian language
Top 7 foreign universities where the Russian language is studied
Category:Languages of Russia
Category:Languages of Estonia
Category:Languages of Latvia
Category:Languages of Lithuania
Category:Languages of Poland
Category:Languages of Belarus
Category:Languages of Moldova
Category:Languages of Armenia
Category:Languages of Azerbaijan
Category:Languages of Kazakhstan
Category:Languages of Kyrgyzstan
Category:Languages of Uzbekistan
Category:Languages of Turkmenistan
Category:Languages of Tajikistan
Category:Languages of Mongolia
Category:Languages of China
Category:Languages of North Korea
Category:Languages of Japan
Category:Languages of the United States
Category:Languages of Israel
Category:Languages of Finland
Category:Languages of Norway
Category:East Slavic languages
Category:Languages of Abkhazia
Category:Languages of Georgia (country)
Category:Languages of the Caucasus
Category:Languages of Transnistria
Category:Languages of Turkey
Category:Languages of Ukraine
Category:Stress-timed languages
Category:Subject–verb–object languages | 25,431 | 2017-01 |
A cappella | A cappella (Italian for "in the manner of the chapel") music is specifically group or solo singing without instrumental accompaniment, or a piece intended to be performed in this way. It contrasts with cantata, which is accompanied singing. The term "a cappella" was originally intended to differentiate between Renaissance polyphony and Baroque concertato style. In the 19th century a renewed interest in Renaissance polyphony coupled with an ignorance of the fact that vocal parts were often doubled by instrumentalists led to the term coming to mean unaccompanied vocal music. The term is also used, albeit rarely, as a synonym for alla breve.
Religious origins
A cappella music was originally used in religious music, especially church music as well as anasheed and zemirot. Gregorian chant is an example of a cappella singing, as is the majority of secular vocal music from the Renaissance. The madrigal, up until its development in the early Baroque into an instrumentally-accompanied form, is also usually in a cappella form. Jewish and Christian music were originally a cappella, and this practice has continued in both of these religions as well as in Islam.
Christian
The polyphony of Christian a cappella music began to develop in Europe around the late 15th century AD, with compositions by Josquin des Prez. The early a cappella polyphonies may have had an accompanying instrument, although this instrument would merely double the singers' parts and was not independent. By the 16th century, a cappella polyphony had further developed, but gradually the cantata began to take the place of a cappella forms. 16th-century a cappella polyphony nonetheless continued to influence church composers throughout this period and to the present day. Recent evidence has shown that some of the early pieces by Palestrina, such as those written for the Sistine Chapel, were intended to be accompanied by an organ "doubling" some or all of the voices. Palestrina went on to become a major influence on Bach, most notably in the Mass in B Minor. Other composers who used the a cappella style, if only for the occasional piece, were Claudio Monteverdi, whose masterpiece Lagrime d'amante al sepolcro dell'amata (A lover's tears at his beloved's grave) was composed in 1610, and Andrea Gabrieli, among whose works discovered upon his death were many choral pieces, one of which was in the unaccompanied style. Learning from the preceding two composers, Heinrich Schütz used the a cappella style in numerous pieces, chief among them his works in the oratorio style, which were traditionally performed during Easter week and dealt with the religious subject matter of that week, such as Christ's suffering and the Passion. Five of Schütz's Historien were Easter pieces, and of these the latter three, which dealt with the Passion from three different viewpoints, those of Matthew, Luke and John, were sung entirely a cappella. This was a near requirement for this type of piece: the parts of the crowd were sung, while the solo parts, the quoted words of Christ or the Evangelists, were performed in plainchant.
Byzantine Rite
In the Byzantine Rite of the Eastern Orthodox Church and the Eastern Catholic Churches, the music performed in the liturgies is exclusively sung without instrumental accompaniment. Bishop Kallistos Ware says, "The service is sung, even though there may be no choir... In the Orthodox Church today, as in the early Church, singing is unaccompanied and instrumental music is not found." This a cappella practice arises from a strict interpretation of Psalm 150, which states, Let every thing that hath breath praise the Lord. Praise ye the Lord. In keeping with this philosophy, early Russian musika, which started appearing in the late 17th century in what was known as khorovïye kontsertï (choral concertos), made a cappella adaptations of Venetian-styled pieces, such as those in the treatise Grammatika musikiyskaya (1675) by Nikolai Diletsky. Divine Liturgies and Western Rite masses composed by famous composers such as Peter Tchaikovsky, Sergei Rachmaninoff, Alexander Arkhangelsky, and Mykola Leontovych are fine examples of this.
Opposition to instruments in worship
Present-day Christian religious bodies known for conducting their worship services without musical accompaniment include some Presbyterian churches devoted to the regulative principle of worship, Old Regular Baptists, Primitive Baptists, Plymouth Brethren, Churches of Christ, Church of God (Guthrie, Oklahoma), the Old German Baptist Brethren, Doukhobors, churches of the Byzantine Rite, the Amish, Old Order Mennonites and Conservative Mennonites. Certain high church services and other musical events in liturgical churches (such as the Roman Catholic Mass and the Lutheran Divine Service) may be a cappella, a practice remaining from apostolic times. Many Mennonites also conduct some or all of their services without instruments. Sacred Harp, a type of folk music, is an a cappella style of religious singing with shape notes, usually sung at singing conventions.
Opponents of musical instruments in the Christian worship believe that such opposition is supported by the Christian scriptures and Church history. The scriptures typically referenced are Matthew 26:30; Acts 16:25; Romans 15:9; 1 Corinthians 14:15; Ephesians 5:19; Colossians 3:16; Hebrews 2:12, 13:15; James 5:13, which show examples and exhortations for Christians to sing.
There is no reference to instrumental music in early church worship in the New Testament, or in the worship of churches for the first six centuries. Several reasons have been posited throughout church history for the absence of instrumental music in church worship.
Christians who believe in a cappella music today believe that in the Israelite worship assembly during Temple worship only the Priests of Levi sang, played, and offered animal sacrifices, whereas in the church era, all Christians are commanded to sing praises to God. They believe that if God wanted instrumental music in New Testament worship, He would have commanded not just singing, but singing and playing like he did in the Hebrew scriptures.
The first recorded example of a musical instrument in Roman Catholic worship was a pipe organ introduced by Pope Vitalian into a cathedral in Rome around 670.American Encyclopedia, Volume 12, p. 688
Instruments have divided Christendom since their introduction into worship. They were considered a Catholic innovation, not widely practiced until the 18th century, and were opposed vigorously in worship by a number of Protestant Reformers, including Martin Luther (1483–1546), Ulrich Zwingli, John Calvin (1509–1564) and John Wesley (1703–1791). Alexander Campbell referred to the use of an instrument in worship as "a cow bell in a concert". In Sir Walter Scott's The Heart of Midlothian, the heroine, Jeanie Deans, a Scottish Presbyterian, writes to her father about the church situation she has found in England (bold added):
The folk here are civil, and, like the barbarians unto the holy apostle, have shown me much kindness; and there are a sort of chosen people in the land, for they have some kirks without organs that are like ours, and are called meeting-houses, where the minister preaches without a gown.
Acceptance of instruments in worship
Those who do not adhere to the regulative principle of interpreting Christian scripture, believe that limiting praise to the unaccompanied chant of the early church is not commanded in scripture, and that churches in any age are free to offer their songs with or without musical instruments.
Those who subscribe to this interpretation believe that since the Christian scriptures never counter instrumental language with any negative judgment on instruments, opposition to instruments instead comes from an interpretation of history. There is no written opposition to musical instruments in any setting in the first century and a half of Christian churches (33 AD to 180 AD). The use of instruments for Christian worship during this period is also undocumented. Toward the end of the 2nd century, Christians began condemning the instruments themselves. Those who oppose instruments today believe these Church Fathers had a better understanding of God's desire for the church, but there are significant differences between the teachings of these Church Fathers and Christian opposition to instruments today.
Modern Christians typically believe it is acceptable to play instruments or to attend weddings, funerals, banquets, etc., where instruments are heard playing religious music. The Church Fathers made no exceptions. Since the New Testament never condemns instruments themselves, much less in any of these settings, it is believed that "the church Fathers go beyond the New Testament in pronouncing a negative judgment on musical instruments."
Written opposition to instruments in worship began near the turn of the 5th century. Modern opponents of instruments typically do not make the same assessment of instruments as these writers, who argued that God had allowed David the "evil" of using musical instruments in praise. While the Old Testament teaches that God specifically asked for musical instruments, modern concern is for worship based on the New Testament.
Since "a cappella" singing brought a new polyphony (more than one note at a time) with instrumental accompaniment, it is not surprising that Protestant reformers who opposed the instruments (such as Calvin and Zwingli) also opposed the polyphony. While Zwingli was burning organs in Switzerland – Luther called him a fanatic – the Church of England was burning books of polyphony.
Some Holiness Churches such as the Free Methodist Church opposed the use of musical instruments in church worship until the mid-20th century. The Free Methodist Church allowed for local church decision on the use of either an organ or piano in the 1943 Conference before lifting the ban entirely in 1955.
Jewish
While worship in the Temple in Jerusalem included musical instruments (), traditional Jewish religious services in the Synagogue, both before and after the last destruction of the Temple, did not include musical instruments given the practice of scriptural cantillation. The use of musical instruments is traditionally forbidden on the Sabbath out of concern that players would be tempted to repair (or tune) their instruments, which is forbidden on those days. (This prohibition has been relaxed in many Reform and some Conservative congregations.) Similarly, when Jewish families and larger groups sing traditional Sabbath songs known as zemirot outside the context of formal religious services, they usually do so a cappella, and Bar and Bat Mitzvah celebrations on the Sabbath sometimes feature entertainment by a cappella ensembles. During the Three Weeks musical instruments are prohibited. Many Jews consider a portion of the 49-day period of the counting of the omer between Passover and Shavuot to be a time of semi-mourning and instrumental music is not allowed during that time. This has led to a tradition of a cappella singing sometimes known as sefirah music.
The popularization of the Jewish chant may be found in the writings of the Jewish philosopher Philo, born 20 BCE. Weaving together Jewish and Greek thought, Philo promoted praise without instruments, and taught that "silent singing" (without even vocal cords) was better still. This view parted with the Jewish scriptures, where Israel offered praise with instruments by God's own command. The shofar is the only temple instrument still being used today in the synagogue, and it is only used from Rosh Chodesh Elul through the end of Yom Kippur. The shofar is used by itself, without any vocal accompaniment, and is limited to a very strictly defined set of sounds and specific places in the synagogue service. However, silver trumpets, as described in the Hebrew scriptures, have been made in recent years and used in prayer services at the Western Wall.http://www.jewishpress.com/news/silver-trumpets-pierce-the-heavens-in-prayer-rally-opposite-temple-mount/2016/03/23/
In the United States
The Hullabahoos, a popular a cappella group at the University of Virginia, were featured in the movie Pitch Perfect.
Peter Christian Lutkin, dean of the Northwestern University School of Music, helped popularize a cappella music in the United States by founding the Northwestern A Cappella Choir in 1906. The A Cappella Choir was "the first permanent organization of its kind in America."
A strong and prominent a cappella tradition was begun in the midwest part of the United States in 1911 by F. Melius Christiansen, a music faculty member at St. Olaf College in Northfield, Minnesota. The St. Olaf College Choir was established as an outgrowth of the local St. John's Lutheran Church, where Christiansen was organist and the choir was composed, at least partially, of students from the nearby St. Olaf campus. The success of the ensemble was emulated by other regional conductors, and a rich tradition of a cappella choral music was born in the region at colleges like Concordia College (Moorhead, Minnesota), Augustana College (Rock Island, Illinois), Wartburg College (Waverly, Iowa), Luther College (Decorah, Iowa), Gustavus Adolphus College (St. Peter, Minnesota), Augustana College (Sioux Falls, South Dakota), and Augsburg College (Minneapolis, Minnesota). The choirs typically range from 40 to 80 singers and are recognized for their efforts to perfect blend, intonation, phrasing and pitch in a large choral setting.
Major movements in modern a cappella over the past century include Barbershop and doo wop. The Barbershop Harmony Society, Sweet Adelines International, and Harmony Inc. host educational events including Harmony University, Directors University, and the International Educational Symposium, and international contests and conventions, recognizing international champion choruses and quartets.
These days, many a cappella groups can be found in high schools and colleges. There are amateur Barbershop Harmony Society and professional groups that sing a cappella exclusively. Although a cappella is technically defined as singing without instrumental accompaniment, some groups use their voices to emulate instruments; others are more traditional and focus on harmonizing. A cappella styles range from gospel music to contemporary to barbershop quartets and choruses.
A cappella music was popularized between the late 2000s and the mid 2010s with media hits such as the 2009–2014 TV show The Sing-Off, the musical Perfect Harmony, and the musical comedy film series Pitch Perfect.
Recording artists
In July 1943, as a result of the American Federation of Musicians boycott of US recording studios, the a cappella vocal group The Song Spinners had a best-seller with "Comin' In On A Wing And A Prayer". In the 1950s several recording groups, notably The Hi-Los and the Four Freshmen, introduced complex jazz harmonies to a cappella performances. The King's Singers are credited with promoting interest in small-group a cappella performances in the 1960s. Frank Zappa loved doo wop and a cappella, and in 1970 he released The Persuasions' first album on his label.http://www.discogs.com/Persuasions-Acappella/.../4540661 In 1983 an a cappella group known as The Flying Pickets had a Christmas 'number one' in the UK with a cover of Yazoo's (known in the US as Yaz) "Only You". A cappella music attained renewed prominence from the late 1980s onward, spurred by the success of Top 40 recordings by artists such as The Manhattan Transfer, Bobby McFerrin, Huey Lewis and the News, All-4-One, The Nylons, Backstreet Boys and Boyz II Men.
Contemporary a cappella includes many vocal groups and bands who add vocal percussion or beatboxing to create a pop/rock/gospel sound, in some cases very similar to bands with instruments. Examples of such professional groups include Straight No Chaser, Pentatonix, The House Jacks, Rockapella, Mosaic, Home Free and M-pact. There also remains a strong a cappella presence within Christian music, as some denominations purposefully do not use instruments during worship. Examples of such groups are Take 6, Glad and Acappella. Arrangements of popular music for small a cappella ensembles typically include one voice singing the lead melody, one singing a rhythmic bass line, and the remaining voices contributing chordal or polyphonic accompaniment.
A cappella can also describe the isolated vocal track(s) from a multitrack recording that originally included instrumentation. These vocal tracks may be remixed or put onto vinyl records for DJs, or released to the public so that fans can remix them. One such example is the a cappella release of Jay-Z's Black Album, which Danger Mouse mixed with The Beatles' White Album to create The Grey Album.
A cappella's growth is not limited to live performance, with hundreds of recorded a cappella albums produced over the past decade. As of December 2006, the Recorded A Cappella Review Board (RARB) had reviewed over 660 a cappella albums since 1994, and its popular discussion forum had over 900 users and 19,000 articles.
On their 1966 album titled Album, Peter, Paul and Mary included the song "Norman Normal". All the sounds on that song, both vocals and instruments, were created by Paul's voice, with no actual instruments used.
In 2013, an artist by the name Smooth McGroove rose to prominence with his style of a cappella music. He is best known for his a cappella covers of video game music tracks on YouTube.
In 2015, an a cappella version of Jerusalem by multi-instrumentalist Jacob Collier was selected for Beats by Dre "The Game Starts Here" for the England Rugby World Cup campaign.
Musical theater
A cappella has been used as the sole orchestration for original works of musical theater that have had commercial runs Off-Broadway (theaters in New York City with 99 to 500 seats) only four times. The first was Avenue X which opened on 28 January 1994 and ran for 77 performances. It was produced by Playwrights Horizons with book by John Jiler, music and lyrics by Ray Leslee. The musical style of the show's score was primarily Doo-Wop as the plot revolved around Doo-Wop group singers of the 1960s.
In 2001, The Kinsey Sicks produced and starred in the critically acclaimed off-Broadway hit, "DRAGAPELLA! Starring the Kinsey Sicks" at New York's legendary Studio 54. That production received a nomination for a Lucille Lortel award as Best Musical and a Drama Desk nomination for Best Lyrics. It was directed by Glenn Casale with original music and lyrics by Ben Schatz.
The a cappella musical Perfect Harmony, a comedy about two high school a cappella groups vying to win the National championship, made its Off Broadway debut at Theatre Row’s Acorn Theatre on 42nd Street in New York City in October, 2010 after a successful out-of-town run at the Stoneham Theatre, in Stoneham, Massachusetts. Perfect Harmony features the hit music of The Jackson 5, Pat Benatar, Billy Idol, Marvin Gaye, Scandal, Tiffany, The Romantics, The Pretenders, The Temptations, The Contours, The Commodores, Tommy James & the Shondells and The Partridge Family, and has been compared to a cross between Altar Boyz and The 25th Annual Putnam County Spelling Bee.
The fourth a cappella musical to appear Off-Broadway, In Transit, premiered 5 October 2010 and was produced by Primary Stages with book, music, and lyrics by Kristen Anderson-Lopez, James-Allen Ford, Russ Kaplan, and Sara Wordsworth. Set primarily in the New York City subway system, its score features an eclectic mix of musical genres (including jazz, hip hop, Latin, rock, and country). In Transit incorporates vocal beatboxing into its contemporary a cappella arrangements through the use of a subway beatboxer character, a role performed by beatboxer and actor Chesney Snow in the 2010 Primary Stages production. According to the show's website at the time, it was scheduled to reopen for an open-ended commercial run in the fall of 2011. In 2011 the production received four Lucille Lortel Award nominations (including Outstanding Musical), Outer Critics Circle and Drama League nominations, and five Drama Desk nominations, including Outstanding Musical, winning for Outstanding Ensemble Performance.
In December 2016, In Transit became the first a cappella musical on Broadway.http://www.playbill.com/article/in-transit-new-a-cappella-musical-opens-on-broadway
Barbershop style
Barbershop music is one of several uniquely American art forms. The earliest reports of this style of a cappella music involved African Americans, and the earliest documented quartets all began in barbershops. In 1938, the first formal men's barbershop organization was formed, known as the Society for the Preservation and Encouragement of Barber Shop Quartet Singing in America (S.P.E.B.S.Q.S.A.); in 2004 it officially changed its public name to the Barbershop Harmony Society (BHS). Today the BHS has over 22,000 members in approximately 800 chapters across the United States, and the barbershop style has spread around the world, with organizations in many other countries. The Barbershop Harmony Society provides a highly organized competition structure for a cappella quartets and choruses singing in the barbershop style.
In 1945, the first formal women's barbershop organization, Sweet Adelines, was formed. In 1953 Sweet Adelines became an international organization, although it didn't change its name to Sweet Adelines International until 1991. The membership of nearly 25,000 women, all singing in English, includes choruses in most of the fifty United States as well as in Australia, Canada, England, Finland, Germany, Ireland, Japan, New Zealand, Scotland, Sweden, Wales and the Netherlands. Headquartered in Tulsa, Oklahoma, the organization encompasses more than 1,200 registered quartets and 600 choruses.
In 1959, a second women's barbershop organization, Harmony, Inc., started as a breakaway from Sweet Adelines due to ideological differences. Based on democratic principles which continue to this day, Harmony, Inc. is smaller than its counterpart, but has an atmosphere of friendship and competition. With about 2,500 members in the United States and Canada, Harmony, Inc. uses the same contest rules as the Barbershop Harmony Society. Harmony, Inc. is registered in Providence, Rhode Island.
Amateur and high school
The popularity of a cappella among high schools and amateurs was revived by television shows and movies such as Glee and Pitch Perfect. High school groups have conductors or student leaders who keep the tempo for the group.
In other countries
Sri Lanka
Composer Dinesh Subasinghe became the first Sri Lankan to write a cappella pieces for SATB choirs. He wrote "The Princes of the Lost Tribe" and "Ancient Queen of Somawathee" for Menaka De Shabandu and Bridget Halpe's choirs, respectively, based on historical incidents in ancient Sri Lanka. Voice Print is also a professional a cappella music group in Sri Lanka.http://www.music.lk/showcase.php?search_keyword=voice+print&button=Search
Sweden
The European a cappella tradition is especially strong in the countries around the Baltic and perhaps most so in Sweden as described by Richard Sparks in his doctoral thesis The Swedish Choral Miracle in 2000.
Over the last 25 years, Swedish a cappella choirs have won around a quarter of the annual, prestigious European Grand Prix for Choral Singing (EGP), which despite its name is open to choirs from all over the world.
The reasons for the strong Swedish dominance are, as Richard Sparks explains, manifold. Suffice it to say here that there is a long-standing choral tradition; that an unusually large proportion of the population (5% is often cited) regularly sings in choirs; that the Swedish choral director Eric Ericson had an enormous impact on a cappella choral development, not only in Sweden but around the world; and that there are a large number of very popular primary and secondary music schools with high, audition-based admission standards, which combine a rigid academic regimen with high-level choral singing on every school day, a system that started with Adolf Fredrik's Music School in Stockholm in 1939 and has since spread across the country.
United Kingdom
thumb|The Oxford Alternotives, the oldest a cappella group at the University of Oxford in the UK
thumb|The Sweet Nothings are one of the University of Exeter's eight a cappella groups, and one of the oldest and most successful all-female groups in the UK.
A cappella has gained attention in the UK in recent years, with many groups forming at British universities by students seeking an alternative singing pursuit to traditional choral and chapel singing. This movement has been bolstered by organisations such as The Voice Festival UK.
Collegiate
It is not clear exactly where collegiate a cappella began. The Rensselyrics of Rensselaer Polytechnic Institute (formerly known as the RPI Glee Club), established in 1873, is perhaps the oldest known collegiate a cappella group. However, the longest continuously singing group is probably The Whiffenpoofs of Yale University, which was formed in 1909 and once included Cole Porter as a member. Collegiate a cappella groups grew throughout the 20th century. Some notable historical groups formed along the way include Colgate University's The Colgate 13 (1942), Dartmouth College's Aires (1946), Cornell University's Cayuga's Waiters (1949) and The Hangovers (1968), the University of Maine Maine Steiners (1958), the Columbia University Kingsmen (1949), the Jabberwocks of Brown University (1949), and the University of Rochester YellowJackets (1956). All-women a cappella groups followed shortly, frequently as a parody of the men's groups: the Smiffenpoofs of Smith College (1936), The Shwiffs of Connecticut College (The She-Whiffenpoofs, 1944), and The Chattertocks of Brown University (1951). A cappella groups exploded in popularity beginning in the 1990s, fueled in part by a change in style popularized by the Tufts University Beelzebubs and the Boston University Dear Abbeys. The new style used voices to emulate modern rock instruments, including vocal percussion/"beatboxing". Some larger universities now have multiple groups. Groups often join one another in on-campus concerts, such as the Georgetown Chimes' Cherry Tree Massacre, a three-weekend a cappella festival held each February since 1975, where over a hundred collegiate groups have appeared, as well as International Quartet Champions The Boston Common and the contemporary commercial a cappella group Rockapella. Co-ed groups have produced many up-and-coming and major artists, including John Legend, an alumnus of the Counterparts at the University of Pennsylvania, and Sara Bareilles, an alumna of Awaken A Cappella at the University of California, Los Angeles. Mira Sorvino is an alumna of the Harvard-Radcliffe Veritones of Harvard College, where she had the solo on "Only You" by Yaz.
A cappella is gaining popularity among South Asians with the emergence of primarily Hindi-English college groups. The first South Asian a cappella group was Penn Masala, founded in 1996 at the University of Pennsylvania. Co-ed South Asian a cappella groups are also gaining in popularity; the first was Anokha, formed in 2001 at the University of Maryland. Dil Se, another co-ed group, hosts the annual "Anahat" competition at the University of California, Berkeley. Maize Mirchi, the co-ed a cappella group from the University of Michigan, hosts "Sa Re Ga Ma Pella", an annual South Asian a cappella invitational with various groups from the Midwest. Another South Asian group from the Midwest is Chai Town, based at the University of Illinois at Urbana–Champaign.
Jewish-interest groups such as Tufts University's Shir Appeal, University of Chicago's Rhythm and Jews, Binghamton University's Kaskeset, Ohio State University's Meshuganotes, Rutgers University's Kol Halayla, New York University's Ani V'Ata and Yale University's Magevet are also gaining popularity across the U.S.
Increased interest in modern a cappella (particularly collegiate a cappella) can be seen in the growth of awards such as the Contemporary A Cappella Recording Awards (overseen by the Contemporary A Cappella Society) and competitions such as the International Championship of Collegiate A Cappella for college groups and the Harmony Sweepstakes for all groups. In December 2009, a new television competition series called The Sing-Off aired on NBC. The show featured eight a cappella groups from the United States and Puerto Rico vying for the prize of $100,000 and a recording contract with Epic Records/Sony Music. The show was judged by Ben Folds, Shawn Stockman, and Nicole Scherzinger and was won by an all-male group from Puerto Rico called Nota. The show returned for a second and third season, won by Committed and Pentatonix, respectively.
Each year, hundreds of collegiate a cappella groups submit their strongest songs in a competition to be on The Best of College A Cappella (BOCA), an album compilation of tracks from the best college a cappella groups around the world. The album is produced by Varsity Vocals – which also produces the International Championship of Collegiate A Cappella – and Deke Sharon. A group chosen to be on the BOCA album earns much credibility among the a cappella community.
Collegiate a cappella groups may also submit their tracks to Voices Only, a two-disc series released at the beginning of each school year. A Voices Only album has been released every year since 2005.
In addition, all women's a cappella groups can send their strongest song tracks to the Women’s A Cappella Association (WACA) for its annual best of women's a cappella album. WACA offers another medium for women's voices to receive recognition and has released an album every year since 2014, featuring women's groups from across the United States.
Emulating instruments
In addition to singing words, some a cappella singers also emulate instrumentation by reproducing instrumental sounds with their vocal cords and mouth. One of the earliest 20th-century practitioners of this method was The Mills Brothers, whose early recordings of the 1930s clearly stated on the label that all instrumentation was done vocally. More recently, "Twilight Zone" by 2 Unlimited was sung a cappella to the instrumentation on the comedy television series Tompkins Square. Another famous example of emulating instrumentation instead of singing the words is the theme song for The New Addams Family series on Fox Family Channel (now ABC Family). Groups such as Vocal Sampling and Undivided emulate Latin rhythms a cappella. In the 1960s, the Swingle Singers used their voices to emulate musical instruments in Baroque and Classical music. Vocal artist Bobby McFerrin is famous for his instrumental emulation. The a cappella group Naturally Seven recreates entire songs using vocal tones for every instrument.
The Swingle Singers used nonsense words to sound like instruments, and have also been known to produce non-verbal imitations of musical instruments. As with the other groups, examples of their music can be found on YouTube. Beatboxing, more accurately known as vocal percussion, is a technique used in a cappella music popularized by the hip-hop community, where rap is also often performed a cappella. The advent of vocal percussion added new dimensions to the a cappella genre and has become very prevalent in modern arrangements. Jazz vocalist Petra Haden used a four-track recorder to produce an a cappella version of The Who Sell Out, including the instruments and fake advertisements, on her album Petra Haden Sings: The Who Sell Out in 2005. Haden has also released a cappella versions of Journey's "Don't Stop Believin'", The Beach Boys' "God Only Knows" and Michael Jackson's "Thriller".
Christian rock group Relient K recorded the song "Plead the Fifth" a cappella on its album Five Score and Seven Years Ago. The group recorded lead singer Matt Thiessen making drum noises and played them with an electronic drum machine to record the song.
The German metal band van Canto uses vocal noises to imitate guitars on covers of well-known rock and metal songs (such as "Master of Puppets" by Metallica) as well as original compositions. Although they are generally classified as a cappella metal, the band also includes a drummer, and uses amplifiers on some songs to distort the voice to sound more like an electric guitar.
See also
Barbershop music – four-part a cappella (in close harmony)
Collegiate a cappella
The Contemporary A Cappella Society
Home Free – quintet, winners of NBC's Sing-Off Season 4
Klapa – a cappella style found in Dalmatia, Croatia
List of collegiate a cappella groups
List of professional a cappella groups
List of university a cappella groups in the United Kingdom
Pentatonix – quintet, winners of NBC's Sing-Off Season 3 and Grammy-winning a cappella group
Perfect Harmony – an a cappella musical comedy
Pitch Perfect – a 2012 film widely focusing on an a cappella talent competition
Straight No Chaser – 10-man a cappella group founded at Indiana University
Sweet Adelines International
Notes
Footnotes
References
External links
Contemporary A Cappella Society of America (CASA)
Harmony Sweepstakes A Cappella Festival
A Cappella News
Primarily A Cappella
The Recorded A Cappella Review Board (RARB)
In Transit the Musical
Melbourne A Cappella Festival
British Contemporary A Cappella Society
History of Barbershop
Richmond, Virginia | Richmond ( ) is the capital of Virginia, in the United States. It is the center of the Richmond Metropolitan Statistical Area (MSA) and the Greater Richmond Region.
It was incorporated in 1742, and has been an independent city since 1871.
As of the 2010 census, the population was 204,214; in 2015, the population was estimated to be 220,289, making Richmond the fourth-most populous city in Virginia. The Richmond Metropolitan Area has a population of 1,260,029, the third-most populous metro area in the state.
Richmond is located at the fall line of the James River, west of Williamsburg, east of Charlottesville, and south of Washington, D.C. Surrounded by Henrico and Chesterfield counties, the city is located at the intersections of Interstate 95 and Interstate 64, and encircled by Interstate 295 and Virginia State Route 288. Major suburbs include Midlothian to the southwest, Glen Allen to the north and west, Short Pump to the west and Mechanicsville to the northeast.
The site of Richmond had been an important village of the Powhatan Confederacy, and was briefly settled by English colonists from Jamestown in 1609, and in 1610–1611. The present city of Richmond was founded in 1737. It became the capital of the Colony and Dominion of Virginia in 1780. During the Revolutionary War period, several notable events occurred in the city, including Patrick Henry's "Give me liberty or give me death" speech in 1775 at St. John's Church, and the passage of the Virginia Statute for Religious Freedom written by Thomas Jefferson. During the American Civil War, Richmond served as the capital of the Confederate States of America. The city entered the 20th century with one of the world's first successful electric streetcar systems, as well as a national hub of African-American commerce and culture, the Jackson Ward neighborhood.
Richmond's economy is primarily driven by law, finance, and government, with federal, state, and local governmental agencies, as well as notable legal and banking firms, located in the downtown area. The city is home to both the United States Court of Appeals for the Fourth Circuit, one of 13 United States courts of appeals, and the Federal Reserve Bank of Richmond, one of 12 Federal Reserve Banks. Dominion Resources and MeadWestvaco, Fortune 500 companies, are headquartered in the city, with others in the metropolitan area.
History
Colonial era
After the first permanent English-speaking settlement was established in April 1607, at Jamestown, Virginia, Captain Christopher Newport led explorers northwest up the James River, to an area that was inhabited by Powhatan Native Americans.
In 1737, planter William Byrd II commissioned Major William Mayo to lay out the original town grid. Byrd named the city "Richmond" after the English town of Richmond near (and now part of) London, because the view of the James River was strikingly similar to the view of the River Thames from Richmond Hill in England, where he had spent time during his youth. The settlement was laid out in April 1737, and was incorporated as a town in 1742.
Revolution and early United States
thumb|Patrick Henry delivered his "Liberty or Death" speech at St. John's Church in Richmond, helping to ignite the American Revolution
In 1775, Patrick Henry delivered his famous "Give me Liberty or Give me Death" speech in St. John's Church in Richmond, crucial for deciding Virginia's participation in the First Continental Congress and setting the course for revolution and independence.Grafton, John. "The Declaration of Independence and Other Great Documents of American History: 1775–1864." 2000, Courier Dover Publications, pp. 1–4. On April 18, 1780, the state capital was moved from the colonial capital of Williamsburg to Richmond, to provide a more centralized location for Virginia's increasing westerly population, as well as to isolate the capital from British attack."April dates in Virginia history." Virginia Historical Society. Retrieved on July 11, 2007. The latter motive proved to be in vain, and in 1781, under the command of Benedict Arnold, Richmond was burned by British troops, causing Governor Thomas Jefferson to flee as the Virginia militia, led by Sampson Mathews, defended the city.
Richmond recovered quickly from the war, and by 1782 was once again a thriving city.Morrissey, Brendan. "Yorktown 1781: The World Turned Upside Down." Published 1997, Osprey Publishing, pp. 14–16. In 1786, the Virginia Statute for Religious Freedom (drafted by Thomas Jefferson) was passed at the temporary capitol in Richmond, providing the basis for the separation of church and state, a key element in the development of the freedom of religion in the United States.Peterson, Merrill D.; Vaughan, Robert C. The Virginia Statute for Religious Freedom: Its Evolution and Consequences in American History. Published 1988, Cambridge University Press. Retrieved on July 11, 2007. A permanent home for the new government, the Virginia State Capitol building, was designed by Thomas Jefferson with the assistance of Charles-Louis Clérisseau, and was completed in 1788.
After the American Revolutionary War, Richmond emerged as an important industrial center. To facilitate the transfer of cargo from the flat-bottomed James River bateaux above the fall line to the ocean-faring ships below, George Washington helped design the James River and Kanawha Canal from Westham to Richmond, in the 18th century to bypass Richmond's rapids, with the intent of providing a water route across the Appalachians to the Kanawha River. The legacy of the canal boatmen is represented by the figure in the center of the city flag. As a result of this and ample access to hydropower due to the falls, Richmond became home to some of the largest manufacturing facilities in the country, including iron works and flour mills, the largest facilities of their kind in the South. The resistance to the slave trade was growing by the mid-nineteenth century; in one famous case in 1848, Henry "Box" Brown made history by having himself nailed into a small box and shipped from Richmond to abolitionists in Philadelphia, Pennsylvania, escaping slavery.Switala, William J. "The Underground Railroad in Pennsylvania." Published 2001, Stackpole Books. pp. 1–4.
Civil War
right|thumb|Retreating Confederates burned one-fourth of Richmond in April 1865
On April 17, 1861, five days after the Confederate attack on Fort Sumter, the legislature voted to secede from the United States and joined the Confederacy. Official action came in May, after the Confederacy promised to move its national capital to Richmond. The city was at the end of a long supply line, which made it somewhat difficult to defend, although supplies continued to reach the city by canal and wagon for years, since it was protected by the Army of Northern Virginia and arguably the Confederacy's best troops and commanders.Bruce Levine, The Fall of the House of Dixie (New York, Random House 2014) pp. 269–70 It became the main target of Union armies, especially in the campaigns of 1862 and 1864–65.
In addition to Virginia and Confederate government offices and hospitals, a railroad hub, and one of the South's largest slave markets, Richmond had the largest factory in the Confederacy, the Tredegar Iron Works, which turned out artillery and other munitions, including the 723 tons of armor plating that covered the Virginia, the world's first ironclad used in war, as well as much of the Confederates' heavy ordnance machinery.Time-Life Books. The Blockade: Runners and Raiders. Published 1983, Time-Life, Inc. ISBN 978-0-8094-4709-1 The Confederate Congress shared quarters with the Virginia General Assembly in the Virginia State Capitol, with the Confederacy's executive mansion, the "White House of the Confederacy", located two blocks away. The Seven Days Battles followed in late June and early July 1862, during which Union General McClellan threatened to take Richmond but ultimately failed.
Three years later, as March 1865 ended, the Confederate capital became indefensible. On March 25, Confederate General John B. Gordon's desperate attack on Fort Stedman east of Petersburg failed. On April 1, General Philip Sheridan, assigned to interdict the Southside Railroad, met brigades commanded by George Pickett at the Five Forks junction, smashing them, taking thousands of prisoners, and encouraging General Grant to order a general advance. When the Union Sixth Corps broke through Confederate lines on Boydton Plank Road south of Petersburg, Confederate casualties exceeded 5,000, or about a tenth of Lee's defending army. General Lee then informed Jefferson Davis that he was about to evacuate Richmond.Levine pp. 271–72
Davis and his cabinet left the city by train that night, as government officials burned documents and departing Confederate troops burned tobacco and other warehouses to deny their contents to the victors. On April 2, 1865, General Godfrey Weitzel, commander of the 25th corps of the United States Colored Troops, accepted the city's surrender from the mayor and group of leading citizens who remained.Levine, pp. 272–73 The Union troops eventually managed to stop the raging fires but about 25% of the city's buildings were destroyed.Mike Wright, City Under Siege: Richmond in the Civil War (Rowman & Littlefield, 1995)
President Abraham Lincoln visited General Grant at Petersburg on April 3, and took a launch to Richmond the next day, while Jefferson Davis attempted to organize his Confederate government at Danville. Lincoln met Confederate assistant secretary of War John A. Campbell, and handed him a note inviting Virginia's legislature to end their rebellion. After Campbell spun the note to Confederate legislators as a possible end to the Emancipation Proclamation, Lincoln rescinded his offer and ordered General Weitzel to prevent the Confederate state legislature from meeting. Union forces killed, wounded or captured 8,000 Confederate troops at Saylor's Creek southwest of Petersburg on April 6. General Lee continued to reject General Grant's surrender suggestion until Sheridan's infantry and cavalry appeared in front of his retreating army on April 8. He surrendered his remaining approximately 10,000 troops at Appomattox Court House the following morning.Levine pp. 275–78 Jefferson Davis retreated to North Carolina, then further south. After Lincoln was assassinated a few days later, the White House rejected the surrender terms negotiated by General Sherman and envoys of North Carolina governor Zebulon Vance, terms which failed to mention slavery but otherwise were generally the same as those agreed to by Grant and Lee, i.e. rebel troops could keep their horses and their arms; according to at least one school of Civil War historians, this was the main reason the post-Lincoln White House rejected them. Jefferson Davis was captured on May 10 near Irwinville, Georgia, and taken back to Virginia, where he was charged with treason and imprisoned for two years at Fort Monroe until freed on bail.Levine pp. 279–82
Postbellum
Within a decade of the Civil War, Richmond emerged from the smoldering rubble to resume its position as an economic powerhouse, with iron-front buildings and massive brick factories. Canal traffic peaked in the 1860s and slowly gave way to railroads, allowing Richmond to become a major railroad crossroads, eventually including the site of the world's first triple railroad crossing.Dunaway, Wayland F. "History of the James River and Kanawha Company." Published 1922, Columbia University. Retrieved on July 11, 2007. Tobacco warehousing and processing continued to play a role, boosted by the world's first cigarette-rolling machine, invented by James Albert Bonsack of Roanoke in 1880/81. Contributing to Richmond's resurgence was the first successful electrically powered trolley system in the United States, the Richmond Union Passenger Railway. Designed by electric power pioneer Frank J. Sprague, the trolley system opened its first line in 1888, and electric streetcar lines rapidly spread to other cities across the country.Smil, Vaclav. Creating the Twentieth Century: Technical Innovations of 1867–1914 and Their Lasting Impact. Published 2005, Oxford University Press, p. 94. ISBN 978-0-19-516874-7 Sprague's system used an overhead wire and trolley pole to collect current, with electric motors on the car's trucks.Harwood, Jr., Herbert H. Baltimore Streetcars: The Postwar Years. Published 2003, Johns Hopkins University Press, p. vii. ISBN 978-0-8018-7190-0 In Richmond, the transition from streetcars to buses began in May 1947 and was completed on November 25, 1949."Transit Topics." Published November 27, 1949 and November 30, 1957, Virginia Transit Company, Richmond, Virginia.
20th century
right|thumb|By the early 20th century, Richmond had an extensive network of electric streetcars, as shown here crossing the Mayo Bridge across the James River, ca. 1917
By the beginning of the 20th century, the city's population had reached 85,050, making it the most densely populated city in the Southern United States.Gibson, Campbell. "Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990." United States Census Bureau, June 1998. Retrieved on July 11, 2007. In 1900, the Census Bureau reported Richmond's population as 62.1% white and 37.9% black. Freed slaves and their descendants created a thriving African-American business community, and the city's historic Jackson Ward became known as the "Wall Street of Black America." In 1903, African-American businesswoman and financier Maggie L. Walker chartered St. Luke Penny Savings Bank and served as its first president, becoming the first female bank president in the United States. Today, the bank is called the Consolidated Bank and Trust Company, and it is the oldest surviving African-American bank in the U.S.Felder, Deborah G. "A Century of Women: The Most Influential Events in Twentieth-Century Women's History." 1999, Citadel Press, p. 338. ISBN 978-1-55972-485-2 Other figures from this time included John Mitchell, Jr. In 1910, the former city of Manchester was consolidated with the city of Richmond, and in 1914, the city annexed the Barton Heights, Ginter Park, and Highland Park areas of Henrico County.Chesson, Michael B. "Richmond After the War, 1865 to 1890." Published 1981, Virginia State Library, p. 177. In May 1914, Richmond became the headquarters of the Fifth District of the Federal Reserve Bank.
Several major performing arts venues were constructed during the 1920s, including what are now the Landmark Theatre, Byrd Theatre, and Carpenter Theatre. The city's first radio station, WRVA, began broadcasting in 1925. WTVR-TV (CBS 6), the first television station in Richmond, was the first television station south of Washington, D.C.Tyler-McGraw, Marie. "At the Falls: Richmond, Virginia, and Its People." Published 1994, UNC Press, p. 257. ISBN 978-0-8078-4476-2
Between 1963 and 1965, there was a "downtown boom" that led to the construction of more than 700 buildings in the city. In 1968, Virginia Commonwealth University was created by the merger of the Medical College of Virginia with the Richmond Professional Institute."About VCU." Virginia Commonwealth University. Retrieved on July 11, 2007. In 1970, Richmond's borders expanded farther to the south. After several years of court cases in which Chesterfield County fought annexation, more than 47,000 people who once were Chesterfield County residents found themselves within the city's perimeter on January 1, 1970."City of Richmond v. United States, 422 U.S. 358." 1975. United States Supreme Court. Retrieved on July 11, 2007. In 1996, still-sore tensions arose amid the controversy over adding a statue of African American Richmond native and tennis star Arthur Ashe to the famed series of statues of Confederate heroes of the Civil War on Monument Avenue.Edds, Margaret; Little, Robert. "Why Richmond voted to Honor Arthur Ashe on Monument Avenue. The Final, Compelling Argument for Supporters: A Street Reserved for Confederate Heroes had no Place in this City." The Virginian-Pilot. July 19, 1995. After several months of controversy, the bronze statue of Ashe was finally completed on Monument Avenue, facing the opposite direction from the Confederate heroes, on July 10, 1996.Staff Writer. "Arthur Ashe Statue Set Up in Richmond at Last." New York Times. July 5, 1996. Retrieved on January 20, 2010.
A multimillion-dollar flood wall was completed in 1995 to protect low-lying areas of the city from the oft-rising waters of the James River. As a result, businesses in the River District grew rapidly, and today the area is home to much of Richmond's entertainment, dining and nightlife activity, bolstered by the creation of a Canal Walk along the city's former industrial canals."River District History." Richmond River District. Retrieved on July 11, 2007."The Canal Walk." Richmond.com. July 31, 2009. Retrieved on January 20, 2010.
Geography and climate
thumb|The Richmond area, seen from the International Space Station in early-April 2013.
Richmond is located at (37.538, −77.462). According to the United States Census Bureau, about 4.3% of the city's total area is water, the remainder being land. The city is located in the Piedmont region of Virginia, at the highest navigable point of the James River. The Piedmont region is characterized by relatively low, rolling hills, and lies between the low, flat Tidewater region and the Blue Ridge Mountains. Significant bodies of water in the region include the James River, the Appomattox River, and the Chickahominy River.
The Richmond–Petersburg Metropolitan Statistical Area (MSA), the 44th largest in the United States, includes the independent cities of Richmond, Colonial Heights, Hopewell, and Petersburg, as well as the counties of Charles City, Chesterfield, Dinwiddie, Goochland, Hanover, Henrico, New Kent, Powhatan, and Prince George."The Richmond-Petersburg MSA at a Glance." Richmond Regional Planning District Commission. January 2006. Retrieved on July 12, 2007. The total population of the Richmond–Petersburg MSA was 1,258,251.
Cityscape
thumb|Richmond is often subdivided into North Side, Southside, East End and West End
Richmond's original street grid, laid out in 1737, included the area between what are now Broad, 17th, and 25th Streets and the James River. Modern Downtown Richmond is located slightly farther west, on the slopes of Shockoe Hill. Nearby neighborhoods include Shockoe Bottom, the historically significant and low-lying area between Shockoe Hill and Church Hill, and Monroe Ward, which contains the Jefferson Hotel. Richmond's East End includes neighborhoods like rapidly gentrifying Church Hill, home to St. John's Church, as well as poorer areas like Fulton, Union Hill, and Fairmont, and public housing projects like Mosby Court, Whitcomb Court, Fairfield Court, and Creighton Court closer to Interstate 64."Neighborhood Guide." City of Richmond. Retrieved on July 12, 2007.
The area between Belvidere Street, Interstate 195, Interstate 95, and the river, which includes Virginia Commonwealth University, is socioeconomically and architecturally diverse. North of Broad Street, the Carver and Newtowne West neighborhoods are demographically similar to neighboring Jackson Ward, with Carver experiencing some gentrification due to its proximity to VCU. The affluent area between the Boulevard, Main Street, Broad Street, and VCU, known as the Fan, is home to Monument Avenue, an outstanding collection of Victorian architecture, and many students. West of the Boulevard is the Museum District, the location of the Virginia Historical Society and the Virginia Museum of Fine Arts. South of the Downtown Expressway are Byrd Park, Maymont, Hollywood Cemetery, the predominantly black working class Randolph neighborhood, and white working class Oregon Hill. Cary Street between Interstate 195 and the Boulevard is a popular commercial area called Carytown.
thumb|left|View of the Carillon from across the James River
Richmond's Northside is home to numerous listed historic districts.http://www.dhr.virginia.gov/tax_credits/Historic_District_Maps/RichmondNorth_20120926.pdf Neighborhoods such as Chestnut Hill-Plateau and Barton Heights began to develop at the end of the 19th century when the new streetcar system made it possible for people to live on the outskirts of town and still commute to jobs downtown. Other prominent Northside neighborhoods include Azalea, Barton Heights, Bellevue, Chamberlayne, Ginter Park, Highland Park, and Rosedale.
Farther west is the affluent, suburban West End. Windsor Farms is among its best-known sections. The West End also includes middle- to lower-income neighborhoods, such as Laurel, Farmington and the areas surrounding the Regency Mall. More affluent areas include Glen Allen, Tuckahoe, and Short Pump, which can all be found north and northwest of the city. The University of Richmond and the Country Club of Virginia are also found here, located just inside the city limits.
The portion of the city south of the James River is known as the Southside. Neighborhoods in the city's Southside area range from affluent and middle class suburban neighborhoods Westover Hills, Forest Hill, Southampton, Stratford Hills, Oxford, Huguenot Hills, Hobby Hill, and Woodland Heights to the impoverished Manchester and Blackwell areas, the Hillside Court housing projects, and the ailing Jefferson Davis Highway commercial corridor. Other Southside neighborhoods include Fawnbrook, Broad Rock, Cherry Gardens, Cullenwood, and Beaufont Hills. Much of Southside developed a suburban character as part of Chesterfield County before being annexed by Richmond, most notably in 1970.
Climate
thumb|Flooding of Old Manchester during Hurricane Agnes, 1972
Richmond has a humid subtropical climate (Köppen Cfa), with hot and humid summers and generally cool winters. The mountains to the west act as a partial barrier to outbreaks of cold, continental air in winter; Arctic air is delayed long enough to be modified, then further warmed as it subsides in its approach to Richmond. The open waters of the Chesapeake Bay and Atlantic Ocean contribute to the humid summers and mild winters. The coldest weather normally occurs from late December to early February, with highs at or below the freezing mark on an average of 6.0 days per year. Downtown areas and suburbs to the east of Richmond straddle the border between USDA Hardiness Zones 7a and 7b due to the urban heat island effect, while surrounding suburban and rural areas to the west are in Zone 6b or 7a. Temperatures seldom fall to 0 °F or below; the most recent subzero (°F) reading occurred on January 28, 2000."FAQs & HOLIDAY CLIMATOLOGY RICHMOND." National Oceanic and Atmospheric Administration. 1897-4/10/2010. July is the warmest month, with high temperatures reaching or exceeding 90 °F on approximately 43 days of the year; while hotter spells are not uncommon, they do not occur every year."National Oceanic and Atmospheric Administration." The record low temperature was set on January 19, 1940 and the record high on August 6, 1918.
Precipitation is rather uniformly distributed throughout the year. However, dry periods lasting several weeks do occur, especially in autumn, when long periods of pleasant, mild weather are most common. There is considerable variability in total monthly amounts from year to year, so that no one month can be depended upon to be normal. Snow has been recorded in seven of the twelve months. Significant snowfalls within a 24-hour period occur on average about once per year, but annual snowfall is usually light. Snow typically remains on the ground only one or two days at a time, though it remained for 16 days in 2010 (January 30 to February 14). Ice storms (freezing rain or glaze) are not uncommon, but they are seldom severe enough to do any considerable damage.
The James River reaches tidewater at Richmond, where flooding may occur in every month of the year, most frequently in March and least in July. Hurricanes and tropical storms have been responsible for most of the flooding during the summer and early fall months. Hurricanes passing near Richmond have produced record rainfalls. In 1955, three hurricanes brought record rainfall to Richmond within a six-week period; the most noteworthy of these were Hurricane Connie and Hurricane Diane, which brought heavy rains five days apart. In 2004, the downtown area suffered extensive flood damage after the remnants of Hurricane Gaston dumped torrential rainfall on the city."Flooding devastates historic Richmond, VA." MSNBC. September 1, 2004.
Damaging storms occur mainly from snow and freezing rain in winter and from hurricanes, tornadoes, and severe thunderstorms in other seasons. Damage may be from wind, flooding, or rain, or from any combination of these. Tornadoes are infrequent but some notable occurrences have been observed within the Richmond area.
Based on the 1981–2010 period, the average first occurrence of at or below freezing temperatures in the fall is November 4 and the average last occurrence in the spring is April 5.
Demographics
As of the 2010 United States Census, there were 204,214 people residing in the city. 50.6% were Black or African American, 40.8% White, 5.0% Asian, 0.3% Native American, 0.1% Pacific Islander, 3.6% of some other race and 2.3% of two or more races. 6.3% were Hispanic or Latino (of any race).
As of the census of 2000, there were 197,790 people, 84,549 households, and 43,627 families residing in the city. The population density was 3,292.6 people per square mile (1,271.3/km²). There were 92,282 housing units at an average density of 1,536.2 per square mile (593.1/km²). The racial makeup of the city was 38.3% White, 57.2% African American, 0.2% Native American, 1.3% Asian, 0.1% Pacific Islander, 1.5% from other races, and 1.5% from two or more races. Hispanic or Latino of any race were 2.6% of the population.
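The density figures follow directly from the census count and the land area; as a quick worked check of the unit conversion (using 1 mi² = 2.589988 km², and treating the implied land area as an inference rather than a figure stated here):

$$\frac{3{,}292.6\ \text{people/mi}^2}{2.589988\ \text{km}^2/\text{mi}^2} \approx 1{,}271.3\ \text{people/km}^2, \qquad \text{land area} \approx \frac{197{,}790}{3{,}292.6\ \text{people/mi}^2} \approx 60.1\ \text{mi}^2$$

The same conversion reproduces the housing-unit density: 1,536.2 per mi² divided by 2.589988 gives approximately 593.1 per km².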
There were 84,549 households out of which 23.1% had children under the age of 18 living with them, 27.1% were married couples living together, 20.4% had a female householder with no husband present, and 48.4% were non-families. 37.6% of all households were made up of individuals and 10.9% had someone living alone who was 65 years of age or older. The average household size was 2.21 and the average family size was 2.95.
In the city the age distribution of the population shows 21.8% under the age of 18, 13.1% from 18 to 24, 31.7% from 25 to 44, 20.1% from 45 to 64, and 13.2% who were 65 years of age or older. The median age was 34 years. For every 100 females there were 87.1 males. For every 100 females age 18 and over, there were 83.5 males.
The median income for a household in the city was $31,121, and the median income for a family was $38,348. Males had a median income of $30,874 versus $25,880 for females. The per capita income for the city was $20,337. About 17.1% of families and 21.4% of the population were below the poverty line, including 32.9% of those under age 18 and 15.8% of those age 65 or over.
Crime
During the late 1980s and early 1990s, Richmond experienced a spike in overall crime, in particular, the city's murder rate. The city had 93 murders for the year of 1985, with a murder rate of 41.9 killings committed per 100,000 residents. Over the next decade, the city saw a major increase in total homicides. In 1990 there were 114 murders, for a murder rate of 56.1 killings per 100,000 residents. There were 120 murders in 1995, resulting in a murder rate of 59.1 killings per 100,000 residents, one of the highest in the United States.
In 2004, Morgan Quitno Press ranked Richmond as the ninth (out of 354) most dangerous city in the United States. In 2005, Richmond was ranked as the fifth most dangerous city overall and the 12th most dangerous metropolitan area in the United States. The following year, Richmond saw a decline in crime, ranking as the 15th most dangerous city in the United States. By 2008, Richmond's position on the list had fallen to 49th. By 2012, Richmond was no longer in the 'top' 200.
Richmond's rate of major crime, including violent and property crimes, decreased 47 percent between 2004 and 2009 to its lowest level in more than a quarter of a century. Various forms of crime have been declining, though they remain above state and national averages. In 2008, the city recorded its lowest homicide rate since 1971.
FBI Uniform Crime Reports for Richmond for the year of 2013:
Offense                                  City of Richmond only   Richmond MSA   Rate per 100,000 inhabitants (MSA)
Violent crime                            1,327                   3,029          243.8
Murder and non-negligent manslaughter    37                      77             6.2
Rape                                     43                      249            20.0
Robbery                                  624                     1,128          90.8
Aggravated assault                       623                     1,575          126.8
Property crime                           8,704                   29,761         2,395.7
Burglary                                 1,817                   5,533          445.4
Larceny/theft                            5,949                   22,329         1,797.4
Motor vehicle theft                      938                     1,899          152.9
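The rate column follows the standard UCR convention of offenses per 100,000 inhabitants, and the figures line up with the MSA counts rather than the city-only counts. A minimal worked example, using an MSA reporting population of roughly 1.24 million back-derived from the table itself (an inference, not a figure given in the source):

$$\text{rate} = \frac{\text{offenses}}{\text{population}} \times 100{,}000, \qquad \frac{3{,}029}{1{,}242{,}400} \times 100{,}000 \approx 243.8$$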
In recent years, as in many other American cities, Richmond has witnessed a rise in homicides. The Richmond Times-Dispatch reported 61 murders in Richmond in 2016, marking it "the city's deadliest year in a decade."Rockett, Ali (January 14, 2017). "61 people were slain in Richmond in 2016. Here are their stories." Richmond Times-Dispatch. Retrieved January 15, 2017.
Religion
thumb|right|St. John's Episcopal Church, built in 1741, is the oldest church in the city
In 1786, the Virginia Statute for Religious Freedom, penned in 1779 by Thomas Jefferson, was adopted by the Virginia General Assembly in Richmond. The site is now commemorated by the First Freedom Center.
Richmond has several historic churches. Because of its early English colonial history from the early 17th century to 1776, Richmond has a number of prominent Anglican/Episcopal churches, including Monumental Church, St. Paul's Episcopal Church and St. John's Episcopal Church. Methodist and Baptist congregations formed another group of early churches; First Baptist Church of Richmond, established in 1780, was the first of these. In the Reformed church tradition, the first Presbyterian church in the City of Richmond was First Presbyterian Church, organized on June 18, 1812. Second Presbyterian Church of Richmond, founded on February 5, 1845, was a historic church that Stonewall Jackson attended and was the first Gothic building and the first gas-lit church built in Richmond."History of Second Presbyterian Church, Richmond." Second Presbyterian Church. Retrieved on January 20, 2010. St. Peter's Church was dedicated as the first Catholic church in Richmond on May 25, 1834. The city is also home to the historic Cathedral of the Sacred Heart, which is the mother church for the Roman Catholic Diocese of Richmond.
thumb|left|The Cathedral of the Sacred Heart, dedicated in 1906
The first Jewish congregation in Richmond was Kahal Kadosh Beth Shalom, the sixth Jewish congregation founded in the United States. By 1822 K.K. Beth Shalom members worshipped in the first synagogue building in Virginia. The congregation eventually merged with Congregation Beth Ahabah, an offshoot of Beth Shalom. There are two Orthodox synagogues, Keneseth Beth Israel and Chabad of Virginia, and an Orthodox yeshiva K–12 school system known as Rudlin Torah Academy, which also includes a post-high-school program. There are two Conservative synagogues, Beth El and Or Atid, and three Reform synagogues, Bonay Kodesh, Beth Ahabah and Or Ami. Along with such religious congregations, there are a variety of other Jewish charitable, educational and social service institutions, each serving the Jewish and general communities. These include the Weinstein Jewish Community Center, Jewish Family Services, the Jewish Community Federation of Richmond and the Richmond Jewish Foundation.
Due to the influx of German immigrants in the 1840s, St. John's German Evangelical church was formed in 1843. Saints Constantine and Helen Greek Orthodox Cathedral held its first worship service in a rented room at 309 North 7th Street in 1917. The cathedral relocated to 30 Malvern Avenue in 1960 and is noted as one of two Eastern Orthodox churches in Richmond and home to the annual Richmond Greek Festival.Richmond Greek Festival. Retrieved on January 20, 2010.
There are currently seven masjids in the Greater Richmond area, with three more under construction, accommodating the growing Muslim population; the first was Masjid Bilal."History of Local Masajid." Islamic Society of Greater Richmond. February 2006. Retrieved on February 22, 2007. In the 1950s, Muslims in the East End organized under the Nation of Islam (NOI), meeting in Temple #24 on North Avenue. After the NOI split in 1975, the Muslims who joined mainstream Islam began meeting at Shabaaz Restaurant on Nine Mile Road, and by 1976 they were meeting in a rented church. They tried to buy this church, but due to financial difficulties they instead bought an old grocery store on Chimborazo Boulevard, the present location of Masjid Bilal. Initially, the place was called "Masjid Muhammad #24"; it was not renamed "Masjid Bilal" until 1990. Masjid Bilal was followed by the Islamic Center of Virginia (ICVA) masjid. The ICVA was established in 1973 as a nonprofit, tax-exempt organization. With aggressive fundraising, ICVA was able to buy land on Buford Road, and construction of the new masjid began in the early 1980s. The other five masjids in the Richmond area are the Islamic Center of Richmond (ICR) in the West End, Masjid Umm Barakah on 2nd Street downtown, the Islamic Society of Greater Richmond (ISGR) in the West End, Masjidullah on the Northside, and Masjid Ar-Rahman in the East End.
thumb|Watts Hall at Union Presbyterian Seminary
Hinduism is actively practiced, particularly in suburban areas of Henrico and Chesterfield. Some 6,000 families of Indian descent resided in the Richmond Region as of 2011. Hindus are served by several temples and cultural centers. The two most familiar are the Cultural Center of India (CCI) located off of Iron Bridge Road in Chesterfield County and the Hindu Center of Virginia in Henrico County which has garnered national fame and awards for being the first LEED certified religious facility in the commonwealth.
Seminaries in Richmond include the school of theology at Virginia Union University, Union Presbyterian Seminary, and the Baptist Theological Seminary at Richmond. The McCollough Theological Seminary of the United House of Prayer For All People is located in the Church Hill neighborhood of the city.
Bishops who sit in Richmond include those of the Episcopal Diocese of Virginia (the denomination's largest) and of the Richmond Area of the United Methodist Church (the Virginia Annual Conference), the nation's second-largest and one of its oldest. The Presbytery of the James of the Presbyterian Church (U.S.A.) is also based in the Richmond area.
The Roman Catholic Diocese of Richmond was canonically erected by Pope Pius VII on July 11, 1820. Today there are 235,816 Catholics at 146 parishes in the Diocese of Richmond, and the city of Richmond is home to 19 Catholic parishes. The Cathedral of the Sacred Heart is home to the current bishop, Most Reverend Francis Xavier DiLorenzo, who was appointed by Pope John Paul II on March 31, 2004.
Economy
thumb|Richmond tobacco warehouse ca. 1910s
Richmond's strategic location on the James River, on undulating hills at the rocky fall line separating the Piedmont and Tidewater regions of Virginia, provided a natural nexus for the development of commerce. Throughout three centuries and three modes of transportation, the downtown has always been a hub, with the Great Turning Basin for boats, the world's only triple crossing of rail lines, and the intersection of two major interstates.
Law and finance have long been driving forces in the economy. The city is home to both the United States Court of Appeals for the Fourth Circuit, one of 13 United States courts of appeals, and the Federal Reserve Bank of Richmond, one of 12 Federal Reserve Banks, as well as offices for international companies such as Genworth Financial, CapitalOne, Philip Morris USA, and numerous other banks and brokerages. Richmond is also home to four of the largest law firms in the United States: Hunton & Williams, McGuireWoods, Williams Mullen, and LeClairRyan. Another law firm with a major Richmond presence is Troutman Sanders, which merged with Richmond-based Mays & Valentine LLP in 2001.
Since the 1960s Richmond has been a prominent hub for advertising agencies and advertising related businesses, including The Martin Agency, named 2009 U.S. Agency of the Year by AdWeek. As a result of local advertising agency support, VCU's graduate advertising school (VCU Brandcenter) is consistently ranked the No. 1 advertising graduate program in the country.The Top 5. Creativity. March 2005.
Richmond is home to the rapidly developing Virginia BioTechnology Research Park, which opened in 1995 as an incubator facility for biotechnology and pharmaceutical companies. Located adjacent to the Medical College of Virginia (MCV) Campus of Virginia Commonwealth University, the park currently provides research, laboratory and office space for a diverse tenant mix of companies, research institutes, government laboratories and non-profit organizations. The United Network for Organ Sharing, which maintains the nation's organ transplant waiting list, occupies one building in the park. Philip Morris USA opened a $350 million research and development facility in the park in 2007. Park officials expect the site, once fully developed, to employ roughly 3,000 scientists, technicians and engineers.
Richmond's revitalized downtown includes the Canal Walk, a new Greater Richmond Convention Center, and expansion on both VCU campuses. A new performing arts center, Richmond CenterStage, opened on September 12, 2009.Ruggieri, Melissa. "Richmond CenterStage opens its doors Saturday." Richmond Times-Dispatch. September 9, 2009. Retrieved on January 20, 2010. The complex included a renovation of the Carpenter Center and construction of a new multipurpose hall, community playhouse, and arts education center in parts of the old Thalhimers department store.Jones, Will. "Showtime's set." "Richmond Times-Dispatch". January 14, 2007. Retrieved on February 22, 2007.
Richmond is also fast becoming known for its food scene, with several restaurants in the Fan, Church Hill, Jackson Ward and elsewhere around the city generating regional and national attention for their fare. Departures magazine named Richmond "The Next Great American Food City" in August 2014.Peifer, Karri. "Richmond is 'The Next Great American Food City'", Richmond.com, Richmond, 25 August 2014. Retrieved on 25 August 2014. Also in 2014, Southern Living magazine named three Richmond restaurants – Comfort, Heritage and The Roosevelt – among its "100 Best Restaurants in the South",Cole, Jennifer. "100 Best Restaurants in the South", Southern Living, 12 August 2014. Retrieved on 12 August 2014. while Metzger Bar & Butchery made its "Best New Restaurants: 12 To Watch" list.Cole, Jennifer. "Best New Restaurants: 12 To Watch", Southern Living, 12 August 2014. Retrieved on 12 August 2014. Craft beer and liquor production is also growing in the River City, with twelve microbreweries in the city proper; the oldest is Legend Brewery, founded in 1994. Three distilleries, Reservoir Distillery, Belle Isle Craft Spirits and James River Distillery, were established in 2010, 2013 and 2014, respectively.
Additionally, Richmond is gaining attention from the film and television industry, with several high-profile productions shot in the metro region in the past few years, including the major motion picture Lincoln, which led to Daniel Day-Lewis's third Oscar; Killing Kennedy, with Rob Lowe, airing on the National Geographic Channel; and Turn, starring Jamie Bell and airing on AMC. In 2015, Richmond served as the main filming location for the PBS drama series Mercy Street, which premiered in winter 2016. Several organizations, including the Virginia Film Office and the Virginia Production Alliance, along with events like the Richmond International Film Festival and the French Film Festival, continue to draw supporters of film and media to the region.
Fortune 500 companies and other large corporations
thumb|Six Fortune 500 companies are headquartered in the Richmond area
The Greater Richmond area was named the third-best city for business by MarketWatch in September 2007, ranking behind only the Minneapolis and Denver areas and just above Boston. The area is home to Fortune 500 companies including electric utility Dominion Resources, CarMax, Owens & Minor, Genworth Financial, WestRock Company, McKesson Medical-Surgical, Markel Corporation, and Altria Group. However, only Dominion Resources and WestRock Company are headquartered within the city of Richmond; the others are located in the neighboring counties of Henrico and Hanover. In 2008, Altria moved its corporate HQ from New York City to Henrico County, adding another Fortune 500 corporation to Richmond's list. In February 2006, MeadWestvaco announced that it would move from Stamford, Connecticut, to Richmond in 2008 with the help of the Greater Richmond Partnership, a regional economic development organization that also helped locate Aditya Birla Minacs, Amazon.com, and Honeywell International to the region. In July 2015, MeadWestvaco merged with Georgia-based Rock-Tenn Company, creating WestRock Company.
Other Fortune 500 companies, while not headquartered in the area, do have a major presence. These include SunTrust Bank (based in Atlanta), Capital One Financial Corporation (officially based in McLean, Virginia, but founded in Richmond with its operations center and most employees in the Richmond area), and the medical and pharmaceutical giant McKesson (based in San Francisco). Capital One and Altria company's Philip Morris USA are two of the largest private Richmond-area employers. DuPont maintains a production facility in South Richmond known as the Spruance Plant. UPS Freight, the less-than-truckload division of UPS and formerly known as Overnite Transportation, has its corporate headquarters in Richmond.
Other companies based in Richmond include the chemical company NewMarket; Brink's, a security and armored car company; Estes Express Lines, a freight carrier; Universal Corporation, a tobacco merchant; Cavalier Telephone (now Windstream), a telephone, internet, and digital television provider formed in Richmond in 1998; Cherry Bekaert & Holland, a top-30 accounting firm serving the Southeast; the law firm of McGuireWoods; and Media General, a company specializing in broadcast media.
Arts and culture
Museums and monuments
thumb|1936 entrance to the Virginia Museum of Fine Arts
thumb|Lee Monument on Monument Avenue
Several of the city's large general museums are located near the Boulevard. On Boulevard proper are the Virginia Historical Society and the Virginia Museum of Fine Arts, lending their name to what is sometimes called the Museum District. Nearby on Broad Street is the Science Museum of Virginia, housed in the neoclassical former 1919 Broad Street Union Station. Immediately adjacent is the Children's Museum of Richmond, and two blocks away, the Virginia Center for Architecture. Within the downtown are the Library of Virginia and the Valentine Richmond History Center. Elsewhere are the Virginia Holocaust Museum and the Old Dominion Railway Museum.
As the primary former capital of the Confederate States of America, Richmond is home to many museums and battlefields of the American Civil War. Near the riverfront is the Richmond National Battlefield Park Visitors Center and the American Civil War Center at Historic Tredegar, both housed in the former buildings of the Tredegar Iron Works, where much of the ordnance for the war was produced. In Court End, near the Virginia State Capitol, is the Museum of the Confederacy, along with the Davis Mansion, also known as the White House of the Confederacy; both feature a wide variety of objects and material from the era. The temporary home of former Confederate General Robert E. Lee still stands on Franklin Street in downtown Richmond. The history of slavery and emancipation is also increasingly represented: there is a former slave trail along the river that leads to Ancarrow's Boat Ramp and Historic Site, which has been developed with interpretive signage, and in 2007, the Reconciliation Statue was placed in Shockoe Bottom, with parallel statues placed in Liverpool and Benin representing points of the Triangle Trade.
Other historical points of interest include St. John's Church, the site of Patrick Henry's famous "Give me liberty or give me death" speech, and the Edgar Allan Poe Museum, which features many of his writings and other artifacts of his life, particularly from the years when he lived in the city as a child, a student, and a successful writer. The John Marshall House, the home of the former Chief Justice of the United States, is also located downtown and features many of his writings and objects from his life. Hollywood Cemetery is the burial ground of two U.S. Presidents as well as many Civil War officers and soldiers.
The city is home to many monuments and memorials, most notably those along Monument Avenue. Other monuments include the A.P. Hill monument, the Bill "Bojangles" Robinson monument in Jackson Ward, the Christopher Columbus monument near Byrd Park, and the Confederate Soldiers and Sailors Monument on Libby Hill. Located near Byrd Park is the famous World War I Memorial Carillon, a 56-bell carillon tower. Dedicated in 1956, the Virginia War Memorial is located on Belvedere overlooking the river, and is a monument to Virginians who died in battle in World War II, the Korean War, the Vietnam War, the Gulf War, the War in Afghanistan, and the Iraq War.
Agecroft Hall is a Tudor manor house and estate located on the James River in the Windsor Farms neighborhood of Richmond. The manor house was built in the late 15th century, and was originally located in the Agecroft area of Pendlebury, in the historic county of Lancashire in England.
Visual and performing arts
Richmond has a significant arts community, some of which is contained in formal publicly supported venues, and some of which is more DIY, such as locally and privately owned galleries, private music venues, nonprofit arts organizations, and organic, venueless arts movements (e.g., house shows, busking, itinerant folk shows). This has led to tensions, as the City of Richmond levied an "admissions tax" to fund large arts projects like CenterStage, leading to criticism that it is funding civic initiatives on the backs of the organic local culture. Traditional Virginian folk music, including blues, country, and bluegrass, is also notably present and plays a large part in the annual Richmond Folk Festival. The following is a list of the more formal arts establishments (companies, theaters, galleries, and other large venues) in Richmond. Richmond is also the home and birthplace of famous metal act GWAR, a fact that members of the band consistently allude to with pride. GWAR is run by an art collective known as Slavepit Incorporated, which has over the years involved hundreds of Richmond locals.Gwar Inc. How the most vile, disgusting, offensive group of musicians in town became Richmond's most famous musical export. Style Weekly, March 27, 2012
Murals
As of 2015, a variety of murals from internationally recognized street artists have appeared throughout the city as a result of the efforts of Art Whino and RVA Magazine with The Richmond Mural Project and the RVA Street Art Festival. Artists who have produced work in the city as a result of these festivals include ROA, Pixel Pancho, Gaia, Aryz, Alexis Diaz, Ever Siempre, Jaz, 2501, Natalia Rak, Pose MSK, Vizie, Jeff Soto, Mark Jenkins, Etam Cru, and local artists Hamilton Glass, Nils Westergard, El Kamino, Nico Cathcart,http://rvamag.com/articles/full/24711/broad-street-mural-a-bright-spot-in-a-sea-of-grayness and Ed Trask. Both festivals are expected to continue this year with artists such as Ron English slated to produce work.
Professional performing companies
From earliest days, Virginia, and Richmond in particular, have welcomed live theatrical performances. From Lewis Hallam's early productions of Shakespeare in Williamsburg, the focus shifted to Richmond's antebellum prominence as a main colonial and early 19th century performance venue for such celebrated American and English actors as William Macready, Edwin Forrest,Macready, William, The diaries of William Charles Macready, 1833–1851, Volume 2, p. 416 and the Booth family. In the 20th century, Richmonders' love of theater continued with many amateur troupes and regular touring professional productions. In the 1960s a small renaissance or golden age accompanied the growth of professional dinner theaters and the fostering of theater by the Virginia Museum, reaching a peak in the 1970s with the establishment of a resident Equity company at the Virginia Museum Theater (now the Leslie Cheek) and the birth of Theatre IV, a company that continues to this day under the name Virginia Repertory Theatre.
Virginia Repertory Theatre is Central Virginia's largest professional theatre organization. It was created in 2012 when Barksdale Theatre and Theatre IV, which had shared one staff for over a decade, merged to become one company. With an annual budget of over $5 million, the theatre employs over 240 artists each year, presenting a season at the November Theatre (formerly the Empire Theatre) and Theatre Gym at Virginia Rep Center, as well as productions at the Hanover Tavern and The Children's Theatre in The Shops at Willow Lawn. It is currently run under the leadership of Artistic Director Bruce Miller and Managing Director Phil Whiteway.
Richmond Ballet, founded in 1957.
Richmond Triangle Players, founded in 1993, delivers theater programs exploring themes of equality, identity, affection and family across sexual orientation and gender spectrums.
Richmond Symphony
Virginia Opera, the Official Opera Company of the Commonwealth of Virginia, founded in 1974. Presents eight mainstage performances every year at the Carpenter Theater.
Other venues and companies
thumb|The Carpenter Theatre
Other venues and companies include:
The Altria Theater, the city-owned opera house.
The Leslie Cheek Theater, after lying dormant for eight years, re-opened in 2011 in the heart of the Virginia Museum of Fine Arts at 200 N. Boulevard. The elegant 500-seat proscenium stage was constructed in 1955 to match then-museum director Leslie Cheek's vision of a theater worthy of a fine arts institution. Operating for years as the Virginia Museum Theater (VMT), it supported an amateur community theater under the direction of Robert Telford. When Cheek retired, he advised trustees on the 1969 appointment of Keith Fowler as head of the theater arts division and artistic director of VMT. Fowler led the theater to become the city's first resident Actors Equity/LORT theater, adding major foreign authors and the premieres of new American works to the repertory. Under his leadership VMT reached a "golden age," gaining international recognitionKass, Carole, "Play Prompts Praise..." in Richmond Times-Dispatch, February 9, 1975 and more than doubling its subscription base. Successive artistic administrations changed the name of the theater to "TheatreVirginia." Deficits caused TheatreVirginia to close its doors in 2002. Now, renovated and renamed for its founder, the Leslie Cheek is restoring live performance to VMFA and, while no longer supporting a resident company, it is available for special theatrical and performance events.
The National Theater is Richmond's premier music venue. It holds 1,500 people and hosts shows regularly throughout the week. It opened in the winter of 2007 in a building originally constructed in 1923. It features a state-of-the-art V-DOSC sound system, only the sixth installed in the country and the third installed on the East Coast.
Visual Arts Center of Richmond, a not-for-profit organization that is one of the largest nongovernmental arts learning centers in the state of Virginia, founded in 1963. Serves 28,000 individuals annually.
Richmond CenterStage, a performing arts center that opened in Downtown Richmond in 2009 as part of an expansion of earlier facilities. The complex includes a renovation of the 1,700-seat Carpenter Theater and construction of a new multipurpose hall, community playhouse, and arts education center in the location of the old Thalhimers department store.
The Byrd Theatre in Carytown, a movie palace from the 1920s that features second-run movies, as well as the French Film Festival.
Virginia Commonwealth University School of the Arts, consistently ranked as one of the best in the nation."Top-ranked Graduate and First Professional Programs ." U.S. News & World Report. March 31, 2006. Retrieved on February 22, 2007.
Dogwood Dell, an amphitheatre in Byrd Park, where the Richmond Department of Recreation and Parks presents an annual Festival of the Arts.
SPARC (School of the Performing Arts in the Richmond Community). SPARC was founded in 1981, and trained children to become "triple threats", meaning they were equally versed in singing, acting, and dancing. SPARC has become the largest community-based theater arts education program in Virginia and it offers classes to every age group, during the summer and throughout the year.
Classic Amphitheatre at Strawberry Hill, the former summer concert venue located at Richmond International Raceway.
Commercial art galleries include Metro Space Gallery and Gallery 5 in a newly designated arts district.
Not-for-profit galleries include Visual Arts Center of Richmond, 1708 Gallery and Artspace.
In addition, in 2008, a new Gay Community Center opened on the city's north side, which hosts meetings of many kinds, and includes a large art gallery space.
Literary arts
Richmond has long been a hub for literature and writers, not limited to those identified with the South. Edgar Allan Poe was a child in the city, and the town's oldest stone house is now a museum to his life and works.http://www.poemuseum.org The Southern Literary Messenger, which included his writing, is just one of many notable publications that began in Richmond. Other noteworthy authors who have called Richmond home include Pulitzer-winning Ellen Glasgow, controversial figure James Branch Cabell, Meg Medina, Dean King, David L. Robbins, and MacArthur Fellow Paule Marshall. Tom Wolfe was born in Richmond, as was Breaking Bad creator Vince Gilligan. David Baldacci graduated from Virginia Commonwealth University, where the creative writing faculty has included Marshall, Claudia Emerson, Kathleen Graber, T. R. Hummer, Dave Smith, David Wojahn, Susann Cokal, Thomas De Haven and Larry Levis. Notable graduates include Sheri Reynolds, Jon Pineda, Anna Journey and Joshua Poteat. https://english.vcu.edu/mfa/creative-writing-faculty/ A community-based organization called James River Writers serves the greater Richmond area; it sponsors many programs for writers at all stages of their careers and puts on an annual writers' conference that draws attendees from miles away.
Architecture
thumb|left|Thomas Jefferson designed the Virginia State Capitol in Richmond
Richmond is home to many significant structures, including some designed by notable architects. The city contains diverse styles, including significant examples of Georgian, Federal, Greek Revival, Neoclassical, Egyptian Revival, Romanesque Revival, Gothic Revival, Tudor Revival, Italianate, Queen Anne, Colonial Revival, Art Deco, Modernist, International, and Postmodern buildings.
Much of Richmond's early architecture was destroyed by the Evacuation Fire in 1865. It is estimated that 25% of all buildings in Richmond were destroyed during this fire.Hansen, Harry. "The Civil War: A History." Published 2002, Signet Classic. ISBN 978-0-451-52849-0 Even fewer now remain due to construction and demolition that has taken place since Reconstruction. In spite of this, Richmond contains many historically significant buildings and districts. Buildings remain from Richmond's colonial period, such as the Patteson-Schutte House and the Edgar Allan Poe Museum, both built before 1750.
thumb|right|Egyptian Building of the VCU School of Medicine (1845), Richmond, Virginia
Architectural classicism is heavily represented in all districts of the city, particularly in Downtown, the Fan, and the Museum District. Several notable classical architects have designed buildings in Richmond. The Virginia State Capitol was designed by Thomas Jefferson and Charles-Louis Clérisseau in 1785. It is the second-oldest US statehouse in continuous use (after Maryland's) and was the first US government building built in the neo-classical style of architecture, setting the trend for other state houses and the federal government buildings (including the White House and The Capitol) in Washington, D.C."Jefferson & The Capital Of Virginia." An Exhibition at the Library of Virginia; January 7 – June 15, 2002. Retrieved on January 20, 2010. Robert Mills designed Monumental Church on Broad Street. Adjoining it is the 1845 Egyptian Building, one of the few Egyptian Revival buildings in the United States.
thumb|left|The Science Museum of Virginia, housed in Broad Street Station, designed by John Russell Pope
The firm of John Russell Pope designed Broad Street Station as well as Branch House on Monument Avenue, designed as a private residence in the Tudor style, now serving as the Branch Museum of Architecture and Design. Broad Street Station (or Union Station), designed in the Beaux-Arts style, is no longer a functioning station but is now home to the Science Museum of Virginia. Main Street Station, designed by Wilson, Harris, and Richards, has been returned to use in its original purpose. The Jefferson Hotel and the Commonwealth Club were both designed by the classically trained Beaux-Arts architects Carrère and Hastings. Many buildings on the University of Richmond campus, including Jeter Hall and Ryland Hall, were designed by Ralph Adams Cram, most famous for his Princeton University Chapel and the Cathedral of Saint John the Divine.
Richmond's urban residential neighborhoods also hold particular significance to the city's fabric. The Fan, the Museum District, Jackson Ward, Carver, Carytown, Oregon Hill and Church Hill (among others) consist largely of townhomes along with mixed-use buildings and retail/dining establishments. These districts are anchored by large streets such as Franklin Street, Cary Street, the Boulevard, and Monument Avenue. The city's growth in population over the last decade has been concentrated in these areas.
Among Richmond's most interesting architectural features is its cast-iron architecture. Second only to New Orleans in its concentration of cast iron work, the city is home to a unique collection of cast iron porches, balconies, fences, and finials. Richmond's position as a center of iron production helped to fuel its popularity within the city. At the height of production in the 1890s, 25 foundries operated in the city, employing nearly 3,500 metal workers. This was seven times the number of general construction workers employed in Richmond at the time, which illustrates the importance of its iron exports.Robert P. Winthrop, Cast and Wrought: The Architectural Metalwork of Richmond, Virginia, (Richmond, Virginia: Valentine Museum, 1980), 93. Porches and fences in urban neighborhoods such as Jackson Ward, Church Hill, and Monroe Ward are particularly elaborate, often featuring ornate iron casts never replicated outside of Richmond. In some cases casts were made for a single residential or commercial application.
Richmond is home to several notable instances of various styles of modernism. Minoru Yamasaki designed the Federal Reserve Building which dominates the downtown skyline. The architectural firm of Skidmore, Owings & Merrill has designed two buildings: the Library of Virginia and the General Assembly Offices at the Eighth and Main Building. Philip Johnson designed the WRVA Building. The Richard Neutra-designed Rice House, a residence on a private island on the James River, remains Richmond's only true International Style home. The W.G. Harris residence in Richmond was designed by famed early modern architect and member of the Harvard Five,"The Harvard Five in New Canaan", William D. Earls AIA, W. W Norton and Co., 2006 ISBN 978-0-393-73183-5 Landis Gores. Other notable architects to have worked in the city include Rick Mather, I.M. Pei, and Gordon Bunshaft.
VCU is currently raising funds for a new Institute of Contemporary Arts designed by Steven Holl. The ICA is to be funded by private donors, with an opening hoped for by 2015.
Historic districts
Richmond's City Code provides for the creation of old and historic districts so as to "recognize and protect the historic, architectural, cultural, and artistic heritage of the City."City Code of Richmond, Virginia, Section 30-930.2. Pursuant to that authority, the city has designated 45 districts throughout the city.City Code of Richmond, Virginia, Section 30-930.5. The majority of these districts are also listed in the Virginia Landmarks Register ("VLR") and the National Register of Historic Places ("NRHP").
Fifteen of the districts represent broad sections of the city:Detailed descriptions of these districts are provided by the city in Old & Historic Districts of Richmond, Virginia, Handbook and Design Review Guidelines (1st Edition, December, 2006, updated January, 2015), p. 11.
Each district is listed below with its city designation year, Virginia Landmarks Register (VLR) listing year, and National Register of Historic Places (NRHP) listing year.The Virginia Department of Historic Resources maintains copies of the applications filed with the National Register of Historic Places.
Boulevard (Grace St. to Idlewood Ave.) – city 1992, VLR 1986, NRHP 1986
Broad Street (Belvidere St. to First St.) – city 1985, VLR 1986, NRHP 1987 (also 2004, 2007)
Chimborazo Park (32nd to 36th Sts. & Marshall St. to Chimborazo Park) – city 1987, VLR 2004, NRHP 2005
Church Hill North (Marshall to Cedar Sts. & Jefferson Ave. to N. 29th St.) – city 2007, VLR 1996, NRHP 1997 (also 2000)
Hermitage Road (Laburnum Ave. to Westbrook Ave.) – city 1988, VLR 2005, NRHP 2006
Jackson Ward (Belvidere to 2nd Sts. & Jackson to Marshall Sts.) – city 1987, VLR 1976, NRHP 1976
Monument Avenue (Birch St. to Roseneath Rd.) – city 1971, VLR 1969, NRHP 1970
St. John's Church (21st to 32nd Sts. & Broad to Franklin Sts.) – city 1957, VLR 1969, NRHP 1966
Shockoe Slip (12th to 15th Sts. & Main to Canal/Dock Sts.) – city 1979, VLR 1971, NRHP 1972
Shockoe Valley (18th to 21st Sts. & Marshall to Franklin Sts.) – city 1977, VLR 1981, NRHP 1983
Springhill (19th to 22nd Sts. & Riverside Dr. to Semmes Ave.) – city 2006, VLR 2013, NRHP 2014
200 Block West Franklin Street (Madison to Jefferson Sts.) – city 1977, VLR 1977, NRHP 1977
West Franklin Street (Birch to Harrison Sts.) – city 1990, VLR 1972, NRHP 1972
West Grace Street (Ryland St. to Boulevard) – city 1996, VLR 1997, NRHP 1998
Zero Blocks East and West Franklin (Adams to First Sts. & Grace to Main Sts.) – city 1987, VLR 1979, NRHP 1980
The remaining thirty districts are limited to an individual building or group of buildings throughout the city:
Each is listed below with its VLR and NRHP listing years, where available.
The Barret House (15 South Fifth Street) – VLR 1971, NRHP 1972
Belgian Building (Lombardy Street and Brook Road) – VLR 1969, NRHP 1970
Bolling Haxall House (211 East Franklin Street) – VLR 1971, NRHP 1972
Centenary United Methodist Church (409 East Grace Street) – VLR 1979, NRHP 1979
Crozet House (100-102 East Main Street) – VLR 1971, NRHP 1972
Glasgow House (1 West Main Street) – VLR 1972, NRHP 1972
Hancock-Wirt-Caskie House (2 North Fifth Street) – VLR 1969, NRHP 1970 (also 2008)
Henry Coalter Cabell House (116 South Third Street) – VLR 1971, NRHP 1971
Jefferson Hotel (114 West Main Street) – VLR 1968, NRHP 1969
John Marshall House (818 East Marshall Street) – VLR 1969, NRHP 1966
Leigh Street Baptist Church (East Leigh and Twenty-Fifth Streets) – VLR 1971, NRHP 1972
Linden Row (100-114 East Franklin Street) – VLR 1971, NRHP 1971
Mayo Memorial House (110 West Franklin Street) – VLR 1972, NRHP 1973
William W. Morien House (2226 West Main Street)
Norman Stewart House (707 East Franklin Street) – VLR 1972, NRHP 1972
Old Stone House (1916 East Main Street) – VLR 1973, NRHP 1973
Pace House (100 West Franklin Street)
St. Andrew's Episcopal Church (Northwest corner South Laurel Street and Idlewood Avenue) – VLR 1979, NRHP 1979
St. Paul's Episcopal Church (815 East Grace Street) – VLR 1968, NRHP 1969
St. Peter's Catholic Church (800 East Grace Street) – VLR 1968, NRHP 1969
Second Presbyterian Church (9 North Fifth Street) – VLR 1971, NRHP 1972
Sixth Mount Zion Baptist Church (12-14 West Duval Street) – VLR 1996, NRHP 1996
Stonewall Jackson School (1520 West Main Street) – VLR 1984, NRHP 1984
Talavera (2315 West Grace Street)
Valentine Museum and Wickham-Valentine House (1005-1015 East Clay Street) – VLR 1968, NRHP 1969
Virginia House (4301 Sulgrave Road) – VLR 1989, NRHP 1990
White House of the Confederacy (1200 East Clay Street) – VLR 1969, NRHP 1966
Wilton (215 South Wilton Road) – VLR 1975, NRHP 1976
Joseph P. Winston House (103 East Grace Street) – VLR 1978, NRHP 1979
Woodward House-Rockets (3017 Williamsburg Avenue) – VLR 1974, NRHP 1974
Food
Richmond has been recognized in recent years for being a "foodie city", particularly for its modern renditions of traditional southern cuisine. The city also claims the invention of the sailor sandwich, which includes pastrami, knockwurst, Swiss cheese and mustard on rye bread. Richmond is also where, in 1935, canned beer was made commercially available for the first time.
Parks and outdoor recreation
thumb|Lewis Ginter Botanical Garden
The city operates one of the oldest municipal park systems in the country. The park system began when the city council voted in 1851 to acquire the land now known as Monroe Park. Today, Monroe Park sits adjacent to the Virginia Commonwealth University campus and is one of more than 40 parks in the city system.
Several parks are located along the James River, and the James River Parks System offers bike trails, hiking and nature trails, and many scenic overlooks along the river's route through the city. The trails are used as part of the Xterra East Championship course for both the running and mountain biking portions of the off-road triathlon.
There are also parks on two major islands in the river: Belle Isle and Brown's Island. Belle Isle, at various former times a Powhatan fishing village, colonial-era horse race track, and Civil War prison camp, is the larger of the two, and contains many bike trails as well as a small cliff that is used for rock climbing instruction. One can walk the island and still see many of the remains of the Civil War prison camp, such as an arms storage room and a gun emplacement that was used to quell prisoner riots. Brown's Island is a smaller island and a popular venue of a large number of free outdoor concerts and festivals in the spring and summer, such as the weekly Friday Cheers concert series or the James River Beer and Seafood Festival.
thumb|left|Japanese Garden at Maymont
Two other major parks in the city along the river are Byrd Park and Maymont, located near the Fan District. Byrd Park features a running track, with exercise stops, a public dog park, and a number of small lakes for small boats, as well as two monuments, Buddha house, and an amphitheatre. Prominently featured in the park is the World War I Memorial Carillon, built in 1926 as a memorial to those that died in the war. Maymont, located adjacent to Byrd Park, is a Victorian estate with a museum, formal gardens, native wildlife exhibits, nature center, carriage collection, and children's farm. Other parks in the city include Joseph Bryan Park Azalea Garden, Forest Hill Park (former site of the Forest Hill Amusement Park), Chimborazo Park (site of the National Battlefield Headquarters), among others.
The James River itself through Richmond is renowned as one of the best urban white-water runs in the country for rafting, canoeing, and kayaking. Several rafting companies offer complete services. There are also several easily accessed riverside areas within the city limits for rock-hopping, swimming, and picnicking.
Lewis Ginter Botanical Garden is located adjacent to the city in Henrico County. Founded in 1984, the garden features a glass conservatory, a rose garden, a healing garden, and an accessible-to-all children's garden. The Garden is a public place for the display and scientific study of plants. Lewis Ginter Botanical Garden is one of only two independent public botanical gardens in Virginia and is designated a state botanical garden.
Several theme parks are also located near the city, including Kings Dominion to the north, and Busch Gardens to the east, near Williamsburg.
Sports
Richmond is not home to any major league professional sports teams, but since 2013, the Washington Redskins of the National Football League have held their summer training camp in the city. There are also several minor league sports in the city, including the Richmond Kickers of the United Soccer League (third tier of American soccer) and the Richmond Flying Squirrels of the Class AA Eastern League of Minor League Baseball (an affiliate of the San Francisco Giants). The Kickers began playing in Richmond in 1993, and currently play at City Stadium. The Squirrels opened their first season at The Diamond on April 15, 2010. From 1966 through 2008, the city was home to the Richmond Braves, a AAA affiliate of the Atlanta Braves of Major League Baseball, until the franchise relocated to Georgia.
It is also the home to the Richmond Black Widows, the city's first women's football team, founded in 2015 by Sarah Schkeeper. They are a part of the Women's Football Alliance. Their game season begins in April, with preseason beginning in January.
Another significant sports venue is the 6,000-seat Arthur Ashe Athletic Center, a multi-purpose arena named for tennis great and Richmond resident Arthur Ashe. This facility hosts a variety of local sporting events, concerts, and other activities. As the home of Arthur Ashe, the sport of tennis is also popular in Richmond, and in 2010, the United States Tennis Association named Richmond as the third "Best Tennis Town", behind Charleston, South Carolina, and Atlanta, Georgia.
Auto racing is also popular in the area. The Richmond International Raceway (RIR) has hosted NASCAR Sprint Cup races since 1953, as well as the Capital City 400 from 1962 to 1980. RIR also hosted IndyCar's Suntrust Indy Challenge from 2001 to 2009. Another track, Southside Speedway, has operated since 1959 and sits just southwest of Richmond in Chesterfield County. This oval short-track has become known as the "Toughest Track in the South" and "The Action Track", and features weekly stock car racing on Friday nights. Southside Speedway has served as the breeding ground for many past NASCAR legends, including Richard Petty, Bobby Allison and Darrell Waltrip, and claims to be the home track of NASCAR superstar Denny Hamlin.
In 2015, Richmond hosted the 2015 UCI Road World Championships, which had cyclists from 76 countries and an economic impact on the Greater Richmond Region estimated to be $158.1 million, from both event staging and visitor spending.
College basketball has also had recent success with the Richmond Spiders and the VCU Rams, both of the Atlantic 10 Conference. The Spiders' men's and women's teams play at Robins Center and the Rams' men's and women's teams play at the Stuart C. Siegel Center.
Media
The Richmond Times-Dispatch, the local daily newspaper in Richmond with a Sunday circulation of 120,000, is owned by BH Media, a subsidiary of Warren Buffett's Berkshire Hathaway company. Style Weekly is a weekly publication covering popular culture, arts, and entertainment, owned by Landmark Communications. RVA Magazine, the city's only independent art, music, and culture publication, was once monthly but is now issued quarterly. The Richmond Free Press and the Voice cover the news from an African-American perspective.
The Richmond metro area is served by many local television and radio stations. The Richmond-Petersburg designated market area (DMA) is the 58th largest in the U.S., with 553,950 homes according to Nielsen Market Research. The major network television affiliates are WTVR-TV 6 (CBS), WRIC-TV 8 (ABC), WWBT 12 (NBC), WRLH-TV 35 (Fox), and WUPV 65 (CW). Public Broadcasting Service stations include WCVE-TV 23 and WCVW 57. There is also a wide variety of radio stations in the Richmond area, catering to many different interests, including news, talk radio, and sports, as well as an eclectic mix of musical interests.
Government and politics
thumb|Richmond City Hall
Richmond city government consists of a city council with representatives from nine districts serving in a legislative and oversight capacity, as well as a popularly elected, at-large mayor serving as head of the executive branch. Citizens in each of the nine districts elect one council representative to serve a four-year term; council terms were lengthened to four years beginning with the November 2008 election. The city council elects one of its members to serve as Council President and another to serve as Council Vice President. The city council meets at City Hall, located at 900 E. Broad St., 2nd Floor, on the second and fourth Mondays of every month, except August.
In 1977, a federal district court ruled in favor of Curtis Holt Jr. who had claimed the council's existing election process — an at large voting system — was racially biased. The verdict required the city to rebuild its council into nine distinct wards. Within the year the city council switched from majority white to majority black, reflecting the city's populace. This new city council elected Richmond's first black mayor, Henry L. Marsh.
In 1990, religion and politics intersected to affect the outcome of the Eighth District election in South Richmond. With the endorsements of black power brokers, black clergy and the Richmond Crusade for Voters, South Richmond residents made history by electing Reverend A. Carl Prince to the Richmond City Council. As the first African American Baptist minister elected to the Richmond City Council, Prince paved the way for a political shift that persists today. Following Prince's election, Reverend Gwendolyn Hedgepeth and Reverend Leonidas Young, a former Richmond mayor, were elected to public office. Prior to Prince's election, black clergy made political endorsements and served as appointees to the Richmond School Board and other boards throughout the city. Today religion and politics continue to thrive in the Commonwealth of Virginia. The Honorable Dwight C. Jones, a prominent Baptist pastor and former Chairman of the Richmond School Board and member of the Virginia House of Delegates, serves as Mayor of the City of Richmond.
Richmond's government changed in 2004 from a council-manager form of government to one headed by an at-large, popularly elected mayor. In a landslide election, incumbent mayor Rudy McCollum was defeated by L. Douglas Wilder, who previously served Virginia as the first elected African American governor in the United States since Reconstruction. The current mayor of Richmond is Dwight Clinton Jones, who was elected in 2008 and re-elected in 2012. The mayor is not a part of the Richmond City Council.
The Richmond City Council consisted of:
Michelle R. Mosby, 9th District (South Central), President of Council
Chris A. Hilbert, 3rd District (Northside), Vice-President of Council
Jonathan T. Baliles, 1st District (West End)
Charles R. Samuels, 2nd District (North Central)
Kathy C. Graziano, 4th District (Southwest)
Parker C. Agelasto, 5th District (Central)
Ellen F. Robertson, 6th District (Gateway)
Cynthia I Newbille, 7th District (East End)
Reva M. Trammell, 8th District (Southside)
Education
thumb|The Art Deco-styled Thomas Jefferson High School in the near West End
The city of Richmond operates 28 elementary schools, nine middle schools, and eight high schools, serving a total of 24,000 students. There is one Governor's School in the city, the Maggie L. Walker Governor's School for Government and International Studies. In 2008, it was named one of Newsweek magazine's 18 "public elite" high schools, and in 2012, it was rated #16 of America's best high schools overall. Richmond's public school district also runs one of Virginia's four public charter schools, the Patrick Henry School of Science and Arts, which was founded in 2010.
As of 2008, there were 36 private schools serving grades one or higher in the city of Richmond. Some of these schools include: Benedictine High School, St. Bridget School, Brook Road Academy, Collegiate School, St. Christopher's School, St. Gertrude High School, St. Catherine's School, Southside Baptist Christian School, Northstar Academy, The Steward School, Trinity Episcopal School, and Veritas School.
Colleges and universities
The Richmond area has many major institutions of higher education, including Virginia Commonwealth University (public), University of Richmond (private), Virginia Union University (private), Virginia College (private), South University–Richmond (private, for-profit), Union Theological Seminary & Presbyterian School of Christian Education (private), and the Baptist Theological Seminary in Richmond (BTSR—private). Several community colleges are found in the metro area, including J. Sargeant Reynolds Community College and John Tyler Community College (Chesterfield County). In addition, there are several Technical Colleges in Richmond including ITT Technical Institute, ECPI College of Technology and Centura College. There are several vocational colleges also, such as Fortis College and Bryant Stratton College.
Virginia State University is located south of Richmond, in the suburb of Ettrick, just outside Petersburg. Randolph-Macon College is located north of Richmond, in the incorporated town of Ashland.
Infrastructure
Transportation
thumb|Richmond's downtown Main Street Station
The Greater Richmond area is served by the Richmond International Airport, located in nearby Sandston, seven miles (11 km) southeast of Richmond and within an hour's drive of historic Williamsburg, Virginia. Richmond International is now served by nine airlines with over 200 daily flights providing non-stop service to major destination markets and connecting flights to destinations worldwide. A record 3.3 million passengers used Richmond International Airport in 2006, a 13% increase over 2005.
Richmond is a major hub for intercity bus company Greyhound Lines, with its terminal at 2910 N Boulevard. Multiple runs per day connect directly with Washington, D.C., New York, Raleigh, and elsewhere. Direct trips to New York take approximately 7.5 hours. Discount carrier Megabus also provides curbside service from outside of Main Street Station, with fares starting at $1. Direct service is available to Washington, D.C., Hampton Roads, Charlotte, Raleigh, Baltimore, and Philadelphia. Most other connections to Megabus-served cities, such as New York, can be made from Washington, D.C. Richmond and the surrounding metropolitan area received a roughly $25 million grant from the U.S. Department of Transportation in 2014 to support a newly proposed Rapid Transit System, which would run along Broad Street from Willow Lawn to Landing, in the first phase of an improved public transportation hub for the region.
Local transit and paratransit bus service in Richmond, Henrico, and Chesterfield counties is provided by the Greater Richmond Transit Company (GRTC). The GRTC, however, serves only small parts of the suburban counties. The far West End (Innsbrook and Short Pump) and almost all of Chesterfield County have no public transportation despite dense housing, retail, and office development. According to a 2008 GRTC operations analysis report, a majority of GRTC riders utilize their services because they do not have an available alternative such as a private vehicle.
The Richmond area also has two railroad stations served by Amtrak. Each station receives regular service from north of Richmond, including Washington, D.C., Philadelphia, and New York. The suburban Staples Mill Road Station is located on a major north-south freight line and receives all service to and from all points south, including Raleigh, Durham, Savannah, Newport News, Williamsburg, and Florida. Richmond's only railway station located within the city limits, the historic Main Street Station, was renovated in 2004. As of 2010, the station only receives trains headed to and from Newport News and Williamsburg due to track layout. As a result, the Staples Mill Road station receives more trains and serves more passengers overall.
Richmond also benefits from an excellent position in reference to the state's transportation network, lying at the junction of east-west Interstate 64 and north-south Interstate 95, two of the most heavily traveled highways in the state, as well as along several major rail lines.
Major highways
(Broad Street)
Utilities
Electricity in the Richmond Metro area is provided by Dominion Virginia Power. The company, based in Richmond, is one of the nation's largest producers of energy, serving retail energy customers in nine states. Electricity is provided in the Richmond area primarily by the North Anna Nuclear Generating Station and Surry Nuclear Generating Station, as well as a coal-fired station in Chester, Virginia. These three plants provide a total of 4,453 megawatts of power. Several other natural gas plants provide extra power during times of peak demand. These include facilities in Chester and Surry, and two plants in Richmond (Gravel Neck and Darbytown).Dominion Virginia Power Website.
Natural gas in the Richmond Metro area is provided by the city's Department of Public Utilities, which also serves portions of Henrico and Chesterfield counties.
Water is also provided by the city's Department of Public Utilities, which is one of the largest water producers in Virginia, operating a modern plant that can treat up to 132 million gallons of water a day from the James River. The facility also provides water to the surrounding area through wholesale contracts with Henrico, Chesterfield, and Hanover counties. Overall, the facility provides water for approximately 500,000 people.
The water treatment plant and its distribution system of water mains, pumping stations and storage facilities provide water to approximately 62,000 customers in the city. There is also a wastewater treatment plant located on the south bank of the James River. This plant can treat up to 70 million gallons per day of sanitary sewage and stormwater before returning it to the river. The wastewater utility also operates and maintains sanitary sewer lines and pumping stations, intercepting sewer lines, and the Shockoe Retention Basin, a 44-million-gallon stormwater reservoir used during heavy rains.
International relations
Sister cities
Richmond maintains the following five sister city relationships:
Richmond-upon-Thames, England, United Kingdom
Saitama, Japan
Windhoek, Namibia
Zhengzhou, China
Ségou, Mali
See also
List of Richmonders
National Register of Historic Places listings in Richmond, Virginia
New South
Richmond Police Department
Notes
References
Further reading
Bill, Alfred Hoyt. The Beleaguered City: Richmond, 1861-1865 (1946).
Calcutt, Rebecca Barbour. Richmond's Wartime Hospitals (Pelican Publishing, 2005).
Chesson, Michael B. Richmond after the war, 1865-1890 (Virginia State Library, 1981).
Furgurson, Ernest B. Ashes of glory: Richmond at war (1996).
Hoffman, Steven J. Race, Class and Power in the Building of Richmond, 1870-1920 (McFarland, 2004).
Thomas, Emory M. The Confederate State of Richmond: A Biography of the Capital (LSU Press, 1998).
Trammell, Jack. The Richmond Slave Trade: The Economic Backbone of the Old Dominion (The History Press, 2012).
Wright, Mike. City Under Siege: Richmond in the Civil War (Rowman & Littlefield, 1995).
External links
Greater Richmond Chamber of Commerce
Richmond Metropolitan Convention & Visitors Bureau
Greater Richmond Convention Center
Richmond, Virginia, a National Park Service Discover Our Shared Heritage travel itinerary
Richmond Lions Rugby Football Club Website
Category:Cities in Virginia
Category:Populated places on the James River (Virginia)
Category:Populated places established in 1737
Category:1737 establishments in the Thirteen Colonies
Category:Capitals of former nations
Genocide | Genocide is intentional action to destroy a people (usually defined as an ethnic, national, racial, or religious group) in whole or in part. The hybrid word "genocide" is a combination of the Greek word génos ("race, people") and the Latin suffix -cide ("act of killing").What is genocide? by Genocide Watch The United Nations Genocide Convention defines genocide as "acts committed with intent to destroy, in whole or in part, a national, ethnic, racial or religious group".What Is Genocide?
The term genocide was coined in response to the Armenian Genocide and subsequently applied to the Holocaust. It has since been applied to many other mass killings; well-known examples include the Greek genocide, the Assyrian genocide, the Holodomor, the 1971 Bangladesh genocide, the Cambodian genocide, and, more recently, the Guatemalan genocide, the Kurdish genocide, the Bosnian genocide, and the Rwandan genocide.
Etymology
Genocide has become an official term used in international relations. Before 1944, various terms, including "massacre" and "crimes against humanity", were used to describe intentional, systematic killings (and in 1941, Winston Churchill described the mass killing of Russian POWs and civilians by the German army as "a crime without a name"). In 1943 or 1944, Raphael Lemkin created the term genocide,"In 1943, in the course of his monumental study Axis Rule in Occupied Europe, the late Raphael Lemkin coined the word genocide - from the Greek genos (race or tribe) and the Latin cide (killing) - to describe the deliberate 'destruction of a nation or of an ethnic group.'" being inspired by the Armenian experience at the hands of the Ottoman Turks,Yair Auron. The Banality of Denial: Israel and the Armenian Genocide. — Transaction Publishers, 2004. — p. 9:"...when Raphael Lemkin coined the word genocide in 1944 he cited the 1915 annihilation of Armenians as a seminal example of genocide"William Schabas. Genocide in international law: the crimes of crimes. — Cambridge University Press, 2000. — p. 25:"Lemkin’s interest in the subject dates to his days as a student at Lvov University, when he intently followed attempts to prosecute the perpetration of the massacres of the ArmeniansA. Dirk Moses. Genocide and settler society: frontier violence and stolen indigenous children in Australian history. — Berghahn Books, 2004. — p. 21:"Indignant that the perpetrators of the Armenian genocide had largely escaped prosecution, Lemkin, who was a young state prosecutor in Poland, began lobbying in the early 1930s for international law to criminalize the destruction of such groups." to describe policies of systematic murder, in particular those being carried out by the Nazis, and the word was quickly adopted by many in the international community. The word genocide is the combination of the Greek prefix geno- (meaning "tribe" or "race") and caedere (the Latin word for "to kill"), and is defined as a specific set of violent crimes that are committed against a certain group with the attempt to remove the entire group from existence or to destroy them.
The word genocide was later included as a descriptive term in the process of indictment, but not yet as a formal legal term."What Is Genocide?". United States Holocaust Memorial Museum. United States Holocaust Memorial Council, 20 June 2014. 24 Feb. 2015. According to Lemkin, genocide was defined as "a coordinated strategy to destroy a group of people, a process that could be accomplished through total annihilation as well as strategies that eliminate key elements of the group's basic existence, including language, culture, and economic infrastructure." He sought to mobilize much of the international community to work together to prevent the occurrence of such events.Rothenberg, Daniel. "Genocide." Encyclopedia of Genocide and Crimes Against Humanity. Ed. Dinah L. Shelton. Vol. 1. Detroit: Macmillan Reference USA, 2005. 395–397. Gale Virtual Reference Library. Web. 4 Mar. 2015. Australian anthropologist Peg LeVine coined the term "ritualcide" to describe the destruction of a group's cultural identity without necessarily destroying its members.
The study of genocide has focused mainly on the legal aspect of the term. Formally recognizing an act of genocide as a crime entails prosecution, which treats genocide not only as an outrage by any moral standard but also as a legal liability within international relations. In general terms, genocide is viewed as the deliberate killing of a certain group, yet it commonly escapes trial and prosecution because it is more often than not committed by the officials in power in a state or territory. In 1648, before the term genocide had been coined, the Peace of Westphalia was established to protect ethnic, national, racial and, in some instances, religious groups. During the 19th century, humanitarian intervention was invoked in response to such conflicts and to justify some actions carried out by militaries.Schabas, William A. "United Nations Audiovisual Library of International Law." United Nations Audiovisual Library of International Law. National University of Ireland, n.d. Web. 04 Mar. 2015. <http://legal.un.org/avl/ha/cppcg/cppcg.html>.
Raphael Lemkin, in his work Axis Rule in Occupied Europe (1944), or possibly in 1943, coined the term "genocide" by combining Greek genos (, 'race, people') and Latin caedere ('to kill').genocide in the Oxford English Dictionary, 2nd ed.
Lemkin defined genocide as follows:
Generally speaking, genocide does not necessarily mean the immediate destruction of a nation, except when accomplished by mass killings of all members of a nation. It is intended rather to signify a coordinated plan of different actions aiming at the destruction of essential foundations of the life of national groups, with the aim of annihilating the groups themselves. The objectives of such a plan would be the disintegration of the political and social institutions, of culture, language, national feelings, religion, and the economic existence of national groups, and the destruction of the personal security, liberty, health, dignity, and even the lives of the individuals belonging to such groups.
The preamble to the Genocide Convention (CPPCG) notes that instances of genocide have taken place throughout history,Office of the High Commissioner for Human Rights. Convention on the Prevention and Punishment of the Crime of Genocide but it was not until Lemkin coined the term and the prosecution of perpetrators of the Holocaust at the Nuremberg trials that the United Nations defined the crime of genocide under international law in the Genocide Convention.
During a video interview with Raphael Lemkin for CBS, news commentator Quincy Howe asked him how he came to be interested in the crime of genocide. He replied: "I became interested in genocide because it happened so many times. It happened to the Armenians, then after the Armenians, Hitler took action".
The Greek author Nikos Sarantakos claims that the first official use of the term "genocide" was made not in relation to the Holocaust, but by an international committee in 1948 referring to the kidnapping of children by the communists during the Greek Civil War.Nikos Sarantakos, "Genocides and Criminalisation", 5-9-2014. In Greek language.
As a crime
International law
thumb|250px|Members of the Sonderkommando burn corpses of Jews in pits at Auschwitz II-Birkenau, an extermination camp
After the Holocaust, which had been perpetrated by the Nazi Germany and its allies prior to and during World War II, Lemkin successfully campaigned for the universal acceptance of international laws defining and forbidding genocides. In 1946, the first session of the United Nations General Assembly adopted a resolution that "affirmed" that genocide was a crime under international law, but did not provide a legal definition of the crime. In 1948, the UN General Assembly adopted the Convention on the Prevention and Punishment of the Crime of Genocide (CPPCG) which defined the crime of genocide for the first time.
The CPPCG was adopted by the UN General Assembly on 9 December 1948 and came into effect on 12 January 1951 (Resolution 260 (III)). It contains an internationally recognized definition of genocide which has been incorporated into the national criminal legislation of many countries, and was also adopted by the Rome Statute of the International Criminal Court, which established the International Criminal Court (ICC). Article II of the Convention defines genocide as "any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such: (a) Killing members of the group; (b) Causing serious bodily or mental harm to members of the group; (c) Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part; (d) Imposing measures intended to prevent births within the group; (e) Forcibly transferring children of the group to another group."
The first draft of the Convention included political killings, but these provisions were removed in a political and diplomatic compromise following objections from some countries, including the USSR, a permanent Security Council member. The USSR argued that the Convention's definition should follow the etymology of the term, and may have feared greater international scrutiny of its own Great Purge. Other nations feared that including political groups in the definition would invite international intervention in domestic politics. However, leading genocide scholar William Schabas states: “Rigorous examination of the travaux fails to confirm a popular impression in the literature that the opposition to inclusion of political genocide was some Soviet machination. The Soviet views were also shared by a number of other States for whom it is difficult to establish any geographic or social common denominator: Lebanon, Sweden, Brazil, Peru, Venezuela, the Philippines, the Dominican Republic, Iran, Egypt, Belgium, and Uruguay. The exclusion of political groups was in fact originally promoted by a non-governmental organization, the World Jewish Congress, and it corresponded to Raphael Lemkin’s vision of the nature of the crime of genocide.” William A. Schabas (2009), Genocide in International Law: The Crime of Crimes, 2nd Ed., pg 160
The convention's purpose and scope were later described by the United Nations Security Council.
Specific provisions
"Intent to destroy"
In 2007 the European Court of Human Rights (ECHR) noted in its judgement in the Jorgic v. Germany case that in 1992 the majority of legal scholars took the narrow view that "intent to destroy" in the CPPCG meant the intended physical-biological destruction of the protected group, and that this was still the majority opinion. But the ECHR also noted that a minority took a broader view and did not consider biological-physical destruction necessary, as the intent to destroy a national, racial, religious or ethnic group was enough to qualify as genocide.European Court of Human Rights Judgement in Jorgic v. Germany (Application no. 74613/01) paragraphs 18, 36, 74
In the same judgement the ECHR reviewed the judgements of several international and municipal courts. It noted that the International Criminal Tribunal for the Former Yugoslavia and the International Court of Justice had agreed with the narrow interpretation, that biological-physical destruction was necessary for an act to qualify as genocide. The ECHR also noted that at the time of its judgement, apart from courts in Germany, which had taken a broad view, there had been few cases of genocide under other Convention States' municipal laws and that "There are no reported cases in which the courts of these States have defined the type of group destruction the perpetrator must have intended in order to be found guilty of genocide".European Court of Human Rights Judgement in Jorgic v. Germany (Application no. 74613/01) paragraphs 43–46
"In part"
thumb|Armenian Genocide victims
The phrase "in whole or in part" has been subject to much discussion by scholars of international humanitarian law.What is Genocide? McGill Faculty of Law (McGill University) The International Criminal Tribunal for the Former Yugoslavia found in Prosecutor v. Radislav Krstic – Trial Chamber I – Judgment – IT-98-33 (2001) ICTY8 (2 August 2001)Prosecutor v. Radislav Krstic – Trial Chamber I – Judgment – IT-98-33 (2001) ICTY8 (2 August 2001) that Genocide had been committed. In Prosecutor v. Radislav Krstic – Appeals Chamber – Judgment – IT-98-33 (2004) ICTY 7 (19 April 2004)Prosecutor v. Radislav Krstic – Appeals Chamber – Judgment – IT-98-33 (2004) ICTY 7 (19 April 2004) paragraphs 8, 9, 10, and 11 addressed the issue of in part and found that "the part must be a substantial part of that group. The aim of the Genocide Convention is to prevent the intentional destruction of entire human groups, and the part targeted must be significant enough to have an impact on the group as a whole." The Appeals Chamber goes into details of other cases and the opinions of respected commentators on the Genocide Convention to explain how they came to this conclusion.
The judges continue in paragraph 12, "The determination of when the targeted part is substantial enough to meet this requirement may involve a number of considerations. The numeric size of the targeted part of the group is the necessary and important starting point, though not in all cases the ending point of the inquiry. The number of individuals targeted should be evaluated not only in absolute terms, but also in relation to the overall size of the entire group. In addition to the numeric size of the targeted portion, its prominence within the group can be a useful consideration. If a specific part of the group is emblematic of the overall group, or is essential to its survival, that may support a finding that the part qualifies as substantial within the meaning of Article 4 [of the Tribunal's Statute]."Prosecutor v. Radislav Krstic – Appeals Chamber – Judgment – IT-98-33 (2004) ICTY 7 (19 April 2004) See Paragraph 6: "Article 4 of the Tribunal's Statute, like the Genocide Convention, covers certain acts done with "intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such."Statute of the International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991, U.N. Doc. S/25704 at 36, annex (1993) and S/25704/Add.1 (1993), adopted by Security Council on 25 May 1993, Resolution 827 (1993).
In paragraph 13 the judges raise the issue of the perpetrators' access to the victims: "The historical examples of genocide also suggest that the area of the perpetrators’ activity and control, as well as the possible extent of their reach, should be considered. ... The intent to destroy formed by a perpetrator of genocide will always be limited by the opportunity presented to him. While this factor alone will not indicate whether the targeted group is substantial, it can—in combination with other factors—inform the analysis."
CPPCG (Convention on the Prevention and Punishment of the Crime of Genocide) coming into force
The Convention came into force as international law on 12 January 1951 after the minimum 20 countries became parties. At that time, however, only two of the five permanent members of the UN Security Council were parties to the treaty: France and the Republic of China. The Soviet Union ratified in 1954, the United Kingdom in 1970, the People's Republic of China in 1983 (having replaced the Taiwan-based Republic of China on the UNSC in 1971), and the United States in 1988. This long delay in support for the Convention by the world's most powerful nations caused the Convention to languish for over four decades. Only in the 1990s did the international law on the crime of genocide begin to be enforced.
UN Security Council on genocide
UN Security Council Resolution 1674, adopted by the United Nations Security Council on 28 April 2006, "reaffirms the provisions of paragraphs 138 and 139 of the 2005 World Summit Outcome Document regarding the responsibility to protect populations from genocide, war crimes, ethnic cleansing and crimes against humanity".Resolution 1674 (2006) The resolution committed the Council to action to protect civilians in armed conflict.Security Council passes landmark resolution – world has responsibility to protect people from genocide Oxfam Press Release – 28 April 2006
In 2008 the UN Security Council adopted resolution 1820, which noted that "rape and other forms of sexual violence can constitute war crimes, crimes against humanity or a constitutive act with respect to genocide".http://www.un.org/News/Press/docs/2008/sc9364.doc.htm
Municipal law
Since the Convention came into effect in January 1951 about 80 United Nations member states have passed legislation that incorporates the provisions of CPPCG into their municipal law.The Crime of Genocide in Domestic Laws and Penal Codes website of prevent genocide international.
Criticisms of the CPPCG and other definitions of genocide
William Schabas has suggested that a permanent body to monitor the implementation of the Genocide Convention, as recommended by the Whitaker Report, together with a requirement that states issue reports on their compliance with the convention (such as was incorporated into the United Nations Optional Protocol to the Convention against Torture), would make the convention more effective.William Schabas War crimes and human rights: essays on the death penalty, justice and accountability, Cameron May 2008 ISBN 1-905017-63-4, ISBN 978-1-905017-63-8. p. 791
Writing in 1998 Kurt Jonassohn and Karin Björnson stated that the CPPCG was a legal instrument resulting from a diplomatic compromise. As such the wording of the treaty is not intended to be a definition suitable as a research tool, and although it is used for this purpose, as it has an international legal credibility that others lack, other definitions have also been postulated. Jonassohn and Björnson go on to say that none of these alternative definitions have gained widespread support for various reasons.Kurt Jonassohn & Karin Solveig Björnson, Genocide and Gross Human Rights Violations in Comparative Perspective: In Comparative Perspective, Transaction Publishers, 1998, ISBN 0-7658-0417-4, ISBN 978-0-7658-0417-4. pp. 133–135
Jonassohn and Björnson postulate that the major reason why no single generally accepted genocide definition has emerged is because academics have adjusted their focus to emphasise different periods and have found it expedient to use slightly different definitions to help them interpret events. For example, Frank Chalk and Kurt Jonassohn studied the whole of human history, while Leo Kuper and R. J. Rummel in their more recent works concentrated on the 20th century, and Helen Fein, Barbara Harff and Ted Gurr have looked at post World War II events. Jonassohn and Björnson are critical of some of these studies, arguing that they are too expansive, and conclude that the academic discipline of genocide studies is too young to have a canon of work on which to build an academic paradigm.
The exclusion of social and political groups as targets of genocide in the CPPCG legal definition has been criticized by some historians and sociologists, for example M. Hassan Kakar in his book The Soviet Invasion and the Afghan Response, 1979–1982M. Hassan Kakar Afghanistan: The Soviet Invasion and the Afghan Response, 1979–1982 University of California press 1995 The Regents of the University of California. argues that the international definition of genocide is too restricted,M. Hassan Kakar 4. The Story of Genocide in Afghanistan: 13. Genocide Throughout the Country and that it should include political groups or any group so defined by the perpetrator and quotes Chalk and Jonassohn: "Genocide is a form of one-sided mass killing in which a state or other authority intends to destroy a group, as that group and membership in it are defined by the perpetrator."Frank Chalk, Kurt Jonassohn The History and Sociology of Genocide: Analyses and Case Studies, Yale University Press, 1990, ISBN 0-300-04446-1 While there are various definitions of the term, Adam Jones states that the majority of genocide scholars consider that "intent to destroy" is a requirement for any act to be labelled genocide, and that there is growing agreement on the inclusion of the physical destruction criterion.Jones, Adam. Genocide: A Comprehensive Introduction, Routledge/Taylor & Francis Publishers, 2006. ISBN 0-415-35385-8. Chapter 1: The Origins of Genocide pp.20–21
Barbara Harff and Ted Gurr defined genocide as "the promotion and execution of policies by a state or its agents which result in the deaths of a substantial portion of a group ...[when] the victimized groups are defined primarily in terms of their communal characteristics, i.e., ethnicity, religion or nationality."What is Genocide? McGill Faculty of Law (McGill University) source cites Barbara Harff and Ted Gurr Toward empirical theory of genocides and politicides, International Studies Quarterly, 37:3, 1988 Harff and Gurr also differentiate between genocides and politicides by the characteristics by which members of a group are identified by the state. In genocides, the victimized groups are defined primarily in terms of their communal characteristics, i.e., ethnicity, religion or nationality. In politicides the victim groups are defined primarily in terms of their hierarchical position or political opposition to the regime and dominant groups.Origins and Evolution of the Concept in the Science Encyclopedia by Net Industries. states "Politicide, as [Barbara] Harff and [Ted R.] Gurr define it, refers to the killing of groups of people who are targeted not because of shared ethnic or communal traits, but because of 'their hierarchical position or political opposition to the regime and dominant groups' (p. 360)". But does not give the book title to go with the page number.Staff. There are NO Statutes of Limitations on the Crimes of Genocide! On the website of the American Patriot Friends Network. Cites Barbara Harff and Ted Gurr "Toward empirical theory of genocides and politicides," International Studies Quarterly 37, 3 [1988]. Daniel D. Polsby and Don B. Kates, Jr. state that "... we follow Harff's distinction between genocides and 'pogroms,' which she describes as 'short-lived outbursts by mobs, which, although often condoned by authorities, rarely persist.' If the violence persists for long enough, however, Harff argues, the distinction between condonation and complicity collapses." (cites Harff 1992, see other note)
According to R. J. Rummel, genocide has three different meanings. The ordinary meaning is murder by government of people due to their national, ethnic, racial, or religious group membership. The legal meaning of genocide refers to the international treaty, the Convention on the Prevention and Punishment of the Crime of Genocide. This also includes non-killings that in the end eliminate the group, such as preventing births or forcibly transferring children out of the group to another group. A generalized meaning of genocide is similar to the ordinary meaning but also includes government killings of political opponents or otherwise intentional murder. It is to avoid confusion regarding which meaning is intended that Rummel created the term democide for the third meaning.Democide versus genocide; which is what?
Highlighting the potential for state and non-state actors to commit genocide in the 21st century, for example, in failed states or as non-state actors acquire weapons of mass destruction, Adrian Gallagher defined genocide as 'When a source of collective power (usually a state) intentionally uses its power base to implement a process of destruction in order to destroy a group (as defined by the perpetrator), in whole or in substantial part, dependent upon relative group size'.Adrian Gallagher, Genocide and Its Threat to Contemporary International Order (Palgrave Macmillan, 2013) p. 37. The definition upholds the centrality of intent, the multidimensional understanding of destroy, broadens the definition of group identity beyond that of the 1948 definition yet argues that a substantial part of a group has to be destroyed before it can be classified as genocide (dependent on relative group size).
A major criticism of the international community's response to the Rwandan Genocide was that it was reactive, not proactive. The international community has developed a mechanism for prosecuting the perpetrators of genocide but has not developed the will or the mechanisms for intervening in a genocide as it happens.
International prosecution of genocide
By ad hoc tribunals
Image: Nuon Chea, the Khmer Rouge's chief ideologist, before the Cambodian Genocide Tribunal on 5 December 2011.
All signatories to the CPPCG are required to prevent and punish acts of genocide, both in peace and wartime, though some barriers make this enforcement difficult. In particular, some of the signatories—namely, Bahrain, Bangladesh, India, Malaysia, the Philippines, Singapore, the United States, Vietnam, Yemen, and former Yugoslavia—signed with the proviso that no claim of genocide could be brought against them at the International Court of Justice without their consent.United Nations Treaty Collection (As of 9 October 2001): Convention on the Prevention and Punishment of the Crime of Genocide on the web site of the Office of the United Nations High Commissioner for Human Rights Despite official protests from other signatories (notably Cyprus and Norway) on the ethics and legal standing of these reservations, the immunity from prosecution they grant has been invoked from time to time, as when the United States refused to allow a charge of genocide brought against it by former Yugoslavia following the 1999 Kosovo War.(See for example the submission by Agent of the United States, Mr. David Andrews to the ICJ Public Sitting, 11 May 1999)
It is commonly accepted that, at least since World War II, genocide has been illegal under customary international law as a peremptory norm, as well as under conventional international law. Acts of genocide are generally difficult to establish for prosecution, because a chain of accountability must be established. International criminal courts and tribunals function primarily because the states involved are incapable or unwilling to prosecute crimes of this magnitude themselves.
Nuremberg Tribunal (1945–1946)
Because universal acceptance of international laws defining and forbidding genocide was achieved only in 1948, with the promulgation of the Convention on the Prevention and Punishment of the Crime of Genocide (CPPCG), those criminals who were prosecuted after the war in international courts for taking part in the Holocaust were found guilty of crimes against humanity and other more specific crimes like murder. Nevertheless, the Holocaust is universally recognized to have been a genocide, and the term, which had been coined the year before by Raphael Lemkin,Oxford English Dictionary: 1944 R. Lemkin Axis Rule in Occupied Europe ix. 79 "By 'genocide' we mean the destruction of a nation or of an ethnic group." appeared in the indictment of the 24 Nazi leaders, Count 3, which stated that all the defendants had "conducted deliberate and systematic genocide—namely, the extermination of racial and national groups..."Oxford English Dictionary "Genocide" citing Sunday Times 21 October 1945
International Criminal Tribunal for the Former Yugoslavia (1993 to present)
Image: The cemetery at the Srebrenica-Potočari Memorial and Cemetery to Genocide Victims
Image: A boy at a grave during the 2006 funeral of genocide victims
The term Bosnian genocide is used to refer either to the genocide committed by Serb forces in Srebrenica in 1995,Staff. Bosnian genocide suspect extradited, BBC, 2 April 2002 or to ethnic cleansing that took place during the 1992–1995 Bosnian War.European Court of Human Rights . Jorgic v. Germany Judgment, 12 July 2007. § 47
In 2001, the International Criminal Tribunal for the Former Yugoslavia (ICTY) judged that the 1995 Srebrenica massacre was an act of genocide.The International Criminal Tribunal for the Former Yugoslavia found in Prosecutor v. Radislav Krstic – Trial Chamber I – Judgment – IT-98-33 (2001) ICTY8 (2 August 2001) that genocide had been committed. (see paragraph 560 for name of group in English on whom the genocide was committed). It was upheld in Prosecutor v. Radislav Krstic – Appeals Chamber – Judgment – IT-98-33 (2004) ICTY 7 (19 April 2004)
On 26 February 2007, the International Court of Justice (ICJ), in the Bosnian Genocide Case, upheld the ICTY's earlier finding that the massacre in Srebrenica and Zepa constituted genocide, but found that the Serbian government had not participated in a wider genocide on the territory of Bosnia and Herzegovina during the war, as the Bosnian government had claimed.
On 12 July 2007, the European Court of Human Rights, when dismissing the appeal by Nikola Jorgić against his conviction for genocide by a German court (Jorgic v. Germany), noted that the German courts' wider interpretation of genocide has since been rejected by international courts considering similar cases.ECHR Jorgic v. Germany. § 42 citing Prosecutor v. Krstic, IT-98-33-T, judgment of 2 August 2001, §§ 580ECHR Jorgic v. Germany Judgment, 12 July 2007. § 44 citing Prosecutor v. Kupreskic and Others (IT-95-16-T, judgment of 14 January 2000), § 751. On 14 January 2000, the ICTY ruled in the Prosecutor v. Kupreskic and Others case that the killing of 116 Muslims in order to expel the Muslim population from a village amounted to persecution, not genocide.ICJ press release 2007/8 26 February 2007 The ECHR also noted that in the 21st century "Amongst scholars, the majority have taken the view that ethnic cleansing, in the way in which it was carried out by the Serb forces in Bosnia and Herzegovina in order to expel Muslims and Croats from their homes, did not constitute genocide. However, there are also a considerable number of scholars who have suggested that these acts did amount to genocide", and the ICTY found in the Momcilo Krajisnik case that the actus reus of genocide was met in Prijedor: "With regard to the charge of genocide, the Chamber found that in spite of evidence of acts perpetrated in the municipalities which constituted the actus reus of genocide".http://icty.org/x/cases/krajisnik/cis/en/cis_krajisnik_en.pdf
About 30 people have been indicted for participating in genocide or complicity in genocide during the early 1990s in Bosnia. To date, after several plea bargains and some convictions that were successfully challenged on appeal, two men, Vujadin Popović and Ljubiša Beara, have been found guilty of committing genocide, Zdravko Tolimir has been found guilty of committing genocide and conspiracy to commit genocide, and two others, Radislav Krstić and Drago Nikolić, have been found guilty of aiding and abetting genocide. Three others have been found guilty of participating in genocide in Bosnia by German courts, one of whom, Nikola Jorgić, lost an appeal against his conviction in the European Court of Human Rights. A further eight men, former members of the Bosnian Serb security forces, were found guilty of genocide by the State Court of Bosnia and Herzegovina (see List of Bosnian genocide prosecutions).
Slobodan Milošević, as the former President of Serbia and of Yugoslavia, was the most senior political figure to stand trial at the ICTY. He died on 11 March 2006 during his trial, in which he was accused of genocide or complicity in genocide in territories within Bosnia and Herzegovina, so no verdict was returned. In 1995, the ICTY issued a warrant for the arrest of Bosnian Serbs Radovan Karadžić and Ratko Mladić on several charges including genocide. On 21 July 2008, Karadžić was arrested in Belgrade and later transferred to The Hague to stand trial, accused of genocide among other crimes. Ratko Mladić was arrested on 26 May 2011 by Serbian special police in Lazarevo, Serbia. Karadžić was convicted of ten of the eleven charges laid against him and sentenced to 40 years in prison on 24 March 2016.
International Criminal Tribunal for Rwanda (1994 to present)
Image: Rwandan Genocide Victims
The International Criminal Tribunal for Rwanda (ICTR) is a court under the auspices of the United Nations for the prosecution of offenses committed in Rwanda during the genocide which occurred there during April 1994, commencing on 6 April. The ICTR was created on 8 November 1994 by the Security Council of the United Nations in order to judge those people responsible for the acts of genocide and other serious violations of the international law performed in the territory of Rwanda, or by Rwandan citizens in nearby states, between 1 January and 31 December 1994.
So far, the ICTR has finished nineteen trials and convicted twenty-seven accused persons. On 14 December 2009 two more men were accused and convicted of their crimes. Another twenty-five persons are still on trial. Twenty-one are awaiting trial in detention, two more having been added on 14 December 2009. Ten are still at large.These figures need revising; they are from the ICTR page, which says see www.ictr.org The first trial, of Jean-Paul Akayesu, began in 1997. In October 1998, Akayesu was sentenced to life imprisonment. Jean Kambanda, interim Prime Minister, pleaded guilty.
Extraordinary Chambers in the Courts of Cambodia (2003 to present)
Image: Rooms of the Tuol Sleng Genocide Museum contain thousands of photos taken by the Khmer Rouge of their victims.
Image: Skulls at Choeung Ek.
The Khmer Rouge, led by Pol Pot, Ta Mok and other leaders, organized the mass killing of ideologically suspect groups. The total number of victims is estimated at approximately 1.7 million Cambodians between 1975 and 1979, including deaths from slave labour.Cambodian Genocide Program, Yale University's MacMillan Center for International and Area Studies
On 6 June 2003 the Cambodian government and the United Nations reached an agreement to set up the Extraordinary Chambers in the Courts of Cambodia (ECCC) which would focus exclusively on crimes committed by the most senior Khmer Rouge officials during the period of Khmer Rouge rule of 1975–1979. The judges were sworn in early July 2006.Doyle, Kevin. "Putting the Khmer Rouge on Trial", Time, 26 July 2007MacKinnon, Ian "Crisis talks to save Khmer Rouge trial", The Guardian, 7 March 2007The Khmer Rouge Trial Task Force, Royal Cambodian Government
The genocide charges related to killings of Cambodia's Vietnamese and Cham minorities, which are estimated to account for tens of thousands of killings, and possibly more.Case 002 The Extraordinary Chambers in the Courts of Cambodia. Retrieved 14 August 2014Former Khmer Rouge leaders begin genocide trial BBC. 30 July 2014
The investigating judges were presented with the names of five possible suspects by the prosecution on 18 July 2007.
Kang Kek Iew was formally charged with war crimes and crimes against humanity and detained by the Tribunal on 31 July 2007. He was indicted on charges of war crimes and crimes against humanity on 12 August 2008. His appeal against his conviction for war crimes and crimes against humanity was rejected on 3 February 2012, and he is serving a sentence of life imprisonment.
Nuon Chea, a former prime minister, was indicted on charges of genocide, war crimes, crimes against humanity and several other crimes under Cambodian law on 15 September 2010. He was transferred into the custody of the ECCC on 19 September 2007. His trial started on 27 June 2011 and ended on 7 August 2014, with a life sentence imposed for crimes against humanity.McKirdy, Euan (7 August 2014). "Top Khmer Rouge leaders found guilty of crimes against humanity, sentenced to life in prison". CNN. Retrieved 7 August 2014.
Khieu Samphan, a former head of state, was indicted on charges of genocide, war crimes, crimes against humanity and several other crimes under Cambodian law on 15 September 2010. He was transferred into the custody of the ECCC on 19 September 2007. His trial began on 27 June 2011 and also ended on 7 August 2014, with a life sentence imposed for crimes against humanity.McKirdy, Euan (7 August 2014). "Top Khmer Rouge leaders found guilty of crimes against humanity, sentenced to life in prison". CNN. Retrieved 7 August 2014.
Ieng Sary, a former foreign minister, was indicted on charges of genocide, war crimes, crimes against humanity and several other crimes under Cambodian law on 15 September 2010. He was transferred into the custody of the ECCC on 12 November 2007. His trial started on 27 June 2011 and ended with his death on 14 March 2013. He was never convicted.
Ieng Thirith, a former minister for social affairs and wife of Ieng Sary, was indicted on charges of genocide, war crimes, crimes against humanity and several other crimes under Cambodian law on 15 September 2010. She was transferred into the custody of the ECCC on 12 November 2007. Proceedings against her have been suspended pending a health evaluation.
There has been disagreement between some of the international jurists and the Cambodian government over whether any other people should be tried by the Tribunal.
By the International Criminal Court
Since 2002, the International Criminal Court can exercise its jurisdiction if national courts are unwilling or unable to investigate or prosecute genocide, thus acting as a "court of last resort" and leaving the primary responsibility to exercise jurisdiction over alleged criminals to individual states. Due to United States concerns over the ICC, the United States prefers to continue to use specially convened international tribunals for such investigations and potential prosecutions. 23 November 2005
Darfur, Sudan
Image: A mother with her sick baby at Abu Shouk IDP camp in North Darfur
There has been much debate over categorizing the situation in Darfur as genocide.Jafari, Jamal and Paul Williams (2005) "Word Games: The UN and Genocide in Darfur" JURIST The ongoing conflict in Darfur, Sudan, which started in 2003, was declared a "genocide" by United States Secretary of State Colin Powell on 9 September 2004 in testimony before the Senate Foreign Relations Committee.POWELL DECLARES KILLING IN DARFUR 'GENOCIDE', The NewsHour with Jim Lehrer, 9 September 2004 Since that time however, no other permanent member of the UN Security Council followed suit. In fact, in January 2005, an International Commission of Inquiry on Darfur, authorized by UN Security Council Resolution 1564 of 2004, issued a report to the Secretary-General stating that "the Government of the Sudan has not pursued a policy of genocide." , 25 January 2005, at 4 Nevertheless, the Commission cautioned that "The conclusion that no genocidal policy has been pursued and implemented in Darfur by the Government authorities, directly or through the militias under their control, should not be taken in any way as detracting from the gravity of the crimes perpetrated in that region. International offences such as the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and heinous than genocide."
In March 2005, the Security Council formally referred the situation in Darfur to the Prosecutor of the International Criminal Court, taking into account the Commission report but without mentioning any specific crimes. Two permanent members of the Security Council, the United States and China, abstained from the vote on the referral resolution.SECURITY COUNCIL REFERS SITUATION IN DARFUR, SUDAN, TO PROSECUTOR OF INTERNATIONAL CRIMINAL COURT, UN Press Release SC/8351, 31 March 2005 As of his fourth report to the Security Council, the Prosecutor has found "reasonable grounds to believe that the individuals identified [in the UN Security Council Resolution 1593] have committed crimes against humanity and war crimes," but did not find sufficient evidence to prosecute for genocide. , Office of the Prosecutor of the International Criminal Court, 14 December 2006.
In April 2007, the Judges of the ICC issued arrest warrants against the former Minister of State for the Interior, Ahmad Harun, and a Janjaweed militia leader, Ali Kushayb, for crimes against humanity and war crimes.Statement by Mr. Luis Moreno Ocampo, Prosecutor of the International Criminal Court, to the United Nations Security Council pursuant to UNSCR 1593 (2005), International Criminal Court, 5 June 2008
On 14 July 2008, prosecutors at the International Criminal Court (ICC) filed ten charges against Sudan's President Omar al-Bashir: three counts of genocide, five of crimes against humanity and two of murder. The ICC's prosecutors claimed that al-Bashir "masterminded and implemented a plan to destroy in substantial part" three tribal groups in Darfur because of their ethnicity.
On 4 March 2009, the ICC issued a warrant of arrest for Omar Al Bashir, President of Sudan as the ICC Pre-Trial Chamber I concluded that his position as head of state does not grant him immunity against prosecution before the ICC. The warrant was for war crimes and crimes against humanity. It did not include the crime of genocide because the majority of the Chamber did not find that the prosecutors had provided enough evidence to include such a charge.ICC issues a warrant of arrest for Omar Al Bashir, President of Sudan (ICC-CPI-20090304-PR394), ICC press release, 4 March 2009
Genocide in history
Image: Naked Soviet POWs held by the Nazis in Mauthausen concentration camp. "... the murder of at least 3.3 million Soviet POWs is one of the least-known of modern genocides; there is still no full-length book on the subject in English." —Adam JonesAdam Jones (2010), Genocide: A Comprehensive Introduction (2nd ed.), p. 271: "Next to the Jews in Europe," wrote Alexander Werth, "the biggest single German crime was undoubtedly the extermination by hunger, exposure and in other ways of . . . Russian war prisoners." Yet the murder of at least 3.3 million Soviet POWs is one of the least-known of modern genocides; there is still no full-length book on the subject in English. It also stands as one of the most intensive genocides of all time: "a holocaust that devoured millions," as Catherine Merridale acknowledges. The large majority of POWs, some 2.8 million, were killed in just eight months of 1941–42, a rate of slaughter matched (to my knowledge) only by the 1994 Rwanda genocide.
The concept of genocide can be applied to historical events of the past. The preamble to the CPPCG states that "at all periods of history genocide has inflicted great losses on humanity."
Revisionist attempts to challenge or affirm claims of genocide are illegal in some countries. For example, several European countries ban the denial of the Holocaust or the Armenian Genocide, while in Turkey those who refer to the mass killings of Armenians, Greeks, Assyrians and Maronites as genocide may be prosecuted under Article 301.Pair guilty of 'insulting Turkey', BBC News, 11 October 2007.
William Rubinstein argues that the origin of 20th-century genocides can be traced back to the collapse of the elite structure and normal modes of government in parts of Europe following the First World War.
Stages of genocide, influences leading to genocide, and efforts to prevent it
In 1996 Gregory Stanton, the president of Genocide Watch, presented a briefing paper called "The 8 Stages of Genocide" at the United States Department of State.Gregory Stanton. The 8 Stages of Genocide, Genocide Watch, 1996 In it he suggested that genocide develops in eight stages that are "predictable but not inexorable".The FBI has found somewhat similar stages for hate groups.
The Stanton paper was presented to the State Department shortly after the Rwandan Genocide, and much of its analysis is based on why that genocide occurred. The preventive measures suggested, given the briefing paper's original target audience, were those that the United States could implement directly or indirectly by using its influence on other governments.
Stage 1 – Classification: People are divided into "us and them". Preventive measures: "The main preventive measure at this early stage is to develop universalistic institutions that transcend... divisions."
Stage 2 – Symbolization: "When combined with hatred, symbols may be forced upon unwilling members of pariah groups..." Preventive measures: "To combat symbolization, hate symbols can be legally forbidden as can hate speech".
Stage 3 – Dehumanization: "One group denies the humanity of the other group. Members of it are equated with animals, vermin, insects, or diseases." Preventive measures: "Local and international leaders should condemn the use of hate speech and make it culturally unacceptable. Leaders who incite genocide should be banned from international travel and have their foreign finances frozen."
Stage 4 – Organization: "Genocide is always organized... Special army units or militias are often trained and armed..." Preventive measures: "The U.N. should impose arms embargoes on governments and citizens of countries involved in genocidal massacres, and create commissions to investigate violations"
Stage 5 – Polarization: "Hate groups broadcast polarizing propaganda..." Preventive measures: "Prevention may mean security protection for moderate leaders or assistance to human rights groups...Coups d’état by extremists should be opposed by international sanctions."
Stage 6 – Preparation: "Victims are identified and separated out because of their ethnic or religious identity..." Preventive measures: "At this stage, a Genocide Emergency must be declared. ..."
Stage 7 – Extermination: "It is 'extermination' to the killers because they do not believe their victims to be fully human". Preventive measures: "At this stage, only rapid and overwhelming armed intervention can stop genocide. Real safe areas or refugee escape corridors should be established with heavily armed international protection."
Stage 8 – Denial: "The perpetrators... deny that they committed any crimes..." Preventive measures: "The response to denial is punishment by an international tribunal or national courts"
In April 2012, it was reported that Stanton would soon be officially adding two new stages, Discrimination and Persecution, to his original theory, which would make for a 10-stage theory of genocide.http://aipr.wordpress.com/2012/04/19/genprev-in-the-news-19-april-2012/
In a paper for the Social Science Research Council, Dirk Moses criticises the Stanton approach.
Other authors have focused on the structural conditions leading up to genocide and the psychological and social processes that create an evolution toward genocide. Ervin Staub showed that economic deterioration and political confusion and disorganization were starting points of increasing discrimination and violence in many instances of genocides and mass killing. These led to the scapegoating of a group and to ideologies that identified that group as an enemy. A history of devaluation of the group that becomes the victim, past violence against the group that becomes the perpetrator leading to psychological wounds, authoritarian cultures and political systems, and the passivity of internal and external witnesses (bystanders) all contribute to the probability that the violence develops into genocide.Staub, E (1989). The roots of evil: The origins of genocide and other group violence. New York: Cambridge University Press. Intense conflict between groups that remains unresolved and becomes intractable and violent can also lead to genocide. The conditions that lead to genocide provide guidance for early prevention, such as humanizing a devalued group, creating ideologies that embrace all groups, and activating bystander responses. There is substantial research to indicate how this can be done, but information is only slowly transformed into action.Staub, E. (2011) Overcoming evil: Genocide, violent conflict and terrorism New York: Oxford University Press.
Kjell Anderson uses a dichotomistic classification of genocides: "hot genocides, motivated by hate and the victims’ threatening nature, with low-intensity cold genocides, rooted in victims’ supposed inferiority."p. 9. Anderson, Kjell. (2015) Colonialism and Cold Genocide: The Case of West Papua. Genocide Studies and Prevention: An International Journal Vol. 9: Iss. 2: 9–25.
See also
Notes
References
Further reading
Articles
Christopher R. Browning, "The Two Different Ways of Looking at Nazi Murder" (review of Philippe Sands, East West Street: On the Origins of "Genocide" and "Crimes Against Humanity", Knopf, 425 pp., $32.50; and Christian Gerlach, The Extermination of the European Jews, Cambridge University Press, 508 pp., $29.99 [paper]), The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), pp. 56–58. Discusses Hersch Lauterpacht's legal concept of "crimes against humanity", contrasted with Rafael Lemkin's legal concept of "genocide". All genocides are crimes against humanity, but not all crimes against humanity are genocides; genocides require a higher standard of proof, as they entail intent to destroy a particular group.
The Genocide in Darfur is Not What It Seems Christian Science Monitor
Suharto’s Purge, Indonesia’s Silence. Joshua Oppenheimer for The New York Times, September 29, 2015.
(in Spanish) Aizenstatd, Najman Alexander. "Origen y Evolución del Concepto de Genocidio". Vol. 25 Revista de Derecho de la Universidad Francisco Marroquín 11 (2007). ISSN 1562-2576
No Lessons Learned from the Holocaust? Assessing Risks of Genocide and Political Mass Murder since 1955 American Political Science Review. Vol. 97, No. 1. February 2003.
(in Spanish) Marco, Jorge. "Genocidio y Genocide Studies: Definiciones y debates", en: Aróstegui, Julio, Marco, Jorge y Gómez Bravo, Gutmaro (coord.): "De Genocidios, Holocaustos, Exterminios...", Hispania Nova, 10 (2012). Véase
What Really Happened in Rwanda? Christian Davenport and Allan C. Stam.
Reyntjens, F. (2004). "Rwanda, Ten Years On: From Genocide to Dictatorship." African Affairs 103(411): 177–210.
Brysk, Alison. 1994. "The Politics of Measurement: The Contested Count of the Disappeared in Argentina." Human Rights Quarterly 16: 676–92.
Davenport, C. and P. Ball (2002). "Views to a Kill: Exploring the Implications of Source Selection in the Case of Guatemalan State Terror, 1977–1996." Journal of Conflict Resolution 46(3): 427–450.
Krain, M. (1997). "State-Sponsored Mass Murder: A Study of the Onset and Severity of Genocides and Politicides." Journal of Conflict Resolution 41(3): 331–360.
Books
Ball, P., P. Kobrak, and H. Spirer (1999). State Violence in Guatemala, 1960–1996: A Quantitative Reflection. Washington, D.C.: American Association for the Advancement of Science.
Bloxham, Donald & Moses, A. Dirk [editors]: The Oxford Handbook of Genocide Studies. [Interdisciplinary Contributions about Past & Present Genocides]. Oxford University Press, second edition 2013. ISBN 978-0-19-967791-7
Corradi, Juan, Patricia Weiss Fagen, and Manuel Antonio Garreton, eds. 1992. Fear at the Edge: State Terror and Resistance in Latin America. Berkeley: University of California Press.
Elliot, G. (1972). Twentieth Century Book of the Dead. New York, C. Scribner.
Levene, M. (2005). Genocide in the Age of the Nation State. New York, Palgrave Macmillan.
Lewy, Guenter (2012). Essays on Genocide and Humanitarian Intervention. University of Utah Press. ISBN 978-1-60781-168-8.
Mamdani, M. (2001). When Victims Become Killers: Colonialism, Nativism, and the Genocide in Rwanda. Princeton, N.J., Princeton University Press.
Schmid, A. P. (1991). Repression, State Terrorism, and Genocide: Conceptual Clarifications. State Organized Terror: The Case of Violent Internal Repression. P. T. Bushnell. Boulder, Colo.: Westview Press. 312 p.
Staub, Ervin (1989). The roots of evil: The origins of genocide and other group violence. New York: Cambridge University Press. ISBN 978-0-521-42214-7
Staub, Ervin (2011). Overcoming Evil: Genocide, violent conflict and terrorism. New York: Oxford University Press. ISBN 978-0-19-538204-4
Tams, Christian J.; Berster, Lars; Schiffbauer, Björn (2014). Convention on the Prevention and Punishment of the Crime of Genocide: A Commentary. Munich: C.H. Beck. ISBN 978-3-406-60317-4.
Van den Berghe, P. L. (1990). State Violence and Ethnicity. Niwot, Colo., University of Colorado Press.
Report by Minority Rights Group International, 2006
External links
Overviews
Institute for the Study of Genocide/International Association of Genocide Scholars
Genocide Intervention Network
OneWorld Perspectives Magazine: Preventing Genocide (April/May 2006)- global human rights and development network looks at genocide from a variety of perspectives
Committee on Conscience of the United States Holocaust Memorial Museum; Responding to Threats of Genocide
Staff, The Crime of Genocide in Domestic Laws and Penal Codes, Prevent Genocide International
Voices of the Holocaust—a learning resource at the British Library
Convention on the Prevention and Punishment of the Crime of Genocide at Law-Ref.org—fully indexed and crosslinked with other documents
Documents and Resources on War, War Crimes and Genocide
International Network of Genocide Scholars (INoGS)
stages of genocide
Genocide & Crimes Against Humanity—a learning resource, highlighting the cases of Myanmar, Bosnia, the DRC, and Darfur
Whitaker Report
Resources
Auschwitz Institute for Peace and Reconciliation
USA for UNHCR Web site
Research programs
Centre for the Study of Genocide and Mass Violence, Sheffield, United Kingdom
Center for Holocaust and Genocide Studies, Amsterdam, the Netherlands
Center for Holocaust and Genocide Studies, University of Minnesota
Genocide Studies Program, Yale University
Montreal Institute for Genocide Studies, Concordia University
Minorities at Risk project at the University of Maryland
The Inforce Foundation (International Forensic Centre of Excellence), UK
Foundation for the International Prevention of Genocide and Mass Atrocities, Hungary
Category:Crimes
Category:Murder
Category:International criminal law
Category:Population
Category:Words coined in the 1940s | 12,441 | 2017-01 |
Great Plains | The Great Plains is the broad expanse of flat land (a plain), much of it covered in prairie, steppe and grassland, that lies west of the Mississippi River tallgrass prairie states and east of the Rocky Mountains in the United States and Canada. This area covers parts, but not all, of the states of Colorado, Kansas, Montana, Nebraska, New Mexico, North Dakota, Oklahoma, South Dakota, Texas, Wyoming, Minnesota and Iowa, and the Canadian provinces of Alberta, Manitoba and Saskatchewan. The region is known for supporting extensive cattle ranching and dry farming.
The Canadian portion of the Plains is known as the Prairies. Some geographers include some territory of northern Mexico in the Plains, but many stop at the Rio Grande.
Usage
The term "Great Plains" is used in the United States to describe a sub-section of the even more vast Interior Plains physiographic division, which covers much of the interior of North America. It also has currency as a region of human geography, referring to the Plains Indians or the Plains States.
In Canada the term is little used; Natural Resources Canada, the government department responsible for official mapping and equivalent to the United States Geological Survey, treats the Interior Plains as one unit consisting of several related plateaux and plains. There is no region referred to as the "Great Plains" in The Atlas of Canada.Atlas.nrcan.gc.ca In terms of human geography, the term prairie is more commonly used in Canada, and the region is known as the Prairie Provinces or simply "the Prairies."
The North American Environmental Atlas, produced by the Commission for Environmental Cooperation, a NAFTA agency composed of the geographical agencies of the Mexican, American, and Canadian governments, uses the "Great Plains" as an ecoregion synonymous with predominant prairies and grasslands rather than as a physiographic region defined by topography.CEC.org The Great Plains ecoregion includes five sub-regions: Temperate Prairies, West-Central Semi-Arid Prairies, South-Central Semi-Arid Prairies, Texas Louisiana Coastal Plains, and Tamaulipas-Texas Semi-Arid Plain, which overlap or expand upon other Great Plains designations.
Extent
Image: Ecoregions of the Great Plains
Image: The Great Plains before the native grasses were ploughed under, Haskell County, Kansas, 1897, showing a man sitting behind a buffalo wallow.
The region is about east to west and north to south. Much of the region was home to American bison herds until they were hunted to near extinction during the mid/late 19th century. It has an area of approximately . Current thinking regarding the geographic boundaries of the Great Plains is shown by this map at the Center for Great Plains Studies, University of Nebraska–Lincoln.
The term "Great Plains", for the region west of about the 96th or 98th meridian and east of the Rocky Mountains, was not generally used before the early 20th century. Nevin Fenneman's 1916 study, Physiographic Subdivision of the United States, brought the term Great Plains into more widespread usage. Before that the region was almost invariably called the High Plains, in contrast to the lower Prairie Plains of the Midwestern states. Today the term "High Plains" is used for a subregion of the Great Plains.
Geography
The Great Plains are the westernmost portion of the vast North American Interior Plains, which extend east to the Appalachian Plateau. The United States Geological Survey divides the Great Plains in the United States into ten physiographic subdivisions:
Coteau du Missouri or Missouri Plateau, glaciated – east-central South Dakota, northern and eastern North Dakota and northeastern Montana;
Coteau du Missouri, unglaciated – western South Dakota, northeastern Wyoming, southwestern North Dakota and southeastern Montana;
Black Hills – western South Dakota;
High Plains – southeastern Wyoming, southwestern South Dakota, western Nebraska (including the Sand Hills), eastern Colorado, western Kansas, western Oklahoma, eastern New Mexico, and northwestern Texas (including the Llano Estacado and Texas Panhandle);
Plains Border – central Kansas and northern Oklahoma (including the Flint, Red and Smoky Hills);
Colorado Piedmont – eastern Colorado;
Raton section – northeastern New Mexico;
Pecos Valley – eastern New Mexico;
Edwards Plateau – south-central Texas; and
Central Texas section – central Texas.
Paleontology
During the Cretaceous Period (145–66 million years ago), the Great Plains were covered by a shallow inland sea called the Western Interior Seaway. However, during the Late Cretaceous to the Paleocene (65–55 million years ago), the seaway began to recede, leaving behind thick marine deposits and a relatively flat terrain which the seaway had once occupied.
During the Cenozoic era, specifically about 25 million years ago during the Miocene and Pliocene epochs, the continental climate became favorable to the evolution of grasslands. Existing forest biomes declined and grasslands became much more widespread. The grasslands provided a new niche for mammals, including many ungulates and glires, that switched from browsing diets to grazing diets. Traditionally, the spread of grasslands and the development of grazers have been strongly linked. However, an examination of mammalian teeth suggests that it is the open, gritty habitat and not the grass itself which is linked to diet changes in mammals, giving rise to the "grit, not grass" hypothesis.Phillip E. Jardine, Christine M. Janis, Sarda Sahney, Michael J. Benton. "Grit not grass: Concordant patterns of early origin of hypsodonty in Great Plains ungulates and Glires." Palaeogeography, Palaeoclimatology, Palaeoecology. December 2012:365–366, 1–10
Paleontological finds in the area have yielded bones of mammoths, saber-toothed cats and other ancient animals,"Ice Age Animals". Illinois State Museum. as well as dozens of other megafauna (large animals over ) – such as giant sloths, horses, mastodons, and American lion – that dominated the area of the ancient Great Plains for thousands to millions of years. The vast majority of these animals became extinct in North America at the end of the Pleistocene (around 13,000 years ago)."A Plan For Reintroducing Megafauna To North America". ScienceDaily. October 2, 2006.
Climate
Image: Bison at the Tallgrass Prairie Preserve in Oklahoma
Image: A glimpse of the southern Great Plains in southern Oklahoma north of Burkburnett, Texas
In general, the Great Plains have a wide variety of weather through the year, with very cold and harsh winters and very hot and humid summers. Wind speeds are often very high, especially in winter. Grasslands are among the least protected biomes.Threats Assessment for the Northern Great Plains Ecoregion Humans have converted much of the prairies for agricultural purposes or to create pastures. The Great Plains experience dust storms nearly every year.
The 100th meridian roughly corresponds with the line that divides the Great Plains into an area that receives 20 inches or more of rainfall per year and an area that receives less than 20 inches. In this context, the High Plains, as well as Southern Alberta, south-western Saskatchewan and Eastern Montana, are mainly semi-arid steppe land and are generally characterised by rangeland or marginal farmland. The region (especially the High Plains) is periodically subjected to extended periods of drought; high winds in the region may then generate devastating dust storms. The eastern Great Plains near the eastern boundary fall in the humid subtropical climate zone in the southern areas, and the northern and central areas fall in the humid continental climate zone.
Many thunderstorms occur in the plains in the spring through summer. The southeastern portion of the Great Plains is the most tornado active area in the world and is sometimes referred to as Tornado Alley.
Flora
The Great Plains are part of the floristic North American Prairies Province, which extends from the Rocky Mountains to the Appalachians.
History
Original American contact
Image: Buffalo hunt under the wolf-skin mask, 1832–33.
The first Americans (Paleo-Indians) arrived on the Great Plains more than 15,000 years ago, and successive indigenous cultures are known to have inhabited the region for thousands of years since. Humans entered the North American continent in waves of migration, mostly over Beringia, the Bering Strait land bridge.
Historically the Great Plains were the range of the bison and of the culture of the Plains Indians, whose tribes included the Blackfoot, Crow, Sioux, Cheyenne, Arapaho, Comanche, and others. Eastern portions of the Great Plains were inhabited by tribes who lived in semi-permanent villages of earth lodges, such as the Arikara, Mandan, Pawnee and Wichita.
European contact
Image: Great Plains in North Dakota, 2007, where communities began settling in the 1870s.
The first recorded encounters between Europeans and Native Americans in the Great Plains occurred in Texas, Kansas and Nebraska from 1540 to 1542, with the arrival of Francisco Vázquez de Coronado, a Spanish conquistador. In the same period, Hernando de Soto crossed the region in a west-northwest direction in what is now Oklahoma and Texas. Today this route is known as the De Soto Trail. The Spanish thought the Great Plains were the location of the mythological Quivira and Cíbola, a place said to be rich in gold.
Over the next one hundred years, founding of the fur trade brought thousands of ethnic Europeans into the Great Plains. Fur trappers from France, Spain, Britain, Russia and the young United States made their way across much of the region, making regular contacts with Native Americans. After the United States acquired the Louisiana Purchase in 1803 and conducted the Lewis and Clark Expedition in 1804–1806, more information about the Plains became available and various pioneers entered the areas.
Manuel Lisa, based in St. Louis, established a major fur trading site at his Fort Lisa on the Missouri River in Nebraska. Fur trading posts were often the basis of later settlements. Through the 19th century, more European Americans and Europeans migrated to the Great Plains as part of a vast westward expansion of population. New settlements became dotted across the Great Plains.
The new immigrants also brought diseases against which the Native Americans had no resistance. Between a half and two-thirds of the Plains Indians are thought to have died of smallpox by the time of the 1803 Louisiana Purchase."Emerging Infections: Microbial Threats to Health in the United States (1992)". Institute of Medicine (IOM).
Early European settlements on the Great Plains
French
British
American
Image: Homesteaders in central Nebraska in 1886
Image: Wheat field on Dutch flats near Mitchell, Nebraska
Fort Lisa (1809), North Dakota
Fort Lisa (1812), Nebraska
Fontenelle's Post (1822), Nebraska
Cabanne's Trading Post (1822), Nebraska
Pioneer settlement
After 1870, the new railroads across the Plains brought hunters who killed off almost all the bison for their hides. The railroads offered attractive packages of land and transportation to European farmers, who rushed to settle the land. They (and Americans as well) also took advantage of the homestead laws to obtain free farms. Land speculators and local boosters identified many potential towns, and those reached by the railroad had a chance, while the others became ghost towns. In Kansas, for example, nearly 5,000 towns were mapped out, but by 1970 only 617 were actually operating. In the mid-20th century, closeness to an interstate highway interchange determined whether a town would flourish or struggle for business.Raymond A. Mohl, The New City: Urban America in the Industrial Age, 1860–1920 (1985) p. 69
Much of the Great Plains became open range, or rangeland where cattle roamed free, hosting ranching operations where anyone was theoretically free to run cattle. In the spring and fall, ranchers held roundups where their cowboys branded new calves, treated animals and sorted the cattle for sale. Such ranching began in Texas and gradually moved northward. Between 1866 and 1895, cowboys herded 10 million cattle north to rail heads such as Dodge City, KansasRobert R. Dykstra, Cattle Towns: A Social History of the Kansas Cattle Trading Centers (1968) and Ogallala, Nebraska; from there, cattle were shipped eastward.John Rossel, "The Chisholm Trail," Kansas Historical Quarterly (1936) Vol. 5, No. 1 pp 3–14 online edition
Image: Cattle herd and cowboy, circa 1902
Many foreign investors, especially British, financed the great ranches of the era. Overstocking of the range and the terrible winter of 1886 resulted in a disaster, with many cattle starved and frozen to death. Theodore Roosevelt, a rancher in the Dakotas, lost his entire investment; he returned east to reenter politics. From then on, ranchers generally raised feed to ensure they could keep their cattle alive over winter.
To allow for agricultural development of the Great Plains and house a growing population, the US passed the Homestead Act of 1862: it allowed a settler to claim up to 160 acres of land, provided that he lived on it for a period of five years and cultivated it. The provisions were expanded under the Kinkaid Act of 1904 to include a homestead of an entire section. Hundreds of thousands of people claimed such homesteads, sometimes building sod houses out of the very turf of their land. Many of them were not skilled dryland farmers and failures were frequent. Much of the Plains was settled during relatively wet years. Government experts did not understand how farmers should cultivate the prairies and gave advice counter to what would have worked. Germans from Russia who had previously farmed, under similar circumstances, in what is now Ukraine were marginally more successful than other homesteaders. The Dominion Lands Act of 1871 served a similar function for establishing homesteads on the prairies in Canada.Ian Frazier, Great Plains (2001) p. 72
Social life
Image: Grange in session, 1873
The railroads opened up the Great Plains for settlement, for now it was possible to ship wheat and other crops at low cost to the urban markets in the East and to Europe. Homestead land was free for American settlers. Railroads sold their land at cheap rates to immigrants in the expectation that they would generate traffic as soon as farms were established. Immigrants poured in, especially from Germany and Scandinavia. On the plains, very few single men attempted to operate a farm or ranch by themselves; they clearly understood the need for a hard-working wife, and numerous children, to handle the many chores, including child-rearing, feeding and clothing the family, managing the housework, feeding the hired hands, and, especially after the 1930s, handling paperwork and financial details.Deborah Fink, Agrarian Women: Wives and Mothers in Rural Nebraska, 1880–1940 (1992). During the early years of settlement, farm women played an integral role in assuring family survival by working outdoors. After approximately one generation, women increasingly left the fields, thus redefining their roles within the family. New technology, including sewing and washing machines, encouraged women to turn to domestic roles. The scientific housekeeping movement was promoted across the land by the media and government extension agents, as well as by county fairs which featured achievements in home cookery and canning, advice columns for women regarding farm bookkeeping, and home economics courses in the schools.Chad Montrie, "'Men Alone Cannot Settle a Country:' Domesticating Nature in the Kansas-Nebraska Grasslands", Great Plains Quarterly, Fall 2005, Vol. 25 Issue 4, pp. 245–258. Online
Although the eastern image of farm life in the prairies emphasized the isolation of the lonely farmer and wife, plains residents created busy social lives for themselves. They often sponsored activities that combined work, food and entertainment such as barn raisings, corn huskings, quilting bees,Karl Ronning, "Quilting in Webster County, Nebraska, 1880–1920", Uncoverings, 1992, Vol. 13, pp. 169–191. Grange meetings, church activities and school functions. Women organized shared meals and potluck events, as well as extended visits between families.Nathan B. Sanderson, "More Than a Potluck", Nebraska History, Fall 2008, Vol. 89 Issue 3, pp. 120–131. The Grange was a nationwide farmers' organization that reserved high offices for women and gave them a voice in public affairs.Donald B. Marti, Women of the Grange: Mutuality and Sisterhood in Rural America, 1866–1920 (1991)
After 19th century
Image: Withdrawal rates from the Ogallala Aquifer.
The region roughly centered on the Oklahoma Panhandle, including southeastern Colorado, southwestern Kansas, the Texas Panhandle, and extreme northeastern New Mexico, was known as the Dust Bowl during the late 1920s and early 1930s. The combined effects of an extended drought, inappropriate cultivation, and the financial crises of the Great Depression forced many farmers off the land throughout the Great Plains.
From the 1950s on, many areas of the Great Plains have become productive crop-growing areas because of extensive irrigation on large landholdings. The United States is a major exporter of agricultural products. The southern portion of the Great Plains lies over the Ogallala Aquifer, a huge underground layer of water-bearing strata dating from the last ice age. Center pivot irrigation is used extensively in drier sections of the Great Plains, resulting in aquifer depletion at a rate that is greater than the ground's ability to recharge.Bobby A. Stewart and Terry A. Howell, Encyclopedia of water science (2003) p. 43
Population decline
The rural Plains have lost a third of their population since 1920. Several hundred thousand square miles of the Great Plains have fewer than —the density standard Frederick Jackson Turner used to declare the American frontier "closed" in 1893. Many have fewer than . There are more than 6,000 ghost towns in the state of Kansas alone, according to Kansas historian Daniel Fitzgerald. This problem is often exacerbated by the consolidation of farms and the difficulty of attracting modern industry to the region. In addition, the smaller school-age population has forced the consolidation of school districts and the closure of high schools in some communities. The continuing population loss has led some to suggest that the current use of the drier parts of the Great Plains is not sustainable,Amanda Rees, The Great Plains region (2004) p. xvi and there has been a proposal – the "Buffalo Commons" – to return approximately of these drier parts to native prairie land.
Wind power
thumb|Wind farm in the plains of West Texas
The Great Plains contribute substantially to wind power in the United States. In July 2008, oilman turned wind-farm developer T. Boone Pickens called for the U.S. to invest $1 trillion to build an additional 200,000 MW of wind power nameplate capacity in the Plains, as part of his Pickens Plan. Pickens cited Sweetwater, Texas as an example of economic revitalization driven by wind power development.
Sweetwater was a struggling town typical of the Plains, steadily losing businesses and population, until wind turbines came to the surrounding Nolan County. Wind power brought jobs to local residents, along with royalty payments to landowners who leased sites for turbines, reversing the town's population decline. Pickens claims the same economic benefits are possible throughout the Plains, which he refers to as North America's "wind corridor."
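As a rough, illustrative aside (not part of the plan itself), the figures Pickens cited imply a capital cost of about $5 million per megawatt, or $5,000 per kilowatt, of nameplate capacity. A minimal Python sketch of that back-of-the-envelope arithmetic, assuming the entire $1 trillion goes to the 200,000 MW of turbine capacity and ignoring transmission, land and financing costs:

# Back-of-the-envelope arithmetic for the Pickens Plan figures quoted above.
# Assumption: the full $1 trillion buys exactly 200,000 MW of nameplate
# capacity; real project costs (transmission, land, financing) would differ.
investment_usd = 1_000_000_000_000   # $1 trillion proposed investment
nameplate_mw = 200_000               # proposed additional wind capacity, MW

cost_per_mw = investment_usd / nameplate_mw   # dollars per megawatt
cost_per_kw = cost_per_mw / 1_000             # dollars per kilowatt

print(f"Implied cost: ${cost_per_mw:,.0f}/MW (${cost_per_kw:,.0f}/kW)")
# Implied cost: $5,000,000/MW ($5,000/kW)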
See also
1837 Great Plains smallpox epidemic
Bison hunting
Llano Estacado
Great American Desert
Great bison belt
Great Plains Art Museum
Great Plains Conservation Program
Northern Great Plains History Conference
Territories of the United States on stamps
Dust Bowl
International steppe-lands
Eurasian Steppe
Kazakh Steppe
Pampas, Argentina, Uruguay, Brazil
Pontic-Caspian steppe
Puszta
References
Further reading
Bonnifield, Paul. The Dust Bowl: Men, Dirt, and Depression, University of New Mexico Press, Albuquerque, New Mexico, 1978, hardcover, ISBN 0-8263-0485-0.
Courtwright, Julie. Prairie Fire: A Great Plains History (University Press of Kansas, 2011) 274 pp.
Danbom, David B. Sod Busting: How families made farms on the 19th-century Plains (2014)
Egan, Timothy. The Worst Hard Time: The Untold Story of Those Who Survived the Great American Dust Bowl. Boston: Houghton Mifflin Co., 2006.
Forsberg, Michael, Great Plains: America's Lingering Wild, University of Chicago Press, Chicago, Illinois, 2009, ISBN 978-0-226-25725-9
Gilfillan, Merrill. Chokecherry Places, Essays from the High Plains, Johnson Press, Boulder, Colorado, trade paperback, ISBN 1-55566-227-7.
Grant, Michael Johnston. Down and Out on the Family Farm: Rural Rehabilitation in the Great Plains, 1929–1945, University of Nebraska Press, 2002, ISBN 0-8032-7105-0
Hurt, R. Douglas. The Big Empty: The Great Plains in the Twentieth Century (University of Arizona Press; 2011) 315 pages; the environmental, social, economic, and political history of the region.
Hurt, R. Douglas. The Great Plains during World War II. University of Nebraska Press. 2008. Pp. xiii, 507.
Mills, David W. Cold War in a Cold Land: Fighting Communism on the Northern Plains (2015) Cold War era; excerpt
Peirce, Neal R. The Great Plains States of America: People, Politics, and Power in the Nine Great Plains States (1973)
Raban, Jonathan. Bad Land: An American Romance. Vintage Departures, division of Vintage Books, New York, 1996. Winner of the National Book Critics Circle Award for Nonfiction.
Rees, Amanda. The Great Plains Region: The Greenwood Encyclopedia of American Regional Cultures (2004)
Stegner, Wallace. Wolf Willow, A history, a story, and a memory of the last plains frontier, Viking Compass Book, New York, 1966, trade paperback, ISBN 0-670-00197-X
Wishart, David J. ed. Encyclopedia of the Great Plains, University of Nebraska Press, 2004, ISBN 0-8032-4787-7. complete text online
External links
Kansas Heritage Group: Native Prairie, Preserve, Flowers, and Research
Library of Congress: Great Plains
University of Nebraska-Lincoln: Center for Great Plains Studies
Oklahoma Digital Maps: Digital Collections of Oklahoma and Indian Territory
Category:Plains of the United States
Category:Temperate grasslands, savannas, and shrublands in the United States
Category:Ecoregions of the United States
Category:Physiographic provinces
Category:Plains of Canada
Category:Regions of the United States
Category:Regions of the Western United States | 51,464 | 2017-01 |
British Empire | The British Empire comprised the dominions, colonies, protectorates, mandates and other territories ruled or administered by the United Kingdom and its predecessor states. It originated with the overseas possessions and trading posts established by England between the late 16th and early 18th centuries. At its height, it was the largest empire in history and, for over a century, was the foremost global power. By 1913, the British Empire held sway over 412 million people, 23% of the world population at the time,Maddison 2001, pp. 97 "The total population of the Empire was 412 million [in 1913]", 241 "[World population in 1913 (in thousands):] 1 791 020". and by 1920, it covered almost a quarter of the Earth's total land area. As a result, its political, legal, linguistic and cultural legacy is widespread. At the peak of its power, the phrase "the empire on which the sun never sets" was often used to describe the British Empire, because its expanse around the globe meant that the sun was always shining on at least one of its territories.
During the Age of Discovery in the 15th and 16th centuries, Portugal and Spain pioneered European exploration of the globe, and in the process established large overseas empires. Envious of the great wealth these empires generated,Russo 2012, p. 15 chapter 1 'Great Expectations': "The dramatic rise in Spanish fortunes sparked both envy and fear among northern, mostly Protestant, Europeans.". England, France, and the Netherlands began to establish colonies and trade networks of their own in the Americas and Asia. A series of wars in the 17th and 18th centuries with the Netherlands and France left England and then, following union between England and Scotland in 1707, Great Britain, the dominant colonial power in North America and India.
The independence of the Thirteen Colonies in North America in 1783 after the American War of Independence caused Britain to lose some of its oldest and most populous colonies. British attention soon turned towards Asia, Africa, and the Pacific. After the defeat of France in the Revolutionary and Napoleonic Wars (1792–1815), Britain emerged as the principal naval and imperial power of the 19th century.Tellier, L.-N. (2009). Urban World History: an Economic and Geographical Perspective. Quebec: PUQ. p. 463. ISBN 2-7605-1588-5. Unchallenged at sea, British dominance was later described as Pax Britannica ("British Peace"), a period of relative peace in Europe and the world (1815–1914) during which the British Empire became the global hegemon and adopted the role of global policeman.Johnston, pp. 508–10.Porter, p. 332.Sondhaus, L. (2004). Navies in Modern World History. London: Reaktion Books. p. 9. ISBN 1-86189-202-0. In the early 19th century, the Industrial Revolution began to transform Britain; by the time of the Great Exhibition in 1851 the country was described as the "workshop of the world". The British Empire expanded to include India, large parts of Africa and many other territories throughout the world. Alongside the formal control that Britain exerted over its own colonies, its dominance of much of world trade meant that it effectively controlled the economies of many regions, such as Asia and Latin America.
In Britain, political attitudes favoured free trade and laissez-faire policies and a gradual widening of the voting franchise. During the 19th century, Britain's population increased at a dramatic rate, accompanied by rapid urbanisation, which caused significant social and economic stresses. To seek new markets and sources of raw materials, the Conservative Party under Benjamin Disraeli launched a period of imperialist expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self-governing dominions.
By the start of the 20th century, Germany and the United States had begun to challenge Britain's economic lead. Subsequent military and economic tensions between Britain and Germany were major causes of the First World War, during which Britain relied heavily upon its empire. The conflict placed enormous strain on the military, financial and manpower resources of Britain. Although the British Empire achieved its largest territorial extent immediately after World War I, Britain was no longer the world's pre-eminent industrial or military power. In the Second World War, Britain's colonies in Southeast Asia were occupied by Imperial Japan. Despite the final victory of Britain and its allies, the damage to British prestige helped to accelerate the decline of the empire. India, Britain's most valuable and populous possession, achieved independence as part of a larger decolonisation movement in which Britain granted independence to most territories of the empire. The transfer of Hong Kong to China in 1997 marked for many the end of the British Empire. Fourteen overseas territories remain under British sovereignty. After independence, many former British colonies joined the Commonwealth of Nations, a free association of independent states. The United Kingdom is now one of 16 Commonwealth nations, a grouping known informally as the Commonwealth realms, that share a monarch, Queen Elizabeth II.
Origins (1497–1583)
thumb|upright|A replica of The Matthew, John Cabot's ship used for his second voyage to the New World.
The foundations of the British Empire were laid when England and Scotland were separate kingdoms. In 1496 King Henry VII of England, following the successes of Spain and Portugal in overseas exploration, commissioned John Cabot to lead a voyage to discover a route to Asia via the North Atlantic. Cabot sailed in 1497, five years after the European discovery of America, and although he successfully made landfall on the coast of Newfoundland (mistakenly believing, like Christopher Columbus, that he had reached Asia),Andrews 1985, p. 45. there was no attempt to found a colony. Cabot led another voyage to the Americas the following year but nothing was heard of his ships again.
No further attempts to establish English colonies in the Americas were made until well into the reign of Queen Elizabeth I, during the last decades of the 16th century.Canny, p. 35. In the meantime the Protestant Reformation had turned England and Catholic Spain into implacable enemies. In 1562, the English Crown encouraged the privateers John Hawkins and Francis Drake to engage in slave-raiding attacks against Spanish and Portuguese ships off the coast of West AfricaThomas, pp. 155–158 with the aim of breaking into the Atlantic slave trade. This effort was rebuffed and later, as the Anglo-Spanish Wars intensified, Elizabeth I gave her blessing to further privateering raids against Spanish ports in the Americas and shipping that was returning across the Atlantic, laden with treasure from the New World. At the same time, influential writers such as Richard Hakluyt and John Dee (who was the first to use the term "British Empire")Canny, p. 62. were beginning to press for the establishment of England's own empire. By this time, Spain had become the dominant power in the Americas and was exploring the Pacific Ocean, Portugal had established trading posts and forts from the coasts of Africa and Brazil to China, and France had begun to settle the Saint Lawrence River area, later to become New France.Lloyd, pp. 4–8.
Plantations of Ireland
Although England trailed behind other European powers in establishing overseas colonies, it had been engaged during the 16th century in the settlement of Ireland with Protestants from England and Scotland, drawing on precedents dating back to the Norman invasion of Ireland in 1169.Canny, p. 7.Kenny, p. 5. Several people who helped establish the Plantations of Ireland also played a part in the early colonisation of North America, particularly a group known as the West Country men.Taylor, pp. 119, 123.
"First" British Empire (1583–1783)
In 1578, Elizabeth I granted a patent to Humphrey Gilbert for discovery and overseas exploration.Andrews, p. 187. That year, Gilbert sailed for the Caribbean with the intention of engaging in piracy and establishing a colony in North America, but the expedition was aborted before it had crossed the Atlantic.Andrews, p. 188.Canny, p. 63. In 1583 he embarked on a second attempt, on this occasion to the island of Newfoundland whose harbour he formally claimed for England, although no settlers were left behind. Gilbert did not survive the return journey to England, and was succeeded by his half-brother, Walter Raleigh, who was granted his own patent by Elizabeth in 1584. Later that year, Raleigh founded the Roanoke Colony on the coast of present-day North Carolina, but lack of supplies caused the colony to fail.Canny, pp. 63–64.
In 1603, James VI, King of Scots, ascended (as James I) to the English throne and in 1604 negotiated the Treaty of London, ending hostilities with Spain. Now at peace with its main rival, English attention shifted from preying on other nations' colonial infrastructures to the business of establishing its own overseas colonies.Canny, p. 70. The British Empire began to take shape during the early 17th century, with the English settlement of North America and the smaller islands of the Caribbean, and the establishment of joint-stock companies, most notably the East India Company, to administer colonies and overseas trade. This period, until the loss of the Thirteen Colonies after the American War of Independence towards the end of the 18th century, has subsequently been referred to by some historians as the "First British Empire".Canny, p. 34.
Americas, Africa and the slave trade
The Caribbean initially provided England's most important and lucrative colonies,James, p. 17. but not before several attempts at colonisation failed. An attempt to establish a colony in Guiana in 1604 lasted only two years, and failed in its main objective to find gold deposits.Canny, p. 71. Colonies in St Lucia (1605) and Grenada (1609) also rapidly folded, but settlements were successfully established in St. Kitts (1624), Barbados (1627) and Nevis (1628).Canny, p. 221. The colonies soon adopted the system of sugar plantations successfully used by the Portuguese in Brazil, which depended on slave labour, and—at first—Dutch ships, to sell the slaves and buy the sugar.Lloyd, pp. 22–23. To ensure that the increasingly healthy profits of this trade remained in English hands, Parliament decreed in 1651 that only English ships would be able to ply their trade in English colonies. This led to hostilities with the United Dutch Provinces—a series of Anglo-Dutch Wars—which would eventually strengthen England's position in the Americas at the expense of the Dutch.Lloyd, p. 32. In 1655, England annexed the island of Jamaica from the Spanish, and in 1666 succeeded in colonising the Bahamas.Lloyd, pp. 33, 43.
thumb|left|Map of British colonies in North America, 1763–1776.
England's first permanent settlement in the Americas was founded in 1607 in Jamestown, led by Captain John Smith and managed by the Virginia Company. Bermuda was settled and claimed by England as a result of the 1609 shipwreck of the Virginia Company's flagship, and in 1615 was turned over to the newly formed Somers Isles Company.Lloyd, pp. 15–20. The Virginia Company's charter was revoked in 1624 and direct control of Virginia was assumed by the crown, thereby founding the Colony of Virginia.Andrews, pp. 316, 324–326. The London and Bristol Company was created in 1610 with the aim of creating a permanent settlement on Newfoundland, but was largely unsuccessful.Andrews, pp. 20–22. In 1620, Plymouth was founded as a haven for Puritan religious separatists, later known as the Pilgrims.James, p. 8. Fleeing from religious persecution would become the motive of many English would-be colonists to risk the arduous trans-Atlantic voyage: Maryland was founded as a haven for Roman Catholics (1634), Rhode Island (1636) as a colony tolerant of all religions and Connecticut (1639) for Congregationalists. The Province of Carolina was founded in 1663. With the surrender of Fort Amsterdam in 1664, England gained control of the Dutch colony of New Netherland, renaming it New York. This was formalised in negotiations following the Second Anglo-Dutch War, in exchange for Suriname.Lloyd, p. 40. In 1681, the colony of Pennsylvania was founded by William Penn. The American colonies were less financially successful than those of the Caribbean, but had large areas of good agricultural land and attracted far larger numbers of English emigrants who preferred their temperate climates.
thumb|African slaves working in 17th-century Virginia, by an unknown artist, 1670.
In 1670, Charles II incorporated by royal charter the Hudson's Bay Company (HBC), granting it a monopoly on the fur trade in the area known as Rupert's Land, which would later form a large proportion of the Dominion of Canada. Forts and trading posts established by the HBC were frequently the subject of attacks by the French, who had established their own fur trading colony in adjacent New France.Buckner, p. 25.
Two years later, the Royal African Company was inaugurated, receiving from King Charles a monopoly of the trade to supply slaves to the British colonies of the Caribbean.Lloyd, p. 37. From the outset, slavery was the basis of the British Empire in the West Indies. Until the abolition of its slave trade in 1807, Britain was responsible for the transportation of 3.5 million African slaves to the Americas, a third of all slaves transported across the Atlantic. To facilitate this trade, forts were established on the coast of West Africa, such as James Island, Accra and Bunce Island. In the British Caribbean, the percentage of the population of African descent rose from 25% in 1650 to around 80% in 1780, and in the Thirteen Colonies from 10% to 40% over the same period (the majority in the southern colonies).Canny, p. 228. For the slave traders, the trade was extremely profitable, and became a major economic mainstay for such western British cities as Bristol and Liverpool, which formed the third corner of the triangular trade with Africa and the Americas. For the transported, harsh and unhygienic conditions on the slaving ships and poor diets meant that the average mortality rate during the Middle Passage was one in seven.Marshall, pp. 440–64.
In 1695, the Parliament of Scotland granted a charter to the Company of Scotland, which established a settlement in 1698 on the Isthmus of Panama. Besieged by neighbouring Spanish colonists of New Granada, and afflicted by malaria, the colony was abandoned two years later. The Darien scheme was a financial disaster for Scotland — a quarter of Scottish capitalMagnusson, p. 531. was lost in the enterprise — and ended Scottish hopes of establishing its own overseas empire. The episode also had major political consequences, persuading the governments of both England and Scotland of the merits of a union of countries, rather than just crowns.Macaulay, p. 509. This occurred in 1707 with the Treaty of Union, establishing the Kingdom of Great Britain.
Rivalry with the Netherlands in Asia
thumb|right|250px|Fort St. George was founded at Madras in 1639.
At the end of the 16th century, England and the Netherlands began to challenge Portugal's monopoly of trade with Asia, forming private joint-stock companies to finance the voyages—the English, later British, East India Company and the Dutch East India Company, chartered in 1600 and 1602 respectively. The primary aim of these companies was to tap into the lucrative spice trade, an effort focused mainly on two regions: the East Indies archipelago, and India, an important hub in the trade network. There, they competed for trade supremacy with Portugal and with each other.Lloyd, p. 13. Although England ultimately eclipsed the Netherlands as a colonial power, in the short term the Netherlands' more advanced financial system and the three Anglo-Dutch Wars of the 17th century left it with a stronger position in Asia. Hostilities ceased after the Glorious Revolution of 1688 when the Dutch William of Orange ascended the English throne, bringing peace between the Netherlands and England. A deal between the two nations left the spice trade of the East Indies archipelago to the Netherlands and the textiles industry of India to England, but textiles soon overtook spices in terms of profitability, and by 1720, in terms of sales, the British company had overtaken the Dutch.
Global conflicts with France
thumb|right|250px|Defeat of French fireships at Quebec in 1759.
Peace between England and the Netherlands in 1688 meant that the two countries entered the Nine Years' War as allies, but the conflict—waged in Europe and overseas between France, Spain and the Anglo-Dutch alliance—left the English a stronger colonial power than the Dutch, who were forced to devote a larger proportion of their military budget to the costly land war in Europe.Canny, p. 441. The 18th century saw England (after 1707, Britain) rise to be the world's dominant colonial power, and France become its main rival on the imperial stage.Pagden, p. 90.
The death of Charles II of Spain in 1700 and his bequeathal of Spain and its colonial empire to Philippe of Anjou, a grandson of the King of France, raised the prospect of the unification of France, Spain and their respective colonies, an unacceptable state of affairs for England and the other powers of Europe. In 1701, England, Portugal and the Netherlands sided with the Holy Roman Empire against Spain and France in the War of the Spanish Succession, which lasted until 1714.
At the concluding Treaty of Utrecht, Philip renounced his and his descendants' right to the French throne and Spain lost its empire in Europe.Shennan, pp. 11–17. The British Empire was territorially enlarged: from France, Britain gained Newfoundland and Acadia, and from Spain, Gibraltar and Minorca. Gibraltar became a critical naval base and allowed Britain to control the Atlantic entry and exit point to the Mediterranean. Spain also ceded the rights to the lucrative asiento (permission to sell slaves in Spanish America) to Britain.James, p. 58.
thumb|right|250px|Robert Clive's victory at the Battle of Plassey established the East India Company as a military as well as a commercial power.
During the middle decades of the 18th century, there were several outbreaks of military conflict on the Indian subcontinent, the Carnatic Wars, as the English East India Company (often known simply as "the Company") and its French counterpart, the French East India Company (Compagnie française des Indes orientales), struggled alongside local rulers to fill the vacuum that had been left by the decline of the Mughal Empire. The Battle of Plassey in 1757, in which the British, led by Robert Clive, defeated the Nawab of Bengal and his French allies, left the British East India Company in control of Bengal and as the major military and political power in India.Smith, p. 17. France was left in control of its enclaves but with military restrictions and an obligation to support British client states, ending French hopes of controlling India.Bandyopādhyāẏa, pp. 49–52 In the following decades the British East India Company gradually increased the size of the territories under its control, either ruling directly or via local rulers under the threat of force from the British Indian Army, the vast majority of which was composed of Indian sepoys.Smith, pp. 18–19.
The British and French struggles in India became but one theatre of the global Seven Years' War (1756–1763) involving France, Britain and the other major European powers. The signing of the Treaty of Paris (1763) had important consequences for the future of the British Empire. In North America, France's future as a colonial power effectively ended with the recognition of British claims to Rupert's Land, and the ceding of New France to Britain (leaving a sizeable French-speaking population under British control) and Louisiana to Spain. Spain ceded Florida to Britain. Along with its victory over France in India, the Seven Years' War therefore left Britain as the world's most powerful maritime power.Pagden, p. 91.
Loss of the Thirteen American Colonies
During the 1760s and early 1770s, relations between the Thirteen Colonies and Britain became increasingly strained, primarily because of resentment of the British Parliament's attempts to govern and tax American colonists without their consent. This was summarised at the time by the slogan "No taxation without representation", a perceived violation of the guaranteed Rights of Englishmen. The American Revolution began with rejection of Parliamentary authority and moves towards self-government. In response Britain sent troops to reimpose direct rule, leading to the outbreak of war in 1775. The following year, in 1776, the United States declared independence. The entry of France into the war in 1778 tipped the military balance in the Americans' favour and after a decisive defeat at Yorktown in 1781, Britain began negotiating peace terms. American independence was acknowledged at the Peace of Paris in 1783.Marshall, pp. 312–23.
thumb|left|Surrender of Cornwallis at Yorktown. The loss of the American colonies marked the end of the "first British Empire".
The loss of such a large portion of British America, at the time Britain's most populous overseas possession, is seen by some historians as the event defining the transition between the "first" and "second" empires,Canny, p. 92. in which Britain shifted its attention away from the Americas to Asia, the Pacific and later Africa. Adam Smith's Wealth of Nations, published in 1776, had argued that colonies were redundant, and that free trade should replace the old mercantilist policies that had characterised the first period of colonial expansion, dating back to the protectionism of Spain and Portugal.James, p. 120. The growth of trade between the newly independent United States and Britain after 1783 seemed to confirm Smith's view that political control was not necessary for economic success.James, p. 119.Marshall, p. 585.
The war to the south influenced British policy in Canada, where between 40,000 and 100,000Zolberg, p. 496. defeated Loyalists had migrated from the new United States following independence.Games, pp. 46–48. The 14,000 Loyalists who went to the Saint John and Saint Croix river valleys, then part of Nova Scotia, felt too far removed from the provincial government in Halifax, so London split off New Brunswick as a separate colony in 1784.Kelley & Trebilcock, p. 43. The Constitutional Act of 1791 created the provinces of Upper Canada (mainly English-speaking) and Lower Canada (mainly French-speaking) to defuse tensions between the French and British communities, and implemented governmental systems similar to those employed in Britain, with the intention of asserting imperial authority and not allowing the sort of popular control of government that was perceived to have led to the American Revolution.Smith, p. 28.
Tensions between Britain and the United States escalated again during the Napoleonic Wars, as Britain tried to cut off American trade with France and boarded American ships to impress men into the Royal Navy. The US declared war (the War of 1812) and invaded Canadian territory; in response, Britain invaded the US, but the pre-war boundaries were reaffirmed by the 1814 Treaty of Ghent, ensuring that Canada's future would be separate from that of the United States.Latimer, pp. 8, 30–34, 389–92.Marshall, p. 388.
Rise of the "Second" British Empire (1783–1815)
Exploration of the Pacific
thumb|160px|James Cook's mission was to find the alleged southern continent Terra Australis.
Since 1718, transportation to the American colonies had been a penalty for various offences in Britain, with approximately one thousand convicts transported per year across the Atlantic.Smith, p. 20. Forced to find an alternative location after the loss of the Thirteen Colonies in 1783, the British government turned to the newly discovered lands of Australia.Smith, pp. 20–21. The western coast of Australia had been discovered for Europeans by the Dutch explorer Willem Jansz in 1606 and was later named New Holland by the Dutch East India Company,Mulligan & Hill, pp. 20–23. but there was no attempt to colonise it. In 1770 James Cook discovered the eastern coast of Australia while on a scientific voyage to the South Pacific Ocean, claimed the continent for Britain, and named it New South Wales.Peters, pp. 5–23. In 1778, Joseph Banks, Cook's botanist on the voyage, presented evidence to the government on the suitability of Botany Bay for the establishment of a penal settlement, and in 1787 the first shipment of convicts set sail, arriving in 1788.James, p. 142. Britain continued to transport convicts to New South Wales until 1840.Britain and the Dominions, p. 159. The Australian colonies became profitable exporters of wool and gold,Fieldhouse, pp. 145–149 mainly because of gold rushes in the colony of Victoria, making its capital Melbourne for a time the richest city in the world and the second largest city (after London) in the British Empire.Statesmen's Year Book 1889
During his voyage, Cook also visited New Zealand, first discovered by Dutch explorer Abel Tasman in 1642, and claimed the North and South islands for the British crown in 1769 and 1770 respectively. Initially, interaction between the indigenous Māori population and Europeans was limited to the trading of goods. European settlement increased through the early decades of the 19th century, with numerous trading stations established, especially in the North. In 1839, the New Zealand Company announced plans to buy large tracts of land and establish colonies in New Zealand. On 6 February 1840, Captain William Hobson and around 40 Maori chiefs signed the Treaty of Waitangi.Smith, p. 45. This treaty is considered by many to be New Zealand's founding document, but differing interpretations of the Maori and English versions of the textPorter, p. 579. have meant that it continues to be a source of dispute.Mein Smith, p. 49.
War with Napoleonic France
Britain was challenged again by France under Napoleon, in a struggle that, unlike previous wars, represented a contest of ideologies between the two nations.James, p. 152. It was not only Britain's position on the world stage that was at risk: Napoleon threatened to invade Britain itself, just as his armies had overrun many countries of continental Europe.
thumb|left|The Battle of Waterloo ended in the defeat of Napoleon
The Napoleonic Wars were therefore ones in which Britain invested large amounts of capital and resources to win. French ports were blockaded by the Royal Navy, which won a decisive victory over a Franco-Spanish fleet at Trafalgar in 1805. Overseas colonies were attacked and occupied, including those of the Netherlands, which was annexed by Napoleon in 1810. France was finally defeated by a coalition of European armies in 1815.Lloyd, pp. 115–118. Britain was again the beneficiary of peace treaties: France ceded the Ionian Islands, Malta (which it had occupied in 1797 and 1798 respectively), Mauritius, Saint Lucia, and Tobago; Spain ceded Trinidad; the Netherlands Guyana, and the Cape Colony. Britain returned Guadeloupe, Martinique, French Guiana, and Réunion to France, and Java and Suriname to the Netherlands, while gaining control of Ceylon (1795–1815).James, p. 165.
Abolition of slavery
With the advent of the Industrial Revolution, goods produced by slavery became less important to the British economy. Added to this was the cost of suppressing regular slave rebellions. With support from the British abolitionist movement, Parliament enacted the Slave Trade Act in 1807, which abolished the slave trade in the empire. In 1808, Sierra Leone was designated an official British colony for freed slaves.Porter, p. 14. Parliamentary reform in 1832 saw the influence of the West India Committee decline. The Slavery Abolition Act, passed the following year, abolished slavery in the British Empire on 1 August 1834, finally bringing the Empire into line with the law in the UK (with the exception of St. Helena, Ceylon and the territories administered by the East India Company, though these exclusions were later repealed). Under the Act, slaves were granted full emancipation after a period of four to six years of "apprenticeship".Hinks, p. 129. The British government compensated slave-owners.
Britain's imperial century (1815–1914)
thumb|250px|An elaborate map of the British Empire in 1886, marked in the traditional colour for imperial British dominions on maps.
Between 1815 and 1914, a period referred to as Britain's "imperial century" by some historians,Hyam, p. 1.Smith, p. 71. around 10 million square miles (26 million km2) of territory and roughly 400 million people were added to the British Empire.Parsons, p. 3. Victory over Napoleon left Britain without any serious international rival, other than Russia in Central Asia.Porter, p. 401. Unchallenged at sea, Britain adopted the role of global policeman, a state of affairs later known as the Pax Britannica, and a foreign policy of "splendid isolation".Lee 1994, pp. 254–257. Alongside the formal control it exerted over its own colonies, Britain's dominant position in world trade meant that it effectively controlled the economies of many countries, such as China, Argentina and Siam, which has been described by some historians as an "Informal Empire".Porter, p. 8.Marshall, pp. 156–57.
British imperial strength was underpinned by the steamship and the telegraph, new technologies invented in the second half of the 19th century, allowing it to control and defend the empire. By 1902, the British Empire was linked together by a network of telegraph cables, called the All Red Line.Dalziel, pp. 88–91.
East India Company in Asia
left|thumb|upright|An 1876 political cartoon of Benjamin Disraeli (1804–1881) making Queen Victoria Empress of India. The caption reads "New crowns for old ones!"
The East India Company drove the expansion of the British Empire in Asia. The Company's army had first joined forces with the Royal Navy during the Seven Years' War, and the two continued to co-operate in arenas outside India: the eviction of the French from Egypt (1799),Mori, p. 178. the capture of Java from the Netherlands (1811), the acquisition of Singapore (1819) and Malacca (1824) and the defeat of Burma (1826).
From its base in India, the Company had also been engaged in an increasingly profitable opium export trade to China since the 1730s. This trade, illegal since it was outlawed by the Qing dynasty in 1729, helped reverse the trade imbalances resulting from the British imports of tea, which saw large outflows of silver from Britain to China.Martin, pp. 146–148. In 1839, the confiscation by the Chinese authorities at Canton of 20,000 chests of opium led Britain to attack China in the First Opium War, and resulted in the seizure by Britain of Hong Kong Island, at that time a minor settlement.Janin, p. 28.
During the late 18th and early 19th centuries the British Crown began to assume an increasingly large role in the affairs of the Company. A series of Acts of Parliament were passed, including the Regulating Act of 1773, Pitt's India Act of 1784 and the Charter Act of 1813 which regulated the Company's affairs and established the sovereignty of the Crown over the territories that it had acquired.Keay, p. 393 The Company's eventual end was precipitated by the Indian Rebellion, a conflict that had begun with the mutiny of sepoys, Indian troops under British officers and discipline.Parsons, pp. 44–46. The rebellion took six months to suppress, with heavy loss of life on both sides. The following year the British government dissolved the Company and assumed direct control over India through the Government of India Act 1858, establishing the British Raj, in which an appointed governor-general administered India; Queen Victoria was proclaimed Empress of India in 1876.Smith, pp. 50–57. India became the empire's most valuable possession, "the Jewel in the Crown", and was the most important source of Britain's strength.Brown, p. 5.
A series of serious crop failures in the late 19th century led to widespread famines on the subcontinent in which it is estimated that over 15 million people died. The East India Company had failed to implement any coordinated policy to deal with the famines during its period of rule. Later, under direct British rule, commissions were set up after each famine to investigate the causes and implement new policies, which took until the early 1900s to have an effect.Marshall, pp. 133–34.
Rivalry with Russia
thumb|British cavalry charging against Russian forces at Balaclava in 1854.
During the 19th century, Britain and the Russian Empire vied to fill the power vacuums that had been left by the declining Ottoman Empire, Qajar dynasty and Qing Dynasty. This rivalry in Central Asia came to be known as the "Great Game".Hopkirk, pp. 1–12. As far as Britain was concerned, defeats inflicted by Russia on Persia and Turkey demonstrated its imperial ambitions and capabilities and stoked fears in Britain of an overland invasion of India.James, p. 181. In 1839, Britain moved to pre-empt this by invading Afghanistan, but the First Anglo-Afghan War was a disaster for Britain.James, p. 182.
When Russia invaded the Turkish Balkans in 1853, fears of Russian dominance in the Mediterranean and Middle East led Britain and France to invade the Crimean Peninsula to destroy Russian naval capabilities. The ensuing Crimean War (1854–56), which involved new techniques of modern warfare,Royle, preface. was the only global war fought between Britain and another imperial power during the Pax Britannica and was a resounding defeat for Russia. The situation remained unresolved in Central Asia for two more decades, with Britain annexing Baluchistan in 1876 and Russia annexing Kirghizia, Kazakhstan, and Turkmenistan. For a while it appeared that another war would be inevitable, but the two countries reached an agreement on their respective spheres of influence in the region in 1878 and on all outstanding matters in 1907 with the signing of the Anglo-Russian Entente. The destruction of the Russian Navy by the Japanese at the Battle of Port Arthur during the Russo-Japanese War of 1904–05 also limited its threat to the British.Hodge, p. 47.
Cape to Cairo
thumb|The Rhodes Colossus—Cecil Rhodes spanning "Cape to Cairo".
The Dutch East India Company had founded the Cape Colony on the southern tip of Africa in 1652 as a way station for its ships travelling to and from its colonies in the East Indies. Britain formally acquired the colony, and its large Afrikaner (or Boer) population in 1806, having occupied it in 1795 to prevent its falling into French hands during the Flanders Campaign.Smith, p. 85. British immigration began to rise after 1820, and pushed thousands of Boers, resentful of British rule, northwards to found their own—mostly short-lived—independent republics, during the Great Trek of the late 1830s and early 1840s.Smith, pp. 85–86. In the process the Voortrekkers clashed repeatedly with the British, who had their own agenda with regard to colonial expansion in South Africa and to the various native African polities, including those of the Sotho and the Zulu nations. Eventually the Boers established two republics which had a longer lifespan: the South African Republic or Transvaal Republic (1852–77; 1881–1902) and the Orange Free State (1854–1902).Lloyd, pp. 168, 186, 243. In 1902 Britain occupied both republics, concluding a treaty with the two Boer Republics following the Second Boer War (1899–1902).Lloyd, p. 255.
In 1869 the Suez Canal opened under Napoleon III, linking the Mediterranean with the Indian Ocean. Initially the Canal was opposed by the British;Tilby, p. 256. but once opened, its strategic value was quickly recognised and it became the "jugular vein of the Empire".Roger 1986, p. 718. In 1875, the Conservative government of Benjamin Disraeli bought the indebted Egyptian ruler Isma'il Pasha's 44% shareholding in the Suez Canal for £4 million. Although this did not grant outright control of the strategic waterway, it did give Britain leverage. Joint Anglo-French financial control over Egypt ended in outright British occupation in 1882. The French were still majority shareholders and attempted to weaken the British position,James, p. 274. but a compromise was reached with the 1888 Convention of Constantinople, which made the Canal officially neutral territory.
With competitive French, Belgian and Portuguese activity in the lower Congo River region undermining orderly colonisation of tropical Africa, the Berlin Conference of 1884–85 was held to regulate the competition between the European powers in what was called the "Scramble for Africa" by defining "effective occupation" as the criterion for international recognition of territorial claims.Herbst, pp. 71–72. The scramble continued into the 1890s, and caused Britain to reconsider its decision in 1885 to withdraw from Sudan. A joint force of British and Egyptian troops defeated the Mahdist Army in 1896, and rebuffed an attempted French invasion at Fashoda in 1898. Sudan was nominally made an Anglo-Egyptian condominium, but a British colony in reality.Vandervort, pp. 169–183.
British gains in Southern and East Africa prompted Cecil Rhodes, pioneer of British expansion in Southern Africa, to urge a "Cape to Cairo" railway linking the strategically important Suez Canal to the mineral-rich south of the continent.James, p. 298. During the 1880s and 1890s, Rhodes, with his privately owned British South Africa Company, occupied and annexed territories subsequently named after him, Rhodesia.Lloyd, p. 215.
Changing status of the white colonies
thumb|210px|Canada's major industry in terms of employment and value of the product was the timber trade (Ontario, 1900 circa)
The path to independence for the white colonies of the British Empire began with the 1839 Durham Report, which proposed unification and self-government for Upper and Lower Canada, as a solution to political unrest which had erupted in armed rebellions in 1837.Smith, pp. 28–29. This began with the passing of the Act of Union in 1840, which created the Province of Canada. Responsible government was first granted to Nova Scotia in 1848, and was soon extended to the other British North American colonies. With the passage of the British North America Act, 1867 by the British Parliament, Upper and Lower Canada, New Brunswick and Nova Scotia were formed into the Dominion of Canada, a confederation enjoying full self-government with the exception of international relations.Porter, p. 187 Australia and New Zealand achieved similar levels of self-government after 1900, with the Australian colonies federating in 1901.Smith, p. 30. The term "dominion status" was officially introduced at the Colonial Conference of 1907.
The last decades of the 19th century saw concerted political campaigns for Irish home rule. Ireland had been united with Britain into the United Kingdom of Great Britain and Ireland with the Act of Union 1800 after the Irish Rebellion of 1798, and had suffered a severe famine between 1845 and 1852. Home rule was supported by the British Prime Minister, William Gladstone, who hoped that Ireland might follow in Canada's footsteps as a Dominion within the empire, but his 1886 Home Rule bill was defeated in Parliament. Although the bill, if passed, would have granted Ireland less autonomy within the UK than the Canadian provinces had within their own federation,Lloyd, p. 213 many MPs feared that a partially independent Ireland might pose a security threat to Great Britain or mark the beginning of the break-up of the empire.James, p. 315. A second Home Rule bill was also defeated for similar reasons. A third bill was passed by Parliament in 1914, but not implemented because of the outbreak of the First World War, which led to the 1916 Easter Rising.Smith, p. 92.
World wars (1914–1945)
By the turn of the 20th century, fears had begun to grow in Britain that it would no longer be able to defend the metropole and the entirety of the empire while at the same time maintaining the policy of "splendid isolation".O'Brien, p. 1. Germany was rapidly rising as a military and industrial power and was now seen as the most likely opponent in any future war. Recognising that it was overstretched in the PacificBrown, p. 667. and threatened at home by the Imperial German Navy, Britain formed an alliance with Japan in 1902 and with its old enemies France and Russia in 1904 and 1907, respectively.Lloyd, p. 275.
First World War
thumb|190px|Soldiers of the Australian 5th Division, waiting to attack during the Battle of Fromelles, 19 July 1916.
Britain's fears of war with Germany were realised in 1914 with the outbreak of the First World War. Britain quickly invaded and occupied most of Germany's overseas colonies in Africa. In the Pacific, Australia and New Zealand occupied German New Guinea and Samoa respectively. Plans for a post-war division of the Ottoman Empire, which had joined the war on Germany's side, were secretly drawn up by Britain and France under the 1916 Sykes–Picot Agreement. This agreement was not divulged to the Sharif of Mecca, who the British had been encouraging to launch an Arab revolt against their Ottoman rulers, giving the impression that Britain was supporting the creation of an independent Arab state.Brown, pp. 494–495.
thumb|left|190px|A poster urging men from countries of the British Empire to enlist in the British army.
The British declaration of war on Germany and its allies also committed the colonies and Dominions, which provided invaluable military, financial and material support. Over 2.5 million men served in the armies of the Dominions, as well as many thousands of volunteers from the Crown colonies.Marshall, pp. 78–79. The contributions of Australian and New Zealand troops during the 1915 Gallipoli Campaign against the Ottoman Empire had a great impact on the national consciousness at home, and marked a watershed in the transition of Australia and New Zealand from colonies to nations in their own right. The countries continue to commemorate this occasion on Anzac Day. Canadians viewed the Battle of Vimy Ridge in a similar light.Lloyd, p. 277. The important contribution of the Dominions to the war effort was recognised in 1917 by the British Prime Minister David Lloyd George when he invited each of the Dominion Prime Ministers to join an Imperial War Cabinet to co-ordinate imperial policy.Lloyd, p. 278.
Under the terms of the concluding Treaty of Versailles signed in 1919, the empire reached its greatest extent with the addition of 1,800,000 square miles (4,700,000 km2) and 13 million new subjects. The colonies of Germany and the Ottoman Empire were distributed to the Allied powers as League of Nations mandates. Britain gained control of Palestine, Transjordan, Iraq, parts of Cameroon and Togoland, and Tanganyika. The Dominions themselves also acquired mandates of their own: the Union of South Africa gained South-West Africa (modern-day Namibia), Australia gained New Guinea, and New Zealand Western Samoa. Nauru was made a combined mandate of Britain and the two Pacific Dominions.Fox, pp. 23–29, 35, 60.
Inter-war period
thumb|400px|right|British Empire at its territorial peak in 1921
The changing world order that the war had brought about, in particular the growth of the United States and Japan as naval powers, and the rise of independence movements in India and Ireland, caused a major reassessment of British imperial policy.Goldstein, p. 4. Forced to choose between alignment with the United States or Japan, Britain opted not to renew its Japanese alliance and instead signed the 1922 Washington Naval Treaty, where Britain accepted naval parity with the United States.Louis, p. 302. This decision was the source of much debate in Britain during the 1930sLouis, p. 294. as militaristic governments took hold in Japan and Germany, helped in part by the Great Depression, for it was feared that the empire could not survive a simultaneous attack by both nations.Louis, p. 303. The issue of the empire's security was a serious concern in Britain, as it was vital to the British economy.Lee 1996, p. 305.
In 1919, the frustrations caused by delays to Irish home rule led the MPs of Sinn Féin, a pro-independence party that had won a majority of the Irish seats in the 1918 British general election, to establish an independent parliament in Dublin, at which Irish independence was declared. The Irish Republican Army simultaneously began a guerrilla war against the British administration.Brown, p. 143. The Anglo-Irish War ended in 1921 with a stalemate and the signing of the Anglo-Irish Treaty, creating the Irish Free State, a Dominion within the British Empire, with effective internal independence but still constitutionally linked with the British Crown.Smith, p. 95. Northern Ireland, consisting of six of the 32 Irish counties which had been established as a devolved region under the 1920 Government of Ireland Act, immediately exercised its option under the treaty to retain its existing status within the United Kingdom.Magee, p. 108.
thumb|left|George V with the British and Dominion prime ministers at the 1926 Imperial Conference
A similar struggle began in India when the Government of India Act 1919 failed to satisfy demand for independence. Concerns over communist and foreign plots following the Ghadar Conspiracy ensured that war-time strictures were renewed by the Rowlatt Acts. This led to tension,James, p. 416. particularly in the Punjab region, where repressive measures culminated in the Amritsar Massacre. In Britain public opinion was divided over the morality of the massacre, between those who saw it as having saved India from anarchy, and those who viewed it with revulsion. The subsequent Non-Co-Operation movement was called off in March 1922 following the Chauri Chaura incident, and discontent continued to simmer for the next 25 years.
In 1922, Egypt, which had been declared a British protectorate at the outbreak of the First World War, was granted formal independence, though it continued to be a British client state until 1954. British troops remained stationed in Egypt until the signing of the Anglo-Egyptian Treaty in 1936,Smith, p. 104. under which it was agreed that the troops would withdraw but continue to occupy and defend the Suez Canal zone. In return, Egypt was assisted in joining the League of Nations.Brown, p. 292. Iraq, a British mandate since 1920, also gained membership of the League in its own right after achieving independence from Britain in 1932.Smith, p. 101. In Palestine, Britain was presented with the problem of mediating between the Arabs and increasing numbers of Jews. The 1917 Balfour Declaration, which had been incorporated into the terms of the mandate, stated that a national home for the Jewish people would be established in Palestine, and Jewish immigration allowed up to a limit that would be determined by the mandatory power.Louis, p. 271. This led to increasing conflict with the Arab population, who openly revolted in 1936. As the threat of war with Germany increased during the 1930s, Britain judged the support of Arabs as more important than the establishment of a Jewish homeland, and shifted to a pro-Arab stance, limiting Jewish immigration and in turn triggering a Jewish insurgency.
The right of the Dominions to set their own foreign policy, independent of Britain, was recognised at the 1923 Imperial Conference.McIntyre, p. 187. Britain's request for military assistance from the Dominions at the outbreak of the Chanak Crisis the previous year had been turned down by Canada and South Africa, and Canada had refused to be bound by the 1923 Treaty of Lausanne.Brown, p. 68.McIntyre, p. 186. After pressure from Ireland and South Africa, the 1926 Imperial Conference issued the Balfour Declaration of 1926, declaring the Dominions to be "autonomous Communities within the British Empire, equal in status, in no way subordinate one to another" within a "British Commonwealth of Nations".Brown, p. 69. This declaration was given legal substance under the 1931 Statute of Westminster.Rhodes, Wanna & Weller, pp. 5–15. The parliaments of Canada, Australia, New Zealand, the Union of South Africa, the Irish Free State and Newfoundland were now independent of British legislative control: they could nullify British laws, and Britain could no longer pass laws for them without their consent.Turpin & Tomkins, p. 48. Newfoundland reverted to colonial status in 1933, suffering from financial difficulties during the Great Depression.Lloyd, p. 300. The Irish Free State distanced itself further from the British state with the introduction of a new constitution in 1937, making it a republic in all but name.Kenny, p. 21.
Second World War
thumb|During the Second World War, the Eighth Army was made up of units from many different countries in the British Empire and Commonwealth; it fought in the North African and Italian campaigns.
Britain's declaration of war against Nazi Germany in September 1939 included the Crown colonies and India but did not automatically commit the Dominions of Australia, Canada, New Zealand, Newfoundland and South Africa. All soon declared war on Germany, but the Irish Free State chose to remain legally neutral throughout the war.Lloyd, pp. 313–14.
After the German occupation of France in 1940, Britain and the empire stood alone against Germany, until the entry of the Soviet Union to the war in 1941. British Prime Minister Winston Churchill successfully lobbied President Franklin D. Roosevelt for military aid from the United States, but Roosevelt was not yet ready to ask Congress to commit the country to war.Gilbert, p. 234. In August 1941, Churchill and Roosevelt met and signed the Atlantic Charter, which included the statement that "the rights of all peoples to choose the form of government under which they live" should be respected. This wording was ambiguous as to whether it referred to European countries invaded by Germany, or the peoples colonised by European nations, and would later be interpreted differently by the British, Americans, and nationalist movements.Lloyd, p. 316.James, p. 513.
In December 1941, Japan launched, in quick succession, attacks on British Malaya, the United States naval base at Pearl Harbor, and Hong Kong. Churchill's reaction to the entry of the United States into the war was that Britain was now assured of victory and the future of the empire was safe,Gilbert, p. 244. but the manner in which British forces were rapidly defeated in the Far East irreversibly harmed Britain's standing and prestige as an imperial power.Louis, p. 337.Brown, p. 319. Most damaging of all was the fall of Singapore, which had previously been hailed as an impregnable fortress and the eastern equivalent of Gibraltar.James, p. 460. The realisation that Britain could not defend its entire empire pushed Australia and New Zealand, which now appeared threatened by Japanese forces, into closer ties with the United States. This resulted in the 1951 ANZUS Pact between Australia, New Zealand and the United States of America.
Decolonisation and decline (1945–1997)
Though Britain and the empire emerged victorious from the Second World War, the effects of the conflict were profound, both at home and abroad. Much of Europe, a continent that had dominated the world for several centuries, was in ruins, and host to the armies of the United States and the Soviet Union, who now held the balance of global power.Abernethy, p. 146. Britain was left essentially bankrupt, with insolvency only averted in 1946 after the negotiation of a US$4.33 billion loan from the United States,Brown, p. 331. the last instalment of which was repaid in 2006. At the same time, anti-colonial movements were on the rise in the colonies of European nations. The situation was complicated further by the increasing Cold War rivalry of the United States and the Soviet Union. In principle, both nations were opposed to European colonialism. In practice, however, American anti-communism prevailed over anti-imperialism, and therefore the United States supported the continued existence of the British Empire to keep Communist expansion in check.Levine, p. 193. The "wind of change" ultimately meant that the British Empire's days were numbered, and on the whole, Britain adopted a policy of peaceful disengagement from its colonies once stable, non-Communist governments were available to transfer power to. This was in contrast to other European powers such as France and Portugal,Abernethy, p. 148. which waged costly and ultimately unsuccessful wars to keep their empires intact. Between 1945 and 1965, the number of people under British rule outside the UK itself fell from 700 million to five million, three million of whom were in Hong Kong.Brown, p. 330.
Initial disengagement
Image: About 14.5 million lost their homes as a result of the partition of India in 1947.
The pro-decolonisation Labour government, elected at the 1945 general election and led by Clement Attlee, moved quickly to tackle the most pressing issue facing the empire: Indian independence.Lloyd, p. 322. India's two major political parties—the Indian National Congress and the Muslim League—had been campaigning for independence for decades, but disagreed as to how it should be implemented. Congress favoured a unified secular Indian state, whereas the League, fearing domination by the Hindu majority, desired a separate Islamic state for Muslim-majority regions. Increasing civil unrest and the mutiny of the Royal Indian Navy during 1946 led Attlee to promise independence no later than June 30, 1948. When the urgency of the situation and risk of civil war became apparent, the newly appointed (and last) Viceroy, Lord Mountbatten, hastily brought forward the date to 15 August 1947.Smith, p. 67. The borders drawn by the British to broadly partition India into Hindu and Muslim areas left tens of millions as minorities in the newly independent states of India and Pakistan.Lloyd, p. 325. Millions of Muslims subsequently crossed from India to Pakistan and Hindus vice versa, and violence between the two communities cost hundreds of thousands of lives. Burma, which had been administered as part of the British Raj, and Sri Lanka gained their independence the following year in 1948. India, Pakistan and Sri Lanka became members of the Commonwealth, while Burma chose not to join.McIntyre, pp. 355–356.
British Mandatory Palestine, where an Arab majority lived alongside a Jewish minority, presented the British with a problem similar to that of India.Lloyd, p. 327. The matter was complicated by large numbers of Jewish refugees seeking to be admitted to Palestine following the Holocaust, while Arabs were opposed to the creation of a Jewish state. Frustrated by the intractability of the problem, attacks by Jewish paramilitary organisations and the increasing cost of maintaining its military presence, Britain announced in 1947 that it would withdraw in 1948 and leave the matter to the United Nations to solve.Lloyd, p. 328. The UN General Assembly subsequently voted for a plan to partition Palestine into a Jewish and an Arab state.
Following the defeat of Japan in the Second World War, anti-Japanese resistance movements in Malaya turned their attention towards the British, who had moved to quickly retake control of the colony, valuing it as a source of rubber and tin.Lloyd, p. 335. The fact that the guerrillas were primarily Malayan-Chinese Communists meant that the British attempt to quell the uprising was supported by the Muslim Malay majority, on the understanding that once the insurgency had been quelled, independence would be granted. The Malayan Emergency, as it was called, began in 1948 and lasted until 1960, but by 1957, Britain felt confident enough to grant independence to the Federation of Malaya within the Commonwealth. In 1963, the 11 states of the federation together with Singapore, Sarawak and North Borneo joined to form Malaysia, but in 1965 Chinese-majority Singapore was expelled from the union following tensions between the Malay and Chinese populations.Lloyd, p. 364. Brunei, which had been a British protectorate since 1888, declined to join the unionLloyd, p. 396. and maintained its status until independence in 1984.
Suez and its aftermath
Image: British Prime Minister Anthony Eden's decision to invade Egypt during the Suez Crisis ended his political career and revealed Britain's weakness as an imperial power.
In 1951, the Conservative Party returned to power in Britain, under the leadership of Winston Churchill. Churchill and the Conservatives believed that Britain's position as a world power relied on the continued existence of the empire, with the base at the Suez Canal allowing Britain to maintain its pre-eminent position in the Middle East in spite of the loss of India. However, Churchill could not ignore Gamal Abdul Nasser's new revolutionary government of Egypt that had taken power in 1952, and the following year it was agreed that British troops would withdraw from the Suez Canal zone and that Sudan would be granted self-determination by 1955, with independence to follow.Brown, pp. 339–40. Sudan was granted independence on 1 January 1956.
In July 1956, Nasser unilaterally nationalised the Suez Canal. The response of Anthony Eden, who had succeeded Churchill as Prime Minister, was to collude with France to engineer an Israeli attack on Egypt that would give Britain and France an excuse to intervene militarily and retake the canal.James, p. 581. Eden infuriated US President Dwight D. Eisenhower, by his lack of consultation, and Eisenhower refused to back the invasion. Another of Eisenhower's concerns was the possibility of a wider war with the Soviet Union after it threatened to intervene on the Egyptian side. Eisenhower applied financial leverage by threatening to sell US reserves of the British pound and thereby precipitate a collapse of the British currency. Though the invasion force was militarily successful in its objectives,James, p. 583. UN intervention and US pressure forced Britain into a humiliating withdrawal of its forces, and Eden resigned.Combs, pp. 161–163.
The Suez Crisis very publicly exposed Britain's limitations to the world and confirmed Britain's decline on the world stage, demonstrating that henceforth it could no longer act without at least the acquiescence, if not the full support, of the United States.Brown, p. 342.Smith, p. 105.Burk, p. 602. The events at Suez wounded British national pride, leading one MP to describe it as "Britain's Waterloo"Brown, p. 343. and another to suggest that the country had become an "American satellite".James, p. 585. Margaret Thatcher later described the mindset she believed had befallen Britain's political leaders as "Suez syndrome" where they “went from believing that Britain could do anything to an almost neurotic belief that Britain could do nothing”, from which Britain did not recover until the successful recapture of the Falkland Islands from Argentina in 1982.Thatcher.
While the Suez Crisis caused British power in the Middle East to weaken, it did not collapse.Smith, p. 106. Britain again deployed its armed forces to the region, intervening in Oman (1957), Jordan (1958) and Kuwait (1961), though on these occasions with American approval,James, p. 586. as the new Prime Minister Harold Macmillan's foreign policy was to remain firmly aligned with the United States. Britain maintained a military presence in the Middle East for another decade. On 16 January 1968, a few weeks after the devaluation of the pound, Prime Minister Harold Wilson and his Defence Secretary Denis Healey announced that British troops would be withdrawn from major military bases East of Suez, including those in the Middle East and primarily those in Malaysia and Singapore, by the end of 1971 rather than 1975 as earlier planned.Pham 2010 By that time over 50,000 British military personnel were still stationed in the Far East, including 30,000 in Singapore.Melvin Gurtov, Southeast Asia tomorrow, Baltimore: The Johns Hopkins Press, 1970, p. 42 The British withdrew from Aden in 1967, Bahrain in 1971, and Maldives in 1976.Lloyd, pp. 370–371.
Wind of change
Image: The British Empire by 1959
Macmillan gave a speech in Cape Town, South Africa in February 1960 where he spoke of "the wind of change blowing through this continent".James, p. 616. Macmillan wished to avoid the same kind of colonial war that France was fighting in Algeria, and under his premiership decolonisation proceeded rapidly.Louis, p. 46. To the three colonies that had been granted independence in the 1950s—Sudan, the Gold Coast and Malaya—were added nearly ten times that number during the 1960s.Lloyd, pp. 427–433.
Britain's remaining colonies in Africa, except for self-governing Southern Rhodesia, were all granted independence by 1968. British withdrawal from the southern and eastern parts of Africa was not a peaceful process. Kenyan independence was preceded by the eight-year Mau Mau Uprising. In Rhodesia, the 1965 Unilateral Declaration of Independence by the white minority resulted in a civil war that lasted until the Lancaster House Agreement of 1979, which set the terms for recognised independence in 1980, as the new nation of Zimbabwe.James, pp. 618–621.
Image: British decolonisation in Africa. By the end of the 1960s, all but Rhodesia (the future Zimbabwe) and the South African mandate of South West Africa (Namibia) had achieved recognised independence.
In the Mediterranean, a guerrilla war waged by Greek Cypriots ended in 1960, leading to an independent Cyprus, with the UK retaining the military bases of Akrotiri and Dhekelia. The Mediterranean islands of Malta and Gozo were amicably granted independence from the UK in 1964 and became the country of Malta, though the idea of integration with Britain had been raised in 1955.Springhall, pp. 100–102.
Most of the UK's Caribbean territories achieved independence after the departure in 1961 and 1962 of Jamaica and Trinidad from the West Indies Federation, established in 1958 in an attempt to unite the British Caribbean colonies under one government, but which collapsed following the loss of its two largest members.Knight & Palmer, pp. 14–15. Barbados achieved independence in 1966 and the remainder of the eastern Caribbean islands in the 1970s and 1980s, but Anguilla and the Turks and Caicos Islands opted to revert to British rule after they had already started on the path to independence.Clegg, p. 128. The British Virgin Islands,Lloyd, p. 428. Cayman Islands and Montserrat opted to retain ties with Britain,James, p. 622. while Guyana achieved independence in 1966. Britain's last colony on the American mainland, British Honduras, became a self-governing colony in 1964 and was renamed Belize in 1973, achieving full independence in 1981. A dispute with Guatemala over claims to Belize was left unresolved.Lloyd, pp. 401, 427–429.
British territories in the Pacific acquired independence in the 1970s beginning with Fiji in 1970 and ending with Vanuatu in 1980. Vanuatu's independence was delayed because of political conflict between English and French-speaking communities, as the islands had been jointly administered as a condominium with France.Macdonald, pp. 171–191. Fiji, Tuvalu, the Solomon Islands and Papua New Guinea chose to become Commonwealth realms.
End of empire
In 1980, Southern Rhodesia, Britain's last African colony, became the independent nation of Zimbabwe. The New Hebrides achieved independence (as Vanuatu) in 1980, with Belize following suit in 1981. The passage of the British Nationality Act 1981, which reclassified the remaining Crown colonies as "British Dependent Territories" (renamed British Overseas Territories in 2002), meant that, aside from a scattering of islands and outposts, the process of decolonisation that had begun after the Second World War was largely complete. In 1982, Britain's resolve in defending its remaining overseas territories was tested when Argentina invaded the Falkland Islands, acting on a long-standing claim that dated back to the Spanish Empire.James, pp. 624–625. Britain's ultimately successful military response to retake the islands during the ensuing Falklands War was viewed by many as having contributed to reversing the downward trend in Britain's status as a world power.James, p. 629. The same year, the Canadian government severed its last legal link with Britain by patriating the Canadian constitution from Britain. The 1982 Canada Act passed by the British parliament ended the need for British involvement in changes to the Canadian constitution.Brown, p. 594. Similarly, the Constitution Act 1986 reformed the constitution of New Zealand to sever its constitutional link with Britain, and the Australia Act 1986 severed the constitutional link between Britain and the Australian states.Brown, p. 689. In 1984, Brunei, Britain's last remaining Asian protectorate, gained its independence.
Image: The last flag of British Hong Kong
In September 1982 the Prime Minister, Margaret Thatcher, travelled to Beijing to negotiate with the Chinese government on the future of Britain's last major and most populous overseas territory, Hong Kong.Brendon, p. 654. Under the terms of the 1842 Treaty of Nanking, Hong Kong Island itself had been ceded to Britain in perpetuity, but the vast majority of the colony was constituted by the New Territories, which had been acquired under a 99-year lease in 1898, due to expire in 1997.Joseph, p. 355.Rothermund, p. 100. Thatcher, seeing parallels with the Falkland Islands, initially wished to hold Hong Kong and proposed British administration with Chinese sovereignty, though this was rejected by China.Brendon, pp. 654–55. A deal was reached in 1984—under the terms of the Sino-British Joint Declaration, Hong Kong would become a special administrative region of the People's Republic of China, maintaining its way of life for at least 50 years.Brendon, p. 656. The handover ceremony in 1997 marked for many,Brendon, p. 660. including Charles, Prince of Wales, who was in attendance, "the end of the empire".
Legacy
Image: The fourteen British Overseas Territories.
Britain retains sovereignty over 14 territories outside the British Isles, which were renamed the British Overseas Territories in 2002.House of Commons Foreign Affairs Committee Overseas Territories Report, pp. 145–147 Some are uninhabited except for transient military or scientific personnel; the remainder are self-governing to varying degrees and are reliant on the UK for foreign relations and defence. The British government has stated its willingness to assist any Overseas Territory that wishes to proceed to independence, where that is an option.House of Commons Foreign Affairs Committee Overseas Territories Report, pp. 146,153 British sovereignty of several of the overseas territories is disputed by their geographical neighbours: Gibraltar is claimed by Spain, the Falkland Islands and South Georgia and the South Sandwich Islands are claimed by Argentina, and the British Indian Ocean Territory is claimed by Mauritius and Seychelles. The British Antarctic Territory is subject to overlapping claims by Argentina and Chile, while many countries do not recognise any territorial claims in Antarctica.House of Commons Foreign Affairs Committee Overseas Territories Report, p. 136
Most former British colonies and protectorates are among the 52 member states of the Commonwealth of Nations, a non-political, voluntary association of equal members, comprising a population of around 2.2 billion people.The Commonwealth – About Us; Online September 2014 Sixteen Commonwealth realms voluntarily continue to share the British monarch, Queen Elizabeth II, as their head of state. These sixteen nations are distinct and equal legal entities – the United Kingdom, Australia, Canada, New Zealand, Papua New Guinea, Antigua and Barbuda, The Bahamas, Barbados, Belize, Grenada, Jamaica, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Solomon Islands and Tuvalu.
Image: Parliament House in Canberra, Australia. Britain's Westminster system of governance has left a legacy of parliamentary democracies in many former colonies.
Decades, and in some cases centuries, of British rule and emigration have left their mark on the independent nations that arose from the British Empire. The empire established the use of English in regions around the world. Today it is the primary language of up to 400 million people and is spoken by about one and a half billion as a first, second or foreign language.Hogg, p. 424 chapter 9 English Worldwide by David Crystal: "approximately one in four of the worlds population are capable of communicating to a useful level in English".
The spread of English from the latter half of the 20th century has been helped in part by the cultural and economic influence of the United States, itself originally formed from British colonies. Except in Africa, where nearly all the former colonies have adopted the presidential system, the English parliamentary system has served as the template for the governments of many former colonies, and English common law for legal systems.
Image: Cricket being played in India. British sports continue to be supported in various parts of the former empire.
The British Judicial Committee of the Privy Council still serves as the highest court of appeal for several former colonies in the Caribbean and Pacific. British Protestant missionaries who travelled around the globe often in advance of soldiers and civil servants spread the Anglican Communion to all continents. British colonial architecture, such as in churches, railway stations and government buildings, can be seen in many cities that were once part of the British Empire.Marshall, pp. 238–40.
Individual and team sports developed in Britain — particularly golf, football, cricket, rugby, netball, lawn bowls, hockey and lawn tennis — were also exported.Torkildsen, p. 347. Britain's choice of system of measurement, the imperial system, continues to be used in some countries in various ways. The convention of driving on the left-hand side of the road has been retained in much of the former empire.Parsons, p. 1.
Political boundaries drawn by the British did not always reflect homogeneous ethnicities or religions, contributing to conflicts in formerly colonised areas. The British Empire was also responsible for large migrations of peoples. Millions left the British Isles, with the founding settler populations of the United States, Canada, Australia and New Zealand coming mainly from Britain and Ireland. Tensions remain between the white settler populations of these countries and their indigenous minorities, and between white settler minorities and indigenous majorities in South Africa and Zimbabwe. Settlers in Ireland from Great Britain have left their mark in the form of divided nationalist and unionist communities in Northern Ireland. Millions of people moved to and from British colonies, with large numbers of Indians emigrating to other parts of the empire, such as Malaysia and Fiji, and Chinese people to Malaysia, Singapore and the Caribbean.Marshall, p. 286. The demographics of Britain itself were changed after the Second World War owing to immigration from its former colonies.Dalziel, p. 135.
See also
All-Red Route
British Empire Exhibition
British Empire in fiction
Colonial Office
Crown Colonies
Historical flags of the British Empire
Foreign relations of the United Kingdom
History of the foreign relations of the United Kingdom
Government Houses of the British Empire and Commonwealth
Historiography of the British Empire
History of capitalism
Indirect rule
List of British Empire-related topics
Order of the British Empire
References
Further reading
External links
The British Empire. An Internet Gateway
The British Empire
The British Empire audio resources at TheEnglishCollection.com
Category:Former empires
Category:Imperialism
Category:Victorian era
Category:1583 establishments in the British Empire
Category:States and territories established in 1583
Category:States and territories disestablished in 1997
Category:Overseas empires
Emotion
Image: Plutchik's wheel of emotions
Emotion, in everyday speech, is any relatively brief conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure.Cabanac, Michel (2002). "What is emotion?" Behavioural Processes 60(2): 69-83. "[E]motion is any mental experience with high intensity and high hedonic content (pleasure/displeasure)." Scientific discourse has drifted to other meanings and there is no consensus on a definition. Emotion is often intertwined with mood, temperament, personality, disposition, and motivation. In some theories, cognition is an important aspect of emotion. Those acting primarily on the emotions they are feeling may seem as if they are not thinking, but mental processes are still essential, particularly in the interpretation of events. For example, the realization that we are in a dangerous situation and the subsequent arousal of our body's nervous system (rapid heartbeat and breathing, sweating, muscle tension) are integral to the experience of feeling afraid. Other theories, however, claim that emotion is separate from and can precede cognition.
Emotions are complex. According to some theories, they are states of feeling that result in physical and psychological changes that influence our behavior. The physiology of emotion is closely linked to arousal of the nervous system with various states and strengths of arousal relating, apparently, to particular emotions. Emotion is also linked to behavioral tendency. Extroverted people are more likely to be social and express their emotions, while introverted people are more likely to be more socially withdrawn and conceal their emotions. Emotion is often the driving force behind motivation, positive or negative.Gaulin, Steven J. C. and Donald H. McBurney. Evolutionary Psychology. Prentice Hall. 2003. ISBN 978-0-13-111529-3, Chapter 6, p 121-142. According to other theories, emotions are not causal forces but simply syndromes of components, which might include motivation, feeling, behavior, and physiological changes, but no one of these components is the emotion. Nor is the emotion an entity that causes these components.Barrett, L.F. and Russell, J.A. The psychological construction of emotion. Guilford Press. 2015. ISBN 978-1462516971.
Emotions involve different components, such as subjective experience, cognitive processes, expressive behavior, psychophysiological changes, and instrumental behavior. At one time, academics attempted to identify the emotion with one of the components: William James with a subjective experience, behaviorists with instrumental behavior, psychophysiologists with physiological changes, and so on. More recently, emotion is said to consist of all the components. The different components of emotion are categorized somewhat differently depending on the academic discipline. In psychology and philosophy, emotion typically includes a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states. A similar multicomponential description of emotion is found in sociology. For example, Peggy Thoits described emotions as involving physiological components, cultural or emotional labels (anger, surprise, etc.), expressive body actions, and the appraisal of situations and contexts.
Research on emotion has increased significantly over the past two decades, with many fields contributing, including psychology, neuroscience, endocrinology, medicine, history, sociology, and computer science. The numerous theories that attempt to explain the origin, neurobiology, experience, and function of emotions have only fostered more intense research on this topic. Current areas of research in the concept of emotion include the development of materials that stimulate and elicit emotion. In addition, PET scans and fMRI scans help study the affective processes in the brain.Cacioppo, J.T & Gardner, W.L (1999). Emotion. "Annual Review of Psychology", 191.
"Emotions can be defined as a positive or negative experience that is associated with a particular pattern of physiological activity." Emotions produce different physiological, behavioral and cognitive changes. The original role of emotions was to motivate adaptive behaviors that in the past would have contributed to the survival of humans. Emotions are responses to significant internal and external events.Schacter, D. L., Gilbert, D. T., Wegner, D. M., & Hood, B. M. (2011). Psychology (European ed.). Basingstoke: Palgrave Macmillan.
Etymology, definitions, and differentiation
The word "emotion" dates back to 1579, when it was adapted from the French word émouvoir, which means "to stir up". The term emotion was introduced into academic discussion to replace passion.Dixon, Thomas. From passions to emotions: the creation of a secular psychological category. Cambridge University Press. 2003. ISBN 978-0521026697. According to one dictionary, the earliest precursors of the word likely dates back to the very origins of language. The modern word emotion is heterogeneous In some uses of the word, emotions are intense feelings that are directed at someone or something.Hume, D. Emotions and Moods. Organizational Behavior, 258-297. On the other hand, emotion can be used to refer to states that are mild (as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line of research thus looks at the meaning of the word emotion in everyday language and this usage is rather different from that in academic discourse. Another line of research asks about languages other than English, and one interesting finding is that many languages have a similar but not identical termWierzbicka, Anna. Emotions across languages and cultures: diversity and universals. Cambridge University Press. 1999.
Emotions have been described by some theorists as discrete and consistent responses to internal or external events which have a particular significance for the organism. Emotions are brief in duration and consist of a coordinated set of responses, which may include verbal, physiological, behavioral, and neural mechanisms. Psychotherapist Michael C. Graham describes all emotions as existing on a continuum of intensity. Thus fear might range from mild concern to terror, or shame might range from simple embarrassment to toxic shame. Emotions have also been described as biologically given and a result of evolution because they provided good solutions to ancient and recurring problems that faced our ancestors. Moods are feelings that tend to be less intense than emotions and that often lack a contextual stimulus.
Emotion can be differentiated from a number of similar constructs within the field of affective neuroscience:
Feelings are best understood as a subjective representation of emotions, private to the individual experiencing them.
Moods are diffuse affective states that generally last for much longer durations than emotions and are also usually less intense than emotions.
Affect is an encompassing term, used to describe the topics of emotion, feelings, and moods together, even though it is commonly used interchangeably with emotion.
In addition, relationships exist between emotions: some emotions have positive or negative influences on others, and some are direct opposites. These concepts are described in contrasting and categorization of emotions. Graham differentiates emotions as functional or dysfunctional and argues that all functional emotions have benefits.
Components
In Scherer's components processing model of emotion, five crucial elements of emotion are said to exist. From the component processing perspective, emotion experience is said to require that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate but interacting systems, the component processing model provides a sequence of events that effectively describes the coordination involved during an emotional episode.
Cognitive appraisal: provides an evaluation of events and objects.
Bodily symptoms: the physiological component of emotional experience.
Action tendencies: a motivational component for the preparation and direction of motor responses.
Expression: facial and vocal expression almost always accompanies an emotional state to communicate reaction and intention of actions.
Feelings: the subjective experience of emotional state once it has occurred.
Classification
A distinction can be made between emotional episodes and emotional dispositions. Emotional dispositions are also comparable to character traits, where someone may be said to be generally disposed to experience certain emotions. For example, an irritable person is generally disposed to feel irritation more easily or quickly than others do. Finally, some theorists place emotions within a more general category of "affective states" where affective states can also include emotion-related phenomena such as pleasure and pain, motivational states (for example, hunger or curiosity), moods, dispositions and traits.Schwarz, N. H. (1990). Feelings as information: Informational and motivational functions of affective states. Handbook of motivation and cognition: Foundations of social behavior, 2, 527-561.
The classification of emotions has mainly been researched from two fundamental viewpoints. The first viewpoint is that emotions are discrete and fundamentally different constructs while the second viewpoint asserts that emotions can be characterized on a dimensional basis in groupings.
Basic emotions
Image: Examples of basic emotions
For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most influential work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (for example, disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. His research findings led him to classify six emotions as basic: anger, disgust, fear, happiness, sadness and surprise.
Robert Plutchik agreed with Ekman's biologically driven perspective but developed the "wheel of emotions", suggesting eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation. Some basic emotions can be modified to form complex emotions. The complex emotions could arise from cultural conditioning or association combined with the basic emotions. Alternatively, similar to the way primary colors combine, primary emotions could blend to form the full spectrum of human emotional experience. For example, interpersonal anger and disgust could blend to form contempt. Relationships exist between basic emotions, resulting in positive or negative influences.
Multi-dimensional analysis
Image: Two Dimensions of Emotion
Through the use of multidimensional scaling, psychologists can map out similar emotional experiences, which allows a visual depiction of the "emotional distance" between experiences. A further step can be taken by looking at the map's dimensions of the emotional experiences. The emotional experiences are divided into two dimensions known as valence (how negative or positive the experience feels) and arousal (how energized or enervated the experience feels). These two dimensions can be depicted on a 2D coordinate map. This two-dimensional map was theorized to capture one important component of emotion called core affect. Core affect is not the only component to emotion, but gives the emotion its hedonic and felt energy.
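As a computational illustration of this two-dimensional map, the short Python sketch below places a few emotional experiences at hypothetical (valence, arousal) coordinates and computes the "emotional distance" between them; the coordinate values and the helper emotional_distance are purely illustrative assumptions, not measurements from any cited study.

import math

# Hypothetical coordinates: valence and arousal each run from -1
# (negative / enervated) to +1 (positive / energized).
core_affect = {
    "contentment": (0.7, -0.4),
    "excitement": (0.8, 0.7),
    "fear": (-0.7, 0.8),
    "sadness": (-0.6, -0.5),
}

def emotional_distance(a, b):
    """Euclidean distance between two experiences on the valence-arousal plane."""
    (v1, r1), (v2, r2) = core_affect[a], core_affect[b]
    return math.hypot(v1 - v2, r1 - r2)

print(emotional_distance("fear", "excitement"))   # similar arousal, opposite valence
print(emotional_distance("fear", "contentment"))  # far apart on both dimensions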
The idea that core affect is but one component of the emotion led to a theory called “psychological construction.” According to this theory, an emotional episode consists of a set of components, each of which is an ongoing process and none of which is necessary or sufficient for the emotion to be instantiated. The set of components is not fixed, either by human evolutionary history or by social norms and roles. Instead, the emotional episode is assembled at the moment of its occurrence to suit its specific circumstances. One implication is that all cases of, for example, fear are not identical but instead bear a family resemblance to one another.
Ancient Greece and Middle Ages
Theories about emotions stretch back at least as far as the stoics of Ancient Greece and Ancient China. In China, excessive emotion was believed to cause damage to qi, which in turn damages the vital organs. The four humours theory made popular by Hippocrates contributed to the study of emotion in the same way that it did for medicine.
Western philosophy regarded emotion in varying ways. In stoic theories it was seen as a hindrance to reason and therefore a hindrance to virtue. Aristotle believed that emotions were an essential component of virtue. In the Aristotelian view all emotions (called passions) corresponded to appetites or capacities. During the Middle Ages, the Aristotelian view was adopted and further developed by scholasticism and Thomas Aquinas in particular. There are also theories of emotions in the works of philosophers such as René Descartes, Niccolò Machiavelli, Baruch Spinoza,See for instance Antonio Damasio (2005) Looking for Spinoza. Thomas HobbesLeviathan (1651), VI: Of the Interior Beginnings of Voluntary Notions, Commonly called the Passions; and the Speeches by which They are Expressed and David Hume. In the 19th century emotions were considered adaptive and were studied more frequently from an empiricist psychiatric perspective.
Evolutionary theories
Image: Illustration from Charles Darwin's The Expression of the Emotions in Man and Animals.
19th century
Perspectives on emotions from evolutionary theory were initiated during the mid- to late 19th century with Charles Darwin's 1872 book The Expression of the Emotions in Man and Animals.Darwin, Charles (1872). The Expression of Emotions in Man and Animals. Note: This book was originally published in 1872, but has been reprinted many times thereafter by different publishers Darwin argued that emotions actually served a purpose for humans, in communication and also in aiding their survival. Darwin therefore argued that emotions evolved via natural selection and thus have universal cross-cultural counterparts. Darwin also detailed the virtues of experiencing emotions and the parallel experiences that occur in animals. This led the way for animal research on emotions and the eventual determination of the neural underpinnings of emotion.
Contemporary
More contemporary views along the evolutionary psychology spectrum posit that both basic emotions and social emotions evolved to motivate (social) behaviors that were adaptive in the ancestral environment. Current research suggests that emotion is an essential part of any human decision-making and planning, and the famous distinction made between reason and emotion is not as clear as it seems. Paul D. MacLean claims that emotion competes with more instinctive responses on the one hand and more abstract reasoning on the other. The increased potential in neuroimaging has also allowed investigation into evolutionarily ancient parts of the brain. Important neurological advances were derived from these perspectives in the 1990s by Joseph E. LeDoux and António Damásio.
Research on social emotion also focuses on the physical displays of emotion including body language of animals and humans (see affect display). For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status.Wright, Robert. Moral animal.
Somatic theories
Somatic theories of emotion claim that bodily responses, rather than cognitive interpretations, are essential to emotions. The first modern version of such theories came from William James in the 1880s. The theory lost favor in the 20th century, but has regained popularity more recently due largely to theorists such as John Cacioppo, António Damásio,Aziz-Zadeh L, Damasio A. (2008) Embodied semantics for actions: findings from functional brain imaging. J Physiol Paris. ;102(1-3):35-9 Joseph E. LeDouxLeDoux J.E. (1996) The Emotional Brain. New York: Simon & Schuster. and Robert Zajonc who are able to appeal to neurological evidence.
James–Lange theory
In his 1884 article William James argued that feelings and emotions were secondary to physiological phenomena. In his theory, James proposed that the perception of what he called an "exciting fact" directly led to a physiological response, known as "emotion." To account for different types of emotional experiences, James proposed that stimuli trigger activity in the autonomic nervous system, which in turn produces an emotional experience in the brain. The Danish psychologist Carl Lange also proposed a similar theory at around the same time, and therefore this theory became known as the James–Lange theory. As James wrote, "the perception of bodily changes, as they occur, is the emotion." James further claims that "we feel sad because we cry, angry because we strike, afraid because we tremble, and either we cry, strike, or tremble because we are sorry, angry, or fearful, as the case may be."
An example of this theory in action would be as follows: An emotion-evoking stimulus (snake) triggers a pattern of physiological response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This theory is supported by experiments in which manipulating the bodily state induces a desired emotional state.Laird, James, Feelings: the Perception of Self, Oxford University Press Some people may believe that emotions give rise to emotion-specific actions, for example, "I'm crying because I'm sad," or "I ran away because I was scared." The issue with the James–Lange theory is that of causation (bodily states causing emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued and is still quite prevalent today in biofeedback studies and embodiment theory).
Although the theory has been mostly abandoned in its original form, Tim Dalgleish argues that most contemporary neuroscientists have embraced the components of the James–Lange theory of emotions.
Cannon–Bard theory
Walter Bradford Cannon agreed that physiological responses played a crucial role in emotions, but did not believe that physiological responses alone could explain subjective emotional experiences. He argued that physiological responses were too slow and often imperceptible, and that this could not account for the relatively rapid and intense subjective awareness of emotion. He also believed that the richness, variety, and temporal course of emotional experiences could not stem from physiological reactions, which reflected fairly undifferentiated fight or flight responses. An example of this theory in action is as follows: An emotion-evoking event (snake) triggers simultaneously both a physiological response and a conscious experience of an emotion.
Phillip Bard contributed to the theory with his work on animals. Bard found that sensory, motor, and physiological information all had to pass through the diencephalon (particularly the thalamus), before being subjected to any further processing. Therefore, Cannon also argued that it was not anatomically possible for sensory events to trigger a physiological response prior to triggering conscious awareness, and that emotional stimuli had to trigger both physiological and experiential aspects of emotion simultaneously.
Two-factor theory
Stanley Schachter formulated his theory building on the earlier work of a Spanish physician, Gregorio Marañón, who injected patients with epinephrine and subsequently asked them how they felt. Interestingly, Marañón found that most of these patients felt something, but in the absence of an actual emotion-evoking stimulus, the patients were unable to interpret their physiological arousal as an experienced emotion. Schachter did agree that physiological reactions played a big role in emotions. He suggested that physiological reactions contributed to emotional experience by facilitating a focused cognitive appraisal of a given physiologically arousing event and that this appraisal was what defined the subjective emotional experience. Emotions were thus a result of a two-stage process: general physiological arousal, and the experience of emotion. For example, an evoking stimulus, the sight of a bear in the kitchen, produces the physiological arousal of a pounding heart. The brain then quickly scans the area to explain the pounding, and notices the bear. Consequently, the brain interprets the pounding heart as being the result of fearing the bear. With his student, Jerome Singer, Schachter demonstrated that subjects can have different emotional reactions despite being placed into the same physiological state with an injection of epinephrine. Subjects were observed to express either anger or amusement depending on whether another person in the situation (a confederate) displayed that emotion. Hence, the combination of the appraisal of the situation (cognitive) and the participants' reception of adrenaline or a placebo together determined the response. This experiment has been criticized in Jesse Prinz's (2004) Gut Reactions.
Cognitive theories
With the two-factor theory now incorporating cognition, several theories began to argue that cognitive activity in the form of judgments, evaluations, or thoughts was entirely necessary for an emotion to occur. One of the main proponents of this view was Richard Lazarus, who argued that emotions must have some cognitive intentionality. The cognitive activity involved in the interpretation of an emotional context may be conscious or unconscious and may or may not take the form of conceptual processing.
Lazarus' theory is very influential; emotion is a disturbance that occurs in the following order:
Cognitive appraisal—The individual assesses the event cognitively, which cues the emotion.
Physiological changes—The cognitive reaction starts biological changes such as increased heart rate or pituitary adrenal response.
Action—The individual feels the emotion and chooses how to react.
For example: Jenny sees a snake.
Jenny cognitively assesses the snake in her presence. Cognition allows her to understand it as a danger.
Her brain activates her adrenal glands, which pump adrenaline through her bloodstream, resulting in an increased heart rate.
Jenny screams and runs away.
Lazarus stressed that the quality and intensity of emotions are controlled through cognitive processes. These processes underlie coping strategies that form the emotional reaction by altering the relationship between the person and the environment.
George Mandler provided an extensive theoretical and empirical discussion of emotion as influenced by cognition, consciousness, and the autonomic nervous system in two books (Mind and Emotion, 1975, and Mind and Body: Psychology of Emotion and Stress, 1984).
There are some theories on emotions arguing that cognitive activity in the form of judgments, evaluations, or thoughts is necessary in order for an emotion to occur.
A prominent philosophical exponent is Robert C. Solomon (for example, The Passions, Emotions and the Meaning of Life, 1993). Solomon claims that emotions are judgments. He has put forward a more nuanced view which responds to what he has called the ‘standard objection’ to cognitivism, the idea that a judgment that something is fearsome can occur with or without emotion, so judgment cannot be identified with emotion.
The theory proposed by Nico Frijda where appraisal leads to action tendencies is another example.
It has also been suggested that emotions (affect heuristics, feelings and gut-feeling reactions) are often used as shortcuts to process information and influence behavior.see the Heuristic–Systematic Model, or HSM, (Chaiken, Liberman, & Eagly, 1989) under attitude change. Also see the index entry for "Emotion" in "Beyond Rationality: The Search for Wisdom in a Troubled Time" by Kenneth R. Hammond and in "Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets" by Nassim Nicholas Taleb. The affect infusion model (AIM) is a theoretical model developed by Joseph Forgas in the early 1990s that attempts to explain how emotion and mood interact with one's ability to process information.
Perceptual theory
Theories dealing with perception use one or multiple perceptions in order to find an emotion (Goldie, 2007). A recent hybrid of the somatic and cognitive theories of emotion is the perceptual theory. This theory is neo-Jamesian in arguing that bodily responses are central to emotions, yet it emphasizes the meaningfulness of emotions or the idea that emotions are about something, as is recognized by cognitive theories. The novel claim of this theory is that conceptually-based cognition is unnecessary for such meaning. Rather, the bodily changes themselves perceive the meaningful content of the emotion because they are causally triggered by certain situations. In this respect, emotions are held to be analogous to faculties such as vision or touch, which provide information about the relation between the subject and the world in various ways. A sophisticated defense of this view is found in philosopher Jesse Prinz's book Gut Reactions, and psychologist James Laird's book Feelings.
Affective events theory
This is a communication-based theory developed by Howard M. Weiss and Russell Cropanzano (1996), that looks at the causes, structures, and consequences of emotional experience (especially in work contexts). This theory suggests that emotions are influenced and caused by events which in turn influence attitudes and behaviors. This theoretical frame also emphasizes time in that human beings experience what they call emotion episodes— a "series of emotional states extended over time and organized around an underlying theme." This theory has been utilized by numerous researchers to better understand emotion from a communicative lens, and was reviewed further by Howard M. Weiss and Daniel J. Beal in their article, "Reflections on Affective Events Theory", published in Research on Emotion in Organizations in 2005.
Situated perspective on emotion
A situated perspective on emotion, developed by Paul E. Griffiths and Andrea Scarantino , emphasizes the importance of external factors in the development and communication of emotion, drawing upon the situationism approach in psychology.Griffiths, Paul Edmund and Scarantino, Andrea (2005) Emotions in the wild: The situated perspective on emotion. This theory is markedly different from both cognitivist and neo-Jamesian theories of emotion, both of which see emotion as a purely internal process, with the environment only acting as a stimulus to the emotion. In contrast, a situationist perspective on emotion views emotion as the product of an organism investigating its environment, and observing the responses of other organisms. Emotion stimulates the evolution of social relationships, acting as a signal to mediate the behavior of other organisms. In some contexts, the expression of emotion (both voluntary and involuntary) could be seen as strategic moves in the transactions between different organisms. The situated perspective on emotion states that conceptual thought is not an inherent part of emotion, since emotion is an action-oriented form of skillful engagement with the world. Griffiths and Scarantino suggested that this perspective on emotion could be helpful in understanding phobias, as well as the emotions of infants and animals.
Genetics
Emotions can motivate social interactions and relationships and therefore are directly related with basic physiology, particularly with the stress systems. This is important because emotions are related to the anti-stress complex, with an oxytocin-attachment system, which plays a major role in bonding. Emotional phenotype temperaments affect social connectedness and fitness in complex social systems (Kurt Kortschal 2013). These characteristics are shared with other species and taxa and are due to the effects of genes and their continuous transmission. Information that is encoded in the DNA sequences provides the blueprint for assembling proteins that make up our cells. Zygotes require genetic information from their parental germ cells, and at every speciation event, heritable traits that have enabled its ancestor to survive and reproduce successfully are passed down along with new traits that could be potentially beneficial to the offspring.
In the five million years since the lineages leading to modern humans and chimpanzees split, only about 1.2% of their genetic material has been modified. This suggests that everything that separates us from chimpanzees must be encoded in that very small amount of DNA, including our behaviors. Students who study animal behavior have only identified intraspecific examples of gene-dependent behavioral phenotypes. In voles (Microtus spp.), minor genetic differences have been identified in a vasopressin receptor gene that corresponds to major species differences in social organization and the mating system (Hammock & Young 2005).
Another potential example with behavioral differences is the FOXP2 gene, which is involved in neural circuitry handling speech and language (Vargha-Khadem et al. 2005). Its present form in humans differs from that of the chimpanzees by only a few mutations and has been present for about 200,000 years, coinciding with the beginning of modern humans (Enard et al. 2002). Speech, language, and social organization are all part of the basis for emotions.
Neurocircuitry
Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. If distinguished from reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal patterns, in which neurochemicals (for example, dopamine, noradrenaline, and serotonin) step-up or step-down the brain's activity level, as visible in body movements, gestures and postures. Emotions can likely be mediated by pheromones (see fear).
For example, the emotion of love is proposed to be the expression of paleocircuits of the mammalian brain (specifically, modules of the cingulate gyrus) which facilitate the care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily expression configured before the advent of cortical circuits for speech. They consist of pre-configured pathways or networks of nerve cells in the forebrain, brain stem and spinal cord.
The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the arrival of night-active mammals, smell replaced vision as the dominant sense, and a different way of responding arose from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. The mammalian brain invested heavily in olfaction to succeed at night as reptiles slept—one explanation for why olfactory lobes in mammalian brains are proportionally larger than in the reptiles. These odor pathways gradually formed the neural blueprint for what was later to become our limbic brain.
Image: Lövheim cube of emotion
Emotions are thought to be related to certain activities in brain areas that direct our attention, motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Broca (1878), Papez (1937), and MacLean (1952) suggested that emotion is related to a group of structures in the center of the brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures. More recent research has shown that some of these limbic structures are not as directly related to emotion as others are while some non-limbic structures have been found to be of greater emotional relevance.
In 2011, Lövheim proposed a direct relation between specific combinations of the levels of the signal substances dopamine, noradrenaline and serotonin and eight basic emotions. A model was presented where the signal substances form the axes of a coordinate system, and the eight basic emotions according to Silvan Tomkins are placed in the eight corners. According to the model, anger, for example, is produced by the combination of low serotonin, high dopamine and high noradrenaline.
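Read computationally, the cube amounts to a lookup from three binary monoamine levels to one of eight corners. The Python sketch below only illustrates that structure: the names CORNERS and corner_emotion are ours, and only the anger corner stated above is filled in, with the remaining assignments left to Lövheim's published table rather than guessed here.

from itertools import product

AXES = ("serotonin", "dopamine", "noradrenaline")

# Corner -> basic emotion, keyed by (serotonin, dopamine, noradrenaline) levels.
# Only the anger corner given in the text above is included.
CORNERS = {("low", "high", "high"): "anger"}

def corner_emotion(serotonin, dopamine, noradrenaline):
    """Return the emotion assigned to a corner of the cube, if listed here."""
    return CORNERS.get((serotonin, dopamine, noradrenaline), "(see Lovheim's table)")

# Enumerate all eight corners of the cube.
for levels in product(("low", "high"), repeat=3):
    label = ", ".join(axis + "=" + level for axis, level in zip(AXES, levels))
    print(label, "->", corner_emotion(*levels))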
Prefrontal cortex
There is ample evidence that the left prefrontal cortex is activated by stimuli that cause positive approach. If attractive stimuli can selectively activate a region of the brain, then logically the converse should hold, that selective activation of that region of the brain should cause a stimulus to be judged more positively. This was demonstrated for moderately attractive visual stimuli and replicated and extended to include negative stimuli.
Two neurobiological models of emotion in the prefrontal cortex made opposing predictions. The Valence Model predicted that anger, a negative emotion, would activate the right prefrontal cortex. The Direction Model predicted that anger, an approach emotion, would activate the left prefrontal cortex. The second model was supported.
This still left open the question of whether the opposite of approach in the prefrontal cortex is better described as moving away (Direction Model), as unmoving but with strength and resistance (Movement Model), or as unmoving with passive yielding (Action Tendency Model). Support for the Action Tendency Model (passivity related to right prefrontal activity) comes from research on shyness and research on behavioral inhibition. Research that tested the competing hypotheses generated by all four models also supported the Action Tendency Model.
Homeostatic/primordial emotion
Another neurological approach distinguishes two classes of emotion: "classical" emotions such as love, anger and fear that are evoked by environmental stimuli, and "primordial" or "homeostatic emotions" – attention-demanding feelings evoked by body states, such as pain, hunger and fatigue, that motivate behavior (withdrawal, eating or resting in these examples) aimed at maintaining the body's internal milieu at its ideal state.
Derek Denton defines the latter as "the subjective element of the instincts, which are the genetically programmed behavior patterns which contrive homeostasis. They include thirst, hunger for air, hunger for food, pain and hunger for specific minerals etc. There are two constituents of a primordial emotion--the specific sensation which when severe may be imperious, and the compelling intention for gratification by a consummatory act."
In EMG studies, the left prefrontal hemisphere is thought to be responsible for positive emotions (happiness and joy), while the right prefrontal hemisphere is thought to be tied to negative states (disgust, stress).Iris B. Mauss and Michael D. Robinson, 2009. [Measures of emotion: A review https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2756702]. doi: 10.1080/02699930802204677
Disciplinary approaches
Many different disciplines have produced work on the emotions. Human sciences study the role of emotions in mental processes, disorders, and neural mechanisms. In psychiatry, emotions are examined as part of the discipline's study and treatment of mental disorders in humans. Nursing studies emotions as part of its approach to the provision of holistic health care to humans. Psychology examines emotions from a scientific perspective by treating them as mental processes and behavior, and it explores the underlying physiological and neurological processes. In neuroscience sub-fields such as social neuroscience and affective neuroscience, scientists study the neural mechanisms of emotion by combining neuroscience with the psychological study of personality, emotion, and mood. In linguistics, the expression of emotion may change the meaning of sounds. In education, the role of emotions in relation to learning is examined.
Social sciences often examine emotion for the role that it plays in human culture and social interactions. In sociology, emotions are examined for the role they play in human society, social patterns and interactions, and culture. In anthropology, the study of humanity, scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities. Some anthropology studies examine the role of emotions in human activities. In the field of communication sciences, critical organizational scholars have examined the role of emotions in organizations, from the perspectives of managers, employees, and even customers. A focus on emotions in organizations can be credited to Arlie Russell Hochschild's concept of emotional labor. The University of Queensland hosts EmoNet, an e-mail distribution list representing a network of academics that facilitates scholarly discussion of all matters relating to the study of emotion in organizational settings. The list was established in January 1997 and has over 700 members from across the globe.
In economics, the social science that studies the production, distribution, and consumption of goods and services, emotions are analyzed in some sub-fields of microeconomics, in order to assess the role of emotions on purchase decision-making and risk perception. In criminology, a social science approach to the study of crime, scholars often draw on behavioral sciences, sociology, and psychology; emotions are examined in criminology issues such as anomie theory and studies of "toughness," aggressive behavior, and hooliganism. In law, which underpins civil obedience, politics, economics and society, evidence about people's emotions is often raised in tort law claims for compensation and in criminal law prosecutions against alleged lawbreakers (as evidence of the defendant's state of mind during trials, sentencing, and parole hearings). In political science, emotions are examined in a number of sub-fields, such as the analysis of voter decision-making.
In philosophy, emotions are studied in sub-fields such as ethics, the philosophy of art (for example, sensory–emotional values, and matters of taste and sentimentality), and the philosophy of music (see also Music and emotion). In history, scholars examine documents and other sources to interpret and analyze past activities; speculation on the emotional state of the authors of historical documents is one of the tools of interpretation. In literature and film-making, the expression of emotion is the cornerstone of genres such as drama, melodrama, and romance. In communication studies, scholars study the role that emotion plays in the dissemination of ideas and messages. Emotion is also studied in non-human animals in ethology, a branch of zoology which focuses on the scientific study of animal behavior. Ethology is a combination of laboratory and field science, with strong ties to ecology and evolution. Ethologists often study one type of behavior (for example, aggression) in a number of unrelated animals.
History
The history of emotions has become an increasingly popular topic recently, with some scholars arguing that it is an essential category of analysis, not unlike class, race, or gender. Historians, like other social scientists, assume that emotions, feelings and their expressions are regulated in different ways by both different cultures and different historical times, and the constructivist school of history even claims that some sentiments and meta-emotions, for example Schadenfreude, are learnt and not only regulated by culture. Historians of emotion trace and analyse the changing norms and rules of feeling, while examining emotional regimes, codes, and lexicons from social, cultural or political history perspectives. Others focus on the history of medicine, science or psychology. What somebody can and may feel (and show) in a given situation, towards certain people or things, depends on social norms and rules. It is thus historically variable and open to change. Several research centers have opened in the past few years in Germany, England, Spain, Sweden and Australia.
Furthermore, research in historical trauma suggests that some traumatic emotions can be passed on from parents to offspring, and on to the second and even third generation, as examples of transgenerational trauma.
Sociology
A common way in which emotions are conceptualized in sociology is in terms of the multidimensional characteristics including cultural or emotional labels (for example, anger, pride, fear, happiness), physiological changes (for example, increased perspiration, changes in pulse rate), expressive facial and body movements (for example, smiling, frowning, baring teeth), and appraisals of situational cues. One comprehensive theory of emotional arousal in humans has been developed by Jonathan Turner (2007, 2009).Turner, J. H. (2007). Human emotions: A sociological theory. London: Routledge. Two of the key eliciting factors for the arousal of emotions within this theory are expectation states and sanctions. When people enter a situation or encounter with certain expectations for how the encounter should unfold, they will experience different emotions depending on the extent to which expectations for Self, other and situation are met or not met. People can also provide positive or negative sanctions directed at Self or other which also trigger different emotional experiences in individuals. Turner analyzed a wide range of emotion theories across different fields of research including sociology, psychology, evolutionary science, and neuroscience. Based on this analysis, he identified four emotions that all researchers consider to be founded on human neurology: assertive-anger, aversion-fear, satisfaction-happiness, and disappointment-sadness. These four categories are called primary emotions and there is some agreement amongst researchers that these primary emotions become combined to produce more elaborate and complex emotional experiences. These more elaborate emotions are called first-order elaborations in Turner's theory and they include sentiments such as pride, triumph, and awe. Emotions can also be experienced at different levels of intensity, so that feelings of concern are a low-intensity variation of the primary emotion aversion-fear, whereas depression is a higher-intensity variant.
Attempts are frequently made to regulate emotion according to the conventions of the society and the situation based on many (sometimes conflicting) demands and expectations which originate from various entities. The emotion of anger is in many cultures discouraged in girls and women, while fear is discouraged in boys and men. Expectations attached to social roles, such as "acting as a man" and not as a woman, and the accompanying "feeling rules" contribute to the differences in expression of certain emotions. Some cultures encourage or discourage happiness, sadness, or jealousy, and the free expression of the emotion of disgust is considered socially unacceptable in most cultures. Some social institutions are seen as based on certain emotions, such as love in the case of the contemporary institution of marriage. In advertising, such as health campaigns and political messages, emotional appeals are commonly found. Recent examples include no-smoking health campaigns and political campaigns emphasizing the fear of terrorism.
Sociological attention to emotion has varied over time. Émile Durkheim (1915/1965)Durkheim, E. (1915/1912). The elementary forms of the religious life, trans. J. W. Swain. New York: Free Press. wrote about the collective effervescence or emotional energy that was experienced by members of totemic rituals in Australian Aboriginal society. He explained how the heightened state of emotional energy achieved during totemic rituals transported individuals above themselves, giving them the sense that they were in the presence of a higher power, a force, that was embedded in the sacred objects that were worshipped. These feelings of exaltation, he argued, ultimately led people to believe that there were forces that governed sacred objects.
In the 1990s, sociologists focused on different aspects of specific emotions and how these emotions were socially relevant. For Cooley (1992),Cooley, C. H. (1992). Human nature and the social order. New Brunswick: Transaction Publishers. pride and shame were the most important emotions that drive people to take various social actions. During every encounter, he proposed that we monitor ourselves through the "looking glass" that the gestures and reactions of others provide. Depending on these reactions, we either experience pride or shame and this results in particular paths of action. Retzinger (1991)Retzinger, S. M. (1991). Violent emotions: Shame and rage in marital quarrels. London: SAGE. conducted studies of married couples who experienced cycles of rage and shame. Drawing predominantly on Goffman and Cooley's work, Scheff (1990)Scheff, J. (1990). Microsociology: discourse, emotion and social structure. Chicago: University of Chicago Press. developed a micro sociological theory of the social bond. The formation or disruption of social bonds is dependent on the emotions that people experience during interactions.
Subsequent to these developments, Randall Collins (2004)Collins, R. (2004). Interaction ritual chains. Princeton, NJ: Princeton University Press. formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals that was extended by Goffman (1964/2013; 1967)Goffman, E. (1967). Interaction ritual. New York: Anchor Books.Goffman, E. (1964/2013). Encounters: Two studies in the sociology of interactions. Mansfield Centre, CT: Martino Publishing. into everyday focused encounters. Based on interaction ritual theory, we experience different levels or intensities of emotional energy during face-to-face interactions. Emotional energy is considered to be a feeling of confidence to take action and a boldness that one experiences when charged up from the collective effervescence generated during group gatherings that reach high levels of intensity.
There is a growing body of research applying the sociology of emotion to understanding the learning experiences of students during classroom interactions with teachers and other students (for example, Milne & Otieno, 2007; Olitsky, 2007;Olitsky, S. (2007). Science learning, status and identity formation in an urban middle school. In W.-M. Roth & K. G. Tobin (Eds.), Science, learning, identity: Sociocultural and cultural-historical perspectives. (pp. 41-62). Rotterdam, The Netherlands: Sense. Tobin, et al., 2013; Zembylas, 2002). These studies show that learning subjects like science can be understood in terms of classroom interaction rituals that generate emotional energy and collective states of emotional arousal like emotional climate.
Apart from interaction ritual traditions of the sociology of emotion, other approaches have been classed into several other categories (Turner, 2009), including:
evolutionary/biological theories,
symbolic interactionist theories,
dramaturgical theories,
ritual theories,
power and status theories,
stratification theories, and
exchange theories.
This list provides a general overview of different traditions in the sociology of emotion that sometimes conceptualise emotion in different ways and at other times in complementary ways. Many of these different approaches were synthesized by Turner (2007) in his sociological theory of human emotions in an attempt to produce one comprehensive sociological account that draws on developments from many of the above traditions.
Psychotherapy and regulation
Emotion regulation refers to the cognitive and behavioral strategies people use to influence their own emotional experience.Schacter, Daniel. "Psychology". Worth Publishers. 2011. p.316 For example, a behavioral strategy might involve avoiding a situation in order to avoid unwanted emotions (trying not to think about the situation, doing distracting activities, etc.).Schacter, Daniel. "Psychology". Worth Publishers. 2011. p.340 Depending on the particular school's general emphasis on either cognitive components of emotion, physical energy discharging, or on symbolic movement and facial expression components of emotion,Freitas-Magalhães, A., & Castro, E. (2009). Facial Expression: The effect of the smile in the Treatment of Depression. Empirical Study with Portuguese Subjects. In A. Freitas-Magalhães (Ed.), Emotional Expression: The Brain and The Face (pp. 127–140). Porto: University Fernando Pessoa Press. ISBN 978-989-643-034-4 different schools of psychotherapy approach the regulation of emotion differently. Cognitively oriented schools approach them via their cognitive components, such as rational emotive behavior therapy. Yet others approach emotions via symbolic movement and facial expression components (as in contemporary Gestalt therapy).
Cross-cultural research
Research on emotions reveals the strong presence of cross-cultural differences in emotional reactions and that emotional reactions are likely to be culture-specific.Shaver, Phillip R.; Wu, Shelley; Schwartz, Judith C. Cross-cultural similarities and differences in emotion and its representation In: Clark, Margaret S. (Ed), (1992). Emotion. Review of personality and social psychology, No. 13., (pp. 175-212). Thousand Oaks, CA, US: Sage Publications, Inc, ix, 326 pp In strategic settings, cross-cultural research on emotions is required for understanding the psychological situation of a given population or specific actors. This implies the need to comprehend the current emotional state, mental disposition or other behavioral motivation of a target audience located in a different culture, basically founded on its national political, social, economic, and psychological peculiarities but also subject to the influence of circumstances and events. North Atlantic Treaty Organization, Nato Standardization Agency AAP-6 - Glossary of terms and definitions, p 188.
Computer science
In the 2000s, research in computer science, engineering, psychology and neuroscience has been aimed at developing devices that recognize human affect display and model emotions.Fellous, Armony & LeDoux, 2002 In computer science, affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper"Affective Computing" MIT Technical Report #321 (Abstract), 1995 on affective computing. Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and processing of facial expression or body gestures is achieved through detectors and sensors.
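The following is a minimal illustrative sketch in Python, not a description of any specific system mentioned above. It shows the simplest possible form of affect detection from text, using a small hand-made keyword lexicon; the lexicon, category labels, and function name are assumptions made for this example, and real affective-computing systems rely on trained models over speech, facial, or physiological signals rather than keyword counting.

```python
# Minimal, illustrative sketch of rule-based affect detection from text.
# The lexicon, categories, and scoring are assumptions for demonstration only.
from collections import Counter

# Hypothetical mini-lexicon mapping words to coarse emotion categories.
LEXICON = {
    "happy": "joy", "glad": "joy", "delighted": "joy",
    "sad": "sadness", "miserable": "sadness",
    "angry": "anger", "furious": "anger",
    "afraid": "fear", "scared": "fear",
}

def detect_emotion(text: str) -> str:
    """Return the most frequent emotion category found in the text,
    or 'neutral' if no lexicon word is present."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(LEXICON[w] for w in words if w in LEXICON)
    return counts.most_common(1)[0][0] if counts else "neutral"

if __name__ == "__main__":
    print(detect_emotion("I am so happy and delighted today!"))  # joy
    print(detect_emotion("The report is due on Monday."))        # neutral
```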
Notable theorists
[Image: William James]
In the late 19th century, the most influential theorists were William James (1842–1910) and Carl Lange (1834–1900). James was an American psychologist and philosopher who wrote about educational psychology, psychology of religious experience/mysticism, and the philosophy of pragmatism. Lange was a Danish physician and psychologist. Working independently, they developed the James–Lange theory, a hypothesis on the origin and nature of emotions. The theory states that within human beings, as a response to experiences in the world, the autonomic nervous system creates physiological events such as muscular tension, a rise in heart rate, perspiration, and dryness of the mouth. Emotions, then, are feelings which come about as a result of these physiological changes, rather than being their cause.
Silvan Tomkins (1911–1991) developed the Affect theory and Script theory. The Affect theory introduced the concept of basic emotions, and was based on the idea that the dominance of the emotion, which he called the affect system, was the motivating force in human life.
Some of the most influential theorists on emotion from the 20th century have died in the last decade. They include Magda B. Arnold (1903–2002), an American psychologist who developed the appraisal theory of emotions;Reisenzein, R. (2006). "Arnold's theory of emotion in historical perspective". Cognition & Emotion, 20(7), 920–951. doi: 10.1080/02699930600616445 Richard Lazarus (1922–2002), an American psychologist who specialized in emotion and stress, especially in relation to cognition; Herbert A. Simon (1916–2001), who included emotions into decision making and artificial intelligence; Robert Plutchik (1928–2006), an American psychologist who developed a psychoevolutionary theory of emotion; Robert Zajonc (1923–2008), a Polish–American social psychologist who specialized in social and cognitive processes such as social facilitation; Robert C. Solomon (1942–2007), an American philosopher who contributed to the theories on the philosophy of emotions with books such as What Is An Emotion?: Classic and Contemporary Readings (Oxford, 2003); Peter Goldie (1946–2011), a British philosopher who specialized in ethics, aesthetics, emotion, mood and character; Nico Frijda (1927–2015), a Dutch psychologist who advanced the theory that human emotions serve to promote a tendency to undertake actions that are appropriate in the circumstances, detailed in his book The Emotions (1986).
Influential theorists who are still active include the following psychologists, neurologists, philosophers, and sociologists:
Lisa Feldman Barrett – Social philosopher and psychologist specializing in affective science and human emotion.
John Cacioppo – from the University of Chicago, founding father with Gary Berntson of social neuroscience.
Randall Collins (born 1941) – American sociologist from the University of Pennsylvania who developed the interaction ritual theory, which includes an emotional entrainment model.
António Damásio (born 1944) – Portuguese behavioral neurologist and neuroscientist who works in the US.
Richard Davidson (born 1951) – American psychologist and neuroscientist; pioneer in affective neuroscience.
Paul Ekman (born 1934) – Psychologist specializing in the study of emotions and their relation to facial expressions.
Barbara Fredrickson – Social psychologist who specializes in emotions and positive psychology.
Arlie Russell Hochschild (born 1940) – American sociologist whose central contribution was in forging a link between the subcutaneous flow of emotion in social life and the larger trends set loose by modern capitalism within organizations.
Joseph E. LeDoux (born 1949) – American neuroscientist who studies the biological underpinnings of memory and emotion, especially the mechanisms of fear.
George Mandler (born 1924) - American psychologist who wrote influential books on cognition and emotion.
Jaak Panksepp (born 1943) – Estonian-born American psychologist, psychobiologist and neuroscientist; pioneer in affective neuroscience.
Jesse Prinz – American philosopher who specializes in emotion, moral psychology, aesthetics and consciousness.
James A. Russell (born 1947) – American psychologist who developed or co-developed the PAD theory of environmental impact, circumplex model of affect, prototype theory of emotion concepts, a critique of the hypothesis of universal recognition of emotion from facial expression, concept of core affect, developmental theory of differentiation of emotion concepts, and, more recently, the theory of the psychological construction of emotion.
Klaus Scherer (born 1943) – Swiss psychologist and director of the Swiss Center for Affective Sciences in Geneva; he specializes in the psychology of emotion.
Ronald de Sousa (born 1940) – English–Canadian philosopher who specializes in the philosophy of emotions, philosophy of mind and philosophy of biology.
Jonathan H. Turner (born 1942) - American sociologist from the University of California, Riverside who is a general sociological theorist with specialty areas including the sociology of emotions, ethnic relations, social institutions, social stratification, and bio-sociology.
Dominique Moïsi (born 1946) - Authored a book titled The Geopolitics of Emotion focusing on emotions related to globalization.
See also
Affect measures
Affective Computing
Affective forecasting
Affective neuroscience
Affective science
Contrasting and categorization of emotions
CyberEmotions
Emoticons
Emotion classification
Emotion in animals
Emotions and culture
Emotion and memory
Emotional expression
Emotional climate
Emotions in virtual communication
Empathy
Endocrinology
Facial expressions
Fear
Feeling
Fuzzy-trace theory
Group emotion
International Affective Picture System
List of emotions
Measuring Emotions
Neuroendocrinology
Sociology of emotions
Social emotion
Social neuroscience
Social sharing of emotions
Somatic markers hypothesis
References
Notes
Bibliography
Further reading
Dana Sugu & Amita Chaterjee "Flashback: Reshuffling Emotions", International Journal on Humanistic Ideology, Vol. 3 No. 1, Spring–Summer 2010.
Cornelius, R. (1996). The science of emotion. New Jersey: Prentice Hall.
Freitas-Magalhães, A. (Ed.). (2009). Emotional Expression: The Brain and The Face. Porto: University Fernando Pessoa Press. ISBN 978-989-643-034-4.
Freitas-Magalhães, A. (2007). The Psychology of Emotions: The Allure of Human Face. Oporto: University Fernando Pessoa Press.
González, Ana Marta (2012). The Emotions and Cultural Analysis. Burlington, VT : Ashgate. ISBN 978-1-4094-5317-8
Ekman, P. (1999). "Basic Emotions". In: T. Dalgleish and M. Power (Eds.). Handbook of Cognition and Emotion. John Wiley & Sons Ltd, Sussex, UK.
Frijda, N.H. (1986). The Emotions. Maison des Sciences de l'Homme and Cambridge University Press
Hogan, Patrick Colm. (2011). What Literature Teaches Us about Emotion Cambridge: Cambridge University Press.
Hordern, Joshua. (2013). Political Affections: Civic Participation and Moral Theology. Oxford: Oxford University Press. ISBN 0199646813
LeDoux, J.E. (1986). The neurobiology of emotion. Chap. 15 in J.E. LeDoux & W. Hirst (Eds.) Mind and Brain: dialogues in cognitive neuroscience. New York: Cambridge.
Mandler, G. (1984). Mind and Body: Psychology of emotion and stress. New York: Norton.
Scherer, K. (2005). http://www.affective-sciences.org/system/files/2005_Scherer_SSI.pdf
Nussbaum, Martha C. (2001) Upheavals of Thought: The Intelligence of Emotions. Cambridge: Cambridge University Press.
Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp. 3–33). New York: Academic.
Roberts, Robert. (2003). Emotions: An Essay in Aid of Moral Psychology. Cambridge: Cambridge University Press.
Solomon, R. (1993). The Passions: Emotions and the Meaning of Life. Indianapolis: Hackett Publishing.
Wikibook Cognitive psychology and cognitive neuroscience
Dror Green (2011). "Emotional Training, the art of creating a sense of a safe place in a changing world". Bulgaria: Books, Publishers and the Institute of Emotional Training.
External links
The Internet Encyclopedia of Philosophy: Theories of Emotion
The Stanford Encyclopedia of Philosophy: Emotion
Comics
Comics is a medium used to express ideas by images, often combined with text or other visual information. Comics frequently takes the form of juxtaposed sequences of panels of images. Often textual devices such as speech balloons, captions, and onomatopoeia indicate dialogue, narration, sound effects, or other information. Size and arrangement of panels contribute to narrative pacing. Cartooning and similar forms of illustration are the most common image-making means in comics; fumetti is a form which uses photographic images. Common forms of comics include comic strips, editorial and gag cartoons, and comic books. Since the late 20th century, bound volumes such as graphic novels, comic albums, and tankōbon have become increasingly common, and online webcomics have proliferated in the 21st century.
The history of comics has followed different paths in different cultures. Scholars have posited a pre-history as far back as the Lascaux cave paintings. By the mid-20th century, comics flourished particularly in the United States, western Europe (especially in France and Belgium), and Japan. The history of European comics is often traced to Rodolphe Töpffer's cartoon strips of the 1830s; the medium became popular following the success in the 1930s of strips and books such as The Adventures of Tintin. American comics emerged as a mass medium in the early 20th century with the advent of newspaper comic strips; magazine-style comic books followed in the 1930s, in which the superhero genre became prominent after Superman appeared in 1938. Histories of Japanese comics and cartooning (manga) propose origins as early as the 12th century. Modern comic strips emerged in Japan in the early 20th century, and the output of comics magazines and books rapidly expanded in the post-World War II era with the popularity of cartoonists such as Osamu Tezuka. The medium had a lowbrow reputation for much of its history, but towards the end of the 20th century began to find greater acceptance with the public and in academia.
The English term comics is used as a singular noun when it refers to the medium and a plural when referring to particular instances, such as individual strips or comic books. Though the term derives from the humorous (or comic) work that predominated in early American newspaper comic strips, it has become standard also for non-humorous works. It is common in English to refer to the comics of different cultures by the terms used in their original languages, such as manga for Japanese comics, or bande dessinée for French-language comics. There is no consensus amongst theorists and historians on a definition of comics; some emphasize the combination of images and text, some sequentiality or other image relations, and others historical aspects such as mass reproduction or the use of recurring characters. The increasing cross-pollination of concepts from different comics cultures and eras has further made definition difficult.
Origins and traditions
The European, American, and Japanese comics traditions have followed different paths. Europeans have seen their tradition as beginning with the Swiss Rodolphe Töpffer from as early as 1827 and Americans have seen the origin of theirs in Richard F. Outcault's 1890s newspaper strip The Yellow Kid, though many Americans have come to recognize Töpffer's precedence. Japan had a long prehistory of satirical cartoons and comics leading up to the World War II era. The ukiyo-e artist Hokusai popularized the Japanese term for comics and cartooning, manga, in the early 19th century. In the post-war era modern Japanese comics began to flourish when Osamu Tezuka produced a prolific body of work. Towards the close of the 20th century, these three traditions converged in a trend towards book-length comics: the comic album in Europe, the tankōbon in Japan, and the graphic novel in the English-speaking countries.
Outside of these genealogies, comics theorists and historians have seen precedents for comics in the Lascaux cave paintings in France (some of which appear to be chronological sequences of images), Egyptian hieroglyphs, Trajan's Column in Rome, the 11th-century Norman Bayeux Tapestry, the 1370 woodcut, 15th-century block books, Michelangelo's The Last Judgment in the Sistine Chapel, and William Hogarth's 18th-century sequential engravings, amongst others.
English-language comics
Illustrated humour periodicals were popular in 19th-century Britain, the earliest of which was the short-lived The Glasgow Looking Glass in 1825. The most popular was Punch, which popularized the term cartoon for its humorous caricatures. On occasion the cartoons in these magazines appeared in sequences; the character Ally Sloper, featured in the earliest serialized comic strip, was given its own weekly magazine in 1884.
American comics developed out of such magazines as Puck, Judge, and Life. The success of illustrated humour supplements in the New York World and later the New York American, particularly Outcault's The Yellow Kid, led to the development of newspaper comic strips. Early Sunday strips were full-page and often in colour. Between 1896 and 1901 cartoonists experimented with sequentiality, movement, and speech balloons.
Shorter, black-and-white daily strips began to appear early in the 20th century, and became established in newspapers after the success in 1907 of Bud Fisher's Mutt and Jeff. In Britain, the Amalgamated Press established a popular style of a sequence of images with text beneath them, including Illustrated Chips and Comic Cuts. Humour strips predominated at first, and in the 1920s and 1930s strips with continuing stories in genres such as adventure and drama also became popular.
Thin periodicals called comic books appeared in the 1930s, at first reprinting newspaper comic strips; by the end of the decade, original content began to dominate. The success in 1938 of Action Comics and its lead hero Superman marked the beginning of the Golden Age of Comic Books, in which the superhero genre was prominent. In the UK and the Commonwealth, the DC Thomson-created Dandy (1937) and Beano (1938) became successful humor-based titles, with a combined circulation of over 2 million copies by the 1950s. Their characters, including "Dennis the Menace", "Desperate Dan" and "The Bash Street Kids" have been read by generations of British schoolboys. The comics originally experimented with superheroes and action stories before settling on humorous strips featuring a mix of the Amalgamated Press and US comic book styles.
[Image: Superheroes have been a staple of American comic books (Wonderworld Comics 3, 1939; cover: The Flame by Will Eisner).]
The popularity of superhero comic books declined following World War II, while comic book sales continued to increase as other genres proliferated, such as romance, westerns, crime, horror, and humour. Following a sales peak in the early 1950s, the content of comic books (particularly crime and horror) was subjected to scrutiny from parent groups and government agencies, which culminated in Senate hearings that led to the establishment of the Comics Code Authority self-censoring body. The Code has been blamed for stunting the growth of American comics and maintaining its low status in American society for much of the remainder of the century. Superheroes re-established themselves as the most prominent comic book genre by the early 1960s. Underground comix challenged the Code and readers with adult, countercultural content in the late 1960s and early 1970s. The underground gave birth to the alternative comics movement in the 1980s and its mature, often experimental content in non-superhero genres.
Comics in the US has had a lowbrow reputation stemming from its roots in mass culture; cultural elites sometimes saw popular culture as threatening culture and society. In the latter half of the 20th century, popular culture won greater acceptance, and the lines between high and low culture began to blur. Comics nevertheless continued to be stigmatized, as the medium was seen as entertainment for children and illiterates.
The graphic novel—book-length comics—began to gain attention after Will Eisner popularized the term with his book A Contract with God (1978). The term became widely known with the public after the commercial success of Maus, Watchmen, and The Dark Knight Returns in the mid-1980s. In the 21st century graphic novels became established in mainstream bookstores and libraries and webcomics became common.
Franco-Belgian and European comics
The francophone Swiss Rodolphe Töpffer produced comic strips beginning in 1827, and published theories behind the form. Cartoons appeared widely in newspapers and magazines from the 19th century. The success of Zig et Puce in 1925 popularized the use of speech balloons in European comics, after which Franco-Belgian comics began to dominate. The Adventures of Tintin, with its signature clear line style, was first serialized in newspaper comics supplements beginning in 1929, and became an icon of Franco-Belgian comics.
[Image: French cartoonist Albert Uderzo draws the character Asterix.]
Following the success of Le Journal de Mickey (1934–44), dedicated comics magazines and full-colour comic albums became the primary outlet for comics in the mid-20th century. As in the US, at the time comics were seen as infantile and a threat to culture and literacy; commentators stated that "none bear up to the slightest serious analysis", and that comics were "the sabotage of all art and all literature".
In the 1960s, the term bande dessinée ("drawn strips") came into wide use in French to denote the medium. Cartoonists began creating comics for mature audiences, and the term "Ninth Art" was coined, as comics began to attract public and academic attention as an artform. A group including René Goscinny and Albert Uderzo founded the magazine Pilote in 1959 to give artists greater freedom over their work. Goscinny and Uderzo's The Adventures of Asterix appeared in it and went on to become the best-selling French-language comics series. From 1960, the satirical and taboo-breaking Hara-Kiri defied censorship laws in the countercultural spirit that led to the May 1968 events.
Frustration with censorship and editorial interference led a group of Pilote cartoonists to found the adults-only L'Écho des savanes in 1972. Adult-oriented and experimental comics flourished in the 1970s, such as in the experimental science fiction of Mœbius and others in Métal hurlant; even mainstream publishers took to publishing prestige-format adult comics.
From the 1980s, mainstream sensibilities were reasserted and serialization became less common as the number of comics magazines decreased and many comics began to be published directly as albums. Smaller publishers such as L'Association that published longer works in non-traditional formats by auteur-istic creators also became common. Since the 1990s, mergers resulted in fewer large publishers, while smaller publishers proliferated. Sales overall continued to grow despite the trend towards a shrinking print market.
Japanese comics
[Image: Rakuten Kitazawa created the first modern Japanese comic strip. (Tagosaku to Mokube no Tōkyō Kenbutsu, 1902)]
Japanese comics and cartooning (manga) have a history that has been seen as far back as the anthropomorphic characters in 12th-to-13th-century picture scrolls, 17th-century picture books, and woodblock prints such as ukiyo-e, which were popular between the 17th and 20th centuries. These early works contained examples of sequential images, movement lines, and sound effects.
Illustrated magazines for Western expatriates introduced Western-style satirical cartoons to Japan in the late 19th century. New publications in both the Western and Japanese styles became popular, and at the end of the 1890s, American-style newspaper comics supplements began to appear in Japan, as well as some American comic strips. 1900 saw the debut of the Jiji Manga in the Jiji Shinpō newspaper—the first use of the word "manga" in its modern sense, and where, in 1902, Rakuten Kitazawa began the first modern Japanese comic strip. By the 1930s, comic strips were serialized in large-circulation monthly girls' and boys' magazines and collected into hardback volumes.
The modern era of comics in Japan began after World War II, propelled by the success of the serialized comics of the prolific Osamu Tezuka and the comic strip Sazae-san. Genres and audiences diversified over the following decades. Stories are usually first serialized in magazines which are often hundreds of pages thick and may contain over a dozen stories; they are later compiled in tankōbon-format books. At the turn of the 20th and 21st centuries, nearly a quarter of all printed material in Japan was comics. Manga translations became extremely popular in foreign markets—in some cases equaling or surpassing the sales of domestic comics.
Forms and formats
Comic strips are generally short, multipanel comics that have traditionally appeared most often in newspapers. In the US, daily strips have normally occupied a single tier, while Sunday strips have been given multiple tiers. In the early 20th century, daily strips were typically in black-and-white and Sundays were usually in colour and often occupied a full page.
Formats of specialized comics periodicals vary greatly across cultures. Comic books, primarily an American format, are thin periodicals usually published in colour. European and Japanese comics are frequently serialized in magazines—monthly or weekly in Europe, and usually black-and-white and weekly in Japan. Japanese comics magazines typically run to hundreds of pages.
Book-length comics take different forms in different cultures. European comic albums are most commonly printed in A4-size colour volumes. In English-speaking countries, bound volumes of comics are called graphic novels and are available in various formats. Despite incorporating the term "novel"—a term normally associated with fiction—"graphic novel" also refers to non-fiction and collections of short works. Japanese comics are collected in volumes called tankōbon following magazine serialization.
Gag and editorial cartoons usually consist of a single panel, often incorporating a caption or speech balloon. Definitions of comics which emphasize sequence usually exclude gag, editorial, and other single-panel cartoons; they can be included in definitions that emphasize the combination of word and image. Gag cartoons first began to proliferate in broadsheets published in Europe in the 18th and 19th centuries, and the term "cartoon" was first used to describe them in 1843 in the British humour magazine Punch.
Webcomics are comics that are available on the internet. They are able to reach large audiences, and new readers usually can access archived installments. Webcomics can make use of an infinite canvas—meaning they are not constrained by size or dimensions of a page.
Some consider storyboards and wordless novels to be comics. Film studios, especially in animation, often use sequences of images as guides for film sequences. These storyboards are not intended as an end product and are rarely seen by the public. Wordless novels are books which use sequences of captionless images to deliver a narrative.
Comics studies
Similar to the problems of defining literature and film, no consensus has been reached on a definition of the comics medium, and attempted definitions and descriptions have fallen prey to numerous exceptions. Theorists such as Töpffer, R. C. Harvey, Will Eisner, David Carrier, Alain Rey, and Lawrence Grove emphasize the combination of text and images, though there are prominent examples of pantomime comics throughout its history. Other critics, such as Thierry Groensteen and Scott McCloud, have emphasized the primacy of sequences of images. Towards the close of the 20th century, different cultures' discoveries of each other's comics traditions, the rediscovery of forgotten early comics forms, and the rise of new forms made defining comics a more complicated task.
European comics studies began with Töpffer's theories of his own work in the 1840s, which emphasized panel transitions and the visual–verbal combination. No further progress was made until the 1970s. Pierre Fresnault-Deruelle then took a semiotics approach to the study of comics, analyzing text–image relations, page-level image relations, and image discontinuities, or what Scott McCloud later dubbed "closure". In 1987, Henri Vanlier introduced the term multicadre, or "multiframe", to refer to the comics page as a semantic unit. By the 1990s, theorists such as Benoît Peeters and Thierry Groensteen turned attention to artists' poïetic creative choices. Thierry Smolderen and Harry Morgan have held relativistic views of the definition of comics, a medium that has taken various, equally valid forms over its history. Morgan sees comics as a subset of "littératures dessinées" (or "drawn literatures"). French theory has come to give special attention to the page, in distinction from American theories such as McCloud's which focus on panel-to-panel transitions. Since the mid-2000s, Neil Cohn has begun analyzing how comics are understood using tools from cognitive science, extending beyond theory by using actual psychological and neuroscience experiments. This work has argued that sequential images and page layouts both use separate rule-bound "grammars" to be understood that extend beyond panel-to-panel transitions and categorical distinctions of types of layouts, and that the brain's comprehension of comics is similar to comprehending other domains, such as language and music.
Historical narratives of manga tend to focus either on its recent, post-WWII history, or on attempts to demonstrate deep roots in the past, such as a picture scroll of the 12th and 13th centuries, or the early 19th-century Hokusai Manga. The first historical overview of Japanese comics was published by Seiki Hosokibara in 1924. Early post-war Japanese criticism was mostly of a left-wing political nature until the 1986 publication of Tomofusa Kure's Modern Manga: The Complete Picture, which de-emphasized politics in favour of formal aspects, such as structure and a "grammar" of comics. The field of manga studies increased rapidly, with numerous books on the subject appearing in the 1990s. Formal theories of manga have focused on developing a "manga expression theory", with emphasis on spatial relationships in the structure of images on the page, distinguishing the medium from film or literature, in which the flow of time is the basic organizing element. Comics studies courses have proliferated at Japanese universities, and a scholarly society was established in 2001 to promote comics scholarship. The publication of Frederik L. Schodt's Manga! Manga! The World of Japanese Comics in 1983 led to the spread of the use of the word manga outside Japan to mean "Japanese comics" or "Japanese-style comics".
Coulton Waugh attempted the first comprehensive history of American comics with The Comics (1947). Will Eisner's Comics and Sequential Art (1985) and Scott McCloud's Understanding Comics (1993) were early attempts in English to formalize the study of comics. David Carrier's The Aesthetics of Comics (2000) was the first full-length treatment of comics from a philosophical perspective. Prominent American attempts at definitions of comics include Eisner's, McCloud's, and Harvey's. Eisner described what he called "sequential art" as "the arrangement of pictures or images and words to narrate a story or dramatize an idea"; Scott McCloud defined comics as "juxtaposed pictorial and other images in deliberate sequence, intended to convey information and/or to produce an aesthetic response in the viewer", a strictly formal definition which detached comics from its historical and cultural trappings. R. C. Harvey defined comics as "pictorial narratives or expositions in which words (often lettered into the picture area within speech balloons) usually contribute to the meaning of the pictures and vice versa". Each definition has had its detractors. Harvey saw McCloud's definition as excluding single-panel cartoons, and objected to McCloud's de-emphasizing verbal elements, insisting "the essential characteristic of comics is the incorporation of verbal content". Aaron Meskin saw McCloud's theories as an artificial attempt to legitimize the place of comics in art history.
Cross-cultural study of comics is complicated by the great difference in meaning and scope of the words for "comics" in different languages. The French term for comics, bande dessinée ("drawn strip"), emphasizes the juxtaposition of drawn images as a defining factor, which can imply the exclusion of even photographic comics. The term manga is used in Japanese to indicate all forms of comics, cartooning, and caricature.
Terminology
The term comics refers to the comics medium when used as an uncountable noun and thus takes the singular: "comics is a medium" rather than "comics are a medium". When comic appears as a countable noun it refers to instances of the medium, such as individual comic strips or comic books: "Tom's comics are in the basement."
Panels are individual images containing a segment of action, often surrounded by a border. Prime moments in a narrative are broken down into panels via a process called encapsulation. The reader puts the pieces together via the process of closure by using background knowledge and an understanding of panel relations to combine panels mentally into events. The size, shape, and arrangement of panels each affect the timing and pacing of the narrative. The contents of a panel may be asynchronous, with events depicted in the same image not necessarily occurring at the same time.
[Image: A caption (the yellow box) gives the narrator a voice. The characters' dialogue appears in speech balloons. The tail of the balloon indicates the speaker.]
Text is frequently incorporated into comics via speech balloons, captions, and sound effects. Speech balloons indicate dialogue (or thought, in the case of thought balloons), with tails pointing at their respective speakers. Captions can give voice to a narrator, convey characters' dialogue or thoughts, or indicate place or time. Speech balloons themselves are strongly associated with comics, such that the addition of one to an image is sufficient to turn the image into comics. Sound effects mimic non-vocal sounds textually using onomatopoeia sound-words.
Cartooning is most frequently used in making comics, traditionally using ink (especially India ink) with dip pens or ink brushes; mixed media and digital technology have become common. Cartooning techniques such as motion lines and abstract symbols are often employed.
While comics are often the work of a single creator, the labour of making them is frequently divided between a number of specialists. There may be separate writers and artists, and artists may specialize in parts of the artwork such as characters or backgrounds, as is common in Japan. Particularly in American superhero comic books, the art may be divided between a penciller, who lays out the artwork in pencil; an inker, who finishes the artwork in ink; a colourist; and a letterer, who adds the captions and speech balloons.
Etymology
The English term comics derives from the humorous (or "comic") work which predominated in early American newspaper comic strips; usage of the term has become standard for non-humorous works as well. The term "comic book" has a similarly confusing history: comic books are most often not humorous, nor are they regular books, but rather periodicals. It is common in English to refer to the comics of different cultures by the terms used in their original languages, such as manga for Japanese comics, or bande dessinée for French-language Franco-Belgian comics.
Many cultures have taken their words for comics from English, including Russian and German. Similarly, the Chinese term manhua and the Korean manhwa derive from the Chinese characters with which the Japanese term manga is written.
See also
Animation
Billy Ireland Cartoon Library & Museum
Picture book
See also lists
List of comic books
List of comics creators
List of comics publishing companies
List of comic strip syndicates
List of Franco-Belgian comics series
List of newspaper comic strips
Lists of manga
List of manga artists
List of manga magazines
List of manga publishers
List of years in comics
Notes
References
Works cited
Books
Academic journals
Web
Further reading
External links
Academic journals
The Comics Grid: Journal of Comics Scholarship
ImageTexT: Interdisciplinary Comics Studies
Image [&] Narrative
International Journal of Comic Art
Journal of Graphic Novels and Comics
Archives
Billy Ireland Cartoon Library & Museum
Michigan State University Comic Art Collection
Comic Art Collection at the University of Missouri
Cartoon Art Museum of San Francisco
Time Archives' Collection of Comics
Databases
Comic Book Database
Grand Comics Database
Napoleon
[Image: Imperial coat of arms]
Napoleon Bonaparte (French: Napoléon Bonaparte; born Napoleone di Buonaparte; 15 August 1769 – 5 May 1821) was a French military and political leader who rose to prominence during the French Revolution and led several successful campaigns during the French Revolutionary Wars. As Napoleon I, he was Emperor of the French from 1804 until 1814, and again in 1815. Napoleon dominated European and global affairs for more than a decade while leading France against a series of coalitions in the Napoleonic Wars. He won most of these wars and the vast majority of his battles, building a large empire that ruled over continental Europe before its final collapse in 1815. He is regarded as one of the greatest commanders in history, and his wars and campaigns are studied at military schools worldwide. Napoleon's political and cultural legacy has ensured his status as one of the most celebrated and controversial leaders in human history.Roberts, Andrew. Napoleon: A Life. Penguin Group, 2014, Introduction.
He was born in Corsica to a relatively modest family from the minor nobility. When the Revolution broke out in 1789, Napoleon was serving as an artillery officer in the French army. Seizing the new opportunities presented by the Revolution, he rapidly rose through the ranks of the military, becoming a general at age 24. The Directory eventually gave him command of the Army of Italy after he suppressed a revolt against the government from royalist insurgents. At age 26, he began his first military campaign against the Austrians and their Italian allies—winning virtually every battle, conquering the Italian Peninsula in a year, and becoming a national hero. In 1798, he led a military expedition to Egypt that served as a springboard to political power. He engineered a coup in November 1799 and became First Consul of the Republic. His ambition and public approval inspired him to go further, and in 1804 he became the first Emperor of the French. Intractable differences with the British meant that the French were facing a Third Coalition by 1805. Napoleon shattered this coalition with decisive victories in the Ulm Campaign and a historic triumph over Russia and Austria at the Battle of Austerlitz, which led to the elimination of the thousand-year-old Holy Roman Empire. In 1806, the Fourth Coalition took up arms against him because Prussia became worried about growing French influence on the continent. Napoleon quickly defeated Prussia at the battles of Jena and Auerstedt, then marched the Grand Army deep into Eastern Europe and annihilated the Russians in June 1807 at the Battle of Friedland. France then forced the defeated nations of the Fourth Coalition to sign the Treaties of Tilsit in July 1807, bringing an uneasy peace to the continent. Tilsit signified the high watermark of the French Empire. In 1809, the Austrians and the British challenged the French again during the War of the Fifth Coalition, but Napoleon solidified his grip over Europe after triumphing at the Battle of Wagram in July.
Hoping to extend the Continental System and choke off British trade with the European mainland, Napoleon invaded Iberia and declared his brother Joseph the King of Spain in 1808. The Spanish and the Portuguese revolted with British support. The Peninsular War lasted six years, featured extensive guerrilla warfare, and ended in victory for the Allies. The Continental System caused recurring diplomatic conflicts between France and its client states, especially Russia. Unwilling to bear the economic consequences of reduced trade, the Russians routinely violated the Continental System and enticed Napoleon into another war. The French launched a major invasion of Russia in the summer of 1812. The resulting campaign witnessed the collapse of the Grand Army and the destruction of Russian cities, and inspired a renewed push against Napoleon by his enemies. In 1813, Prussia and Austria joined Russian forces in a Sixth Coalition against France. A lengthy military campaign culminated in a large Allied army defeating Napoleon at the Battle of Leipzig in October 1813. The Allies then invaded France and captured Paris in the spring of 1814, forcing Napoleon to abdicate in April. He was exiled to the island of Elba, off the Tuscan coast, and the Bourbons were restored to power. However, Napoleon escaped from Elba in February 1815 and took control of France once again. The Allies responded by forming a Seventh Coalition, which defeated Napoleon at the Battle of Waterloo in June. The British exiled him to the remote island of Saint Helena in the South Atlantic, where he spent the remainder of his years. His death in 1821 at the age of 51 was received with surprise, shock, and grief throughout Europe, leaving behind a memory that still persists.
Napoleon had an extensive and powerful influence on the modern world, bringing liberal reforms to the numerous territories that he conquered and controlled, such as the Low Countries, Switzerland, and large parts of modern Italy and Germany. He implemented fundamental liberal policies in France and throughout Western Europe. His legal achievement, the Napoleonic Code, has influenced the legal systems of more than 70 nations around the world. British historian Andrew Roberts stated, "The ideas that underpin our modern world—meritocracy, equality before the law, property rights, religious toleration, modern secular education, sound finances, and so on—were championed, consolidated, codified and geographically extended by Napoleon. To them he added a rational and efficient local administration, an end to rural banditry, the encouragement of science and the arts, the abolition of feudalism and the greatest codification of laws since the fall of the Roman Empire."Andrew Roberts, Napoleon: A Life (2014), p. xxxiii.
Origins and education
[Image: Napoleon's father Carlo Buonaparte was Corsica's representative to the court of Louis XVI of France.]
Napoleon was born on 15 August 1769, to Carlo Maria di Buonaparte and Maria Letizia Ramolino, in his family's ancestral home Casa Buonaparte in Ajaccio, the capital of the island of Corsica. He was their fourth child and third son. This was a year after the island was transferred to France by the Republic of Genoa. He was christened Napoleone di Buonaparte, probably named after an uncle (an older brother who did not survive infancy was the first of the sons to be called Napoleone). In his 20s, he adopted the more French-sounding Napoléon Bonaparte.
The Corsican Buonapartes were descended from minor Italian nobility of Tuscan origin, who had come to Corsica from Liguria in the 16th century. DNA tests in 2012 found that some of the family's ancestors were from the Caucasus region; the study found haplogroup type E1b1c1*, which originated in Northern Africa circa 1200 BC, whose bearers migrated into the Caucasus and then into Europe.
[Image: The nationalist Corsican leader Pasquale Paoli; portrait by Richard Cosway, 1798]
His father Nobile Carlo Buonaparte was an attorney, and was named Corsica's representative to the court of Louis XVI in 1777. The dominant influence of Napoleon's childhood was his mother, Letizia Ramolino, whose firm discipline restrained a rambunctious child.Cronin 1994, pp. 20–21 Napoleon's maternal grandmother had married into the Swiss Fesch family in her second marriage, and Napoleon's uncle, the cardinal Joseph Fesch, would fulfill a role as protector of the Bonaparte family for some years.
He had an elder brother, Joseph, and younger siblings: Lucien, Elisa, Louis, Pauline, Caroline, and Jérôme. A boy and girl were born before Joseph but died in infancy. Napoleon was baptised as a Catholic.
Napoleon's noble, moderately affluent background afforded him greater opportunities to study than were available to a typical Corsican of the time.Cronin 1994, p.27 In January 1779, he was enrolled at a religious school in Autun. In May, he was admitted to a military academy at Brienne-le-Château. His first language was Corsican, and he always spoke French with a marked Corsican accent and never learned to spell French properly. He was teased by other students for his accent and applied himself to reading. An examiner observed that Napoleon "has always been distinguished for his application in mathematics. He is fairly well acquainted with history and geography... This boy would make an excellent sailor."
On completion of his studies at Brienne in 1784, Napoleon was admitted to the elite École Militaire in Paris. He trained to become an artillery officer and, when his father's death reduced his income, was forced to complete the two-year course in one year. He was the first Corsican to graduate from the École Militaire. He was examined by the famed scientist Pierre-Simon Laplace.
Early career
Image: Napoleon Bonaparte, aged 23, lieutenant-colonel of a battalion of Corsican Republican volunteers
Upon graduating in September 1785, Bonaparte was commissioned a second lieutenant in La Fère artillery regiment. He served in Valence and Auxonne until after the outbreak of the Revolution in 1789, and took nearly two years' leave in Corsica and Paris during this period. At this time, he was a fervent Corsican nationalist, and wrote to Corsican leader Pasquale Paoli in May 1789, "As the nation was perishing I was born. Thirty thousand Frenchmen were vomited on to our shores, drowning the throne of liberty in waves of blood. Such was the odious sight which was the first to strike me."
He spent the early years of the Revolution in Corsica, fighting in a complex three-way struggle among royalists, revolutionaries, and Corsican nationalists. He was a supporter of the republican Jacobin movement, organising clubs in Corsica, and was given command over a battalion of volunteers. He was promoted to captain in the regular army in July 1792, despite exceeding his leave of absence and leading a riot against French troops.
He came into conflict with Paoli, who had decided to split with France and to sabotage the French assault on the Sardinian island of La Maddalena. Because of this break with Paoli, Bonaparte and his family fled to the French mainland in June 1793.Roberts 2001, p.xviii
Siege of Toulon
Image: Bonaparte at the Siege of Toulon
In July 1793, Bonaparte published a pro-republican pamphlet entitled Le souper de Beaucaire (Supper at Beaucaire) which gained him the support of Augustin Robespierre, younger brother of the Revolutionary leader Maximilien Robespierre. With the help of his fellow Corsican Antoine Christophe Saliceti, Bonaparte was appointed artillery commander of the republican forces at the Siege of Toulon.
He adopted a plan to capture a hill where republican guns could dominate the city's harbour and force the British to evacuate. The assault on the position led to the capture of the city, but during it Bonaparte was wounded in the thigh. He was promoted to brigadier general at the age of 24. Catching the attention of the Committee of Public Safety, he was put in charge of the artillery of France's Army of Italy.
Napoleon spent time as inspector of coastal fortifications on the Mediterranean coast near Marseille while he was waiting for confirmation of the Army of Italy post. He devised plans for attacking the Kingdom of Sardinia as part of France's campaign against the First Coalition. Augustin Robespierre and Saliceti were ready to listen to the freshly promoted artillery general.
The French army carried out Bonaparte's plan in the Battle of Saorgio in April 1794, and then advanced to seize Ormea in the mountains. From Ormea, they headed west to outflank the Austro-Sardinian positions around Saorge. After this campaign, Augustin Robespierre sent Bonaparte on a mission to the Republic of Genoa to determine that country's intentions towards France.Patrice Gueniffey, Bonaparte: 1769–1802 (Harvard UP, 2015), pp 137-59.
13 Vendémiaire
Some contemporaries alleged that Bonaparte was put under house arrest at Nice for his association with the Robespierres following their fall in the Thermidorian Reaction in July 1794, but Napoleon's secretary Bourrienne disputed the allegation in his memoirs. According to Bourrienne, the arrest stemmed from jealousy between the Army of the Alps and the Army of Italy, to which Napoleon was seconded at the time.Bourrienne, Memoirs of Napoleon, p.39. Bonaparte dispatched an impassioned defense in a letter to the commissar Saliceti, and he was subsequently acquitted of any wrongdoing.Bourrienne, Memoirs of Napoleon, p.38.
He was released within two weeks and, due to his technical skills, was asked to draw up plans to attack Italian positions in the context of France's war with Austria. He also took part in an expedition to take back Corsica from the British, but the French were repulsed by the British Royal Navy.
By 1795, Bonaparte had become engaged to Désirée Clary, daughter of François Clary. Désirée's sister Julie Clary had married Bonaparte's elder brother Joseph. In April 1795, he was assigned to the Army of the West, which was engaged in the War in the Vendée—a civil war and royalist counter-revolution in Vendée, a region in west central France on the Atlantic Ocean. As an infantry command, it was a demotion from artillery general—for which the army already had a full quota—and he pleaded poor health to avoid the posting.
Image: Journée du 13 Vendémiaire. Artillery fire in front of the Church of Saint-Roch, Paris, Rue Saint-Honoré
He was moved to the Bureau of Topography of the Committee of Public Safety and sought unsuccessfully to be transferred to Constantinople in order to offer his services to the Sultan. During this period, he wrote the romantic novella Clisson et Eugénie, about a soldier and his lover, in a clear parallel to Bonaparte's own relationship with Désirée. On 15 September, Bonaparte was removed from the list of generals in regular service for his refusal to serve in the Vendée campaign. He faced a difficult financial situation and reduced career prospects.
On 3 October, royalists in Paris declared a rebellion against the National Convention. Paul Barras, a leader of the Thermidorian Reaction, knew of Bonaparte's military exploits at Toulon and gave him command of the improvised forces in defence of the Convention in the Tuileries Palace. Napoleon had seen the massacre of the King's Swiss Guard there three years earlier and realised that artillery would be the key to its defence.Roberts 2001, p.xvi
He ordered a young cavalry officer named Joachim Murat to seize large cannons and used them to repel the attackers on 5 October 1795—13 Vendémiaire An IV in the French Republican Calendar. 1,400 royalists died and the rest fled. He had cleared the streets with "a whiff of grapeshot", according to 19th-century historian Thomas Carlyle in The French Revolution: A History.Johnson 2002, p.27
The defeat of the royalist insurrection extinguished the threat to the Convention and earned Bonaparte sudden fame, wealth, and the patronage of the new government, the Directory. Murat married one of Napoleon's sisters and became his brother-in-law; he also served under Napoleon as one of his generals. Bonaparte was promoted to Commander of the Interior and given command of the Army of Italy.
Within weeks, he was romantically attached to Joséphine de Beauharnais, the former mistress of Barras. The couple married on 9 March 1796 in a civil ceremony.Englund (2010) pp 92–94
First Italian campaign
Image: Bonaparte at the Pont d'Arcole, by Baron Antoine-Jean Gros (ca. 1801), Musée du Louvre, Paris
Two days after the marriage, Bonaparte left Paris to take command of the Army of Italy. He immediately went on the offensive, hoping to defeat the forces of Piedmont before their Austrian allies could intervene. In a series of rapid victories during the Montenotte Campaign, he knocked Piedmont out of the war in two weeks. The French then focused on the Austrians for the remainder of the war, the highlight of which became the protracted struggle for Mantua. The Austrians launched a series of offensives against the French to break the siege, but Napoleon defeated every relief effort, scoring victories at the battles of Castiglione, Bassano, Arcole, and Rivoli. The decisive French triumph at Rivoli in January 1797 led to the collapse of the Austrian position in Italy. At Rivoli, the Austrians lost up to 14,000 men while the French lost about 5,000.
The next phase of the campaign featured the French invasion of the Habsburg heartlands. French forces in Southern Germany had been defeated by the Archduke Charles in 1796, but the Archduke withdrew his forces to protect Vienna after learning about Napoleon's assault. In the first encounter between the two commanders, Napoleon pushed back his opponent and advanced deep into Austrian territory after winning at the Battle of Tarvis in March 1797. The Austrians were alarmed by the French thrust that reached all the way to Leoben, about 100 km from Vienna, and finally decided to sue for peace. The Treaty of Leoben, followed by the more comprehensive Treaty of Campo Formio, gave France control of most of northern Italy and the Low Countries, and a secret clause promised the Republic of Venice to Austria. Bonaparte marched on Venice and forced its surrender, ending 1,100 years of independence. He also authorized the French to loot treasures such as the Horses of Saint Mark.
His application of conventional military ideas to real-world situations enabled his military triumphs, such as creative use of artillery as a mobile force to support his infantry. He stated later in life: "I have fought sixty battles and I have learned nothing which I did not know at the beginning. Look at Caesar; he fought the first like the last."
Bonaparte could win battles by concealment of troop deployments and concentration of his forces on the "hinge" of an enemy's weakened front. If he could not use his favourite envelopment strategy, he would take up the central position and attack two co-operating forces at their hinge, swing round to fight one until it fled, then turn to face the other. In this Italian campaign, Bonaparte's army captured 150,000 prisoners, 540 cannons, and 170 standards.Harvey 2006, p.179 The French army fought 67 actions and won 18 pitched battles through superior artillery technology and Bonaparte's tactics.
During the campaign, Bonaparte became increasingly influential in French politics. He founded two newspapers: one for the troops in his army and another for circulation in France. The royalists attacked Bonaparte for looting Italy and warned that he might become a dictator. All told, Napoleon's forces extracted an estimated $45 million in funds from Italy during their campaign there, along with another $12 million in precious metals and jewels; on top of that, his forces confiscated more than three hundred priceless paintings and sculptures. Bonaparte sent General Pierre Augereau to Paris to lead a coup d'état and purge the royalists on 4 September—Coup of 18 Fructidor. This left Barras and his Republican allies in control again but dependent on Bonaparte, who proceeded to peace negotiations with Austria. These negotiations resulted in the Treaty of Campo Formio, and Bonaparte returned to Paris in December as a hero. He met Talleyrand, France's new Foreign Minister—who served in the same capacity for Emperor Napoleon—and they began to prepare for an invasion of Britain.
Egyptian expedition
Image: Bonaparte Before the Sphinx (ca. 1868) by Jean-Léon Gérôme, Hearst Castle
Image: Battle of the Pyramids on 21 July 1798, by Louis-François, Baron Lejeune, 1808
After two months of planning, Bonaparte decided that France's naval power was not yet strong enough to confront the British Royal Navy. He decided on a military expedition to seize Egypt and thereby undermine Britain's access to its trade interests in India. Bonaparte wished to establish a French presence in the Middle East, with the ultimate dream of linking with Tipu Sultan, a Muslim enemy of the British in India.
Napoleon assured the Directory that "as soon as he had conquered Egypt, he will establish relations with the Indian princes and, together with them, attack the English in their possessions."Amini 2000, p.12 The Directory agreed in order to secure a trade route to India.
In May 1798, Bonaparte was elected a member of the French Academy of Sciences. His Egyptian expedition included a group of 167 scientists, with mathematicians, naturalists, chemists, and geodesists among them. Their discoveries included the Rosetta Stone, and their work was published in the Description de l'Égypte in 1809.Englund (2010) pp 127-8
En route to Egypt, Bonaparte reached Malta on 9 June 1798, then controlled by the Knights Hospitaller. Grand Master Ferdinand von Hompesch zu Bolheim surrendered after token resistance, and Bonaparte captured an important naval base with the loss of only three men.
General Bonaparte and his expedition eluded pursuit by the Royal Navy and landed at Alexandria on 1 July. He fought the Battle of Shubra Khit against the Mamluks, Egypt's ruling military caste. This helped the French rehearse the defensive tactics they would use at the Battle of the Pyramids, fought on 21 July some distance from the pyramids themselves. General Bonaparte's forces of 25,000 roughly equalled those of the Mamluks' Egyptian cavalry. Twenty-nine French and approximately 2,000 Egyptians were killed. The victory boosted the morale of the French army.
On 1 August, the British fleet under Horatio Nelson captured or destroyed all but two French vessels in the Battle of the Nile, thwarting Bonaparte's goal of strengthening the French position in the Mediterranean.Roberts 2001, p.xx His army nevertheless managed a temporary increase of French power in Egypt, though it faced repeated uprisings. In early 1799, he moved an army into the Ottoman province of Damascus (Syria and Galilee). Bonaparte led these 13,000 French soldiers in the conquest of the coastal towns of Arish, Gaza, Jaffa, and Haifa. The attack on Jaffa was particularly brutal. Bonaparte discovered that many of the defenders were former prisoners of war, ostensibly on parole, so he ordered the garrison and 1,400 prisoners to be executed by bayonet or drowning to save bullets. Men, women, and children were robbed and murdered for three days.
Bonaparte began with an army of 13,000 men; 1,500 were reported missing, 1,200 died in combat, and thousands perished from disease—mostly bubonic plague. He failed to reduce the fortress of Acre, so he marched his army back to Egypt in May. To speed up the retreat, Bonaparte ordered plague-stricken men to be poisoned with opium; the number who died remains disputed, ranging from a low of 30 to a high of 580. He also brought out 1,000 wounded men.Gueniffey, Bonaparte: 1769–1802 pp 500-2. Back in Egypt on 25 July, Bonaparte defeated an Ottoman amphibious invasion at Abukir.
Ruler of France
Image: General Bonaparte surrounded by members of the Council of Five Hundred during the Coup of 18 Brumaire, by François Bouchot
While in Egypt, Bonaparte stayed informed of European affairs. He learned that France had suffered a series of defeats in the War of the Second Coalition. On 24 August 1799, he took advantage of the temporary departure of British ships from French coastal ports and set sail for France, despite the fact that he had received no explicit orders from Paris. The army was left in the charge of Jean Baptiste Kléber.
Unknown to Bonaparte, the Directory had sent him orders to return to ward off possible invasions of French soil, but poor lines of communication prevented the delivery of these messages.Connelly 2006, p.57 By the time that he reached Paris in October, France's situation had been improved by a series of victories. The Republic, however, was bankrupt and the ineffective Directory was unpopular with the French population. The Directory discussed Bonaparte's "desertion" but was too weak to punish him.
Despite the failures in Egypt, Napoleon returned to a hero's welcome. He drew together an alliance with the director Emmanuel Joseph Sieyès, his brother Lucien (speaker of the Council of Five Hundred), the director Roger Ducos, Joseph Fouché, and Talleyrand, and they overthrew the Directory by a coup d'état on 9 November 1799 ("the 18th Brumaire" according to the revolutionary calendar), closing down the Council of Five Hundred. Napoleon became "First Consul" for ten years, with two consuls appointed by him who had consultative voices only. His power was confirmed by the new "Constitution of the Year VIII", originally devised by Sieyès to give Napoleon a minor role, but rewritten by Napoleon, and accepted by direct popular vote (3,000,000 in favor, 1,567 opposed). The constitution preserved the appearance of a republic but in reality established a dictatorship.François Furet, The French Revolution, 1770–1814 (1996), p. 212; Georges Lefebvre, Napoleon from 18 Brumaire to Tilsit 1799–1807 (1969), pp. 60–68
French Consulate
Image: Bonaparte, First Consul, by Ingres. Posing the hand inside the waistcoat was often used in portraits of rulers to indicate calm and stable leadership.
Image: Napoleon as commander of the Army
Napoleon established a political system that historian Martyn Lyons called "dictatorship by plebiscite." Worried by the democratic forces unleashed by the Revolution, but unwilling to ignore them entirely, Napoleon resorted to regular electoral consultations with the French people on his road to imperial power. He drafted the Constitution of the Year VIII and secured his own election as First Consul, taking up residence at the Tuileries. The constitution was approved in a rigged plebiscite held the following January, with 99.94 percent officially listed as voting "yes."Lefebvre, Napoleon from 18 Brumaire to Tilsit 1799–1807 (1969), pp. 71–92 Napoleon's brother, Lucien, had falsified the returns to show that 3 million people had participated in the plebiscite; the real number was 1.5 million. Political observers at the time assumed the eligible French voting public numbered about 5 million people, so the regime artificially doubled the participation rate to indicate popular enthusiasm for the Consulate. In the first few months of the Consulate, with war in Europe still raging and internal instability still plaguing the country, Napoleon's grip on power remained very tenuous.
In the spring of 1800, Napoleon and his troops crossed the Swiss Alps into Italy, aiming to surprise the Austrian armies that had reoccupied the peninsula when Napoleon was still in Egypt. After a difficult crossing over the Alps, the French army entered the plains of Northern Italy virtually unopposed. While one French army approached from the north, the Austrians were busy with another stationed in Genoa, which was besieged by a substantial force. The fierce resistance of this French army, under André Masséna, gave the northern force some time to carry out their operations with little interference. After spending several days looking for each other, the two armies collided at the Battle of Marengo on 14 June. General Melas held a numerical advantage over the French troops Napoleon commanded. The battle began favorably for the Austrians as their initial attack surprised the French and gradually drove them back. Melas stated that he had won the battle and retired to his headquarters around 3 pm, leaving his subordinates in charge of pursuing the French. The French lines never broke during their tactical retreat; Napoleon constantly rode out among the troops urging them to stand and fight. Late in the afternoon, a full division under Desaix arrived on the field and reversed the tide of the battle. A series of artillery barrages and cavalry charges decimated the Austrian army, which fled over the Bormida River back to Alessandria, leaving behind heavy casualties. The following day, the Austrian army agreed to abandon Northern Italy once more with the Convention of Alessandria, which granted them safe passage to friendly soil in exchange for their fortresses throughout the region.
Although critics have blamed Napoleon for several tactical mistakes preceding the battle, they have also praised his audacity for selecting a risky campaign strategy, choosing to invade the Italian peninsula from the north when the vast majority of French invasions came from the west, near or along the coastline. As Chandler points out, Napoleon spent almost a year getting the Austrians out of Italy in his first campaign; in 1800, it took him only a month to achieve the same goal. German strategist and field marshal Alfred von Schlieffen concluded that "Bonaparte did not annihilate his enemy but eliminated him and rendered him harmless" while "[attaining] the object of the campaign: the conquest of North Italy."
Napoleon's triumph at Marengo secured his political authority and boosted his popularity back home, but it did not lead to an immediate peace. Bonaparte's brother, Joseph, led the complex negotiations in Lunéville and reported that Austria, emboldened by British support, would not acknowledge the new territory that France had acquired. As negotiations became increasingly fractious, Bonaparte gave orders to his general Moreau to strike Austria once more. Moreau and the French swept through Bavaria and scored an overwhelming victory at Hohenlinden in December 1800. As a result, the Austrians capitulated and signed the Treaty of Lunéville in February 1801. The treaty reaffirmed and expanded earlier French gains at Campo Formio. Britain now remained the only nation that was still at war with France.
Temporary peace in Europe
After a decade of constant warfare, France and Britain signed the Treaty of Amiens in March 1802, bringing the Revolutionary Wars to an end. Amiens called for the withdrawal of British troops from recently conquered colonial territories as well as for assurances to curtail the expansionary goals of the French Republic. With Europe at peace and the economy recovering, Napoleon's popularity soared to its highest levels under the Consulate, both domestically and abroad. In a new plebiscite during the spring of 1802, the French public came out in huge numbers to approve a constitution that made the Consulate permanent, essentially elevating Napoleon to dictator for life. Whereas the plebiscite two years earlier had brought out 1.5 million people to the polls, the new referendum enticed 3.6 million to go and vote (72% of all eligible voters). There was no secret ballot in 1802 and few people wanted to openly defy the regime; the constitution gained approval with over 99% of the vote. His broad powers were spelled out in the new constitution: Article 1. The French people name, and the Senate proclaims Napoleon-Bonaparte First Consul for Life.Edwards 1999, p.55 After 1802, he was generally referred to as Napoleon rather than Bonaparte.
The brief peace in Europe allowed Napoleon to focus on the French colonies abroad. Saint-Domingue had managed to acquire a high level of political autonomy during the Revolutionary Wars, with Toussaint Louverture installing himself as de facto dictator by 1801. Napoleon saw his chance to recover the formerly wealthy colony when he signed the Treaty of Amiens. During the Revolution, the National Convention voted to abolish slavery in February 1794. Under the terms of Amiens, however, Napoleon agreed to appease British demands by not abolishing slavery in any colonies where the 1794 decree had never been implemented. The resulting Law of 20 May never applied to colonies like Guadeloupe or Guyane, even though rogue generals and other officials used the pretext of peace as an opportunity to reinstate slavery in some of these places. The Law of 20 May officially restored the slave trade to the Caribbean colonies, not slavery itself.Roberts, Andrew. Napoleon: A Life. Penguin Group, 2014, p. 301 Napoleon sent an expedition under General Leclerc designed to reassert control over Saint-Domingue. Although the French managed to capture Toussaint Louverture, the expedition failed when high rates of disease crippled the French army. In May 1803, the last 8,000 French troops left the island and the slaves proclaimed an independent republic that they called Haïti in 1804.Roberts, Andrew. Napoleon: A Life. Penguin Group, 2014, p. 303 Seeing the failure of his colonial efforts, Napoleon decided in 1803 to sell the Louisiana Territory to the United States, instantly doubling the size of the U.S. The selling price in the Louisiana Purchase was less than three cents per acre, a total of $15 million.Connelly 2006, p.70
The peace with Britain proved to be uneasy and controversial.For an advanced diplomatic history of the era, see Paul W. Schroeder, The Transformation of European Politics 1763–1848 (Oxford U.P. 1996) pp 177–560 Britain did not evacuate Malta as promised and protested against Bonaparte's annexation of Piedmont and his Act of Mediation, which established a new Swiss Confederation. Neither of these territories were covered by Amiens, but they inflamed tensions significantly. The dispute culminated in a declaration of war by Britain in May 1803; Napoleon responded by reassembling the invasion camp at Boulogne.
French Empire
Image: The Coronation of Napoleon by Jacques-Louis David, 1804
Image: Bust of Napoleon I, 1807–1809. Marble, from Carrara, Italy. After Antoine-Denis Chaudet. The Victoria and Albert Museum, London
During the Consulate, Napoleon faced several royalist and Jacobin assassination plots, including the Conspiration des poignards (Dagger plot) in October 1800 and the Plot of the Rue Saint-Nicaise (also known as the Infernal Machine) two months later. In January 1804, his police uncovered an assassination plot against him that involved Moreau and which was ostensibly sponsored by the Bourbon family, the former rulers of France. On the advice of Talleyrand, Napoleon ordered the kidnapping of the Duke of Enghien, violating the sovereignty of Baden. The Duke was quickly executed after a secret military trial, even though he had not been involved in the plot. Enghien's execution infuriated royal courts throughout Europe, becoming one of the contributing political factors for the outbreak of the Napoleonic Wars.
To expand his power, Napoleon used these assassination plots to justify the creation of an imperial system based on the Roman model. He believed that a Bourbon restoration would be more difficult if his family's succession was entrenched in the constitution. Launching yet another referendum, Napoleon was elected as Emperor of the French by a tally exceeding 99%. As with the Life Consulate two years earlier, this referendum produced heavy participation, bringing out almost 3.6 million voters to the polls.
Napoleon's coronation took place on 2 December 1804. Two separate crowns were brought for the ceremony: a golden laurel wreath recalling the Roman Empire and a replica of Charlemagne's crown.Roberts, Andrew. Napoleon: A Life. Penguin Group, 2014, p. 355. Napoleon entered the ceremony wearing the laurel wreath and kept it on his head throughout the proceedings. For the official coronation, he raised the Charlemagne crown over his own head in a symbolic gesture, but never placed it on top because he was already wearing the golden wreath. Instead he placed the crown on Josephine's head, the event commemorated in the officially sanctioned painting by Jacques-Louis David. Napoleon was also crowned King of Italy, with the Iron Crown of Lombardy, at the Cathedral of Milan on 26 May 1805. He created eighteen Marshals of the Empire from amongst his top generals to secure the allegiance of the army.
War of the Third Coalition
Image: Napoleon and the Grande Armée receive the surrender of Austrian General Mack after the Battle of Ulm in October 1805. With the Austrian army destroyed, Vienna would fall to the French in November.
Great Britain had broken the Peace of Amiens by declaring war on France in May 1803.Paul W. Schroeder, The Transformation of European Politics 1763–1848 (1996) pp 231-86 In December 1804, an Anglo-Swedish agreement became the first step towards the creation of the Third Coalition. By April 1805, Britain had also signed an alliance with Russia. Meanwhile, French territorial rearrangements in Germany occurred without Russian consultation and Napoleon's annexations in the Po valley increasingly strained relations between the two. Austria had been defeated by France twice in recent memory and wanted revenge, so it joined the coalition a few months later.
Before the formation of the Third Coalition, Napoleon had assembled an invasion force, the Armée d'Angleterre, around six camps at Boulogne in Northern France. He intended to use this invasion force to strike at England. They never invaded, but Napoleon's troops received careful and invaluable training for future military operations. The men at Boulogne formed the core for what Napoleon later called La Grande Armée. At the start, this French army was organized into seven corps, which were large field units that contained 36 to 40 cannons each and were capable of independent action until other corps could come to the rescue. A single corps properly situated in a strong defensive position could survive at least a day without support, giving the Grande Armée countless strategic and tactical options on every campaign. On top of these forces, Napoleon created a cavalry reserve organized into two cuirassier divisions, four mounted dragoon divisions, one division of dismounted dragoons, and one of light cavalry, all supported by 24 artillery pieces. By 1805, the Grande Armée had grown into a large force of men who were well equipped, well trained, and led by competent officers.Michael J. Hughes, Forging Napoleon's Grande Armée: Motivation, Military Culture, and Masculinity in the French Army, 1800-1808 (NYU Press, 2012).
Napoleon knew that the French fleet could not defeat the Royal Navy in a head-to-head battle, so he planned to lure it away from the English Channel through diversionary tactics. The main strategic idea involved the French Navy escaping from the British blockades of Toulon and Brest and threatening to attack the West Indies. In the face of this attack, it was hoped, the British would weaken their defense of the Western Approaches by sending ships to the Caribbean, allowing a combined Franco-Spanish fleet to take control of the channel long enough for French armies to cross and invade. However, the plan unraveled after the British victory at the Battle of Cape Finisterre in July 1805. French Admiral Villeneuve then retreated to Cádiz instead of linking up with French naval forces at Brest for an attack on the English Channel.
By August 1805, Napoleon had realized that the strategic situation had changed fundamentally. Facing a potential invasion from his continental enemies, he decided to strike first and turned his army's sights from the English Channel to the Rhine. His basic objective was to destroy the isolated Austrian armies in Southern Germany before their Russian allies could arrive. On 25 September, after great secrecy and feverish marching, French troops began to cross the Rhine on a broad front.Richard Brooks (editor), Atlas of World Military History. p. 108; Andrew Uffindell, Great Generals of the Napoleonic Wars. p. 15 Austrian commander Karl Mack had gathered the greater part of the Austrian army at the fortress of Ulm in Swabia. Napoleon swung his forces to the southeast and the Grande Armée performed an elaborate wheeling movement that outflanked the Austrian positions. The Ulm Maneuver completely surprised General Mack, who belatedly understood that his army had been cut off. After some minor engagements that culminated in the Battle of Ulm, Mack finally surrendered after realizing that there was no way to break out of the French encirclement. For comparatively few French casualties, Napoleon had managed to capture a great many Austrian soldiers through his army's rapid marching.Richard Brooks (editor), Atlas of World Military History. p. 156. The Ulm Campaign is generally regarded as a strategic masterpiece and was influential in the development of the Schlieffen Plan in the late 19th century.Richard Brooks (editor), Atlas of World Military History. p. 156. "It is a historical cliché to compare the Schlieffen Plan with Hannibal's tactical envelopment at Cannae (216 BC); Schlieffen owed more to Napoleon's strategic maneuver on Ulm (1805)". For the French, this spectacular victory on land was soured by the decisive victory that the Royal Navy attained at the Battle of Trafalgar on 21 October. After Trafalgar, Britain had total domination of the seas for the duration of the Napoleonic Wars.
Image: Napoleon at the Battle of Austerlitz, by François Gérard, 1805. The Battle of Austerlitz, also known as the Battle of the Three Emperors, was one of Napoleon's many victories, in which the French Empire defeated the Third Coalition.
Following the Ulm Campaign, French forces managed to capture Vienna in November. The fall of Vienna provided the French a huge bounty as they captured a large store of muskets, 500 cannons, and the intact bridges across the Danube.David G. Chandler, The Campaigns of Napoleon. p. 407 At this critical juncture, both Tsar Alexander I and Holy Roman Emperor Francis II decided to engage Napoleon in battle, despite reservations from some of their subordinates. Napoleon sent his army north in pursuit of the Allies, but then ordered his forces to retreat so that he could feign a grave weakness. Desperate to lure the Allies into battle, Napoleon gave every indication in the days preceding the engagement that the French army was in a pitiful state, even abandoning the dominant Pratzen Heights near the village of Austerlitz. At the Battle of Austerlitz, in Moravia on 2 December, he deployed the French army below the Pratzen Heights and deliberately weakened his right flank, enticing the Allies to launch a major assault there in the hopes of rolling up the whole French line. A forced march from Vienna by Marshal Davout and his III Corps plugged the gap left by Napoleon just in time. Meanwhile, the heavy Allied deployment against the French right weakened their center on the Pratzen Heights, which was viciously attacked by the IV Corps of Marshal Soult. With the Allied center demolished, the French swept through both enemy flanks and sent the Allies fleeing chaotically, capturing thousands of prisoners in the process. The battle is often seen as a tactical masterpiece because of the near-perfect execution of a calibrated but dangerous plan, of the same stature as Cannae, the celebrated triumph by Hannibal some 2,000 years before.
The Allied disaster at Austerlitz significantly shook the faith of Emperor Francis in the British-led war effort. France and Austria agreed to an armistice immediately and the Treaty of Pressburg followed shortly after on 26 December. Pressburg took Austria out of both the war and the Coalition while reinforcing the earlier treaties of Campo Formio and of Lunéville between the two powers. The treaty confirmed the Austrian loss of lands to France in Italy and Bavaria, and lands in Germany to Napoleon's German allies. It also imposed an indemnity of 40 million francs on the defeated Habsburgs and allowed the fleeing Russian troops free passage through hostile territories and back to their home soil. Napoleon went on to say, "The battle of Austerlitz is the finest of all I have fought."Schom 1997, p.414 Frank McLynn suggests that Napoleon was so successful at Austerlitz that he lost touch with reality, and what used to be French foreign policy became a "personal Napoleonic one". Vincent Cronin disagrees, stating that Napoleon was not overly ambitious for himself, "he embodied the ambitions of thirty million Frenchmen".Cronin 1994, p.344
Middle-Eastern alliances
Image: The Iranian envoy Mirza Mohammed Reza-Qazvini meeting with Napoleon I at the Finckenstein Palace, 27 April 1807, to sign the Treaty of Finckenstein
Napoleon continued to entertain a grand scheme to establish a French presence in the Middle East in order to put pressure on Britain and Russia, and perhaps form an alliance with the Ottoman Empire.Watson 2003, pp.13–14 In February 1806, Ottoman Emperor Selim III finally recognized Napoleon as Emperor. He also opted for an alliance with France, calling France "our sincere and natural ally."Karsh 2001, p.12 That decision brought the Ottoman Empire into a losing war against Russia and Britain. A Franco-Persian alliance was also formed between Napoleon and the Persian Empire of Fat′h-Ali Shah Qajar. It collapsed in 1807, when France and Russia themselves formed an unexpected alliance. In the end, Napoleon had made no effective alliances in the Middle East.
War of the Fourth Coalition and Tilsit
After Austerlitz, Napoleon established the Confederation of the Rhine in 1806. A collection of German states intended to serve as a buffer zone between France and Central Europe, the creation of the Confederation spelled the end of the Holy Roman Empire and significantly alarmed the Prussians. The brazen reorganization of German territory by the French risked threatening Prussian influence in the region, if not eliminating it outright. War fever in Berlin rose steadily throughout the summer of 1806. At the insistence of his court, especially his wife Queen Louise, Frederick William III decided to challenge the French domination of Central Europe by going to war.
Image: Napoleon reviews the Imperial Guard before the Battle of Jena
The initial military maneuvers began in September 1806. In a letter to Marshal Soult detailing the plan for the campaign, Napoleon described the essential features of Napoleonic warfare and introduced the phrase le bataillon-carré ("square battalion").Chandler 1966, p. 467–68 In the bataillon-carré system, the various corps of the Grande Armée would march uniformly together in close supporting distance. If any single corps was attacked, the others could quickly spring into action and arrive to help. Napoleon invaded Prussia with 180,000 troops, rapidly marching on the right bank of the River Saale. As in previous campaigns, his fundamental objective was to destroy one opponent before reinforcements from another could tip the balance of the war. Upon learning the whereabouts of the Prussian army, the French swung westwards and crossed the Saale with overwhelming force. At the twin battles of Jena and Auerstedt, fought on 14 October, the French convincingly defeated the Prussians and inflicted heavy casualties. With several major commanders dead or incapacitated, the Prussian king proved incapable of effectively commanding the army, which began to quickly disintegrate. In a vaunted pursuit that epitomized the "peak of Napoleonic warfare," according to historian Richard Brooks,Brooks 2000, p. 110 the French managed to capture 140,000 soldiers, over 2,000 cannons and hundreds of ammunition wagons, all in a single month. Historian David Chandler wrote of the Prussian forces: "Never has the morale of any army been more completely shattered." Despite their overwhelming defeat, the Prussians refused to negotiate with the French until the Russians had an opportunity to enter the fight.
Image: The Treaties of Tilsit: Napoleon meeting with Alexander I of Russia on a raft in the middle of the Neman River
Following his triumph, Napoleon imposed the first elements of the Continental System through the Berlin Decree issued in November 1806. The Continental System, which prohibited European nations from trading with Britain, was widely violated throughout his reign.Jacques Godechot et al. Napoleonic Era in Europe (1971) pp 126–39 In the next few months, Napoleon marched against the advancing Russian armies through Poland and was involved in the bloody stalemate at the Battle of Eylau in February 1807. After a period of rest and consolidation on both sides, the war restarted in June with an initial struggle at Heilsberg that proved indecisive. On 14 June, however, Napoleon finally obtained an overwhelming victory over the Russians at the Battle of Friedland, wiping out the majority of the Russian army in a very bloody struggle. The scale of their defeat convinced the Russians to make peace with the French. On 19 June, Czar Alexander sent an envoy to seek an armistice with Napoleon. The latter assured the envoy that the Vistula River represented the natural borders between French and Russian influence in Europe. On that basis, the two emperors began peace negotiations at the town of Tilsit after meeting on an iconic raft on the River Niemen. The very first thing Alexander said to Napoleon was probably well-calibrated: "I hate the English as much as you do."
Alexander faced pressure from his brother, Duke Constantine, to make peace with Napoleon. Given the victory he had just achieved, the French emperor offered the Russians relatively lenient terms–demanding that Russia join the Continental System, withdraw its forces from Wallachia and Moldavia, and hand over the Ionian Islands to France. By contrast, Napoleon dictated very harsh peace terms for Prussia, despite the ceaseless exhortations of Queen Louise. Wiping out half of Prussian territories from the map, Napoleon created a new kingdom of 1,100 square miles called Westphalia. He then appointed his young brother Jérôme as the new monarch of this kingdom. Prussia's humiliating treatment at Tilsit caused a deep and bitter antagonism which festered as the Napoleonic era progressed. Moreover, Alexander's pretensions at friendship with Napoleon led the latter to seriously misjudge the true intentions of his Russian counterpart, who would violate numerous provisions of the treaty in the next few years. Despite these problems, the Treaties of Tilsit at last gave Napoleon a respite from war and allowed him to return to France, which he had not seen in over 300 days.
Peninsular War and Erfurt
The settlements at Tilsit gave Napoleon time to organize his empire. One of his major objectives became enforcing the Continental System against the British. He decided to focus his attention on the Kingdom of Portugal, which consistently violated his trade prohibitions. After defeat in the War of the Oranges in 1801, Portugal adopted a double-sided policy. At first, John VI agreed to close his ports to British trade. The situation changed dramatically after the Franco-Spanish defeat at Trafalgar; John grew bolder and officially resumed diplomatic and trade relations with Britain.
Image: Joseph Bonaparte, Napoleon's brother, as King of Spain
Unhappy with this change of policy by the Portuguese government, Napoleon sent an army to invade Portugal. On 17 October 1807, 24,000 French troops under General Junot crossed the Pyrenees with Spanish cooperation and headed towards Portugal to enforce Napoleon's orders.Todd Fisher & Gregory Fremont-Barnes, The Napoleonic Wars: The Rise and Fall of an Empire. p. 197. This attack was the first step in what would eventually become the Peninsular War, a six-year struggle that significantly sapped French strength. Throughout the winter of 1808, French agents became increasingly involved in Spanish internal affairs, attempting to incite discord between members of the Spanish royal family. On 16 February 1808, secret French machinations finally materialized when Napoleon announced that he would intervene to mediate between the rival political factions in the country.Fisher & Fremont-Barnes pp. 198–99. Marshal Murat led 120,000 troops into Spain and the French arrived in Madrid on 24 March,Fisher & Fremont-Barnes p. 199. where wild riots against the occupation erupted just a few weeks later. Napoleon appointed his brother, Joseph Bonaparte, as the new King of Spain in the summer of 1808. The appointment enraged a heavily religious and conservative Spanish population. Resistance to French aggression soon spread throughout the country. The shocking French defeat at the Battle of Bailén in July gave hope to Napoleon's enemies and partly persuaded the French emperor to intervene in person.
Before going to Iberia, Napoleon decided to address several lingering issues with the Russians. At the Congress of Erfurt in October 1808, Napoleon hoped to keep Russia on his side during the upcoming struggle in Spain and during any potential conflict against Austria. The two sides reached an agreement, the Erfurt Convention, that called upon Britain to cease its war against France, that recognized the Russian conquest of Finland from Sweden, and that affirmed Russian support for France in a possible war against Austria "to the best of its ability." Napoleon then returned to France and prepared for war. The Grande Armée, under the Emperor's personal command, rapidly crossed the Ebro River in November 1808 and inflicted a series of crushing defeats against the Spanish forces. After clearing the last Spanish force guarding the capital at Somosierra, Napoleon entered Madrid on 4 December with 80,000 troops.Fisher & Fremont-Barnes p. 205. He then unleashed his soldiers against Sir John Moore and the British forces. The British were swiftly driven to the coast, and they withdrew from Spain entirely after a last stand at the Battle of Corunna in January 1809.
Napoleon would end up leaving Iberia in order to deal with the Austrians in Central Europe, but the Peninsular War continued on long after his absence. He never returned to Spain after the 1808 campaign. Several months after Corunna, the British sent another army to the peninsula under the future Duke of Wellington. The war then settled into a complex and asymmetric strategic deadlock where all sides struggled to gain the upper hand. The highlight of the conflict became the brutal guerrilla warfare that engulfed much of the Spanish countryside. Both sides committed the worst atrocities of the Napoleonic Wars during this phase of the conflict. The vicious guerrilla fighting in Spain, largely absent from the French campaigns in Central Europe, severely disrupted the French lines of supply and communication. Although France maintained roughly 300,000 troops in Iberia during the Peninsular War, the vast majority were tied down to garrison duty and to intelligence operations. The French were never able to concentrate all of their forces effectively, prolonging the war until events elsewhere in Europe finally turned the tide in favor of the Allies. After the invasion of Russia in 1812, the number of French troops in Spain vastly declined as Napoleon needed reinforcements to conserve his strategic position in Europe. By 1814, after scores of battles and sieges throughout Iberia, the Allies had managed to push the French out of the peninsula.
War of the Fifth Coalition and Marie Louise
Image: Napoleon at the Battle of Wagram, painted by Horace Vernet
After four years on the sidelines, Austria sought another war with France to avenge its recent defeats. Austria could not count on Russian support because the latter was at war with Britain, Sweden, and the Ottoman Empire in 1809. Frederick William of Prussia initially promised to help the Austrians, but reneged before conflict began.Fisher & Fremont-Barnes, p. 106. A report from the Austrian finance minister suggested that the treasury would run out of money by the middle of 1809 if the large army that the Austrians had formed since the Third Coalition remained mobilized. Although Archduke Charles warned that the Austrians were not ready for another showdown with Napoleon, a stance that landed him in the so-called "peace party," he did not want to see the army demobilized either. On 8 February 1809, the advocates for war finally succeeded when the Imperial Government secretly decided on another confrontation against the French.
In the early morning of 10 April, leading elements of the Austrian army crossed the Inn River and invaded Bavaria. The early Austrian attack surprised the French; Napoleon himself was still in Paris when he heard about the invasion. He arrived at Donauwörth on the 17th to find the Grande Armée in a dangerous position, with its two wings widely separated and joined together only by a thin cordon of Bavarian troops. Charles pressed the left wing of the French army and hurled his men towards the III Corps of Marshal Davout. In response, Napoleon came up with a plan to cut off the Austrians in the celebrated Landshut Maneuver. He realigned the axis of his army and marched his soldiers towards the town of Eckmühl. The French scored a convincing win in the resulting Battle of Eckmühl, forcing Charles to withdraw his forces over the Danube and into Bohemia. On 13 May, Vienna fell for the second time in four years, although the war continued since most of the Austrian army had survived the initial engagements in Southern Germany.
By 17 May, the main Austrian army under Charles had arrived on the Marchfeld. Charles kept the bulk of his troops several miles away from the river bank in hopes of concentrating them at the point where Napoleon decided to cross. On 21 May, the French made their first major effort to cross the Danube, precipitating the Battle of Aspern-Essling. The Austrians enjoyed a comfortable numerical superiority over the French throughout the battle; on the first day, Charles fielded far more soldiers than Napoleon, and although reinforcements boosted French numbers by the second day, the Austrians still held the advantage. The battle was characterized by a vicious back-and-forth struggle for the two villages of Aspern and Essling, the focal points of the French bridgehead. By the end of the fighting, the French had lost Aspern but still controlled Essling. A sustained Austrian artillery bombardment eventually convinced Napoleon to withdraw his forces back onto Lobau Island. Both sides inflicted heavy casualties on each other. It was the first defeat Napoleon suffered in a major set-piece battle, and it caused excitement throughout many parts of Europe because it proved that he could be beaten on the battlefield.
After the setback at Aspern-Essling, Napoleon took more than six weeks in planning and preparing for contingencies before he made another attempt at crossing the Danube.David G. Chandler, The Campaigns of Napoleon. p. 708. From 30 June to the early days of July, the French recrossed the Danube in strength, marching across the Marchfeld towards the Austrians. Charles received the French with a large army of his own.David G. Chandler, The Campaigns of Napoleon. p. 720. In the ensuing Battle of Wagram, which also lasted two days, Napoleon commanded his forces in what was the largest battle of his career up until then. Napoleon finished off the battle with a concentrated central thrust that punctured a hole in the Austrian army and forced Charles to retreat. Austrian losses were very heavy.David G. Chandler, The Campaigns of Napoleon. p. 729. The French were too exhausted to pursue the Austrians immediately, but Napoleon eventually caught up with Charles at Znaim and the latter signed an armistice on 12 July.
Image: First French Empire at its greatest extent in 1811
In the Kingdom of Holland, the British launched the Walcheren Campaign to open up a second front in the war and to relieve the pressure on the Austrians. The British army only landed at Walcheren on 30 July, by which point the Austrians had already been defeated. The Walcheren Campaign was characterized by little fighting but heavy casualties owing to the sickness popularly dubbed "Walcheren Fever." Over 4,000 British troops were lost in a bungled campaign, and the rest withdrew in December 1809. The main strategic result from the campaign became the delayed political settlement between the French and the Austrians. Emperor Francis wanted to wait and see how the British performed in their theater before entering into negotiations with Napoleon. Once it became apparent that the British were going nowhere, the Austrians agreed to peace talks.
The resulting Treaty of Schönbrunn in October 1809 was the harshest that France had imposed on Austria in recent memory. Metternich and Archduke Charles had the preservation of the Habsburg Empire as their fundamental goal, and to this end they succeeded by making Napoleon seek more modest goals in return for promises of friendship between the two powers.Todd Fisher & Gregory Fremont-Barnes, The Napoleonic Wars: The Rise and Fall of an Empire. p. 144. Nevertheless, while most of the hereditary lands remained a part of the Habsburg realm, France received Carinthia, Carniola, and the Adriatic ports, while Galicia was given to the Poles and the Salzburg area of the Tyrol went to the Bavarians. Austria lost over three million subjects, about one-fifth of her total population, as a result of these territorial changes.David G. Chandler, The Campaigns of Napoleon. p. 732. Although fighting in Iberia continued, the War of the Fifth Coalition would be the last major conflict on the European continent for the next three years.
Napoleon turned his focus to domestic affairs after the war. Empress Joséphine had still not borne Napoleon a child, and he grew worried about the future of his empire following his death. Desperate for a legitimate heir, Napoleon divorced Joséphine in January 1810 and started looking for a new wife. Hoping to cement the recent alliance with Austria through a family connection, Napoleon married the Archduchess Marie Louise, who was 18 years old at the time. On 20 March 1811, Marie Louise gave birth to a baby boy, whom Napoleon made heir apparent and on whom he bestowed the title of King of Rome. His son never actually ruled the empire, but historians still refer to him as Napoleon II.
Invasion of Russia
Image: The Moscow fire, depicted by an unknown German artist
In 1808, Napoleon and Czar Alexander met at the Congress of Erfurt to preserve the Russo-French alliance. The leaders had a friendly personal relationship after their first meeting at Tilsit in 1807. By 1811, however, tensions had increased and Alexander was under pressure from the Russian nobility to break off the alliance. A major strain on the relationship between the two nations became the regular violations of the Continental System by the Russians, which led Napoleon to threaten Alexander with serious consequences if he formed an alliance with Britain.
By 1812, advisers to Alexander suggested the possibility of an invasion of the French Empire and the recapture of Poland. On receipt of intelligence reports on Russia's war preparations, Napoleon expanded his Grande Armée to more than 450,000 men. He ignored repeated advice against an invasion of the Russian heartland and prepared for an offensive campaign; on 24 June 1812 the invasion commenced.
Image: Napoleon's withdrawal from Russia, a painting by Adolph Northen
In an attempt to gain increased support from Polish nationalists and patriots, Napoleon termed the war the Second Polish War—the First Polish War had been the Bar Confederation uprising by Polish nobles against Russia in 1768. Polish patriots wanted the Russian part of Poland to be joined with the Duchy of Warsaw and an independent Poland created. This was rejected by Napoleon, who stated he had promised his ally Austria this would not happen. Napoleon refused to manumit the Russian serfs because of concerns this might provoke a reaction in his army's rear. The serfs later committed atrocities against French soldiers during France's retreat.
The Russians avoided Napoleon's objective of a decisive engagement and instead retreated deeper into Russia. A brief attempt at resistance was made at Smolensk in August; the Russians were defeated in a series of battles, and Napoleon resumed his advance. The Russians again avoided battle, although in a few cases this was only achieved because Napoleon uncharacteristically hesitated to attack when the opportunity arose. Owing to the Russian army's scorched earth tactics, the French found it increasingly difficult to forage food for themselves and their horses.Harvey 2006, p.773
The Russians eventually offered battle outside Moscow on 7 September: the Battle of Borodino resulted in approximately 44,000 Russian and 35,000 French dead, wounded or captured, and may have been the bloodiest day of battle in history up to that point in time. Although the French had won, the Russian army had accepted, and withstood, the major battle Napoleon had hoped would be decisive. Napoleon's own account was: "The most terrible of all my battles was the one before Moscow. The French showed themselves to be worthy of victory, but the Russians showed themselves worthy of being invincible."Markham 1988, p.194
The Russian army withdrew and retreated past Moscow. Napoleon entered the city, assuming its fall would end the war and Alexander would negotiate peace. However, on the orders of the city's governor Feodor Rostopchin, Moscow was burned rather than surrendered. After five weeks, Napoleon and his army left. In early November Napoleon grew concerned about losing control back in France after the Malet coup of 1812. His army walked through snow up to their knees, and nearly 10,000 men and horses froze to death on the night of 8/9 November alone. After the Battle of Berezina Napoleon managed to escape but had to abandon much of the remaining artillery and baggage train. On 5 December, shortly before arriving in Vilnius, Napoleon left the army in a sledge.
The French suffered in the course of a ruinous retreat, including from the harshness of the Russian winter. The Grande Armée had begun the campaign with over 400,000 frontline troops, but fewer than 40,000 crossed the Berezina River in November 1812.Markham 1988, pp.190, 199 The Russians had lost 150,000 soldiers in battle and hundreds of thousands of civilians.
War of the Sixth Coalition
thumb|Napoleon's farewell to his Imperial Guard, 20 April 1814
There was a lull in fighting over the winter of 1812–13 while both the Russians and the French rebuilt their forces; Napoleon was able to field 350,000 troops. Heartened by France's loss in Russia, Prussia joined with Austria, Sweden, Russia, Great Britain, Spain, and Portugal in a new coalition. Napoleon assumed command in Germany and inflicted a series of defeats on the Coalition culminating in the Battle of Dresden in August 1813.
Despite these successes, the numbers continued to mount against Napoleon, and the French army was pinned down by a force twice its size and lost at the Battle of Leipzig. This was by far the largest battle of the Napoleonic Wars and cost more than 90,000 casualties in total.Chandler 1995, p.1020
The Allies offered peace terms in the Frankfurt proposals in November 1813. Napoleon would remain as Emperor of France, but it would be reduced to its "natural frontiers." That meant that France could retain control of Belgium, Savoy and the Rhineland (the west bank of the Rhine River), while giving up control of all the rest, including all of Spain and the Netherlands, and most of Italy and Germany. Metternich told Napoleon these were the best terms the Allies were likely to offer; after further victories, the terms they offered would only become harsher. Metternich's motivation was to maintain France as a balance against Russian threats, while ending the highly destabilizing series of wars.
Napoleon, expecting to win the war, delayed too long and lost this opportunity; by December the Allies had withdrawn the offer. When his back was to the wall in 1814 he tried to reopen peace negotiations on the basis of accepting the Frankfurt proposals. The Allies now presented new, harsher terms that included the retreat of France to its 1791 boundaries, which meant the loss of Belgium; Napoleon would have remained Emperor, but he adamantly refused these terms. The British, who wanted Napoleon permanently removed, prevailed.
Napoleon withdrew back into France, his army reduced to 70,000 soldiers with little cavalry; he faced more than three times as many Allied troops.Fremont-Barnes 2004, p.14 The French were surrounded: British armies pressed from the south, and other Coalition forces positioned themselves to attack from the German states. Napoleon won a series of victories in the Six Days' Campaign, though these were not significant enough to turn the tide. The leaders of Paris surrendered to the Coalition in March 1814.
On 1 April, Alexander addressed the Sénat conservateur. Long docile to Napoleon, under Talleyrand's prodding it had turned against him. Alexander told the Sénat that the Allies were fighting against Napoleon, not France, and they were prepared to offer honorable peace terms if Napoleon were removed from power. The next day, the Sénat passed the Acte de déchéance de l'Empereur ("Emperor's Demise Act"), which declared Napoleon deposed. Napoleon had advanced as far as Fontainebleau when he learned that Paris was lost. When Napoleon proposed the army march on the capital, his senior officers and marshals mutinied. On 4 April, led by Ney, they confronted Napoleon. Napoleon asserted the army would follow him, and Ney replied the army would follow its generals. While the ordinary soldiers and regimental officers wanted to fight on, without any senior officers or marshals any prospective invasion of Paris would have been impossible. Bowing to the inevitable, on 4 April Napoleon abdicated in favour of his son, with Marie-Louise as regent. However, the Allies refused to accept this under prodding from Alexander, who feared that Napoleon might find an excuse to retake the throne. Napoleon was then forced to announce his unconditional abdication only two days later.
Exile to Elba
thumb|alt=Cartoon of Napoleon sitting back to front on a donkey with a broken sword and two soldiers in the background drumming|British etching from 1814 in celebration of Napoleon's first exile to Elba at the close of the War of the Sixth Coalition
In the Treaty of Fontainebleau, the Allies exiled him to Elba, an island of 12,000 inhabitants in the Mediterranean, off the Tuscan coast. They gave him sovereignty over the island and allowed him to retain the title of Emperor. Napoleon attempted suicide with a pill he had carried after nearly being captured by the Russians during the retreat from Moscow. Its potency had weakened with age, however, and he survived to be exiled while his wife and son took refuge in Austria. In the first few months on Elba he created a small navy and army, developed the iron mines, oversaw the construction of new roads, issued decrees on modern agricultural methods, and overhauled the island's legal and educational system.
A few months into his exile, Napoleon learned that his ex-wife Josephine had died in France. He was devastated by the news, locking himself in his room and refusing to leave for two days.
Hundred Days
thumb|Napoleon returned from Elba, by Karl Stenben, 19th century
Separated from his wife and son, who had returned to Austria, cut off from the allowance guaranteed to him by the Treaty of Fontainebleau, and aware of rumours he was about to be banished to a remote island in the Atlantic Ocean, Napoleon escaped from Elba in the brig Inconstant on 26 February 1815 with 700 men. Two days later, he landed on the French mainland at Golfe-Juan and started heading north.
The 5th Regiment was sent to intercept him and made contact just south of Grenoble on 7 March 1815. Napoleon approached the regiment alone, dismounted his horse and, when he was within gunshot range, shouted to the soldiers, "Here I am. Kill your Emperor, if you wish." The soldiers quickly responded with, "Vive L'Empereur!" Ney, who had boasted to the restored Bourbon king, Louis XVIII, that he would bring Napoleon to Paris in an iron cage, affectionately kissed his former emperor and forgot his oath of allegiance to the Bourbon monarch. The two then marched together towards Paris with a growing army. The unpopular Louis XVIII fled to Belgium after realizing he had little political support. On 13 March, the powers at the Congress of Vienna declared Napoleon an outlaw. Four days later, Great Britain, Russia, Austria, and Prussia each pledged to put 150,000 men into the field to end his rule.
Napoleon arrived in Paris on 20 March and governed for a period now called the Hundred Days. By the start of June the armed forces available to him had reached 200,000, and he decided to go on the offensive to attempt to drive a wedge between the oncoming British and Prussian armies. The French Army of the North crossed the frontier into the United Kingdom of the Netherlands, in modern-day Belgium.Chesney 2006, p.35
Napoleon's forces fought the Coalition armies, commanded by the Duke of Wellington and Gebhard Leberecht von Blücher, at the Battle of Waterloo on 18 June 1815. Wellington's army withstood repeated attacks by the French and drove them from the field while the Prussians arrived in force and broke through Napoleon's right flank.
Napoleon returned to Paris and found that both the legislature and the people had turned against him. Realizing his position was untenable, he abdicated on 22 June in favour of his son. He left Paris three days later and settled at Josephine's former palace in Malmaison, on the western bank of the Seine west of Paris. Even as Napoleon travelled to Paris, the Coalition forces swept through France (arriving in the vicinity of Paris on 29 June), with the stated intent of restoring Louis XVIII to the French throne.
When Napoleon heard that Prussian troops had orders to capture him dead or alive, he fled to Rochefort, considering an escape to the United States. British ships were blocking every port. Napoleon demanded asylum from the British Captain Frederick Maitland aboard HMS Bellerophon on 15 July 1815.Cordingly 2004, p.254
Exile on Saint Helena
thumb|Napoleon on Saint Helena
Britain kept Napoleon on the remote island of Saint Helena in the Atlantic Ocean, far from the west coast of Africa. Napoleon was moved to Longwood House there in December 1815; it had fallen into disrepair, and the location was damp, windswept and unhealthy. The Times published articles insinuating the British government was trying to hasten his death, and he often complained of the living conditions in letters to the governor and his custodian, Hudson Lowe.Schom 1997, pp.769–770
With a small cadre of followers, Napoleon dictated his memoirs and grumbled about conditions. Lowe cut Napoleon's expenditure, ruled that no gifts were allowed if they mentioned his imperial status, and made his supporters sign a guarantee they would stay with the prisoner indefinitely.
thumb|alt=Photo of a front garden and large brown building. French flag on a flagpole next to a small cannon.|Longwood House, Saint Helena: site of Napoleon's captivity
There were rumors of plots and even of his escape, but in reality no serious attempts were made.Wilkins 1972 For English poet Lord Byron, Napoleon was the epitome of the Romantic hero, the persecuted, lonely, and flawed genius.
Death
thumb|left|150px|Bronze death mask of Napoleon I, modeled in 1821 and cast in 1833
His personal physician, Barry O'Meara, warned London that Napoleon's declining state of health was mainly caused by the harsh treatment. Napoleon confined himself for months on end in the damp and wretched quarters of Longwood.Albert Benhamou, Inside Longwood – Barry O'Meara's clandestine letters, 2012
In February 1821, Napoleon's health began to deteriorate rapidly. He reconciled with the Catholic Church. He died on 5 May 1821, after confession, Extreme Unction and Viaticum in the presence of Father Ange Vignali. His last words were, "France, l'armée, tête d'armée, Joséphine" ("France, army, head of the army, Joséphine").Roberts, Napoleon (2014) 799-801
Napoleon's original death mask was created around 6 May, although it is not clear which doctor created it.Wilson 1975, pp.293–5 In his will, he had asked to be buried on the banks of the Seine, but the British governor said he should be buried on Saint Helena, in the Valley of the Willows.
thumb|Napoleon's tomb at Les Invalides
In 1840, Louis Philippe I obtained permission from the British to return Napoleon's remains to France. On 15 December 1840, a state funeral was held. The hearse proceeded from the Arc de Triomphe down the Champs-Élysées, across the Place de la Concorde to the Esplanade des Invalides and then to the cupola in St Jérôme's Chapel, where it remained until the tomb designed by Louis Visconti was completed.
In 1861, Napoleon's remains were entombed in a porphyry stone sarcophagus in the crypt under the dome at Les Invalides.Driskel 1993, p. 168
Cause of death
The cause of his death has been debated. Napoleon's physician, François Carlo Antommarchi, led the autopsy, which found the cause of death to be stomach cancer. Antommarchi did not sign the official report. Napoleon's father had died of stomach cancer, although this was seemingly unknown at the time of the autopsy.Johnson 2002, pp.180–1 Antommarchi found evidence of a stomach ulcer; this was the most convenient explanation for the British, who wanted to avoid criticism over their care of Napoleon.
thumb|left|alt=Gold-framed portrait painting of a gaunt middle-aged man with receding hair and laurel wreath, lying eyes-closed on white pillow with a white blanket covering to his neck and a gold Jesus cross resting on his chest|Napoleon on His Death Bed, by Horace Vernet, 1826
In 1955, the diaries of Napoleon's valet, Louis Marchand, were published. His description of Napoleon in the months before his death led Sten Forshufvud in a 1961 paper in Nature to put forward other causes for his death, including deliberate arsenic poisoning.Cullen 2008, pp.146–48 Arsenic was used as a poison during the era because it was undetectable when administered over a long period. Forshufvud, in a 1978 book with Ben Weider, noted that Napoleon's body was found to be well preserved when moved in 1840. Arsenic is a strong preservative, and therefore this supported the poisoning hypothesis. Forshufvud and Weider observed that Napoleon had attempted to quench abnormal thirst by drinking large amounts of orgeat syrup that contained cyanide compounds in the almonds used for flavouring.
They maintained that the potassium tartrate used in his treatment prevented his stomach from expelling these compounds and that his thirst was a symptom of the poison. Their hypothesis was that the calomel given to Napoleon became an overdose, which killed him and left extensive tissue damage behind. According to a 2007 article, the type of arsenic found in Napoleon's hair shafts was mineral, the most toxic, and according to toxicologist Patrick Kintz, this supported the conclusion that he was murdered.Cullen 2008, p.156
Modern studies have supported the original autopsy finding. In a 2008 study, researchers analysed samples of Napoleon's hair from throughout his life, as well as samples from his family and other contemporaries. All samples had high levels of arsenic, approximately 100 times higher than the current average. According to these researchers, Napoleon's body was already heavily contaminated with arsenic as a boy, and the high arsenic concentration in his hair was not caused by intentional poisoning; people were constantly exposed to arsenic from glues and dyes throughout their lives. Studies published in 2007 and 2008 dismissed evidence of arsenic poisoning, and confirmed evidence of peptic ulcer and gastric cancer as the cause of death.Cullen 2008, p.161, and Hindmarsh et al. 2008, p.2092
Religion
thumb|right|Reorganisation of the religious geography: France is divided into 59 dioceses and 10 ecclesiastical provinces.
Napoleon's baptism took place in Ajaccio on 21 July 1771; he was piously raised as a Catholic but never developed much faith. As an adult, Napoleon was a deist; his deity was an absent and distant God. However, he had a keen appreciation of the power of organized religion in social and political affairs, and paid a great deal of attention to bending it to his purposes. He noted the influence of Catholicism's rituals and splendors. Napoleon had a civil marriage with Joséphine de Beauharnais, without religious ceremony. He was crowned Emperor on 2 December 1804 at Notre Dame de Paris in a ceremony presided over by Pope Pius VII. On 1 April 1810, Napoleon married the Austrian princess Marie Louise in a Catholic ceremony. During his brother's rule in Spain, Napoleon abolished the Spanish Inquisition in 1813.
Concordat
thumb|Leaders of the Catholic Church taking the civil oath required by the Concordat
Seeking national reconciliation between revolutionaries and Catholics, Napoleon and Pope Pius VII signed the Concordat of 1801 on 15 July 1801. It solidified the Roman Catholic Church as the majority church of France and brought back most of its civil status. The hostility of devout Catholics against the state had now largely been resolved. It did not restore the vast church lands and endowments that had been seized during the revolution and sold off. As a part of the Concordat, Napoleon presented another set of laws called the Organic Articles.William Roberts, "Napoleon, the Concordat of 1801, and Its Consequences." in Frank J. Coppa, ed., Controversial Concordats: The Vatican's Relations with Napoleon, Mussolini, and Hitler (1999) pp: 34-80.Nigel Aston, Religion and revolution in France, 1780–1804 (Catholic University of America Press, 2000) pp 279-315
While the Concordat restored much power to the papacy, the balance of church–state relations had tilted firmly in Napoleon's favour. He selected the bishops and supervised church finances. Napoleon and the pope both found the Concordat useful. Similar arrangements were made with the Church in territories controlled by Napoleon, especially Italy and Germany.Nigel Aston, Christianity and revolutionary Europe, 1750–1830 (Cambridge University Press, 2002) pp 261-62. Now, Napoleon could win favor with the Catholics while also controlling Rome in a political sense. Napoleon said in April 1801, "Skillful conquerors have not got entangled with priests. They can both contain them and use them." French children were issued a catechism that taught them to love and respect Napoleon.
Religious emancipation
Napoleon emancipated Jews, as well as Protestants in Catholic countries and Catholics in Protestant countries, from laws which restricted them to ghettos, and he expanded their rights to property, worship, and careers. Despite the anti-semitic reaction to Napoleon's policies from foreign governments and within France, he believed emancipation would benefit France by attracting Jews to the country given the restrictions they faced elsewhere.
He stated, "I will never accept any proposals that will obligate the Jewish people to leave France, because to me the Jews are the same as any other citizen in our country. It takes weakness to chase them out of the country, but it takes strength to assimilate them."Schwarzfuchs 1979, p.50 He was seen as so favourable to the Jews that the Russian Orthodox Church formally condemned him as "Antichrist and the Enemy of God".Cronin 1994, p.315
Personality
thumb|Napoleon visiting the Palais Royal for the opening of the 8th session of the Tribunat in 1807, by Merry-Joseph Blondel
Historians emphasize the strength of the ambition that took Napoleon from an obscure village to command of most of Europe.Pieter Geyl, Napoleon, For and Against (1982) George F. E. Rudé stresses his "rare combination of will, intellect and physical vigour." Napoleon was not physically imposing, but in one-on-one situations he typically had a hypnotic effect on people and seemingly bent the strongest leaders to his will. He understood military technology, but was not an innovator in that regard. He was an innovator in using the financial, bureaucratic, and diplomatic resources of France. He could rapidly dictate a series of complex commands to his subordinates, keeping in mind where major units were expected to be at each future point, and like a chess master, "seeing" the best plays several moves ahead.See David Chandler, "General Introduction" to The Campaigns of Napoleon: The Mind and Method of History's Greatest Soldier (1975).
Napoleon maintained strict, efficient work habits, prioritizing what needed to be done. He cheated at cards, but repaid the losses; he had to win at everything he attempted.Roberts, Napoleon: A Life (2014) pp 470-73 He kept relays of staff and secretaries at work. Unlike many generals, Napoleon did not examine history to ask what Hannibal or Alexander or anyone else did in a similar situation. Critics said he won many battles simply because of luck; Napoleon responded, "Give me lucky generals," aware that "luck" comes to leaders who recognize opportunity, and seize it. Dwyer states that Napoleon's victories at Austerlitz and Jena in 1805–06 heightened his sense of self-grandiosity, leaving him even more certain of his destiny and invincibility.
Napoleon's influence on events owed to more than his personality alone. He reorganized France itself to supply the men and money needed for wars.J. M. Thompson, Napoleon Bonaparte: His Rise and Fall (1954), p.285 He inspired his men—Wellington said his presence on the battlefield was worth 40,000 soldiers, for he inspired confidence from privates to field marshals. He also unnerved the enemy. At the Battle of Auerstadt in 1806, King Frederick William III of Prussia outnumbered the French by 63,000 to 27,000; however, when he was told, mistakenly, that Napoleon was in command, he ordered a hasty retreat that turned into a rout. The force of his personality neutralized material difficulties, as his soldiers fought with the confidence that with Napoleon in charge they would surely win.Steven Englund, Napoleon: A Political Life (2004), pp.379ff
Image
thumb|left|Plate depicting Napoleon I of France, c. 1810, by Darte-Freres; porcelain painted in enamels and gilded. The Victoria and Albert Museum, London
thumb|upright|Napoleon is often represented in his green colonel uniform of the Chasseur à Cheval of the Imperial Guard, the regiment that often served as his personal escort, with a large bicorne and a hand-in-waistcoat gesture.
Napoleon has become a worldwide cultural icon who symbolises military genius and political power. Martin van Creveld described him as "the most competent human being who ever lived". Since his death, many towns, streets, ships, and even cartoon characters have been named after him. He has been portrayed in hundreds of films and discussed in hundreds of thousands of books and articles.Bell 2007, p.13
During the Napoleonic Wars he was taken seriously by the British press as a dangerous tyrant, poised to invade. The British nicknamed him Boney. A nursery rhyme warned children that Bonaparte ravenously ate naughty people; the "bogeyman".Roberts 2004, p.93 The British Tory press sometimes depicted Napoleon as much smaller than average height, and that image has persisted. Confusion about his height also results from the difference between the French pouce and British inch—2.71 cm and 2.54 cm, respectively. The myth of the "Napoleon Complex"—named after him to describe men who have an inferiority complex—stems primarily from the fact that he was listed, incorrectly, as 5 feet 2 inches (in French units) at the time of his death. He was in fact of average height for a man of that period.
In 1908 Alfred Adler, a psychologist, cited Napoleon to describe an inferiority complex in which short people adopt an over-aggressive behaviour to compensate for lack of height; this inspired the term Napoleon complex.Hall 2006, p.181 The stock character of Napoleon is a comically short "petty tyrant" and this has become a cliché in popular culture. He is often portrayed wearing a large bicorne hat with a hand-in-waistcoat gesture—a reference to the painting produced in 1812 by Jacques-Louis David.Bordes 2007, p.118
When he became First Consul and later Emperor, Napoleon eschewed his general's uniform and habitually wore the green (non-Hussar) uniform of a colonel of the Chasseur à Cheval of the Imperial Guard, the regiment that served as his personal escort many times, with a large bicorne. He also habitually wore (usually on Sundays) the blue uniform of a colonel of the Imperial Guard Foot Grenadiers (blue with white facings and red cuffs). He also wore his Légion d'honneur star, medal and ribbon, and the Order of the Iron Crown decorations, white French-style culottes and white stockings. This was in contrast to the complex uniforms with many decorations worn by his marshals and those around him.
Reforms
thumb|right|First remittance of the Légion d'Honneur, 15 July 1804, at Saint-Louis des Invalides, by Jean-Baptiste Debret (1812).
Napoleon instituted various reforms, such as higher education, a tax code, and road and sewer systems, and he established the Banque de France, the first central bank in French history. He negotiated the Concordat of 1801 with the Catholic Church, which sought to reconcile the mostly Catholic population to his regime. It was presented alongside the Organic Articles, which regulated public worship in France. He dissolved the Holy Roman Empire prior to German unification later in the 19th century. The sale of the Louisiana Territory to the United States roughly doubled the size of that country.
In May 1802, he instituted the Legion of Honour, a substitute for the old royalist decorations and orders of chivalry, to encourage civilian and military achievements; the order is still the highest decoration in France.Blaufarb 2007, pp.101–2
Napoleonic Code
thumb|alt=Page of French writing|First page of the 1804 original edition of the Code Civil
Napoleon's set of civil laws, the Code Civil—now often known as the Napoleonic Code—was prepared by committees of legal experts under the supervision of Jean Jacques Régis de Cambacérès, the Second Consul. Napoleon participated actively in the sessions of the Council of State that revised the drafts. The development of the code was a fundamental change in the nature of the civil law legal system with its stress on clearly written and accessible law. Other codes ("Les cinq codes") were commissioned by Napoleon to codify criminal and commerce law; a Code of Criminal Instruction was published, which enacted rules of due process.
The Napoleonic code was adopted throughout much of Europe, though only in the lands he conquered, and remained in force after Napoleon's defeat. Napoleon said: "My true glory is not to have won forty battles ... Waterloo will erase the memory of so many victories. ... But ... what will live forever, is my Civil Code." The Code influences a quarter of the world's jurisdictions, including those in Europe, the Americas and Africa.Wood 2007, p.55
Dieter Langewiesche described the code as a "revolutionary project" which spurred the development of bourgeois society in Germany by the extension of the right to own property and an acceleration towards the end of feudalism. Napoleon reorganised what had been the Holy Roman Empire, made up of more than a thousand entities, into a more streamlined forty-state Confederation of the Rhine; this provided the basis for the German Confederation and the unification of Germany in 1871.Scheck 2008, Chapter: The Road to National Unification
The movement toward national unification in Italy was similarly precipitated by Napoleonic rule.Astarita 2005, p.264 These changes contributed to the development of nationalism and the nation state.Alter 2006, pp.61–76
Napoleon implemented a wide array of liberal reforms in France and across Europe, especially in Italy and Germany, as summarized by British historian Andrew Roberts:
The ideas that underpin our modern world–meritocracy, equality before the law, property rights, religious toleration, modern secular education, sound finances, and so on–were championed, consolidated, codified and geographically extended by Napoleon. To them he added a rational and efficient local administration, an end to rural banditry, the encouragement of science and the arts, the abolition of feudalism and the greatest codification of laws since the fall of the Roman Empire.Andrew Roberts, Napoleon: A Life (2014) p xxxiii
Napoleon directly overthrew feudal remains in much of western Europe. He liberalised property laws, ended seigneurial dues, abolished the guild of merchants and craftsmen to facilitate entrepreneurship, legalised divorce, closed the Jewish ghettos and made Jews equal to everyone else. The Inquisition ended as did the Holy Roman Empire. The power of church courts and religious authority was sharply reduced and equality under the law was proclaimed for all men.Robert R. Palmer and Joel Colton, A History of the Modern World (New York: McGraw Hill, 1995), pp. 428–9.
Warfare
thumb|upright|alt=Photo of a grey and phosphorous-coloured equestrian statue. Napoleon is seated on the horse, which is rearing up, he looks forward with his right hand raised and pointing forward; his left hand holds the reins.|Statue in Cherbourg-Octeville unveiled by Napoleon III in 1858. Napoleon I strengthened the town's defences to prevent British naval incursions.
In the field of military organisation, Napoleon borrowed from previous theorists such as Jacques Antoine Hippolyte, Comte de Guibert, and from the reforms of preceding French governments, and then developed much of what was already in place. He continued the policy, which emerged from the Revolution, of promotion based primarily on merit.
Corps replaced divisions as the largest army units, mobile artillery was integrated into reserve batteries, the staff system became more fluid and cavalry returned as an important formation in French military doctrine. These methods are now referred to as essential features of Napoleonic warfare.Archer et al. 2002, p.397 Though he consolidated the practice of modern conscription introduced by the Directory, one of the restored monarchy's first acts was to end it.Flynn 2001, p.16
His opponents learned from Napoleon's innovations. The increased importance of artillery after 1807 stemmed from his creation of a highly mobile artillery force, the growth in artillery numbers, and changes in artillery practices. As a result of these factors, Napoleon, rather than relying on infantry to wear away the enemy's defenses, now could use massed artillery as a spearhead to pound a break in the enemy's line that was then exploited by supporting infantry and cavalry. McConachy rejects the alternative theory that growing reliance on artillery by the French army beginning in 1807 was an outgrowth of the declining quality of the French infantry and, later, France's inferiority in cavalry numbers.Bruce McConachy, "The Roots of Artillery Doctrine: Napoleonic Artillery Tactics Reconsidered," Journal of Military History 2001 65(3): 617–640. in JSTOR; online Weapons and other kinds of military technology remained static through the Revolutionary and Napoleonic eras, but 18th-century operational mobility underwent change.Archer et al. 2002, p.383
Napoleon's biggest influence was in the conduct of warfare. Antoine-Henri Jomini explained Napoleon's methods in a widely used textbook that influenced all European and American armies.John Shy, "Jomini" in Peter Paret, ed. Makers of Modern Strategy: From Machiavelli to the Nuclear Age (1986). Napoleon was regarded by the influential military theorist Carl von Clausewitz as a genius in the operational art of war, and historians rank him as a great military commander.Archer et al. 2002, p.380 Wellington, when asked who was the greatest general of the day, answered: "In this age, in past ages, in any age, Napoleon."Roberts 2001, p.272
Under Napoleon, a new emphasis towards the destruction, not just outmanoeuvring, of enemy armies emerged. Invasions of enemy territory occurred over broader fronts which made wars costlier and more decisive. The political effect of war increased; defeat for a European power meant more than the loss of isolated enclaves. Near-Carthaginian peaces intertwined whole national efforts, intensifying the Revolutionary phenomenon of total war.Archer et al. 2002, p.404
Metric system
The official introduction of the metric system in September 1799 was unpopular in large sections of French society. Napoleon's rule greatly aided adoption of the new standard not only across France but also across the French sphere of influence. Napoleon took a retrograde step in 1812 when he passed legislation to introduce the mesures usuelles (traditional units of measurement) for retail trade—a system of measure that resembled the pre-revolutionary units but were based on the kilogram and the metre; for example the livre metrique (metric pound) was 500 g instead of 489.5 g—the value of the livre du roi (the king's pound). Other units of measure were rounded in a similar manner prior to the definitive introduction of the metric system across Europe in the middle of the 19th century.O'Connor 2003
Education
Napoleon's educational reforms laid the foundation of a modern system of education in France and throughout much of Europe. Napoleon synthesized the best academic elements from the Ancien Régime, The Enlightenment, and the Revolution, with the aim of establishing a stable, well-educated and prosperous society. He made French the only official language. He left some primary education in the hands of religious orders, but he offered public support to secondary education. Napoleon founded a number of state secondary schools (lycées) designed to produce a standardized education that was uniform across France. All students were taught the sciences along with modern and classical languages. Unlike the system during the Ancien Régime, religious topics did not dominate the curriculum, although they were present with the teachers from the clergy. Napoleon hoped to use religion to produce social stability.L. Pearce Williams, "Science, education and Napoleon I." Isis (1956): 369-382 in JSTOR He gave special attention to the advanced centers, such as the École Polytechnique, that provided both military expertise and state-of-the-art research in science.Margaret Bradley, "Scientific education versus military training: the influence of Napoleon Bonaparte on the École Polytechnique." Annals of science (1975) 32#5 pp: 415-449. Napoleon made some of the first efforts at establishing a system of secular and public education. The system featured scholarships and strict discipline, with the result being a French educational system that outperformed its European counterparts, many of which borrowed from the French system.
Memory and evaluation
Criticism
thumb|right|The Third of May 1808 by Francisco Goya, showing Spanish resisters being executed by Napoleon's troops.
In the political realm, historians debate whether Napoleon was "an enlightened despot who laid the foundations of modern Europe or, instead, a megalomaniac who wrought greater misery than any man before the coming of Hitler."Max Hastings, "Everything Is Owed to Glory," The Wall Street Journal October 31, 2014 Many historians have concluded that he had grandiose foreign policy ambitions. The Continental powers as late as 1808 were willing to give him nearly all of his gains and titles, but some scholars maintain he was overly aggressive and pushed for too much, until his empire collapsed.Charles Esdaile, Napoleon's Wars: An International History 1803–1815 (2008), p 39
Napoleon ended lawlessness and disorder in post-Revolutionary France.Abbott 2005, p.3 He was considered a tyrant and usurper by his opponents. His critics charge that he was not troubled when faced with the prospect of war and death for thousands, turned his search for undisputed rule into a series of conflicts throughout Europe and ignored treaties and conventions alike. His role in the Haitian Revolution and decision to reinstate slavery in France's overseas colonies are controversial and affect his reputation.
Napoleon institutionalised plunder of conquered territories: French museums contain art stolen by Napoleon's forces from across Europe. Artefacts were brought to the Musée du Louvre for a grand central museum; his example would later serve as inspiration for more notorious imitators.Poulos 2000 He was compared to Adolf Hitler most famously by the historian Pieter Geyl in 1947Geyl 1947 and Claude Ribbe in 2005.Philip Dwyer, "Remembering and Forgetting in Contemporary France: Napoleon, Slavery, and the French History Wars", French Politics, Culture & Society (2008) 26#3. pp 110–122. online David G. Chandler, a foremost historian of Napoleonic warfare, wrote in 1973 that, "Nothing could be more degrading to the former [Napoleon] and more flattering to the latter [Hitler]. The comparison is odious. On the whole Napoleon was inspired by a noble dream, wholly dissimilar from Hitler's... Napoleon left great and lasting testimonies to his genius—in codes of law and national identities which survive to the present day. Adolf Hitler left nothing but destruction."Chandler 1973, p. xliii
Critics argue Napoleon's true legacy must reflect the loss of status for France and needless deaths brought by his rule: historian Victor Davis Hanson writes, "After all, the military record is unquestioned—17 years of wars, perhaps six million Europeans dead, France bankrupt, her overseas colonies lost."Hanson 2003 McLynn states that, "He can be viewed as the man who set back European economic life for a generation by the dislocating impact of his wars." Vincent Cronin replies that such criticism relies on the flawed premise that Napoleon was responsible for the wars which bear his name, when in fact France was the victim of a series of coalitions which aimed to destroy the ideals of the Revolution.Cronin 1994, pp.342–3
Propaganda and memory
Napoleon's use of propaganda contributed to his rise to power, legitimated his régime, and established his image for posterity. Strict censorship, controlling aspects of the press, books, theater, and art, was part of his propaganda scheme, aimed at portraying him as bringing desperately wanted peace and stability to France. The propagandistic rhetoric changed in relation to events and to the atmosphere of Napoleon's reign, focusing first on his role as a general in the army and identification as a soldier, and moving to his role as emperor and a civil leader. Specifically targeting his civilian audience, Napoleon fostered a relationship with the contemporary art community, taking an active role in commissioning and controlling different forms of art production to suit his propaganda goals.Alan Forrest, "Propaganda and the Legitimation of Power in Napoleonic France." French History, 2004 18(4): 426–445
Hazareesingh (2004) explores how Napoleon's image and memory are best understood. They played a key role in collective political defiance of the Bourbon restoration monarchy in 1815–1830. People from different walks of life and areas of France, particularly Napoleonic veterans, drew on the Napoleonic legacy and its connections with the ideals of the 1789 revolution.Sudhir Hazareesingh, "Memory and Political Imagination: the Legend of Napoleon Revisited." French History, 2004 18(4): 463–483
Widespread rumors of Napoleon's return from St. Helena and Napoleon as an inspiration for patriotism, individual and collective liberties, and political mobilization manifested themselves in seditious materials, displaying the tricolor and rosettes. There were also subversive activities celebrating anniversaries of Napoleon's life and reign and disrupting royal celebrations—they demonstrated the prevailing and successful goal of the varied supporters of Napoleon to constantly destabilize the Bourbon regime.
Datta (2005) shows that, following the collapse of militaristic Boulangism in the late 1880s, the Napoleonic legend was divorced from party politics and revived in popular culture. Concentrating on two plays and two novels from the period—Victorien Sardou's Madame Sans-Gêne (1893), Maurice Barrès's Les Déracinés (1897), Edmond Rostand's L'Aiglon (1900), and André de Lorde and Gyp's Napoléonette (1913)—Datta examines how writers and critics of the Belle Époque exploited the Napoleonic legend for diverse political and cultural ends.Venita Datta, "'L'appel Au Soldat': Visions of the Napoleonic Legend in Popular Culture of the Belle Epoque." French Historical Studies 2005 28(1): 1–30
Reduced to a minor character, the new fictional Napoleon became not a world historical figure but an intimate one, fashioned by individuals' needs and consumed as popular entertainment. In their attempts to represent the emperor as a figure of national unity, proponents and detractors of the Third Republic used the legend as a vehicle for exploring anxieties about gender and fears about the processes of democratization that accompanied this new era of mass politics and culture.
International Napoleonic Congresses take place regularly, with participation by members of the French and American military, French politicians and scholars from different countries. In January 2012, the mayor of Montereau-Fault-Yonne, near Paris—the site of a late victory of Napoleon—proposed development of Napoleon's Bivouac, a commemorative theme park at a projected cost of 200 million euros.
Long-term influence outside France
thumb|upright|Bas-relief of Napoleon I in the chamber of the United States House of Representatives
Napoleon was responsible for spreading the values of the French Revolution to other countries, especially in legal reform and the abolition of serfdom.Alexander Grab, Napoleon and the Transformation of Europe (Macmillan, 2003), country by country analysis
After the fall of Napoleon, not only was the Napoleonic Code retained by conquered countries including the Netherlands, Belgium, and parts of Italy and Germany, but it has also been used as the basis of certain parts of law outside Europe, including in the Dominican Republic, the US state of Louisiana and the Canadian province of Quebec. The memory of Napoleon in Poland is favorable, for his support for independence and opposition to Russia, his legal code, the abolition of serfdom, and the introduction of modern middle class bureaucracies.Andrzej Nieuwazny, "Napoleon and Polish identity." History Today, May 1998 vol. 48 no. 5 pp.50–55
Napoleon could be considered one of the founders of modern Germany. After dissolving the Holy Roman Empire, he reduced the number of German states from 300 to fewer than 50, prior to German unification. A byproduct of the French occupation was a strong development in German nationalism. Napoleon also significantly aided the United States when he agreed to sell the territory of Louisiana for 15 million dollars during the presidency of Thomas Jefferson. That territory almost doubled the size of the United States, adding the equivalent of 13 states to the Union.
Marriages and children
Napoleon married Joséphine de Beauharnais in 1796, when he was 26; she was a 32-year-old widow whose first husband had been executed during the Revolution. Until she met Bonaparte, she had been known as "Rose", a name which he disliked. He called her "Joséphine" instead, and she went by this name henceforth. Bonaparte often sent her love letters while on his campaigns. He formally adopted her son Eugène and cousin Stéphanie and arranged dynastic marriages for them. Joséphine had her daughter Hortense marry Napoleon's brother Louis.
Joséphine had lovers, such as Lieutenant Hippolyte Charles, during Napoleon's Italian campaign. Napoleon learnt of the affair, and a letter he wrote about it was intercepted by the British and published widely to embarrass him. Napoleon had his own affairs too: during the Egyptian campaign he took Pauline Bellisle Foures, the wife of a junior officer, as his mistress. She became known as "Cleopatra."
thumb|upright|left|Plate showing statues of Amenhotep III at Luxor, Egypt, commissioned by Napoleon as a present to Josephine, who rejected it; made in France. The Victoria and Albert Museum, London
While Napoleon's mistresses had children by him, Joséphine did not produce an heir, possibly because of either the stresses of her imprisonment during the Reign of Terror or an abortion she may have had in her twenties. Napoleon chose divorce so he could remarry in search of an heir. Despite his divorce from Josephine, Napoleon showed his dedication to her for the rest of his life. When he heard the news of her death while on exile in Elba, he locked himself in his room and would not come out for two full days. Her name would also be his final word on his deathbed in 1821.
In March 1810, he married by proxy the 19-year-old Marie Louise, Archduchess of Austria and a great-niece of Marie Antoinette; thus he had married into a German royal and imperial family. Louise was less than happy with the arrangement, at least at first, stating "Just to see the man would be the worst form of torture." Her great-aunt had been executed in France, while Napoleon had fought numerous campaigns against Austria all throughout his military career. However, she seemed to warm up to him over time. After her wedding, she wrote to her father: "He loves me very much. I respond to his love sincerely. There is something very fetching and very eager about him that is impossible to resist."
Napoleon and Marie Louise remained married until his death, though she did not join him in exile on Elba and thereafter never saw her husband again. The couple had one child, Napoleon Francis Joseph Charles (1811–1832), known from birth as the King of Rome. He became Napoleon II in 1814 and reigned for only two weeks. He was awarded the title of the Duke of Reichstadt in 1818 and died of tuberculosis aged 21, with no children.
Napoleon acknowledged one illegitimate son: Charles Léon (1806–1881) by Eléonore Denuelle de La Plaigne. Alexandre Colonna-Walewski (1810–1868), the son of his mistress Maria Walewska, although acknowledged by Walewska's husband, was also widely known to be his child, and the DNA of his direct male descendant has been used to help confirm Napoleon's Y-chromosome haplotype. He may have had further unacknowledged illegitimate offspring as well, such as Eugen Megerle von Mühlfeld by Emilie Victoria Kraus and Hélène Napoleone Bonaparte (1816–1907) by Albine de Montholon.
Titles, styles, honours, and arms
Ancestry
Notes
Citations
References
Biographical studies
Gueniffey, Patrice. Bonaparte: 1769–1802 (Harvard UP, 2015, French edition 2013); 1008pp; vol 1 of most comprehensive recent scholarly biography by leading French specialist; less emphasis on battles and campaigns excerpt
Historiography and memory
Englund, Steven. "Napoleon and Hitler" Journal of the Historical Society (2006) 6#1 pp 151–169.
Hazareesingh, Sudhir. "Memory and Political Imagination: The Legend of Napoleon Revisited," French History (2004) 18#4 pp 463–483.
External links
The Napoleonic Guide
Napoleon Series
International Napoleonic Society
Biography by the US Public Broadcasting Service
Inside Longwood descriptions of Longwood House & other places on St. Helena, articles on Napoleon's captivity
Alan Schom Interview on his book Napoleon Bonaparte on Booknotes, 26 October 1997
Napoleon Personal Manuscripts & Letters
Letter written by Napoleon Buonaparte (Bonaparte) to Guillaume Thomas Francois Raynal RG 523 Brock University Library Digital Repository
Category:1769 births
Category:1821 deaths
Category:18th-century rulers in Europe
Category:19th-century monarchs in Europe
Category:Amateur mathematicians
Category:Corsican politicians
Category:Deaths from stomach cancer
Category:People of the First French Empire
Category:French commanders of the Napoleonic Wars
Category:French emperors
Category:French exiles
Category:French military leaders of the French Revolutionary Wars
Category:French people of Italian descent
Category:French Roman Catholics
Category:House of Bonaparte
Category:Kings of Italy
Category:Leaders who took power by coup
Category:Members of the French Academy of Sciences
Category:Monarchs imprisoned and detained during war
Category:Monarchs who abdicated
Category:People excommunicated by the Roman Catholic Church
Category:People from Ajaccio
Category:People of Tuscan descent
Category:Princes of Andorra | 69,880 | 2017-01 |
MP3 | MPEG-1 and/or MPEG-2 Audio Layer III, more commonly referred to as MP3, is an audio coding format for digital audio which uses a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players and computing devices.
The use of lossy compression is designed to reduce the amount of data required to represent digital audio recordings by roughly a factor of ten, while still sounding like the original uncompressed audio to most listeners.
Compared to CD quality digital audio, MP3 compression commonly achieves 75 to 95% reduction in size. MP3 files are thus 1/4 to 1/20 the size of the original digital audio stream. This is important for both transmission and storage concerns. The basis for such comparison is the CD digital audio format which requires 1411200 bit/s. A commonly used MP3 encoding setting is CBR 128 kbit/s resulting in file size 1/11 (=9% or 91% compression) of the original CD-quality file.
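To make the arithmetic concrete, the following is a minimal sketch (illustrative only; the constants are the figures quoted above, and the helper function name is hypothetical) comparing CD-quality PCM with a 128 kbit/s constant-bit-rate MP3.

```python
# Illustrative arithmetic only: compare CD-quality PCM with a common MP3 setting.
CD_BITRATE = 44_100 * 16 * 2      # 1,411,200 bit/s: 44.1 kHz, 16-bit samples, 2 channels
MP3_BITRATE = 128_000             # a commonly used constant-bit-rate MP3 setting

ratio = CD_BITRATE / MP3_BITRATE  # about 11, i.e. the MP3 is roughly 1/11 the size

def stream_size_mb(bitrate_bps: int, seconds: int) -> float:
    """Approximate size in megabytes of a constant-bit-rate stream."""
    return bitrate_bps * seconds / 8 / 1_000_000

# A four-minute track: about 42 MB as CD audio versus about 3.8 MB as a 128 kbit/s MP3.
print(round(ratio, 2), stream_size_mb(CD_BITRATE, 240), stream_size_mb(MP3_BITRATE, 240))
```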
The MP3 lossy compression works by reducing (or approximating) the accuracy of certain parts of a continuous sound that are considered to be beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding. It uses psychoacoustic models to discard or reduce precision of components less audible to human hearing, and then records the remaining information in an efficient manner.
MP3 was designed by the Moving Picture Experts Group (MPEG) as part of its MPEG-1 standard and later extended in the MPEG-2 standard. The first subgroup for audio was formed by several teams of engineers at Fraunhofer IIS, University of Hanover, AT&T-Bell Labs, Thomson-Brandt, CCETT, and others. MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II and III was approved as a committee draft of ISO/IEC standard in 1991, finalised in 1992 and published in 1993 (ISO/IEC 11172-3:1993). A backwards compatible MPEG-2 Audio (MPEG-2 Part 3) extension with lower sample and bit rates was published in 1995 (ISO/IEC 13818-3:1995).
MP3 is a streaming or broadcast format (as opposed to a file format), meaning that individual frames can be lost without affecting the ability to decode successfully delivered frames. Storing an MP3 stream in a file enables time-shifted playback.
History
Development
The MP3 lossy audio data compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894, the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. Ernst Terhardt et al. created an algorithm describing auditory masking with high accuracy. This work added to a variety of reports from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths.
The psychoacoustic masking codec was first proposed in 1979, apparently independently, by Manfred R. Schroeder, et al. from Bell Telephone Laboratories, Inc. in Murray Hill, New Jersey, and M. A. Krasner both in the United States. Krasner was the first to publish and to produce hardware for speech (not usable as music bit compression), but the publication of his results as a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream of psychoacoustic codec development. Manfred Schroeder was already a well-known and revered figure in the worldwide community of acoustical and electrical engineers, but his paper was not much noticed, since it described negative results due to the particular nature of speech and the linear predictive coding (LPC) gain present in speech. Both Krasner and Schroeder built upon the work performed by Eberhard F. Zwicker in the areas of tuning and masking of critical bands, that in turn built on the fundamental research in the area from Bell Labs of Harvey Fletcher and his collaborators. A wide variety of (mostly perceptual) audio compression algorithms were reported in IEEE's refereed Journal on Selected Areas in Communications. That journal reported in February 1988 on a wide range of established, working audio bit compression technologies, some of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations.
The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF), and Perceptual Transform Coding (PXFM). These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG, and which won the quality competition, but that was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use), was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips.
As a doctoral student at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. MP3 is directly descended from OCF and PXFM, representing the outcome of the collaboration of Brandenburg—working as a postdoc at AT&T-Bell Labs with James D. Johnston ("JJ") of AT&T-Bell Labs—with the Fraunhofer Institute for Integrated Circuits, Erlangen (where he worked with Bernhard Grill and four other researchers - "The Original Six"), with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists at the Fraunhofer Society (in 1993 he joined the staff of the Fraunhofer Institute).
The song "Tom's Diner" by Suzanne Vega was the first song used by Karlheinz Brandenburg to develop the MP3. Brandenburg adopted the song for testing purposes, listening to it again and again each time refining the scheme, making sure it did not adversely affect the subtlety of Vega's voice.
Standardization
In 1991, there were two available proposals that were assessed for an MPEG audio standard: Musicam (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) and ASPEC (Adaptive Spectral Perceptual Entropy Coding). As proposed by the Dutch corporation Philips, the French research institute CCETT, and the German standards organization Institute for Broadcast Technology, the Musicam technique was chosen due to its simplicity and error robustness, as well as for its high level of computational efficiency. The Musicam format, based on sub-band coding, became the basis for the MPEG Audio compression format, incorporating, for example, its frame structure, header format, sample rates, etc.
While much of Musicam's technology and ideas were incorporated into the definition of MPEG Audio Layer I and Layer II, only the filter bank alone would remain in the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. Under the chairmanship of Professor Musmann of the University of Hanover, the editing of the standard was delegated to Dutchman Leon van de Kerkhof and to German Gerhard Stoll, who worked on Layer I and Layer II respectively.
ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society and CNET. It provided the highest coding efficiency.
A working group consisting of van de Kerkhof, Stoll, Italian Leonardo Chiariglione (CSELT VP for Media), Frenchman Yves-François Dehery, German Karlheinz Brandenburg, and American James D. Johnston (United States) took ideas from ASPEC, integrated the filter bank from Layer II, added some of their own ideas and created the MP3 format, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s.
The algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3), published in 1993. Files or data streams conforming to this standard must handle sample rates of 48 kHz, 44.1 kHz and 32 kHz, and these continue to be supported by current MP3 players and decoders. Thus the first generation of MP3 defined 14 × 3 = 42 interpretations of MP3 frame data structures and size layouts.
Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backwards compatible MPEG-2 Audio or MPEG-2 Audio BC), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined 42 additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half that of those originally defined in MPEG-1 Audio. This reduction in sampling rate serves to cut the available frequency fidelity in half while likewise cutting the bitrate by 50%.
MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. An MP3 coded with the MPEG-2 sample rates reproduces only half the audio bandwidth of MPEG-1, which is still appropriate for material such as piano and singing.
A third generation of "MP3" style data streams (files) extended the MPEG-2 ideas and implementation but was named MPEG-2.5 audio, since MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered patent holders of MP3 by reducing the frame sync field in the MP3 header from 12 to 11 bits. As in the transition from MPEG-1 to MPEG-2, MPEG-2.5 adds additional sampling rates exactly half of those available using MPEG-2. It thus widens the scope of MP3 to include human speech and other applications yet requires only 25% of the bandwidth (frequency reproduction) possible using MPEG-1 sampling rates. While not an ISO recognized standard, MPEG-2.5 is widely supported by both inexpensive Chinese and brand name digital audio players as well as computer software based MP3 encoders (LAME), decoders (FFmpeg) and players (MPC) adding 3*8=24 additional MP3 frame types. Each generation of MP3 thus supports 3 sampling rates exactly half that of the previous generation for a total of 9 varieties of MP3 format files. The sample rate comparison table between MPEG-1, 2 and 2.5 is given later in the article. MPEG 2.5 is supported by both LAME (since 2000), Media Player Classic (MPC), iTunes, and FFmpeg.
MPEG-2.5 was not developed by MPEG (see above) and was never approved as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. It is nonetheless ubiquitous and especially advantageous for low-bit rate human speech applications.
MPEG Audio Layer III versions
Version | International Standard | First edition public release date | Latest edition public release date
MPEG-1 Audio Layer III | ISO/IEC 11172-3 (MPEG-1 Part 3) | 1993 |
MPEG-2 Audio Layer III | ISO/IEC 13818-3 (MPEG-2 Part 3) | 1995 | 1998
MPEG-2.5 Audio Layer III | nonstandard, proprietary | 2000 | 2008
The ISO standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio) defined three formats: MPEG-1 Audio Layer I, Layer II and Layer III. The ISO standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Audio) defined an extended version of MPEG-1 Audio: MPEG-2 Audio Layer I, Layer II and Layer III. MPEG-2 Audio (MPEG-2 Part 3) should not be confused with MPEG-2 AAC (MPEG-2 Part 7 – ISO/IEC 13818-7).
Compression efficiency of encoders is typically defined by the bit rate, because compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the Compact Disc (CD) parameters as references (44.1 kHz, 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with use of the term compression ratio for lossy encoders.
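A minimal sketch of the point above, assuming a typical 128 kbit/s stream: the same encoded bit rate yields different "compression ratios" depending on whether the CD or the DAT SP parameters are taken as the uncompressed reference.

# Sketch: the same MP3 bit rate gives different "compression ratios"
# depending on which uncompressed reference is chosen.
def source_bitrate(sample_rate_hz, channels=2, bits_per_sample=16):
    return sample_rate_hz * channels * bits_per_sample   # bits per second

mp3_bitrate = 128_000                  # a typical 128 kbit/s MP3 stream
cd_ref = source_bitrate(44_100)        # 1,411,200 bit/s (CD parameters)
dat_ref = source_bitrate(48_000)       # 1,536,000 bit/s (DAT SP parameters)
print(cd_ref / mp3_bitrate)            # about 11:1 against the CD reference
print(dat_ref / mp3_bitrate)           # exactly 12:1 against the DAT reference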
Karlheinz Brandenburg used a CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playbacks. Some refer to Suzanne Vega as "The mother of MP3". This particular track has an interesting property in that the two channels are almost, but not completely, the same, leading to a case where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. Some more critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats.
LAME is the most advanced MP3 encoder. LAME includes VBR (variable bit rate) encoding, which uses a quality parameter rather than a bit rate goal. Later versions (2008 and later) support an n.nnn quality goal which automatically selects MPEG-2 or MPEG-2.5 sampling rates as appropriate for human speech recordings, which need only 5,512 Hz of bandwidth resolution.
Going public
A reference simulation software implementation, written in the C language and later known as ISO 11172-5, was developed (in 1991–1996) by the members of the ISO MPEG Audio committee in order to produce bit-compliant MPEG Audio files (Layer 1, Layer 2, Layer 3). It was approved as a committee draft of an ISO/IEC technical report in March 1994 and printed as document CD 11172-5 in April 1994. It was approved as a draft technical report (DTR/DIS) in November 1994, finalized in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. The reference software in C language was later published as a freely available ISO standard. Working in non-real time on a number of operating systems, it was able to demonstrate the first real-time hardware decoding (DSP based) of compressed audio. Some other real-time implementations of MPEG Audio encoders were available for the purpose of digital broadcasting (radio DAB, television DVB) towards consumer receivers and set-top boxes.
On 7 July 1994, the Fraunhofer Society released the first software MP3 encoder, called l3enc. The filename extension .mp3 was chosen by the Fraunhofer team on 14 July 1995 (previously, the files had been named .bit). With the first real-time software MP3 player, WinPlay3 (released 9 September 1995), many people were able to encode and play back MP3 files on their PCs. Because hard drives at the time were relatively small (about 500–1000 MB), lossy compression was essential to store non-instrument-based music (see tracker and MIDI) for playback on a computer.
As sound scholar Jonathan Sterne notes, "An Australian hacker acquired l3enc using a stolen credit card. The hacker then reverse-engineered the software, wrote a new user interface, and redistributed it for free, naming it 'thank you Fraunhofer'." (Sterne, Jonathan (2012). MP3: The Meaning of a Format. Durham: Duke University Press, pp. 201–202.)
Internet distribution
In the second half of the 1990s, MP3 files began to spread on the Internet. The popularity of MP3s began to rise rapidly with the advent of Nullsoft's audio player Winamp, released in 1997. In 1998, the first portable solid-state digital audio player, the MPMan, developed by SaeHan Information Systems of Seoul, South Korea, was released; the Rio PMP300 went on sale later that year, despite legal suppression efforts by the RIAA.
In November 1997, the website mp3.com was offering thousands of MP3s created by independent artists for free. The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped from CDs, which would have previously been nearly impossible. The first large peer-to-peer filesharing network, Napster, was launched in 1999.
The ease of creating and sharing MP3s resulted in widespread copyright infringement. Major record companies argued that this free sharing of music reduced sales, and called it "music piracy". They reacted by pursuing lawsuits against Napster (which was eventually shut down and later sold) and against individual users who engaged in file sharing.
Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks. Some authorized services, such as Beatport, Bleep, Juno Records, eMusic, Zune Marketplace, Walmart.com, Rhapsody, the recording industry approved re-incarnation of Napster, and Amazon.com sell unrestricted music in the MP3 format.
Design
File structure
An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called an elementary stream. Due to the "byte reservoir", frames are not independent items and cannot usually be extracted on arbitrary frame boundaries. The MP3 data blocks contain the (compressed) audio information in terms of frequencies and amplitudes. The MP3 header consists of a sync word, which is used to identify the beginning of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of the header. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames.
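The header layout described above can be illustrated with a short parsing sketch. This is a simplified, hypothetical example: it assumes an MPEG-1 Layer III frame sits at the very start of a file named example.mp3 (both the file name and the absence of a leading ID3v2 tag are assumptions of the sketch, not something the standard requires).

# Sketch: decode the first 32-bit MP3 frame header (MPEG-1 Layer III only).
MPEG1_LAYER3_BITRATES = [None, 32, 40, 48, 56, 64, 80, 96, 112,
                         128, 160, 192, 224, 256, 320, None]   # kbit/s
MPEG1_SAMPLE_RATES = [44100, 48000, 32000, None]               # Hz

def parse_header(first_four_bytes):
    h = int.from_bytes(first_four_bytes, "big")
    if (h >> 21) & 0x7FF != 0x7FF:
        raise ValueError("no frame sync (11 set bits) at start of data")
    version_id = (h >> 19) & 0x3      # 3 = MPEG-1, 2 = MPEG-2, 0 = MPEG-2.5
    layer = (h >> 17) & 0x3           # 1 = Layer III, 2 = Layer II, 3 = Layer I
    bitrate_kbps = MPEG1_LAYER3_BITRATES[(h >> 12) & 0xF]
    sample_rate_hz = MPEG1_SAMPLE_RATES[(h >> 10) & 0x3]
    channel_mode = (h >> 6) & 0x3     # 0 stereo, 1 joint stereo, 2 dual, 3 mono
    return version_id, layer, bitrate_kbps, sample_rate_hz, channel_mode

with open("example.mp3", "rb") as f:   # placeholder file name
    print(parse_header(f.read(4)))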
The data stream can contain an optional checksum, but the checksum only protects the header data, not the audio data.
Joint stereo is done only on a frame-to-frame basis.
Encoding and decoding
The MPEG-1 standard does not include a precise specification for an MP3 encoder, but does provide example psychoacoustic models, rate loop, and the like in the non-normative part of the original standard.
MPEG-2 doubles the number of sampling rates which are supported and MPEG-2.5 adds 3 more.
When this was written, the suggested implementations were quite dated. Implementers of the standard were supposed to devise their own algorithms suitable for removing parts of the information from the audio input. As a result, many different MP3 encoders became available, each producing files of differing quality. Comparisons were widely available, so it was easy for a prospective user of an encoder to research the best choice. Some encoders that were proficient at encoding at higher bit rates (such as LAME) were not necessarily as good at lower bit rates. Over time, LAME evolved on the SourceForge website until it became the de facto CBR MP3 encoder. Later an ABR mode was added. Work progressed on true variable bit rate using a quality goal between 0 and 10. Eventually fractional settings (such as -V 9.600) could generate excellent-quality low-bit-rate voice encoding at only 41 kbit/s using the MPEG-2.5 extensions.
During encoding, 576 time-domain samples are taken and are transformed to 576 frequency-domain samples. If there is a transient, 192 samples are taken instead of 576. This is done to limit the temporal spread of quantization noise accompanying the transient. (See psychoacoustics.)
Frequency resolution is limited by the small long block window size, which decreases coding efficiency.
Time resolution can be too low for highly transient signals and may cause smearing of percussive sounds.
Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution. Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby decreasing coding efficiency.
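The long/short block trade-off discussed above (576 versus 192 frequency lines) can be sketched with a plain MDCT. This is only an illustration under simplifying assumptions: MP3's real filter bank is a 32-band polyphase filter followed by windowed per-subband MDCTs, not the single un-windowed transform shown here.

import numpy as np

# Sketch: a generic MDCT mapping 2N time-domain samples to N frequency lines.
# Longer blocks give finer frequency resolution; shorter blocks confine
# quantization noise in time around transients.
def mdct(x):
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    return (x * np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))).sum(axis=1)

signal = np.random.randn(1152)      # a 1152-sample analysis window, as an example
long_lines = mdct(signal)           # 576 frequency lines, coarse time resolution
short_lines = mdct(signal[:384])    # 192 frequency lines, better time resolution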
Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time this concern has become less of an issue as CPU speeds transitioned from MHz to GHz.
Encoder / decoder overall delay is not defined, which means there is no official provision for gapless playback. However, some encoders such as LAME can attach additional metadata that will allow players that can handle it to deliver seamless playback.
Quality
When performing lossy audio encoding, such as creating an MP3 data stream, there is a trade-off between the amount of data generated and the sound quality of the results. The person generating an MP3 selects a bit rate, which specifies how many kilobits per second of audio are desired. The higher the bit rate, the larger the MP3 data stream will be, and, generally, the closer it will sound to the original recording.
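As a simple worked example of this trade-off, assuming constant-bit-rate encoding and ignoring frame header and tag overhead:

# Sketch: estimated size of a CBR MP3, ignoring frame headers and tags.
def mp3_size_bytes(bitrate_kbps, duration_seconds):
    return bitrate_kbps * 1000 * duration_seconds / 8

four_minutes = 240
print(mp3_size_bytes(128, four_minutes) / 1e6)   # ~3.8 MB at 128 kbit/s
print(mp3_size_bytes(320, four_minutes) / 1e6)   # ~9.6 MB at 320 kbit/s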
With too low a bit rate, compression artifacts (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause compressed with a relatively low bit rate provides a good example of compression artifacts.
Besides the bit rate of an encoded piece of audio, the quality of MP3 encoded sound also depends on the quality of the encoder algorithm as well as the complexity of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders do feature quite different quality, even with identical bit rates. As an example, in a public listening test featuring two early MP3 encoders set at about 128 kbit/s, one scored 3.66 on a 1–5 scale, while the other scored only 2.22.
Quality is dependent on the choice of encoder and encoding parameters.
This observation caused a revolution in audio encoding. Early on, bit rate was the prime and only consideration. At the time, MP3 files were of the very simplest type: they used the same bit rate for the entire file, a process known as Constant Bit Rate (CBR) encoding. Using a constant bit rate makes encoding simpler and less CPU intensive. However, it is also possible to create files where the bit rate changes throughout the file; these are known as Variable Bit Rate (VBR) files. The bit reservoir and VBR encoding were actually part of the original MPEG-1 standard. The concept behind them is that, in any piece of audio, some sections are easier to compress, such as silence or music containing only a few tones, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some advanced MP3 encoders, it is possible to specify a given quality, and the encoder will adjust the bit rate accordingly. Users who desire a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music and, generally speaking, need not worry about performing personal listening tests on each piece of music to determine the correct bit rate.
Perceived quality can be influenced by the listening environment (ambient noise), listener attention, listener training and, in most cases, the listener's audio equipment (such as sound cards, speakers and headphones). Furthermore, for lectures and other human-speech applications a lower quality setting may still be sufficient, and it reduces encoding time and complexity.
A test given to new students by Stanford University Music Professor Jonathan Berger showed that student preference for MP3-quality music has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music.
In an in-depth study of MP3 audio quality, sound artist and composer Ryan Maguire's project "The Ghost in the MP3" isolates the sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", the track originally used in the formulation of the MP3 standard. A detailed account of the techniques used to isolate the sounds deleted during MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the International Computer Music Conference.
Bit rate
MPEG Audio Layer III available bit rates (kbit/s)
MPEG-1 Audio Layer III | MPEG-2 Audio Layer III | MPEG-2.5 Audio Layer III
- | 8 | 8
- | 16 | 16
- | 24 | 24
32 | 32 | 32
40 | 40 | 40
48 | 48 | 48
56 | 56 | 56
64 | 64 | 64
80 | 80 | -
96 | 96 | -
112 | 112 | -
128 | 128 | -
- | 144 | -
160 | 160 | -
192 | - | -
224 | - | -
256 | - | -
320 | - | -
Supported sampling rates by MPEG Audio format
MPEG-1 Audio Layer III | MPEG-2 Audio Layer III | MPEG-2.5 Audio Layer III
- | - | 8000 Hz
- | - | 11025 Hz
- | - | 12000 Hz
- | 16000 Hz | -
- | 22050 Hz | -
- | 24000 Hz | -
32000 Hz | - | -
44100 Hz | - | -
48000 Hz | - | -
Bitrate is the product of the sample rate, the number of bits per sample and the number of audio channels. CD audio is 44,100 samples per second, in stereo, with 16 bits per channel. So, multiplying 44,100 by 32 gives 1,411,200 bit/s, the bitrate of uncompressed CD digital audio. MP3 was designed to encode this 1,411 kbit/s data at 320 kbit/s or less. When less complex passages are detected by MP3 algorithms, lower bit rates may be employed. The MPEG-2 extension reduces the bit rate further by halving the number of samples per second, which cuts off half of the upper frequency spectrum of MPEG-1.
As shown in these two tables, 14 selected bit rates are allowed in the MPEG-1 Audio Layer III standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s, along with the 3 highest available sampling frequencies of 32, 44.1 and 48 kHz. MPEG-2 Audio Layer III also allows 14 somewhat different (and mostly lower) bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s, with sampling frequencies of 16, 22.05 and 24 kHz, which are exactly half those of MPEG-1. MPEG-2.5 Audio Layer III frames are limited to only 8 bit rates of 8, 16, 24, 32, 40, 48, 56 and 64 kbit/s, with 3 even lower sampling frequencies of 8, 11.025 and 12 kHz.
MPEG-1 frames contain the most detail in 320 kbit/s mode, with silence and simple tones still requiring 32 kbit/s. MPEG-2 frames can capture audio content up to 12 kHz and need at most 160 kbit/s. MP3 files made with MPEG-2 don't have 20 kHz bandwidth because of the Nyquist–Shannon sampling theorem. Frequency reproduction is always strictly less than half of the sampling frequency, and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling rate limits the maximum frequency to 4 kHz, while a 48 kHz sampling rate limits an MP3 to a maximum of 24 kHz sound reproduction. MPEG-2 uses half and MPEG-2.5 only a quarter of the MPEG-1 sample rates.
For the general field of human speech reproduction, a bandwidth of 5,512 Hz is sufficient to produce excellent results (for voice), using a sampling rate of 11,025 Hz and VBR encoding from standard 44,100 Hz WAV files. This is easily accomplished using LAME version 3.99.5 and the command line "lame -V 9.6 lecture.WAV". English speakers average 41–42 kbit/s with the -V 9.6 setting, but this may vary with the amount of silence recorded or the rate of delivery (wpm). Resampling to 12,000 Hz (6 kHz bandwidth) is selected by the LAME parameter -V 9.4; likewise, -V 9.2 selects a 16,000 Hz sample rate and the resulting 8 kHz lowpass filtering. For more information, see the Nyquist–Shannon sampling theorem. Older versions of LAME and FFmpeg only support integer arguments for the variable bit rate quality selection parameter. The n.nnn quality parameter (-V) is documented at lame.sourceforge.net, but is only supported in LAME with the new-style VBR variable bit rate quality selector, not average bit rate (ABR).
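The command line quoted above can also be driven from a script. This sketch assumes a LAME 3.99 or later binary is available on the PATH and uses placeholder file names; it simply wraps the same -V 9.6 speech setting described in the text.

import subprocess

# Sketch: invoke the LAME command line quoted above from Python.
def encode_speech(wav_path, mp3_path, vbr_quality="9.6"):
    subprocess.run(["lame", "-V", vbr_quality, wav_path, mp3_path], check=True)

encode_speech("lecture.WAV", "lecture.mp3")   # placeholder file names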
A sample rate of 44.1 kHz is commonly used for music reproduction, because this is also used for CD audio, the main source used for creating MP3 files. A great variety of bit rates are used on the Internet. A bit rate of 128 kbit/s is commonly used, at a compression ratio of 11:1, offering adequate audio quality in a relatively small space. As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to 320 kbit/s are widespread.
Uncompressed audio as stored on an audio-CD has a bit rate of 1,411.2 kbit/s, (16 bit/sample × 44100 samples/second × 2 channels / 1000 bits/kilobit), so the bitrates 128, 160 and 192 kbit/s represent compression ratios of approximately 11:1, 9:1 and 7:1 respectively.
Non-standard bit rates up to 640 kbit/s can be achieved with the LAME encoder and the freeformat option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to 320 kbit/s.
Early MPEG Layer III encoders used what is now called Constant Bit Rate (CBR). The software was only able to use a uniform bitrate on all frames in an MP3 file. Later, more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate, selecting the encoding rate for each frame based on the complexity of the sound in that portion of the recording.
A more sophisticated MP3 encoder can produce variable bitrate audio. MPEG audio may use bitrate switching on a per-frame basis, but only layer III decoders must support it. VBR is used when the goal is to achieve a fixed level of quality. The final file size of a VBR encoding is less predictable than with constant bitrate. Average bitrate is a type of VBR implemented as a compromise between the two: the bitrate is allowed to vary for more consistent quality, but is controlled to remain near an average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread. The most evolved LAME MP3 encoder supports the generation of VBR, ABR, and even the ancient CBR MP3 formats.
Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's audio data, allowing temporary changes in effective bitrate, even in a constant bitrate stream. Internal handling of the bit reservoir increases encoding delay.
There is no scale factor band 21 (sfb21) for frequencies above approx 16 kHz, forcing the encoder to choose between less accurate representation in band 21 or less efficient storage in all bands below band 21, the latter resulting in wasted bitrate in VBR encoding.
Ancillary Data
The ancillary data field can be used to store user-defined data. The ancillary data is optional and the number of bits available is not explicitly given. The ancillary data is located after the Huffman code bits and extends to where the next frame's main_data_begin points. mp3PRO uses the ancillary data field to carry the extra bits that improve audio quality.
Metadata
A "tag" in an audio file is a section of the file that contains metadata such as the title, artist, album, track number or other information about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container format that would support metadata and obviate the need for tags.
However, several de facto standards for tag formats exist. As of 2010, the most widespread are ID3v1 and ID3v2, and the more recently introduced APEv2. These tags are normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either extract information from the tags, or just treat them as ignorable, non-MP3 junk data.
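For illustration, these tags can be read and edited programmatically. The sketch below uses the third-party Python library mutagen and a placeholder file name; neither is implied by the MP3 or ID3 specifications themselves.

from mutagen.easyid3 import EasyID3   # third-party: pip install mutagen
from mutagen.mp3 import MP3

# Sketch: read stream information and ID3 tags from an MP3 file.
audio = MP3("song.mp3")                    # placeholder file name
print(audio.info.bitrate, audio.info.sample_rate, audio.info.length)

tags = EasyID3("song.mp3")
print(tags.get("title"), tags.get("artist"), tags.get("album"))
tags["album"] = "Example Album"            # values are stored as lists of strings
tags.save()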
Playing & editing software often contains tag editing functionality, but there are also tag editor applications dedicated to the purpose.
Aside from metadata pertaining to the audio content, tags may also be used for DRM.
ReplayGain is a standard for measuring and storing the loudness of an MP3 file (audio normalization) in its metadata tag, enabling a ReplayGain-compliant player to automatically adjust the overall playback volume for each file. MP3Gain may be used to reversibly modify files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability.
Licensing, ownership and legislation
The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there. In the United States, the technology will be substantially patent-free on 31 December 2017 (see below). The majority of MP3 patents expired in the US between 2007 and 2015.
In the past, many organizations have claimed ownership of patents related to MP3 decoding or encoding. These claims led to a number of legal threats and actions from a variety of sources. As a result, uncertainty about which patents must be licensed in order to create MP3 products without committing patent infringement in countries that allow software patents was a common feature of the early stages of adoption of the technology.
The initial near-complete MPEG-1 standard (parts 1, 2 and 3) was publicly available on 6 December 1991 as ISO CD 11172. In most countries, patents cannot be filed after prior art has been made public, and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after the publication of ISO CD 11172.
An exception is the United States, where patents filed prior to 8 June 1995 expire 17 years after the publication date of the patent, but application extensions make it possible for a patent to issue much later than normally expected (see submarine patents). The various MP3-related patents expire on dates ranging from 2007 to 2017 in the United States. Patents filed for anything disclosed in ISO CD 11172 a year or more after its publication are questionable. If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been patent-free in the US since 22 September 2015, when a patent that had a PCT filing in October 1992 expired. If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology will be patent-free in the United States on 30 December 2017, when a patent held by the Fraunhofer-Gesellschaft and administered by Technicolor expires.
Technicolor (formerly called Thomson Consumer Electronics) claims to control MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada and EU countries. Technicolor has been actively enforcing these patents.
MP3 license revenues generated about €100 million for the Fraunhofer Society in 2005.
In September 1998, the Fraunhofer Institute sent a letter to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To make, sell and/or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain a license under these patents from us."
Sisvel S.p.A. and its United States subsidiary Audio MPEG, Inc. previously sued Thomson for patent infringement on MP3 technology, but those disputes were resolved in November 2005 with Sisvel granting Thomson a license to their patents. Motorola followed soon after, and signed with Sisvel to license MP3-related patents in December 2005. Except for three patents, the US patents administered by Sisvel (listed at http://217.27.95.141/media/files/US%20MPEG%20Audio%20Patents%281%29.pdf) had all expired by 2015; of the three exceptions, two expire in February 2017 and one on 9 April 2017.
In September 2006, German officials seized MP3 players from SanDisk's booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, but that reversal was in turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the words of one commentator.
In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in eastern Texas federal court, claiming infringement of a portable MP3 player patent that Texas MP3 said it had been assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009.
Alcatel-Lucent has asserted several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23 February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them. The court subsequently tossed the award, however, finding that one patent had not been infringed and that the other was not even owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft, the judge ruled. That defense judgment was upheld on appeal in 2008. See Alcatel-Lucent v. Microsoft for more information.
Alternative technologies
Other lossy formats exist. Among these, mp3PRO, AAC, and MP2 are all members of the same technological family as MP3 and depend on roughly similar psychoacoustic models. The Fraunhofer Gesellschaft owns many of the basic patents underlying these formats as well, with others held by Alcatel-Lucent, and Thomson Consumer Electronics.
There are also open compression formats like Opus and Vorbis that are available free of charge and without any known patent restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro and Vorbis, are free of some limitations inherent to the MP3 format that cannot be overcome by any MP3 encoder.
Besides lossy compression methods, lossless formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless and many others.
See also
Comparison of audio coding formats
MP3 blog
MP3 Surround
MP3HD
MPEG-4 Part 14
Podcast
Portable media player
References
Further reading
External links
MP3-history.com, The Story of MP3: How MP3 was invented, by Fraunhofer IIS
MPEG.chiariglione.org, MPEG Official Web site
HydrogenAudio Wiki, MP3
RFC 3119, A More Loss-Tolerant RTP Payload Format for MP3 Audio
RFC 3003, The audio/mpeg Media Type
Category:Articles with inconsistent citation formats
Category:1993 introductions
Category:Audio codecs
Category:Data compression
Category:Digital audio
Category:MPEG
Category:Open standards covered by patents
Category:Technicolor SA | 19,673 | 2017-01 |
England national football team | The England national football team represents England in international football and is controlled by The Football Association, the governing body for football in England.
England are one of the two oldest national teams in football, alongside Scotland, whom they played in the world's first international football match in 1872. England's home ground is Wembley Stadium, London, and the current manager is Gareth Southgate. Although part of the United Kingdom, England has always had a representative side that plays in major professional tournaments, though not in the Olympic Games, as the IOC has always recognised United Kingdom representative sides.
England contest the FIFA World Cup and the UEFA European Championship, which alternate biennially. England have contested the World Cup seventeen times over the past sixty-four years, winning the 1966 World Cup, when they hosted the finals, and reaching the semi-finals in 1990. England have never won the UEFA European Championship; after fifteen attempts over fifty-six years, their best performances were semi-final appearances at the 1968 and 1996 Championships, the latter of which they hosted.
History
[Image: The England team before playing a match against Scotland at Richmond in 1893.]
The England national football team is the joint-oldest in the world; it was formed at the same time as Scotland. A representative match between England and Scotland was played on 5 March 1870, having been organised by the Football Association. A return fixture was organised by representatives of Scottish football teams on 30 November 1872.
This match, played at Hamilton Crescent in Scotland, is viewed as the first official international football match, because the two teams were independently selected and operated, rather than being the work of a single football association. Over the next forty years, England played exclusively against the other three Home Nations—Scotland, Wales and Ireland—in the British Home Championship.
To begin with, England had no permanent home stadium. They joined FIFA in 1906 and played their first ever games against countries other than the Home Nations on a tour of Central Europe in 1908. Wembley Stadium was opened in 1923 and became their home ground. The relationship between England and FIFA became strained, and this resulted in their departure from FIFA in 1928, before they rejoined in 1946. As a result, they did not compete in a World Cup until 1950, in which they were beaten 1–0 by the United States, failing to get past the first round in one of the most embarrassing defeats in the team's history.
Their first ever defeat on home soil to a foreign team was a 0–2 loss to the Republic of Ireland, on 21 September 1949 at Goodison Park. A 6–3 loss to Hungary in 1953 was their second defeat by a foreign team at Wembley. In the return match in Budapest, Hungary won 7–1. This still stands as England's worst ever defeat. After the game, a bewildered Syd Owen said, "it was like playing men from outer space". In the 1954 FIFA World Cup, England reached the quarter-finals for the first time, and lost 4–2 to reigning champions Uruguay.
[Image: Queen Elizabeth II presenting England captain Bobby Moore with the Jules Rimet trophy following England's 4–2 victory over West Germany in the 1966 World Cup final.]
Although Walter Winterbottom was appointed as England's first ever full-time manager in 1946, the team was still picked by a committee until Alf Ramsey took over in 1963. The 1966 FIFA World Cup was hosted in England and Ramsey guided England to victory with a 4–2 win against West Germany after extra time in the final, during which Geoff Hurst famously scored a hat-trick. In UEFA Euro 1968, the team reached the semi-finals for the first time, being eliminated by Yugoslavia.
England qualified for the 1970 FIFA World Cup in Mexico as reigning champions, and reached the quarter-finals, where they were knocked out by West Germany. England had been 2–0 up, but were eventually beaten 3–2 after extra time. They then failed to qualify for the 1974 FIFA World Cup, leading to Ramsey's dismissal, and for the 1978 FIFA World Cup. Under Ron Greenwood, they managed to qualify for the 1982 FIFA World Cup in Spain (the first time competitively since 1962); despite not losing a game, they were eliminated in the second group stage.
The team under Bobby Robson fared better, as England reached the quarter-finals of the 1986 FIFA World Cup, losing 2–1 to Argentina in a game made famous by two very different Maradona goals, before losing every match in UEFA Euro 1988. They went on to achieve their second-best result at the 1990 FIFA World Cup by finishing fourth, losing again to West Germany in a semi-final that finished 1–1 after extra time, then 3–4 in England's first penalty shoot-out.
Despite losing to Italy in the third place play-off, the members of the England team were given bronze medals identical to the Italians’. The England team of 1990 were welcomed home as heroes and thousands of people lined the streets, for a spectacular open-top bus parade. However, the team did not win any matches in UEFA Euro 1992, drawing with tournament winners Denmark, and later with France, before being eliminated by host nation Sweden.
The 1990s saw four England managers, each in the role for a relatively brief period. Graham Taylor was Robson's successor, but resigned after England failed to qualify for the 1994 FIFA World Cup. At UEFA Euro 1996, held in England, Terry Venables led England, equalling their best performance at a European Championship, reaching the semi-finals as they did in 1968.
He resigned following investigations into his financial activities ("Venables is also the only England manager ever to resign from his post because of the muddy personal details set to be showcased in a high-profile trial related to financial irregularities."). His successor, Glenn Hoddle, similarly left the job for non-footballing reasons after just one international tournament, the 1998 FIFA World Cup, in which England were eliminated in the second round, again by Argentina and again on penalties (after a 2–2 draw). Following Hoddle's departure, Kevin Keegan took England to UEFA Euro 2000, but performances were disappointing and he resigned shortly afterwards.
[Image: The England team during the 2006 FIFA World Cup.]
Sven-Göran Eriksson took charge of the team between 2001 and 2006, and was the first non–English manager of England. Despite controversial press coverage of his personal life, Eriksson was consistently popular with the majority of fans. He guided England to the quarter-finals of the 2002 FIFA World Cup, UEFA Euro 2004, and the 2006 FIFA World Cup. He lost only five competitive matches during his tenure, and England rose to a No.4 world ranking under his guidance. His contract was extended by the Football Association by two years, to include UEFA Euro 2008. However, it was terminated by them at the 2006 FIFA World Cup's conclusion.
Steve McClaren was then appointed as head coach, and was sacked unanimously by The Football Association on 22 November 2007, after failing to get the team to Euro 2008. The following month, he was replaced by a second foreign manager, Italian Fabio Capello, whose experience included stints at Juventus and Real Madrid.
England won all but one of their qualifying games for the 2010 FIFA World Cup, but at the tournament itself, England drew their opening two games; this led to questions about the team's spirit, tactics and ability to handle pressure. They progressed to the next round, where they were beaten 4–1 by Germany, their heaviest defeat in a World Cup finals tournament match.
In February 2012, Capello resigned from his role as England manager, following a disagreement with the FA over their request to remove John Terry from team captaincy after accusations of racial abuse concerning the player. Following this, there was media speculation that Harry Redknapp would take the job. However, on 1 May 2012, Roy Hodgson was announced as the new manager, just six weeks before UEFA Euro 2012. England managed to finish top of their group, winning two and drawing one of their fixtures, but exited the Championships in the quarter-finals via a penalty shoot-out, this time to Italy.
In the 2014 FIFA World Cup, England were eliminated at the group stage for the first time since the 1958 World Cup, and for the first time at a major tournament since Euro 2000. England's points total of one from three matches was their worst ever in the World Cup, obtained from a draw against Costa Rica in their last match. England qualified for UEFA Euro 2016 with 10 wins from 10 qualifying matches, but were eliminated in the Round of 16, losing 2–1 to Iceland, their first exit at that stage since the 2010 World Cup. Hodgson resigned as manager immediately, and just under a month later was replaced by Sam Allardyce. After only 67 days, Allardyce resigned from his managerial post by mutual agreement, after an alleged breach of FA rules, making him the shortest-serving permanent England manager.
Team image
Media coverage
All England matches are broadcast with full commentary on BBC Radio 5 Live. From the 2008–09 season until the 2017–18 season, England's home and away qualifiers, and friendlies both home and away are broadcast live on ITV (often with the exception of STV, the ITV affiliate in central and northern Scotland). England's away qualifiers for the 2010 World Cup were shown on Setanta Sports until that company's collapse. As a result of Setanta Sports's demise, England's World Cup qualifier in Ukraine on 10 October 2009 was shown in the United Kingdom on a pay-per-view basis via the internet only. This one-off event was the first time an England game had been screened in such a way. The number of subscribers, paying between £4.99 and £11.99 each, was estimated at between 250,000 and 300,000 and the total number of viewers at around 500,000.
Colours
[Image: England shirt worn during the 1966 World Cup final.]
England's traditional home colours are white shirts, navy blue shorts and white or black socks. The team has periodically worn an all-white kit.
Although England's first away kits were blue, England's traditional away colours are red shirts, white shorts and red socks. In 1996, England's away kit was changed to grey shirts, shorts and socks. This kit was only worn three times, including against Germany in the semi-final of Euro 96, but the deviation from the traditional red was unpopular with supporters and the England away kit remained red until 2011, when a navy blue away kit was introduced. The away kit is also sometimes worn during home matches, when a new edition has been released to promote it.
England have occasionally had a third kit. At the 1970 World Cup England wore a third kit with pale blue shirts, shorts and socks against Czechoslovakia. They had a kit similar to Brazil's, with yellow shirts, yellow socks and blue shorts which they wore in the summer of 1973. For the World Cup in 1986 England had a third kit of pale blue, imitating that worn in Mexico sixteen years before and England retained pale blue third kits until 1992, but they were rarely used.
Umbro first agreed to manufacture the kit in 1954 and since then has supplied most of the kits, the exceptions being from 1959–1965 with Bukta and 1974–1984 with Admiral. Nike purchased Umbro in 2008 and took over as kit supplier in 2013 following their sale of the Umbro brand.
Kit evolution
(Kit gallery: illustrations of the England kits worn at each major tournament from the 1950 World Cup to Euro 2016, with captions listing the opponents against whom each kit was worn.)
Kit manufacturer
Manufacturer | Period
Umbro | 1954–1961
Bukta | 1959–1965
Umbro | 1965–1974
Admiral | 1974–1984
Umbro | 1984–2013
Nike | 2013–
Logo
The motif of the England national football team has three lions passant guardant, the emblem of King Richard I, who reigned from 1189 to 1199. The lions, often blue, have had minor changes to colour and appearance. Initially topped by a crown, this was removed in 1949 when the FA was given an official coat of arms by the College of Arms; this introduced ten Tudor roses, one for each of the regional branches of the FA. Since 2003, England top their logo with a star to recognise their World Cup win in 1966; this was first embroidered onto the left sleeve of the home kit, and a year later was moved to its current position, first on the away shirt.
Home stadium
[Image: Wembley Stadium during a friendly match between England and Germany.]
For the first fifty years of their existence, England played their home matches all around the country. They initially used cricket grounds before later moving on to football clubs' stadiums. The original Empire Stadium was built in Wembley, London, for the British Empire Exhibition.
England played their first match at the stadium in 1924 against Scotland, and for the next 27 years Wembley was used as a venue for matches against Scotland only. The stadium later became known simply as Wembley Stadium and it became England's permanent home stadium during the 1950s. In October 2000, the stadium closed its doors, with England's final match there ending in a 1–0 defeat to Germany.
This stadium was demolished during the period of 2002–2003, and work began to completely rebuild it. During this time, England played at a number of different venues across the country, though by the time of the 2006 FIFA World Cup qualification, this had largely settled down to having Manchester United's Old Trafford stadium as the primary venue, with Newcastle United's St. James' Park used on occasions when Old Trafford was unavailable.
They returned to the new Wembley Stadium in March 2007. The stadium is now owned by the Football Association, via its subsidiary Wembley National Stadium Limited.
Coaching staff
Manager | Gareth Southgate
Assistant Manager | Steve Holland
First Team Coach | vacant
Goalkeeping Coach | Martyn Margetson
First-Team Doctor | Ian Beasley
Fitness Coach | Chris Neville
Masseur | Mark Sertori
Physiotherapist | Gary Lewin
Players
For all past and present players who have appeared for the national team, see List of England international footballers (alphabetical)
Current squad
The following players have been called up for the 2018 World Cup qualifier against Scotland on 11 November 2016 and the friendly match against Spain on 15 November 2016.
Caps and goals updated as of 15 November 2016 after the match against Spain.
Recent call ups
The following players have also been called up to the England squad within the last twelve months.
Notes:
RET Retired from the national team
Results and fixtures
2016
2017
Records
Most capped players
Updated 11 November 2016.
Players in bold are still active at club level.
[Image: Goalkeeper Peter Shilton is the most capped player in the history of England, with 125 caps.]
Players with an equal number of caps are ranked in chronological order of reaching the milestone.
# | Name | Career | Caps | Goals | Position
1 | Peter Shilton | 1970–1990 | 125 | 0 | GK
2 | Wayne Rooney | 2003– | 119 | 53 | FW
3 | David Beckham | 1996–2009 | 115 | 17 | MF
4 | Steven Gerrard | 2000–2014 | 114 | 21 | MF
5 | Bobby Moore | 1962–1973 | 108 | 2 | DF
6 | Ashley Cole | 2001–2014 | 107 | 0 | DF
7 | Bobby Charlton | 1958–1970 | 106 | 49 | MF
7 | Frank Lampard | 1999–2014 | 106 | 29 | MF
9 | Billy Wright | 1946–1959 | 105 | 3 | DF
10 | Bryan Robson | 1980–1991 | 90 | 26 | MF
11 | Michael Owen | 1998–2008 | 89 | 40 | FW
12 | Kenny Sansom | 1979–1988 | 86 | 1 | DF
13 | Gary Neville | 1995–2007 | 85 | 0 | DF
14 | Ray Wilkins | 1976–1986 | 84 | 3 | MF
15 | Rio Ferdinand | 1997–2011 | 81 | 3 | DF
16 | Gary Lineker | 1984–1992 | 80 | 48 | FW
17 | John Barnes | 1983–1995 | 79 | 11 | MF
18 | Stuart Pearce | 1987–1999 | 78 | 5 | DF
18 | John Terry | 2003–2012 | 78 | 6 | DF
20 | Terry Butcher | 1980–1990 | 77 | 3 | DF
Top goalscorers
Goalscorers with an equal number of goals are ranked with the highest to lowest goals per game ratio.
[Image: Wayne Rooney is the top goalscorer in the history of England, with 53 goals.]
Updated 11 November 2016.
Players in bold are still active at club level.
# | Name | Career | Goals | Caps | Position | Average
1 | Wayne Rooney (list) | 2003– | 53 | 119 | FW | 0.4454
2 | Bobby Charlton (list) | 1958–1970 | 49 | 106 | MF | 0.4622
3 | Gary Lineker | 1984–1992 | 48 | 80 | FW | 0.6000
4 | Jimmy Greaves | 1959–1967 | 44 | 57 | FW | 0.7719
5 | Michael Owen | 1998–2008 | 40 | 89 | FW | 0.4494
6 | Nat Lofthouse | 1950–1958 | 30 | 33 | FW | 0.9090
6 | Alan Shearer | 1992–2000 | 30 | 63 | FW | 0.4762
6 | Tom Finney | 1946–1958 | 30 | 76 | FW | 0.3947
9 | Vivian Woodward | 1903–1911 | 29 | 23 | FW | 1.2609
9 | Frank Lampard | 1999–2014 | 29 | 106 | MF | 0.2735
11 | Steve Bloomer | 1895–1907 | 28 | 23 | FW | 1.2174
12 | David Platt | 1989–1996 | 27 | 62 | MF | 0.4355
13 | Bryan Robson | 1981–1989 | 26 | 90 | MF | 0.2889
14 | Geoff Hurst | 1965–1972 | 24 | 49 | FW | 0.4898
15 | Stan Mortensen | 1947–1953 | 23 | 25 | FW | 0.9200
16 | Tommy Lawton | 1938–1948 | 22 | 23 | FW | 0.9565
16 | Peter Crouch | 2005–2010 | 22 | 42 | FW | 0.5238
18 | Mick Channon | 1972–1977 | 21 | 46 | FW | 0.4565
18 | Kevin Keegan | 1972–1982 | 21 | 63 | FW | 0.3333
18 | Steven Gerrard | 2000–2014 | 21 | 114 | MF | 0.1842
Competitive record
For the all-time record of the national team against opposing nations, see the team's all-time record page
FIFA World Cup
England first appeared at the 1950 FIFA World Cup and have appeared in 14 FIFA World Cup Finals tournaments, tied for sixth best by number of appearances. They are also tied for sixth by number of wins, alongside France and Spain. The national team is one of eight national teams to have won at least one FIFA World Cup title. The England team won their first and only World Cup title in 1966. The tournament was played on home soil and England defeated Germany 4–2 in the final. In 1990, England finished in fourth place, losing 2–1 to host nation Italy in the third place play-off, after losing on penalties to champions Germany in the semi-final. The team has also reached the quarter-final on two recent occasions in 2002 and 2006. Previously, they reached this stage in 1954, 1962, 1970 and 1986.
England failed to qualify for the World Cup in 1974, 1978 and 1994. The team's earliest exit in the competition itself was its elimination in the first round in 1950, 1958 and most recently in the 2014 FIFA World Cup, after being defeated in both their opening two matches for the first time, versus Italy and Uruguay in Group D. In 1950, four teams remained after the first round, in 1958 eight teams remained and in 2014 sixteen teams remained. In 2010, England suffered its most resounding World Cup defeat (4–1 to Germany) in the Round of 16, after drawing with the United States and Algeria and defeating Slovenia 1–0 in the group stage.
Winners Runners-up Third place
Fourth place
FIFA World Cup recordFIFA World Cup qualification recordManager(s)YearRoundPosition * 1930Did not enter–––––– 1934–––––– 1938–––––– 1950Group Stage8th of 133102223300143Winterbottom 1954Quarter-finals7th of 163111883300114 1958Group Stage11th of 164031454310155 1962Quarter-finals8th of 164112564310162 1966Champions1st of 166510113Qualified as hostsRamsey 1970Quarter-finals8th of 16420244Qualified as defending champions Ramsey 1974Did not qualify412134 19786501154Revie 1982Round 26th of 245320618413138Greenwood 1986Quarter-finals8th of 245212738440212Robson 1990Fourth Place4th of 24733(1*)1866330100 1994Did not qualify10532269Taylor 1998Round of 169th of 32421*1748611152Hoddle 2002 Quarter-finals6th of 325221638521166 Keegan, Wilkinson, ErikssonKevin Keegan and Howard Wilkinson managed one qualifying match each: Eriksson managed the remainder of qualification and the finals campaign. 20067th of 32532(1*)06210811175Eriksson 2010Round of 1613th of 3241213510901346Capello 2014Group Stage26th of 3230122410640314Hodgson 2018To Be Determined431060Allardyce, SouthgateSam Allardyce managed one qualifying match: Gareth Southgate is currently caretaker manager for the qualification. 2022TBDTotal1 title14/2062262016795610570241126164
*Draws include knockout matches decided on penalty kicks. Darker color indicates win, normal color indicates lost.
**Gold background colour indicates that the tournament was won.
***Red border colour indicates the tournament was held on home soil.
****England played all of their 2002 matches in Japan.
UEFA European Championship
England have achieved moderate success at the UEFA European Football Championship, having finished in third place in 1968 and reached the semi-final in 1996. England hosted Euro 96 and have qualified for nine UEFA European Championship Finals tournaments, tied for fourth best by number of appearances. The team has also reached the quarter-final on two recent occasions, in 2004 and 2012. The team's worst result in the competition was a first-round elimination in 1980, 1988, 1992, 2000 and UEFA Euro 2016/2016. The team did not enter in 1960, and they failed to qualify in 1964, 1972, 1976, 1984, and 2008.
UEFA European Championship recordUEFA European Championship qualification recordManager(s)YearRoundPosition * 1960Did not enter – – – – – – 1964Did not qualify201136 Winterbottom, RamseyEngland were defeated by France in a two-legged elimination round. Ramsey took over from Winterbottom between the two legs. 1968Third place3rd of 42101218611186 Ramsey 1972Did not qualifyAlthough England did not qualify for the finals, they reached the last eight of the competition. Only the last four teams progressed to the finals.8521166Ramsey 19766321113 Revie 1980Group Stage6th of 83111338710225 Greenwood 1984Did not qualify8521233Robson 1988Group Stage7th of 83003276510191 1992Group Stage7th of 8302112633073 Taylor 1996Semi-Finals3rd of 16523083Qualified as hosts Venables 2000Group Stage11th of 1631025610442165 Hoddle, KeeganHoddle managed the first three qualifiers, while Keegan managed the remainder of qualification and the finals campaign. 2004Quarter-finals5th of 1642111068620145 Eriksson 2008Did not qualify12723247 McClaren 2012Quarter-finals5th of 164220538530175Capello, HodgsonCapello managed the qualification campaign. He resigned before the tournament and was replaced by Hodgson. 2016Round of 1612th of 24412144101000313 Hodgson 2020To Be Determined TBDTotalThird place (x2)9/153110111040359662241020858*Draws include knockout matches decided on penalty kicks.Minor tournaments
Year | Round | Position | GP | W | D* | L | GS | GA
1964 Taça de Nações | Group stage | 3rd | 3 | 0 | 1 | 2 | 2 | 7
1976 USA Bicentennial Cup Tournament | Group stage | 2nd | 3 | 2 | 0 | 1 | 6 | 4
1985 Rous Cup | One match | 2nd | 1 | 0 | 0 | 1 | 0 | 1
1985 Ciudad de México Cup Tournament | Group stage | 3rd | 2 | 0 | 0 | 2 | 1 | 3
1985 Azteca 2000 Tournament | Group stage | 2nd | 2 | 1 | 0 | 1 | 3 | 1
1986 Rous Cup | Winners, one match | 1st | 1 | 1 | 0 | 0 | 2 | 1
1987 Rous Cup | Group stage | 2nd | 2 | 0 | 2 | 0 | 1 | 1
1988 Rous Cup | Winners, group stage | 1st | 2 | 1 | 1 | 0 | 2 | 1
1989 Rous Cup | Winners, group stage | 1st | 2 | 1 | 1 | 0 | 2 | 0
1991 England Challenge Cup | Winners, group stage | 1st | 2 | 1 | 1 | 0 | 5 | 3
1993 U.S. Cup | Group stage | 4th | 3 | 0 | 1 | 2 | 2 | 5
1995 Umbro Cup | Group stage | 2nd | 3 | 1 | 1 | 1 | 6 | 7
1997 Tournoi de France | Winners, group stage | 1st | 3 | 2 | 0 | 1 | 3 | 1
1998 King Hassan II International Cup Tournament | Group stage | 2nd | 2 | 1 | 1 | 0 | 1 | 0
2004 FA Summer Tournament | Winners, group stage | 1st | 2 | 1 | 1 | 0 | 7 | 2
Total | 6 titles | | 33 | 12 | 10 | 11 | 43 | 37
*Draws include knockout matches decided on penalty kicks.
Honours & Achievements
[Image: The England squad (red) which won the 1966 World Cup final against West Germany (white).]
Major:
FIFA World Cup
Winners (1): 1966
Semi finalists (1): 1990
UEFA European Championship
Semi finalists (2): 1968, 1996
Regional:
British Home Championship
Winners (54): (including 20 shared)
Rous Cup:
Winners (3): 1986, 1988, 1989
Minor:
FA Summer Tournament
Winners (1): 2004
Tournoi de France
Winners (1): 1997
England Challenge Cup
Winners (1): 1991
Other:
FIFA Fair Play Trophy:
Winners (2): 1990, 1998
Unofficial:
Unofficial Football World Championships:
Matches as Champion: 88
Reigns as Champion: 21
See also
Great Britain Olympic football team
United Kingdom national football team
References
External links
Official website at the FA's website
englandstats.com – A complete database of England Internationals since 1872
Football
Category:European national association football teams
Category:FIFA World Cup-winning countries
Football
Category:Football teams in England
Category:1872 establishments in England | 9,904 | 2017-01 |
Green | Green is the color between blue and yellow on the spectrum of visible light. It is evoked by light with a predominant wavelength of roughly 495–570 nm. In the subtractive color system, used in painting and color printing, it is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors.
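As a minimal illustration of the additive RGB mixing just mentioned, using the common 8-bit-per-channel convention (an assumption of this sketch rather than anything specified above):

# Sketch: additive mixing of light in the 8-bit RGB model.
GREEN = (0, 255, 0)
RED = (255, 0, 0)
BLUE = (0, 0, 255)

def add_light(a, b):
    # Add light from two sources; each channel is capped at 255.
    return tuple(min(x + y, 255) for x, y in zip(a, b))

print(add_light(RED, GREEN))   # (255, 255, 0) = yellow
print(add_light(GREEN, BLUE))  # (0, 255, 255) = cyan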
The modern English word green comes from the Middle English and Anglo-Saxon word grene, from the same Germanic root as the words "grass" and "grow" (Webster's New World Dictionary of the American Language, World Publishing Company, 1964). It is the color of living grass and leaves ("...in nature chiefly conspicuous as the colour of growing herbage and leaves...", Oxford English Dictionary, 2nd Edition, Clarendon Press, Oxford, 1989; Webster's New World Dictionary of the American Language, World Publishing Company, 1964) and as a result is the color most associated with springtime, growth and nature. By far the largest contributor to green in nature is chlorophyll, the chemical by which plants photosynthesize and convert sunlight into chemical energy. Many creatures have adapted to their green environments by taking on a green hue themselves as camouflage. Several minerals have a green color, including the emerald, which is colored green by its chromium content.
In surveys made in Europe and the United States, green is the color most commonly associated with nature, life, health, youth, spring, hope and envy. In Europe and the U.S. green is sometimes associated with death (green has several seemingly contrary associations), sickness, or the devil, but in China its associations are very positive, as the symbol of fertility and happiness. In the Middle Ages and Renaissance, when the color of clothing showed the owner's social status, green was worn by merchants, bankers and the gentry, while red was the color of the nobility. The Mona Lisa by Leonardo da Vinci wears green, showing she is not from a noble family; the benches in the British House of Commons are green, while those in the House of Lords are red. Green is also the traditional color of safety and permission; a green light means go ahead, a green card permits permanent residence in the United States. It is the most important color in Islam. It was the color of the banner of Muhammad, and is found in the flags of nearly all Islamic countries, and represents the lush vegetation of Paradise. It is also often associated with the culture of Gaelic Ireland, and is a color of the flag of Ireland. Because of its association with nature, it is the color of the environmental movement. Political groups advocating environmental protection and social justice describe themselves as part of the Green movement, some naming themselves Green parties. This has led to similar campaigns in advertising, as companies have sold green, or environmentally friendly, products.
Etymology and linguistic definitions
The word green has the same Germanic root as the words for grass and grow.
The word green comes from the Middle English and Old English word grene, which, like the German word grün, has the same root as the words grass and grow.Webster's New World Dictionary of the American Language, The World Publishing Company, New York, 1964. It is from a Common Germanic *gronja-, which is also reflected in Old Norse grænn, Old High German gruoni (but unattested in East Germanic), ultimately from a PIE root * "to grow", and root-cognate with grass and to grow.
The first recorded use of the word as a color term in Old English dates to ca. AD 700.Maerz and Paul A Dictionary of Color New York:1930 McGraw-Hill Page 196
Latin also has a genuine and widely used term for "green", viridis. Related to virere "to grow" and ver "spring", it gave rise to words in several Romance languages, French vert, Italian verde (and English vert, verdure etc.). Likewise the Slavic languages with zelenъ. Ancient Greek also had a term for yellowish, pale green – χλωρός, chloros (cf. the color of chlorine), cognate with χλοερός "verdant" and χλόη "the green of new growth".
Thus, the languages mentioned above (Germanic, Romance, Slavic, Greek) have old terms for "green" which are derived from words for fresh, sprouting vegetation.
However, comparative linguistics makes clear that these terms were coined independently, over the past few millennia, and there is no identifiable single Proto-Indo-European word for "green". For example, the Slavic zelenъ is cognate with Sanskrit hari "yellow, ochre, golden".
The Turkic languages also have jašɨl "green" or "yellowish green", compared to a Mongolian word for "meadow".
Languages where green and blue are one color
In some languages, including old Chinese, Thai, old Japanese, and Vietnamese, the same word can mean either blue or green.Paul Kay and Luisa Maffi, "Color Appearance and the Emergence and Evolution of Basic Color Lexicons", American Anthropologist, March 1999 The Chinese character 青 (pronounced qīng in Mandarin, ao in Japanese, and thanh in Sino-Vietnamese) has a meaning that covers both blue and green; blue and green are traditionally considered shades of "青". In more contemporary terms, they are 藍 (lán, in Mandarin) and 綠 (lǜ, in Mandarin) respectively. Japanese also has two terms that refer specifically to the color green, 緑 (midori, which is derived from the classical Japanese descriptive verb midoru "to be in leaf, to flourish" in reference to trees) and グリーン (guriin, which is derived from the English word "green"). However, in Japan, although the traffic lights have the same colors that other countries have, the green light is described using the same word as for blue, "aoi", because green is considered a shade of aoi; similarly, green variants of certain fruits and vegetables such as green apples, green shiso (as opposed to red apples and red shiso) will be described with the word "aoi". Vietnamese uses a single word for both blue and green, xanh, with variants such as xanh da trời (azure, lit. "sky blue"), lam (blue), and lục (green; also xanh lá cây, lit. "leaf green").
"Green" in modern European languages corresponds to about 520–570 nm, but many historical and non-European languages make other choices, e.g. using a term for the range of ca. 450–530 nm ("blue/green") and another for ca. 530–590 nm ("green/yellow"). In the comparative study of color terms in the world's languages, green is only found as a separate category in languages with the fully developed range of six colors (white, black, red, green, yellow, and blue), or more rarely in systems with five colors (white, red, yellow, green, and black/blue). (See distinction of green from blue)Newman, Paul and Martha Ratliff. Linguistic Fieldwork. Cambridge: Cambridge University Press, 2001. ISBN 0-521-66937-5. pg. 105. These languages have introduced supplementary vocabulary to denote "green", but these terms are recognizable as recent adoptions that are not in origin color terms (much like the English adjective orange being in origin not a color term but the name of a fruit). Thus, the Thai word เขียว kheīyw, besides meaning "green", also means "rank" and "smelly" and holds other unpleasant associations.
The Celtic languages had a term for "blue/green/grey", Proto-Celtic *glasto-, which gave rise to Old Irish glas "green, grey" and to Welsh glas "blue". This word is cognate with the Ancient Greek γλαυκός "bluish green", contrasting with χλωρός "yellowish green" discussed above.
In modern Japanese, the term for green is 緑, while 青, the old term for "blue/green", now means "blue". But in certain contexts, such as green apples or the green traffic light, green is still conventionally referred to as 青, reflecting the absence of a blue-green distinction in old Japanese (more accurately, the traditional Japanese color terminology grouped some shades of green with blue, and others with yellow tones).
The Persian language traditionally lacks a black/blue/green distinction. The Persian word سبز sabz can mean "green", "black", or "dark". Thus, in Persian erotic poetry, dark-skinned women are addressed as sabz-eh, as in phrases like سبز گندم گون sabz-eh-gandom-gun (literally "dark wheat colored") or سبز مليح sabz-eh-malih ("a dark beauty").F. Steingass, A Comprehensive Persian-English Dictionary s.v. سبز Similarly, in Sudanese Arabic, dark-skinned people are described as أخضر akhḍar, the term which in Standard Arabic stands unambiguously for "green".Carla N. Daughtry, "Greenness in the Field", Michigan Today, University of Michigan, Fall 1997
In nature and culture
In science
Color vision and colorimetry
sRGB rendering of the spectrum of visible light:
Violet: 668–789 THz, 380–450 nm
Blue: 606–668 THz, 450–495 nm
Green: 526–606 THz, 495–570 nm
Yellow: 508–526 THz, 570–590 nm
Orange: 484–508 THz, 590–620 nm
Red: 400–484 THz, 620–750 nm
Green, blue and red are additive colors. All the colors you see on your computer screen are made by mixing them in different intensities.
In optics, the perception of green is evoked by light having a spectrum dominated by energy with a wavelength of roughly 495–570 nm. The sensitivity of the dark-adapted human eye is greatest at about 507 nm, a blue-green color, while the light-adapted eye is most sensitive at about 555 nm, a yellow-green; these are the peak locations of the rod and cone (scotopic and photopic, respectively) luminosity functions.
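These frequency and wavelength figures are related by ν = c/λ. The following short Python sketch (illustrative only, not part of the original article) reproduces the limits of the green band and the two sensitivity peaks quoted above:

```python
# Illustrative sketch: convert a vacuum wavelength in nanometres to a frequency in THz.
C = 299_792_458  # speed of light, m/s

def wavelength_nm_to_frequency_thz(wavelength_nm: float) -> float:
    """Return the frequency in THz corresponding to a vacuum wavelength given in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

if __name__ == "__main__":
    for nm in (570, 495, 507, 555):
        print(f"{nm} nm -> {wavelength_nm_to_frequency_thz(nm):.0f} THz")
    # 570 nm -> ~526 THz and 495 nm -> ~606 THz, matching the green row of the table above.
```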
The perception of greenness (in opposition to redness forming one of the opponent mechanisms in human color vision) is evoked by light which triggers the medium-wavelength M cone cells in the eye more than the long-wavelength L cones. Light which triggers this greenness response more than the yellowness or blueness of the other color opponent mechanism is called green. A green light source typically has a spectral power distribution dominated by energy with a wavelength of roughly 487–570 nm.More specifically, "blue green" 487–493 nm, "bluish green" 493–498 nm, "green" 498–530 nm, "yellowish green" 530–559 nm, "yellow green" 559–570 nm
Human eyes have color receptors known as cone cells, of which there are three types. In some cases, one is missing or faulty, which can cause color blindness, including the common inability to distinguish red and yellow from green, known as deuteranopia or red–green color blindness.The New Encyclopædia Britannica. Chicago: Encyclopædia Britannica, 2002. ISBN 0-85229-787-4 Green is restful to the eye. Studies show that a green environment can reduce fatigue.Laird, Donald A. "Fatigue: Public Enemy Number One: What It Is and How to Fight It." The American Journal of Nursing (Sep 1933) 33.9 pgs. 835–841.
In the subtractive color system, used in painting and color printing, green is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. On the HSV color wheel, also known as the RGB color wheel, the complement of green is magenta; that is, a color corresponding to an equal mixture of red and blue light (one of the purples). On a traditional color wheel, based on subtractive color, the complementary color to green is considered to be red.
In additive color devices such as computer displays and televisions, one of the primary light sources is typically a narrow-spectrum yellowish-green of dominant wavelength ~550 nm; this "green" primary is combined with an orangish-red "red" primary and a purplish-blue "blue" primary to produce any color in between (the RGB color model). A unique green (green appearing neither yellowish nor bluish) is produced on such a device by mixing light from the green primary with some light from the blue primary.
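As a rough illustration of the two preceding paragraphs (a sketch under assumed 8-bit sRGB values, not taken from the article), the green additive primary is (0, 255, 0), its complement on the RGB/HSV wheel is magenta, and mixing in some blue-primary light shifts the hue away from yellowish green; the specific "unique green" value below is purely hypothetical:

```python
# Illustrative sketch using 8-bit sRGB triples; the values are assumptions, not from the article.
from colorsys import rgb_to_hsv

def complement(rgb):
    """Complement on the RGB/HSV wheel: invert each channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

green_primary = (0, 255, 0)
print(complement(green_primary))          # (255, 0, 255): magenta, as stated above

# Mixing in a little blue-primary light pulls the hue from the yellowish-green
# primary toward a green that appears neither yellowish nor bluish.
unique_green_guess = (0, 255, 80)          # hypothetical value, for illustration only
hue, sat, val = rgb_to_hsv(*(c / 255 for c in unique_green_guess))
print(f"hue = {hue * 360:.0f} degrees")    # a little past 120 degrees, i.e. shifted toward blue
```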
Lasers
Lasers emitting in the green part of the spectrum are widely available to the general public in a wide range of output powers. Green laser pointers outputting at 532 nm (563.5 THz) are relatively inexpensive compared to other wavelengths of the same power, and are very popular due to their good beam quality and very high apparent brightness. The most common green lasers use diode pumped solid state (DPSS) technology to create the green light. An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit 281.76 THz (1064 nm). This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is twice that of the incident beam (563.5 THz), in this case corresponding to a wavelength of 532 nm ("green"). Other green wavelengths are also available using DPSS technology, ranging from 501 nm to 543 nm. Green wavelengths are also available from gas lasers, including the helium–neon laser (543 nm), the argon-ion laser (514 nm) and the krypton-ion laser (521 nm and 531 nm), as well as liquid dye lasers. Green lasers have a wide variety of applications, including pointing, illumination, surgery, laser light shows, spectroscopy, interferometry, fluorescence, holography, machine vision, non-lethal weapons and bird control.
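The frequency-doubling arithmetic behind this DPSS scheme can be checked directly: doubling the frequency of the 1064 nm infrared line halves its wavelength to 532 nm. A minimal illustrative sketch (not part of the original article):

```python
# Illustrative check of the DPSS numbers quoted above.
C = 299_792_458  # speed of light, m/s

pump_wavelength_nm = 1064.0
pump_frequency_thz = C / (pump_wavelength_nm * 1e-9) / 1e12   # ~281.8 THz

second_harmonic_frequency_thz = 2 * pump_frequency_thz        # ~563.5 THz
second_harmonic_wavelength_nm = pump_wavelength_nm / 2        # 532 nm

print(f"pump: {pump_wavelength_nm:.0f} nm = {pump_frequency_thz:.2f} THz")
print(f"second harmonic: {second_harmonic_wavelength_nm:.0f} nm = {second_harmonic_frequency_thz:.1f} THz")
```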
As of mid-2011, direct green laser diodes at 510 nm and 500 nm have become generally available, although the price remains relatively prohibitive for widespread public use. The efficiency of these lasers (peak 3%) compared to that of DPSS green lasers (peak 35%) may also be limiting adoption of the diodes to niche uses.
Pigments, food coloring and fireworks
The Chicago River is dyed green every year to mark St. Patrick's Day.
Many minerals provide pigments which have been used in green paints and dyes over the centuries. Pigments, in this case, are minerals which reflect the color green, rather than emitting it through luminescent or phosphorescent qualities. The large number of green pigments makes it impossible to mention them all. Among the more notable green minerals, however, is the emerald, which is colored green by trace amounts of chromium and sometimes vanadium.Hurlbut, Cornelius S. Jr, & Kammerling, Robert C., 1991, Gemology, p. 203, John Wiley & Sons, New York Chromium(III) oxide (Cr2O3) is called chrome green, also called viridian or institutional green when used as a pigment. For many years, the source of amazonite's color was a mystery. Widely thought to have been due to copper because copper compounds often have blue and green colors, the blue-green color is likely to be derived from small quantities of lead and water in the feldspar. Copper is the source of the green color in malachite pigments, chemically known as basic copper(II) carbonate.
Verdigris is made by placing a plate or blade of copper, brass or bronze, slightly warmed, into a vat of fermenting wine, leaving it there for several weeks, and then scraping off and drying the green powder that forms on the metal. The process of making verdigris was described in ancient times by Pliny. It was used by the Romans in the murals of Pompeii, and in Celtic medieval manuscripts as early as the 5th century AD. It produced a blue-green which no other pigment could imitate, but it had drawbacks: it was unstable, it could not resist dampness, it did not mix well with other colors, it could ruin other colors with which it came into contact, and it was toxic. Leonardo da Vinci, in his treatise on painting, warned artists not to use it. It was widely used in miniature paintings in Europe and Persia in the 16th and 17th centuries. Its use largely ended in the late 19th century, when it was replaced by the safer and more stable chrome green. Viridian, also called chrome green, is a pigment made with chromium oxide dihydrate; it was patented in 1859. It became popular with painters, since, unlike other synthetic greens, it was stable and not toxic. Vincent van Gogh used it, along with Prussian blue, to create a dark blue sky with a greenish tint in his painting Cafe terrace at night.
Green earth is a natural pigment used since the time of the Roman Empire. It is composed of clay colored by iron oxide, magnesium, aluminum silicate, or potassium. Large deposits were found in the South of France near Nice, and in Italy around Verona, on Cyprus, and in Bohemia. The clay was crushed, washed to remove impurities, then powdered. It was sometimes called Green of Verona.
Mixtures of oxidized cobalt and zinc were also used to create green paints as early as the 18th century.
Cobalt green, sometimes known as Rinman's green or Zinc Green, is a translucent green pigment made by heating a mixture of cobalt (II) oxide and zinc oxide. Sven Rinman, a Swedish chemist, discovered this compound in 1780.Green Pigment Spins chip promise. August 2006, news.bbc.co.uk/2/hi/technology/4776479.stm
Green chrome oxide was a new synthetic green created by a chemist named Pannetier in Paris in about 1835. Emerald green was a synthetic deep green made in the 19th century by hydrating chrome oxide. It was also known as Guignet Green.A. F. Holleman and E. Wiberg "Inorganic Chemistry" Academic Press, 2001, New York.
Fireworks typically use barium salts to create green sparks.
There is no natural source for green food colorings which has been approved by the US Food and Drug Administration. Chlorophyll, the E numbers E140 and E141, is the most common green chemical found in nature, and only allowed in certain medicines and cosmetic materials. Quinoline Yellow (E104) is a commonly used coloring in the United Kingdom but is banned in Australia, Japan, Norway and the United States. Green S (E142) is prohibited in many countries, for it is known to cause hyperactivity, asthma, urticaria, and insomnia.
To create green sparks, fireworks use barium salts, such as barium chlorate, barium nitrate crystals, or barium chloride, also used for green fireplace logs. Copper salts typically burn blue, but cupric chloride (also known as "campfire blue") can also produce green flames. Green pyrotechnic flares can use a 75:25 mix ratio of boron and potassium nitrate. Smoke can be turned green by a mixture: solvent yellow 33, solvent green 3, lactose, magnesium carbonate plus sodium carbonate added to potassium chlorate.
Biology
Green is common in nature, as many plants are green because of a complex chemical known as chlorophyll, which is involved in photosynthesis. Chlorophyll absorbs the long wavelengths of light (red) and short wavelengths of light (blue) much more efficiently than the wavelengths that appear green to the human eye, so light reflected by plants is enriched in green. Chlorophyll absorbs green light poorly because it first arose in organisms living in oceans where purple halobacteria were already exploiting photosynthesis. Their purple color arose because they extracted energy in the green portion of the spectrum using bacteriorhodopsin. The new organisms that then later came to dominate the extraction of light were selected to exploit those portions of the spectrum not used by the halobacteria.Goldsworthy, A. (December 10, 1987). Why trees are green. New Scientist, 116 (1880) 48–52.
Animals typically use the color green as camouflage, blending in with the chlorophyll green of the surrounding environment. Green animals include, especially, amphibians, reptiles, and some fish, birds and insects. Most fish, reptiles, amphibians, and birds appear green because of a reflection of blue light coming through an over-layer of yellow pigment. Perception of color can also be affected by the surrounding environment. For example, broadleaf forests typically have a yellow-green light about them as the trees filter the light. Turacoverdin is one chemical which can cause a green hue in birds, especially. Invertebrates such as insects or mollusks often display green colors because of porphyrin pigments, sometimes caused by diet. This can cause their feces to look green as well. Other chemicals which generally contribute to greenness among organisms are flavins (lychochromes) and hemanovadin. Humans have imitated this by wearing green clothing as a camouflage in military and other fields. Substances that may impart a greenish hue to one's skin include biliverdin, the green pigment in bile, and ceruloplasmin, a protein that carries copper ions in chelation.
The green huntsman spider is green due to the presence of bilin pigments in the spider's hemolymph (circulatory system fluids) and tissue fluids. It hunts insects in green vegetation, where it is well camouflaged.
Green eyes
There is no green pigment in green eyes; like the color of blue eyes, it is an optical illusion; its appearance is caused by the combination of an amber or light brown pigmentation of the stroma, given by a low or moderate concentration of melanin, with the blue tone imparted by the Rayleigh scattering of the reflected light.Fox, Denis Llewellyn (1979). Biochromy: Natural Coloration of Living Things. University of California Press. p. 9. ISBN 0-520-03699-9. Green eyes are most common in Northern and Central Europe.Blue Eyes Versus Brown Eyes: A Primer on Eye Color. Eyedoctorguide.com. Retrieved on December 23, 2011.Why Do Europeans Have So Many Hair and Eye Colors?. Cogweb.ucla.edu. Retrieved on December 23, 2011. They can also be found in Southern Europe,Herbert Risley, William Crooke, The People of India, (1999) West Asia, Central Asia, and South Asia. In Iceland, 89% of women and 87% of men have either blue or green eye color. A study of Icelandic and Dutch adults found green eyes to be much more prevalent in women than in men.Genetic determinants of hair, eye and skin pigmentation in Europeans. Retrieved on August 7, 2012. Among European Americans, green eyes are most common among those of recent Celtic and Germanic ancestry, about 16%.
In history and art
Prehistoric history
Neolithic cave paintings do not have traces of green pigments, but neolithic peoples in northern Europe did make a green dye for clothing, made from the leaves of the birch tree. It was of very poor quality, more brown than green. Ceramics from ancient Mesopotamia show people wearing vivid green costumes, but it is not known how the colors were produced.Anne Vachiron (2000), Couleurs- pigments et teintures dans les mains des peuples, pg. 196
Ancient history
In Ancient Egypt, green was the symbol of regeneration and rebirth, and of the crops made possible by the annual flooding of the Nile. For painting on the walls of tombs or on papyrus, Egyptian artists used finely-ground malachite, mined in the west Sinai and the eastern desert. A paintbox with malachite pigment was found inside the tomb of King Tutankhamun. They also used less expensive green earth pigment, or mixed yellow ochre and blue azurite. To dye fabrics green, they first colored them yellow with dye made from saffron and then soaked them in blue dye from the roots of the woad plant.
For the ancient Egyptians, green had very positive associations. The hieroglyph for green represented a growing papyrus sprout, showing the close connection between green, vegetation, vigor and growth. In wall paintings, the ruler of the underworld, Osiris, was typically portrayed with a green face, because green was the symbol of good health and rebirth. Palettes of green facial makeup, made with malachite, were found in tombs. It was worn by both the living and dead, particularly around the eyes, to protect them from evil. Tombs also often contained small green amulets in the shape of scarab beetles made of malachite, which would protect and give vigor to the deceased. It also symbolized the sea, which was called the "Very Green."Anne Vachiron (2000), Couleurs- pigments et teintures dans les mains des peuples, pg. 203
In Ancient Greece, green and blue were sometimes considered the same color, and the same word sometimes described the color of the sea and the color of trees. The philosopher Democritus described two different greens: cloron, or pale green, and prasinon, or leek green. Aristotle considered that green was located midway between black, symbolizing the earth, and white, symbolizing water. However, green was not counted among the four classic colors of Greek painting (red, yellow, black and white), and is rarely found in Greek art.
The Romans had a greater appreciation for the color green; it was the color of Venus, the goddess of gardens, vegetables and vineyards. The Romans made a fine green earth pigment, which was widely used in the wall paintings of Pompeii, Herculaneum, Lyon, Vaison-la-Romaine, and other Roman cities. They also used the pigment verdigris, made by soaking copper plates in fermenting wine.Anne Vachiron (2000), Couleurs- pigments et teintures dans les mains des peuples, pg. 214. By the second century AD, the Romans were using green in paintings, mosaics and glass, and there were ten different words in Latin for varieties of green.
Postclassical history
In the Middle Ages and Renaissance, the color of clothing showed a person's social rank and profession. Red could only be worn by the nobility, brown and gray by peasants, and green by merchants, bankers and the gentry and their families. The Mona Lisa wears green in her portrait, as does the bride in the Arnolfini portrait by Jan van Eyck.
Unfortunately for those who wanted or were required to wear green, there were no good vegetal green dyes which resisted washing and sunlight. Green dyes were made out of the fern, plantain, buckthorn berries, the juice of nettles and of leeks, the digitalis plant, the broom plant, the leaves of the fraxinus, or ash tree, and the bark of the alder tree, but they rapidly faded or changed color. Only in the 16th century was a good green dye produced, by first dyeing the cloth blue with woad, and then yellow with reseda luteola, also known as yellow-weed.
The pigments available to painters were more varied; monks in monasteries made use of verdigris, made by soaking copper in fermenting wine, to color medieval manuscripts. They also used finely-ground malachite, which made a luminous green. They used green earth colors for backgrounds.
During the early Renaissance, painters such as Duccio di Buoninsegna learned to paint faces first with a green undercoat, then with pink, which gave the faces a more realistic hue. Over the centuries the pink has faded, making some of the faces look green.Pigments Through the Ages, http://www.webexhibits.org/pigments/intro/antiquity.html
Modern history
In the 18th and 19th century
The 18th and 19th century brought the discovery and production of synthetic green pigments and dyes, which rapidly replaced the earlier mineral and vegetable pigments and dyes. These new dyes were more stable and brilliant than the vegetable dyes, but some contained high levels of arsenic, and were eventually banned.
In the 18th and 19th century, green was associated with the romantic movement in literature and art. The French philosopher Jean-Jacques Rousseau celebrated the virtues of nature, and the German poet and philosopher Goethe declared that green was the most restful color, suitable for decorating bedrooms. Painters such as John Constable and Jean-Baptiste-Camille Corot depicted the lush green of rural landscapes and forests. Green was contrasted to the smoky grays and blacks of the Industrial Revolution.
The second half of the 19th century saw the use of green in art to create specific emotions, not just to imitate nature. One of the first to make color the central element of his picture was the American artist James McNeill Whistler, who created a series of paintings called "symphonies" or "nocturnes" of color, including "Symphony in gray and green; The Ocean" between 1866 and 1872.
The late nineteenth century also brought the systematic study of color theory, and particularly the study of how complementary colors such as red and green reinforced each other when they were placed next to each other. These studies were avidly followed by artists such as Vincent van Gogh. Describing his painting, The Night Cafe, to his brother Theo in 1888, Van Gogh wrote: "I sought to express with red and green the terrible human passions. The hall is blood red and pale yellow, with a green billiard table in the center, and four lamps of lemon yellow, with rays of orange and green. Everywhere it is a battle and antithesis of the most different reds and greens."Vincent van Gogh, Corréspondénce general, number 533, cited by John Gage, Practice and Meaning from Antiquity to Abstraction.
In the 20th and 21st century
In the 1980s green became a political symbol, the color of the Green Party in Germany and in many other European countries. It symbolized the environmental movement, and also a new politics of the left which rejected traditional socialism and communism. (See Politics section below.)
Symbolism and associations
Safety and permission
A green light is the universal symbol of permission to go.
An agriculture company chooses green and yellow for its products; three wheels on each hub enable this tractor to work wetter land.
Green can communicate safety to proceed, as in traffic lights (Oxford English Dictionary). Green and red were standardized as the colors of international railroad signals in the 19th century. The first traffic light, using green and red gas lamps, was erected in 1868 in front of the Houses of Parliament in London. It exploded the following year, injuring the policeman who operated it. In 1912, the first modern electric traffic lights were put up in Salt Lake City, Utah. Red was chosen largely because of its high visibility, and its association with danger, while green was chosen largely because it could not be mistaken for red. Today green lights universally signal that a system is turned on and working as it should. In many video games, green signifies both health and completed objectives, opposite red.
Nature, vivacity, and life
Green is the color most commonly associated in Europe and the U.S. with nature, vivacity and life.Eva Heller (2000), Psychologie de la couleur – effets et symboliques, pg. 90. 47 percent of respondents surveyed associated green with nature and natural, 18 percent choosing white. 32 percent associated green with vivacity (20 percent chose yellow), and 40 percent with good health (20 percent with red)
It is the color of many environmental organizations, such as Greenpeace, and of the Green Parties in Europe. Many cities have designated a garden or park as a green space, and use green trash bins and containers. A green cross is commonly used to designate pharmacies in Europe.
In China, green is associated with the east, with sunrise, and with life and growth.Yoon, Hong-Key. The Culture of Feng-Shui in Korea. Lexington: Lexington Books, 2006. ISBN 0-7391-1348-8 pg. 27 In Thailand, the color green is considered auspicious for those born on a Wednesday (light green for those born at night).
Springtime, freshness, and hope
Green is the color most commonly associated in the U.S. and Europe with springtime, freshness, and hope. Green is often used to symbolize rebirth, renewal and immortality. In Ancient Egypt, the god Osiris, king of the underworld, was depicted as green-skinned. Green as the color of hope is connected with the color of springtime; hope represents the faith that things will improve after a period of difficulty, like the renewal of flowers and plants after the winter season.
Youth and inexperience
Green is the color most commonly associated in Europe and the U.S. with youth. It is also often used to describe anyone young or inexperienced, probably by analogy to immature and unripe fruit. Examples include green cheese, a term for a fresh, unaged cheese, and greenhorn, an inexperienced person.
Calm, tolerance, and the agreeable
Surveys also show that green is the color most associated with the calm, the agreeable, and tolerance. Red is associated with heat, blue with cold, and green with an agreeable temperature. Red is associated with dry, blue with wet, and green, in the middle, with dampness. Red is the most active color, blue the most passive; green, in the middle, is the color of neutrality and calm, sometimes used in architecture and design for these reasons.See, for example Science Faculty building, UTS. Blue and green together symbolize harmony and balance.
Jealousy and envy
Green is often associated with jealousy and envy. The expression "green-eyed monster" was first used by William Shakespeare in Othello: "it is the green-eyed monster which doth mock the meat it feeds on." Shakespeare also used it in the Merchant of Venice, speaking of "green-eyed jealousy."
Love and sexuality
Green today is not commonly associated in Europe and the United States with love and sexuality, but in stories of the medieval period it sometimes represented loveChamberlin, Vernon A. "Symbolic Green: A Time-Honored Characterizing Device in Spanish Literature." Hispania. 51.1 (Mar 1968) pp. 29–37 and the base, natural desires of man.Goldhurst, William. "The Green and the Gold: The Major Theme of Gawain and the Green Knight." College English. 20.2 (Nov 1958) pp. 61–65 It was the color of the serpent in the Garden of Eden who caused the downfall of Adam and Eve. However, for the troubadours, green was the color of growing love, and light green clothing was reserved for young women who were not yet married.
In Persian and Sudanese poetry, dark-skinned women, called "green" women, were considered erotic. The Chinese term for cuckold is "to wear a green hat." This was because in ancient China, prostitutes were called "the family of the green lantern" and a prostitute's family would wear a green headscarf.
In Victorian England, the color green was associated with homosexuality.
Dragons, fairies, monsters, and devils
In legends, folk tales and films, fairies, dragons, monsters, and the devil are often shown as green.
In the Middle Ages, the devil was usually shown as either red, black or green. Dragons were usually green, because they had the heads, claws and tails of reptiles.
Modern Chinese dragons are also often green, but unlike European dragons, they are benevolent; Chinese dragons traditionally symbolize potent and auspicious powers, particularly control over water, rainfall, hurricane, and floods. The dragon is also a symbol of power, strength, and good luck. The Emperor of China usually used the dragon as a symbol of his imperial power and strength. The dragon dance is a popular feature of Chinese festivals.
In Irish folklore and English folklore, the color was sometimes associated with witchcraft, and with faeries and spirits.Williams, Margaret. The Pearl Poet, His Complete Works. Random House, 1967. The type of Irish fairy known as a leprechaun is commonly portrayed wearing a green suit, though before the 20th century he was usually described as wearing a red suit.
Theater
In the theater and in films, green was often connected with horror or ghost stories, and with corpses. The earliest films of Frankenstein were in black and white, but in the poster for the 1935 version The Bride of Frankenstein, the monster had a green face. Actor Bela Lugosi wore green-hued makeup for the role of Dracula in the 1927–1928 Broadway stage production.Why The Devil Wears Green, D. W. Robertson, Jr., Modern Language Notes, Vol. 69, No. 7. (Nov. 1954), pp. 470–472. The Johns Hopkins University Press.
Poison and sickness
Like other common colors, green has several completely opposite associations. While it is the color most associated by Europeans and Americans with good health, it is also the color most often associated with toxicity and poison. There was a solid foundation for this association; in the nineteenth century several popular paints and pigments, notably verdigris, vert de Schweinfurt and vert de Paris, were highly toxic, containing copper or arsenic. The intoxicating drink absinthe was known as "the green fairy".
A green tinge in the skin is sometimes associated with nausea and sickness.Ford, Mark. Self Improvement of Relationship Skills through Body Language. City: Llumina Press, 2004. ISBN 1-932303-79-0 pg. 81 The expression 'green at the gills' means appearing sick. The color, when combined with gold, is sometimes seen as representing the fading of youth.Lewis, John S. "Gawain and the Green Knight." College English. 21.1 (Oct 1959) pp. 50–51 In some Far East cultures the color green is used as a symbol of sickness and/or nausea.Kalb, Ira. Creating Your Own Marketing Makes Good $ & Sense. K & A Press, 1989. ISBN 0-924050-01-2 pg. 210
Social status, prosperity and the dollar
Green in Europe and the United States is sometimes associated with status and prosperity. From the Middle Ages to the 19th century it was often worn by bankers, merchants, country gentlemen and others who were wealthy but not members of the nobility. The benches in the House of Commons of the United Kingdom, where the landed gentry sat, are colored green.
In the United States green was connected with the dollar bill. Since 1861, the reverse side of the dollar bill has been green. Green was originally chosen because it deterred counterfeiters, who tried to use early camera equipment to duplicate banknotes. Also, since the banknotes were thin, the green on the back did not show through and muddle the pictures on the front of the banknote. Green continues to be used because the public now associates it with a strong and stable currency."Currency Notes" on the website of the U.S. Bureau of Engraving and Printing, page 12.
One of the more notable uses of this meaning is found in The Wonderful Wizard of Oz. In this story is the Emerald City, where everyone wears tinted glasses which make everything look green. According to the populist interpretation of the story, the city’s color is used by the author, L. Frank Baum, to illustrate the financial system of America in his day, as he lived in a time when America was debating the use of paper money versus gold.Carruthers, Bruce G.; Sarah Babb. "The Color of Money and the Nature of Value: Greenbacks and Gold in Postbellum America." The American Journal of Sociology. (May 1996) 101.6 pgs. 1556–1591
On flags
The flag of Italy (1797) was modeled after the French tricolor. It was originally the flag of the Cisalpine Republic, whose capital was Milan; red and white were the colors of Milan, and green was the color of the military uniforms of the army of the Cisalpine Republic. Other versions say it is the color of the Italian landscape, or symbolizes hope.Gazzetta Ufficiale della Repubblica Italiana nº 174 del 28 luglio 2006.
The flag of Brazil has a green field adapted from the flag of the Empire of Brazil. The green represented the royal family.
The flag of India was inspired by an earlier flag of the independence movement of Gandhi, which had a red band for Hinduism and a green band representing Islam, the second largest religion in India.
The flag of Pakistan symbolizes Pakistan's commitment to Islam and the equal rights of religious minorities: the larger portion (3:2 ratio) of the flag is dark green, representing the Muslim majority (98% of the total population), while a white vertical bar (3:1 ratio) at the mast represents equal rights for religious minorities and minority religions in the country. The crescent and star symbolize progress and a bright future respectively.
The Flag of Bangladesh has a green field based on a similar flag used during the Bangladesh Liberation War of 1971. It consists of a red disc on top of a green field. The red disc represents the sun rising over Bengal, and also the blood of those who died for the independence of Bangladesh. The green field stands for the lushness of the land of Bangladesh.
Green is one of the three colors (along with red and black, or red and gold) of Pan-Africanism. Several African countries thus use the color on their flags, including Nigeria, South Africa, Ghana, Senegal, Mali, Ethiopia, Togo, Guinea, Benin, and Zimbabwe. The Pan-African colors are borrowed from the Ethiopian flag, one of the oldest independent African countries. Green on some African flags represents the natural richness of Africa.Murrell, Nathaniel et al. Chanting down Babylon. Philadelphia: Temple University Press, 1998. ISBN 1-56639-584-4 pg. 135
Many flags of the Islamic world are green, as the color is considered sacred in Islam (see below). The flag of Hamas,Friedland, Roger and Richard Hecht. To Rule Jerusalem. Berkeley: University of California Press, 2000. ISBN 0-520-22092-7 pg. 461 as well as the flag of Iran, is green, symbolizing their Islamist ideology.Kaplan, Leslie C. Iran. ISBN 1-4042-5548-6 pg. 22 The 1977 flag of Libya consisted of a simple green field with no other characteristics. It was the only national flag in the world with just one color and no design, insignia, or other details.Symons, Mitchell. This Book...of More Perfectly Useless Information. New York: HarperEntertainment, 2005. ISBN 0-06-082823-4 p. 229 Some countries used green in their flags to represent their country's lush vegetation, as in the flag of Jamaica,Smith, Whitney. Flag Lore of All Nations. Brookfield: Millbrook Press, 2001. ISBN 0-7613-1753-8 pg. 49 and hope in the future, as in the flags of Portugal and Nigeria.Amienyi, Osabuohien. Communicating National Integration. Ashgate Publishing, 2005. ISBN 0-7546-4425-1 pg. 43 The green cedar of Lebanon tree on the Flag of Lebanon officially represents steadiness and tolerance.
Green is a symbol of Ireland, which is often referred to as the "Emerald Isle". The color is particularly identified with the republican and nationalist traditions in modern times. It is used this way on the flag of the Republic of Ireland, in balance with white and the Protestant orange. Green is a strong trend in the Irish holiday St. Patrick's Day.
In politics
The first recorded green party was a political faction in Constantinople during the 6th-century Byzantine Empire, which took its name from a popular chariot racing team. They were bitter opponents of the blue faction, which supported Emperor Justinian I and which had its own chariot racing team. In 532 AD rioting between the factions began after one race, which led to the massacre of green supporters and the destruction of much of the center of Constantinople.Edward Gibbon, The Decline and Fall of the Roman Empire, Abridgement of D.M. Low, Harcourt Brace and Company, 1960, pg. 553–559 (See Nika Riots).
Green was the traditional color of Irish nationalism, beginning in the 17th century. The green harp flag, with a traditional gaelic harp, became the symbol of the movement. It was the banner of the Society of United Irishmen, which organized the Irish Rebellion of 1798, calling for Irish independence. The uprising was suppressed with great bloodshed by the British army. When Ireland achieved independence in 1922, green was incorporated into the national flag.
In the 1970s green became the color of the third biggest Swiss Federal Council political party, the Swiss People's Party SVP. The ideology is Swiss nationalism, national conservatism, right-wing populism, economic liberalism, agrarianism, isolationism, euroscepticism. The SVP was founded on September 22, 1971 and has 90,000 members.
In the 1980s green became the color of a number of new European political parties organized around an agenda of environmentalism. Green was chosen for its association with nature, health, and growth. The largest green party in Europe is Alliance '90/The Greens (German: Bündnis 90/Die Grünen) in Germany, which was formed in 1993 from the merger of the German Green Party, founded in West Germany in 1980, and Alliance 90, founded during the Revolution of 1989–1990 in East Germany. In the 2009 federal elections, the party won 10.7% of the votes and 68 out of 622 seats in the Bundestag.
Green parties in Europe have programs based on ecology, grassroots democracy, nonviolence, and social justice. Green parties are found in over one hundred countries, and most are members of the Global Green Network.
Greenpeace is a non-governmental environmental organization which emerged from the anti-nuclear and peace movements in the 1970s. Its ship, the Rainbow Warrior, frequently tried to interfere with nuclear tests and whaling operations. The movement now has branches in forty countries.
The Australian Greens party was founded in 1992. At the 2010 federal election, the party received 13 percent of the vote (more than 1.6 million votes) in the Senate, a first for any Australian minor party.
Green is the color associated with Puerto Rico's Independence Party, the smallest of Puerto Rico's three major political parties and which advocates for Puerto Rican independence from the United States.
In religion
Green is the traditional color of Islam. According to tradition, the robe and banner of Muhammad were green, and according to the Koran (XVIII, 31 and LXXVI, 21), those fortunate enough to live in paradise wear green silk robes.John Gage (2006), La Couleur dans l'art, page 150-151 Muhammad is quoted in a hadith as saying that "water, greenery, and a beautiful face" were three universally good things.
Al-Khidr ("The Green One"), was an important Qur'anic figure who was said to have met and traveled with Moses. He was given that name because of his role as a diplomat and negotiator. Green was also considered to be the median color between light and obscurity.
Roman Catholic and more traditional Protestant clergy wear green vestments at liturgical celebrations during Ordinary Time. In the Eastern Catholic Church, green is the color of Pentecost. Green is one of the Christmas colors as well, possibly dating back to pre-Christian times, when evergreens were worshiped for their ability to maintain their color through the winter season. Romans used green holly and evergreen as decorations for their winter solstice celebration called Saturnalia, which eventually evolved into a Christmas celebration.Collins, Ace and Clint Hansen. Stories behind the Great Traditions of Christmas. Grand Rapids: Zondervan, 2003. ISBN 0-310-24880-9 pg. 77 In Ireland and Scotland especially, green is used to represent Catholics, while orange is used to represent Protestantism. This is shown on the national flag of Ireland.
In gambling and sports
Gambling tables in a casino are traditionally green. The tradition is said to have started in gambling rooms in Venice in the 16th century.
Billiards tables are traditionally covered with green woolen cloth. The first indoor tables, dating to the 15th century, were colored green after the grass courts used for the similar lawn games of the period.
Green was the traditional color worn by hunters in the 19th century, particularly the shade called hunter green. In the 20th century most hunters began wearing the color olive drab, a shade of green, instead of hunter green.Maerz and Paul A Dictionary of Color New York:1930 McGraw-Hill p. 162—Discussion of color Hunter Green
Green is a common color for sports teams. Well-known teams include A.S. Saint-Étienne of France, known as Les Verts (The Greens). The Mexico national football team has a green uniform.
British racing green was the international motor racing color of Britain from the early 1900s until the 1960s, when it was replaced by the colors of the sponsoring automobile companies.
A green belt in karate, taekwondo and judo symbolizes a level of proficiency in the sport.
Idioms and expressions
Having a green thumb. To be passionate about or talented at gardening. The expression was popularized beginning in 1925 by a BBC gardening program.
Greenhorn. Someone who is inexperienced.
Green-eyed monster. Refers to jealousy. (See section above on jealousy and envy).
Greenmail. A term used in finance and corporate takeovers. It refers to the practice of a company paying a high price to buy back shares of its own stock to prevent an unfriendly takeover by another company or businessman. It originated in the 1980s on Wall Street, and derives from the green of dollar bills.
Green room. A room at a theater where actors rest when not onstage, or a room at a television studio where guests wait before going on-camera. It originated in the late 17th century from a room of that color at the Theatre Royal, Drury Lane in London.
Greenwashing. Environmental activists sometimes use this term to describe the advertising of a company which promotes its positive environmental practices to cover up its environmental destruction.
Green around the gills. A description of a person who looks physically ill.Oxford English Dictionary
Going green. An expression commonly used to refer to preserving the natural environment, and participating in activities such as recycling materials.
Notes
See also
Shades of green
References
Cited texts
External links
Green All Over—slideshow by Life magazine
Category:Color
Category:Optical spectrum
Category:Rainbow
Category:Web colors | 12,460 | 2017-01 |
Palermo | Palermo (Sicilian: Palermu; from the Greek Panormos, rendered in Arabic as Balarm; Phoenician: זִיז, Ziz) is a city of Southern Italy, the capital of both the autonomous region of Sicily and the Metropolitan City of Palermo. The city is noted for its history, culture, architecture and gastronomy, playing an important role throughout much of its existence; it is over 2,700 years old. Palermo is located in the northwest of the island of Sicily, right by the Gulf of Palermo in the Tyrrhenian Sea.
The city was founded in 734 BC by the Phoenicians as Ziz ('flower'). Palermo then became a possession of Carthage, before becoming part of the Roman Republic, the Roman Empire and eventually part of the Byzantine Empire, for over a thousand years. The Greeks named the city Panormus meaning 'complete port'. From 831 to 1072 the city was under Arab rule during the Emirate of Sicily when the city first became a capital. The Arabs shifted the Greek name into Balarm, the root for Palermo's present-day name. Following the Norman reconquest, Palermo became the capital of a new kingdom (from 1130 to 1816), the Kingdom of Sicily and the capital of the Holy Roman Empire under Frederick II Holy Roman Emperor and Conrad IV of Germany, King of the Romans. Eventually Sicily would be united with the Kingdom of Naples to form the Kingdom of the Two Sicilies until the Italian unification of 1860.
The population of Palermo urban area is estimated by Eurostat to be 855,285, while its metropolitan area is the fifth most populated in Italy with around 1.2 million people. In the central area, the city has a population of around 676,000 people. The inhabitants are known as Palermitani or, poetically, panormiti. The languages spoken by its inhabitants are the Italian language, Sicilian language and the Palermitano dialect.
Palermo is Sicily's cultural, economic and touristic capital. It is a city rich in history, culture, art, music and food. Numerous tourists are attracted to the city for its good Mediterranean weather, its renowned gastronomy and restaurants, its Romanesque, Gothic and Baroque churches, palaces and buildings, and its nightlife and music. Palermo is the main Sicilian industrial and commercial center: the main industrial sectors include tourism, services, commerce and agriculture. Palermo currently has an international airport, and a significant underground economy.
In fact, for cultural, artistic and economic reasons, Palermo was one of the largest cities in the Mediterranean and is now among the top tourist destinations in both Italy and Europe. It is the main seat of the UNESCO World Heritage Site of Arab-Norman Palermo and the Cathedral Churches of Cefalù and Monreale. The city is also going through careful redevelopment, preparing to become one of the major cities of the Euro-Mediterranean area.Capital dell'euromediterraneo, for the redevelopment, development and promotion of the metropolitan area of Palermo
Roman Catholicism is highly important in Palermitano culture. The Patron Saint of Palermo is Santa Rosalia whose Feast Day is celebrated on 15 July. The area attracts significant numbers of tourists each year and is widely known for its colourful fruit, vegetable and fish markets at the heart of Palermo, known as Vucciria, Ballarò and Capo.
Geography
View of Palermo from Monte Pellegrino.
Palermo lies in a basin, formed by the Papireto, Kemonia and Oreto rivers. The basin was named the Conca d'Oro (the Golden Basin) by the Arabs in the 9th century. The city is surrounded by a mountain range which is named after the city itself. These mountains face the Tyrrhenian Sea. Palermo is home to a natural port and offers excellent views to the sea, especially from Monte Pellegrino.
Topography
Monte Pellegrino pictured at the end of the 19th century; the mountain is visible from everywhere in the city.
Palermo is surrounded by mountains, formed of limestone, which form a cirque around the city. Some districts of the city are divided by the mountains themselves. Historically, it was relatively difficult to reach the inner part of Sicily from the city because of the mountains. The tallest peak of the range is La Pizzuta. However, historically, the most important mount is Monte Pellegrino, which is geographically separated from the rest of the range by a plain. The mount lies right in front of the Tyrrhenian Sea. Monte Pellegrino's cliff was described in the 19th century by Johann Wolfgang von Goethe as "the most beautiful promontory in the world" in his essay "Italian Journey".
Rivers
Today both the Papireto river and the Kemonia are covered up by buildings. However, the shape of the former watercourses can still be recognised today, because the streets that were built on them follow their shapes. Today the only waterway not drained yet is the Oreto river, which divides the downtown of the city from the western uptown and the industrial districts. In the basins there were, though, many seasonal torrents that helped form swampy plains, reclaimed over the course of history; a good example can be found in the borough of Mondello.
History
Early history
Mesolithic cave art at Addaura.
Evidence of human settlement in the area now known as Palermo goes back to at least the Mesolithic period, perhaps around 8000 BC, where a group of cave drawings at nearby Addaura from that period have been found.Sandars, Nancy K., Prehistoric Art in Europe, Penguin (Pelican, now Yale, History of Art), 1968 (nb 1st edn.), pp. 85-86 The original inhabitants were Sicani people who, according to Thucydides, arrived from the Iberian Peninsula (perhaps Catalonia).
Ancient period
A brief stretch of Palermo's Phoenician defence wall, now enclosed in the Santa Caterina Monastery.
In 734 BC the Phoenicians, a sea trading people from the north of ancient Canaan, built a small settlement on the natural harbor of Palermo. Some sources suggest they named the settlement Ziz. It became one of the three main Phoenician colonies of Sicily, along with Motya and Soluntum. However, the remains of the Phoenician presence in the city are few and mostly preserved in the very populated center of the downtown area, making any excavation efforts costly and logistically difficult. The site chosen by the Phoenicians made it easy to connect the port to the mountains with a straight road that today has become Corso Calatafimi. This road helped the Phoenicians in trading with the populations that lived beyond the mountains that surround the gulf.
The first settlement is known as Paleapolis (), the Ancient Greek word for "old city", in order to distinguish it from a second settlement built during the 5th century BC, called Neapolis (), "new city". Neapolis was erected towards the east and, along with it, monumental walls around the whole settlement were built to prevent attacks from foreign threats. Some part of this structure can still be seen in the Cassaro district. This district was named after the walls themselves, the word Cassaro deriving from the Arabic al-qaṣr (castle, stronghold, see also alcázar). Along the walls there were few doors to access and exit the city, suggesting that trade even toward the inner part of the island occurred frequently. Moreover, according to some studies, there may also have been walls dividing the old city from the new one. The colony developed around a central street (decumanus), cut perpendicularly by minor streets. This street today has become Corso Vittorio Emanuele.
Carthage was Palermo’s major trading partner under the Phoenicians and the city enjoyed a prolonged peace during this period. Palermo came into contact with the Ancient Greeks between the 6th and the 5th centuries BC which preceded the Sicilian Wars, a conflict fought between the Greeks of Syracuse and the Phoenicians of Carthage for control over the island of Sicily. During this war the Greeks named the settlement Panormos () from which the current name is derived, meaning "all port" due to the shape of its coast. It was from Palermo that Hamilcar I's fleet (which was defeated at the Battle of Himera) was launched.
In 409 BC the city was looted by Hermocrates of Syracuse. The Sicilian Wars ended in 265 BC when Carthage and Syracuse stopped warring and united in order to stop the Romans from gaining full control of the island during the First Punic War. In 276 BC, during the Pyrrhic War, Panormos briefly became a Greek colony after being conquered by Pyrrhus of Epirus, but returned to Phoenician Carthage in 275 BC. In 254 BC Panormos was besieged and conquered by the Romans in the first battle of Panormus (the city's Latin name). Carthage attempted to reconquer Panormus in 251 BC but failed.
Middle Ages
thumb|left|San Giovanni degli Eremiti, a church showing elements of Byzantine, Arabic and Norman architecture.
As the Roman Empire was falling apart, Palermo fell under the control of several Germanic tribes. The first were the Vandals in 440 AD under the rule of their king Geiseric. The Vandals had occupied all the Roman provinces in North Africa by 455, establishing themselves as a significant force. They acquired Corsica, Sardinia and Sicily shortly afterwards. However, they soon lost these newly acquired possessions to the Ostrogoths. The Ostrogothic conquest under Theodoric the Great began in 488; Theodoric supported Roman culture and government, unlike the Germanic Goths. The Gothic War took place between the Ostrogoths and the Eastern Roman Empire, also known as the Byzantine Empire. Sicily was the first part of Italy to be taken under the control of General Belisarius, who was commissioned by the Eastern Emperor Justinian I; Justinian solidified his rule in the following years.
thumb|right|Cappella Palatina, decorated with Byzantine, Arabic and Norman elements.
The Arabs took control of the island in 904, and the Emirate of Sicily was established. Muslim rule on the island lasted for about 120 years.https://archive.org/details/storiadeimusulm01amargoog Palermo (Balarm during Arab rule) displaced Syracuse as the capital of Sicily. It was said to have then begun to compete with Córdoba and Cairo in terms of importance and splendor. For more than a hundred years Palermo was the capital of a flourishing emirate.Joseph Strayer, Dictionary of the Middle Ages, Scribner, 1987, t.9, p.352 The Arabs also introduced many agricultural crops which remain a mainstay of Sicilian cuisine.
After dynastic quarrels, however, there was a Christian reconquest in 1072. The family who returned the city to Christianity were the Hautevilles; Robert Guiscard, who led the conquest with his army, is regarded as a hero by the natives.Appleton, The World in the Middle Ages, 100. It was under Roger II of Sicily that Norman holdings in Sicily and the southern part of the Italian Peninsula were promoted from the County of Sicily into the Kingdom of Sicily. The kingdom's capital was Palermo, with the King's Court held at the Palazzo dei Normanni. Much construction was undertaken during this period, such as the building of Palermo Cathedral. The Kingdom of Sicily became one of the wealthiest states in Europe.
Sicily fell under the control of the Holy Roman Empire in 1194. Palermo was the preferred city of the Emperor Frederick II. The Muslims of Palermo emigrated or were expelled during Holy Roman rule. After an interval of Angevin rule (1266–1282), Sicily came under the control of the Aragon and Barcelona dynasties. By 1330, Palermo's population had declined to 51,000. From 1479 until 1713 Palermo was ruled by the Kingdom of Spain, and again between 1717 and 1718. As a result of the Treaty of Utrecht, Palermo was under Savoy control between 1713 and 1717 and again from 1718 to 1720. It was then ruled by Austria between 1720 and 1734.
Two Sicilies
After the Treaty of Utrecht (1713), Sicily was handed over to the House of Savoy, but by 1734 it was in Bourbon possession. Charles III chose Palermo for his coronation as King of Sicily. Charles had new houses built for the growing population, while trade and industry grew as well. However, by now Palermo was just another provincial city, as the Royal Court resided in Naples. Charles' son Ferdinand, though disliked by the population, took refuge in Palermo after the French Revolution in 1798. His son Alberto died on the way to Palermo and is buried in the city.
When the Kingdom of the Two Sicilies was founded in 1816, the original capital was Palermo, but the capital was moved to Naples a year later.
thumb|right|The revolution in Palermo (12 January 1848).
From 1820 to 1848 Sicily was shaken by upheavals, which culminated on 12 January 1848 in a popular insurrection, the first in Europe that year, led by Giuseppe La Masa. A parliament and constitution were proclaimed, with Ruggero Settimo as the first president. The Bourbons reconquered Palermo in 1849, and the city remained under their rule until the time of Giuseppe Garibaldi. The famous general entered Palermo with his troops (the "Thousand") on 27 May 1860. After the plebiscite later that year, Palermo, along with the rest of Sicily, became part of the new Kingdom of Italy (1861).
Italian unification and today
thumb|Giuseppe Garibaldi entering Palermo on 27 May 1860
The majority of Sicilians preferred independence to the Savoy kingdom; in 1866, Palermo became the seat of a week-long popular rebellion, which was finally crushed after martial law was declared. The Italian government blamed anarchists and the Church, specifically the Archbishop of Palermo, for the rebellion and began enacting anti-Sicilian and anti-clerical policies. A new period of cultural, economic and industrial growth was spurred by several families, such as the Florio, the Ducrot, the Rutelli, the Sandron, the Whitaker, the Utveggio, and others. In the early twentieth century, Palermo expanded outside the old city walls, mostly to the north along the new boulevards Via Roma, Via Dante, Via Notarbartolo, and Viale della Libertà. These roads would soon boast a large number of villas in the Art Nouveau style, many of them designed by the famous architect Ernesto Basile. The Grand Hotel Villa Igiea, designed by Ernesto Basile for the Florio family, is a good example of Palermitan Art Nouveau. The huge Teatro Massimo was designed in the same period by Giovan Battista Filippo Basile and built by the Rutelli & Machì firm of the old industrial Rutelli family of Palermo; it opened in 1897.
During the Second World War, Palermo was untouched until the Allied invasion of Sicily in 1943. In July, the harbour and the surrounding quarters were heavily bombed by the Allied forces and were all but destroyed.
In 1946 the city was declared the seat of the Regional Parliament, as capital of a Special Status Region (1947) whose seat is in the Palazzo dei Normanni.
A recurring theme in the city's modern history has been the struggle against the Mafia, the Red Brigades and outlaws such as Salvatore Giuliano, who controlled the neighbouring area of Montelepre. The Italian state has effectively had to share control of the territory, economically and administratively, with the Mafia.
The so-called "Sack of Palermo" is one of the most visible faces of the problem. The term refers to the speculative building practices that filled the city with poor-quality buildings, mainly from the 1950s to the 1980s. The reduced importance of agriculture in the Sicilian economy led to a massive migration to the cities, especially Palermo, which swelled in size and expanded rapidly towards the north. The regulatory plans for expansion were largely ignored in the boom. New parts of town appeared almost out of nowhere, but without parks, schools, public buildings, proper roads and the other amenities that characterise a modern city.
Districts
260px|right|Quarters of Palermo
Municipality I: Kalsa, Albergheria, Seralcadio & La Loggia
Municipality II: Settecannoli, Brancaccio & Ciaculli-Oreto
Municipality III: Villagrazia-Falsomiele & Stazione-Oreto
Municipality IV: Montegrappa, S. Rosalia, Cuba, Calafatimi, Mezzomonreale, Villa Tasca-Altarello & Boccadifalco
Municipality V: Zisa, Noce, Uditore-Passo di Rigano & Borgo Nuovo
Municipality VI: Cruillas, S. Giovanni Apostolo, Resuttana & San Lorenzo
Municipality VII: Pallavicino, Tommaso Natale, Sferracavallo, Partanna Mondello, Arenella, Vergine Maria & San Filippo Neri (formerly known as ZEN)
Municipality VIII: Politeama, Malaspina-Palagonia, Libertà & Monte Pellegrino
Listed above are the thirty-five quarters of Palermo. These neighbourhoods, or quartieri as they are known, are grouped into eight governmental community boards (municipalities).
Climate
300px|thumbnail|right|Gulf of Mondello seen from Monte Pellegrino
Palermo experiences a hot-summer Mediterranean climate (Köppen climate classification: Csa) that is mild with moderate seasonality. Summers are hot and dry due to the domination of the subtropical high-pressure system, while winters are mild and changeable, with rainy weather due to the polar front.http://www.palermo.climatemps.com/ Temperatures in autumn and spring are usually mild. Palermo is one of the warmest cities in Europe (mainly due to its warm nights), with an average annual air temperature of . It receives approximately 2,530 hours of sunshine per year. Snow is a rare occurrence, but it does fall occasionally during the strongest cold spells.http://www.meteoservice.net/dossier-neve-a-palermo-nel-1986-nevico-persino-a-natale/ Between the 1940s and the 2000s there were eleven occasions on which considerable snowfall occurred. In 1949 and in 1956, when the minimum temperature went down to , the city was blanketed by several centimetres of snow.http://www.italyheritage.com/magazine/articles/history/1956-snow.htm Snowfall also occurred in 1999, 2009 and 2015.http://meteolive.leonardo.it/news/In-primo-piano/2/tutte-le-nevicate-su-palermo-/364/
The average annual temperature of the sea is above ; from in February to in August. In the period from May to November, the average sea temperature exceeds and in the period from June to October, the average sea temperature exceeds .
Landmarks
675px|thumbnail|center|Palermo Cathedral
Palermo has a large architectural heritage and is notable for its many Norman buildings.
Churches
thumb|right|San Cataldo's Church.
thumb|right|Chiesa della Martorana.
thumb|right|Church of Saint Catherine.
Palermo Cathedral: Located on Corso Vittorio Emanuele, at the corner of Via Matteo Bonello, its long history has led to an accumulation of different architectural styles, the latest additions dating from the 18th century.
Cappella Palatina, the 12th-century chapel of the Palazzo dei Normanni, has outstanding mosaics in both the Western and Eastern traditions and a roof by Saracen craftsmen.
San Giovanni dei Lebbrosi
San Giovanni degli Eremiti (St. John of the Hermits): Located near the Palazzo dei Normanni, this 12th-century church is notable for its bright red domes, a remnant of Arab influence in Sicily. In her Diary of an Idle Woman in Sicily, F. Elliot described it as "... totally oriental... it would fit well in Baghdad or Damascus". The bell tower is an example of Gothic architecture.
Chiesa della Martorana: Also known as Santa Maria dell'Ammiraglio (St Mary of the Admiral), the church is annexed to the next-door church of San Cataldo and overlooks the Piazza Bellini in central Palermo. The original layout was a compact cross-in-square ("Greek cross plan"), a common south Italian and Sicilian variant of the middle Byzantine period church style. Three eastern apses adjoin directly to the naos, instead of being separated by an additional bay, as was usual in eastern Byzantine architecture.Kitzinger, Mosaics, 29–30. The bell tower, lavishly decorated, still serves as the main entrance to the church. The interior decoration is elaborate, and includes Byzantine mosaics.
San Cataldo: Church, on the central Piazza Bellini, which is a good example of Norman architecture.
Santa Maria della Gancia
Santa Caterina: This church is located behind Piazza Pretoria and was built between 1566 and 1596 in the Baroque style.
Santa Maria della Catena: This church was built between 1490 and 1520 to a design by Matteo Carnilivari. The name derives from the chains that were once attached to one of its walls.
San Domenico: Located near Via Roma, it is known as the “Pantheon of illustrious Sicilians”.
San Giuseppe dei Teatini: Located near the Quattro Canti, it is an example of Sicilian Baroque.
Oratorio di San Lorenzo: Working in stucco, the Rococo sculptor Giacomo Serpotta, his brother Giuseppe and his son Procopio decorated the church (1690/98–1706) with such a profusion of statuary and putti that the walls appear alive. In October 1969, two thieves removed Caravaggio's Nativity with St. Francis and St. Lawrence from its frame. It has never been recovered.
Oratorio del Rosario: Completed by Giacomo Serpotta between 1710 and 1717.
Santa Teresa alla Kalsa: The church derives its name from Al-Khalisa, an Arabic term meaning "the elect". Constructed between 1686 and 1706 over the former Emir's residence, it is one of the best examples of Sicilian Baroque, with a single, airy nave and stucco decorations from the early 18th century.
Santa Maria dello Spasimo was built in 1506 and later turned into a hospital. The church inspired Raphael to paint his famous Lo Spasimo di Sicilia, now in the Museo del Prado. Today it is a fascinating open-air auditorium, which occasionally houses exhibitions and musical shows.
Church of the Gesù (Church of Jesus): Located in the city centre, the church was built in 1564 in the late-Renaissance style by the Jesuits, over a pre-existing convent of Basilian monks. Alterations made from 1591 onwards were completed in the Sicilian Baroque style. The church was heavily damaged in the 1943 bombings, which destroyed most of the frescoes. The interior has a Latin cross plan with a nave and two aisles, and a particularly rich decoration of marbles, tarsias and stuccoes, especially in St Anne's Chapel. To the right is the Casa Professa, with a 1685 portal and a precious 18th-century cloister; the building has been home to the Municipal Library since 1775.
San Francesco di Assisi: This church was built between 1255 and 1277 in what was once the market district of the city, on the site of two pre-existing churches, and was largely renovated in the 15th, 16th, 18th and 19th centuries, the last time after an earthquake. After the 1943 bombings, the church was restored to its medieval appearance, which preserves parts of the original building such as a portion of the right side, the apses and the Gothic portal in the façade. The interior has a typical Gothic flavour, with a nave and two aisles separated by two rows of cylindrical pilasters. Some of the chapels are in Renaissance style, as are the late 16th-century side portals. The church houses precious sculptures by Antonio and Giacomo Gagini and by Francesco Laurana, as well as statues made by Giacomo Serpotta in 1723.
Church of the Magione: Officially known as the church of the Holy Trinity. This church was built in the Norman style in 1191 by Matteo d'Ajello, who donated it to the Cistercian monks.
Palaces and museums
thumb|right|Palazzo dei Normanni, seat of the Sicilian Regional Assembly.
Palazzo dei Normanni (the Norman Palace), one of the most beautiful Italian palaces and a notable example of Norman architecture. It houses the famous Cappella Palatina.
Zisa (1160) and Cuba, magnificent castles/houses historically used by the kings of Palermo for hunting. The Zisa today houses the Islamic museum. The Cuba was once encircled by water.
Natoli Palace, the residence of Vincenzo Natoli.
Palazzo Chiaramonte
Palazzo Abatellis: Built at the end of the 15th century for the prefect of the city, Francesco Abatellis, it is a massive though elegant construction in typical Catalan Gothic style, with Renaissance influences. The gallery it houses holds a bust of Eleonora of Aragon by Francesco Laurana (1471), the Malvagna Triptych (c. 1510) by Jan Gossaert, and the famous Annunziata by Antonello da Messina.
The Regional Archeological Museum Antonio Salinas is one of the main museums of Italy: it includes numerous remains from Etruscan, Carthaginian, Roman and Hellenistic civilisations. It houses all the decorative remains from the Sicilian temples of Segesta and Selinunte.
Palazzina Cinese, royal residence of the House of Bourbon-Two Sicilies and home of the Ethnographic Museum of Sicily.
City walls
thumb|right|City wall at Corso Alberto
Palermo has at least two rings of city walls, many pieces of which still survive. The first ring surrounded the ancient core of the Phoenician city – the so-called Palaeopolis (in the area east of Porta Nuova) and the Neapolis. Via Vittorio Emanuele was the main road east–west through this early walled city. The eastern edge of the walled city was on Via Roma, with the ancient port in the vicinity of Piazza Marina. The wall circuit ran approximately from Porta Nuova along Corso Alberti, Piazza Peranni, Via Isodoro, Via Candela, Via Venezia, Via Roma, Piazza Paninni, Via Biscottari and Via Del Bastione to the Palazzo dei Normanni and back to Porta Nuova.
In the medieval period the walled city was expanded. Via Vittorio Emanuele continued to be the main road east–west through the walled city. The west gate was still Porta Nuova; the walls continued along Corso Alberti to Piazza Vittorio Emanuele Orlando, where they turned east along Via Volturno to Piazza Verdi and followed the line of Via Cavour. At this northeast corner stood a fortification, the Castello a Mare, protecting the port at La Cala. A huge chain was used to block La Cala, its other end anchored at Santa Maria della Catena (St Mary of the Chain). The seaward wall ran along the western side of the Foro Italico Umberto, turned west along the northern side of Via Abramo Lincoln, and continued along Corso Tukory. The wall then turned north approximately at Via Benedetto, reaching the Palazzo dei Normanni and returning to Porta Nuova.Palermo - City Guide by Adriana Chirco, 1998, Dario Flaccovio Editore.
Several gates in the city wall survive.
Opera houses
thumb|Teatro Massimo opera house.
thumb|right|Teatro Politeama.
Up until the beginning of the 20th century there were hundreds of small opera theatres, known as magazzeni, in Palermo.
The Teatro Massimo ("Greatest Theatre") was opened in 1897. It is the biggest opera house in Italy and one of the largest in Europe (the third after the Paris Opera and the Vienna State Opera), renowned for its perfect acoustics. Enrico Caruso sang in a performance of La Gioconda during the opening season, and returned for Rigoletto at the very end of his career. Closed for renovation from 1974 until 1997, it has since been restored and maintains an active schedule.
The Teatro Politeama was built between 1867 and 1874.
Squares
thumb|Quattro Canti at Christmas time.
thumb|Piazza Pretoria.
Quattro Canti is a small square at the crossing of the two ancient main roads (now Corso Vittorio Emanuele and Via Maqueda) that divide the town into its quarters (mandamenti). The buildings at the corners have diagonal Baroque façades, giving the square an almost octagonal form.
Piazza Pretoria was laid out in the 16th century near the Quattro Canti as the site of a fountain by Francesco Camilliani, the Fontana Pretoria.
Other sights
thumb|left|Palermo Botanical Garden: the Winter Garden greenhouses.
The cathedral has a heliometer (solar observatory) dating to 1690, one of a number built in Italy in the 17th and 18th centuries. The device itself is quite simple: a tiny hole in one of the minor domes acts as a pinhole camera, projecting an image of the sun onto the floor at solar noon (12:00 in winter, 13:00 in summer). A bronze line, la Meridiana, runs precisely north–south across the floor. The ends of the line mark the positions of the sun's image at the summer and winter solstices; signs of the zodiac show the various other dates throughout the year.
The purpose of the instrument was to standardise the measurement of time and the calendar. The convention in Sicily had been that the (24‑hour) day was measured from the moment of dawn, which of course meant that no two locations had the same time and, more importantly, did not have the same time as in St. Peter's Basilica in Rome. It was also important to know when the vernal equinox occurred, to provide the correct date for Easter.
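The projection geometry behind such an instrument can be sketched with a simple pinhole relation; the aperture height used below is an assumed figure chosen only for illustration, not a measured dimension of the Palermo heliometer. If the aperture sits at height $h$ above the floor and the sun stands at altitude $a$ at solar noon, the centre of the projected solar image falls at a horizontal distance
$$ d = \frac{h}{\tan a} $$
north of the point directly beneath the hole. At Palermo's latitude ($\varphi \approx 38.1^{\circ}$ N) the noon altitude ranges from about $90^{\circ} - \varphi - 23.44^{\circ} \approx 28.5^{\circ}$ at the winter solstice to about $90^{\circ} - \varphi + 23.44^{\circ} \approx 75.3^{\circ}$ at the summer solstice; for an assumed $h = 20\,\text{m}$ this places the image at roughly $36.8\,\text{m}$ and $5.3\,\text{m}$ along the meridian line respectively, which is why the two ends of la Meridiana mark the solstices.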
The Orto botanico di Palermo (Palermo Botanical Garden), founded in 1785, is the largest in Italy with a surface of .
One site of interest is the Capuchin Catacombs, with many mummified corpses in varying degrees of preservation.
Close to the city is the Monte Pellegrino, offering a panorama of the city, its surrounding mountains and the sea.
Another good panoramic viewpoint is the promontory of Monte Gallo, near Mondello Beach.http://www.siciliaincammino.it/promontori-sul-mare-ascesa-monte-gallo-da-partanna-mondellopalermo
UNESCO World Heritage Sites
Palermo's UNESCO World Heritage Sites include the Palazzo Reale with the Cappella Palatina, the Chiesa di San Giovanni degli Eremiti, the Chiesa di Santa Maria dell’Ammiraglio, the Chiesa di San Cataldo, the Cattedrale di Palermo, the Palazzo della Zisa and the Ponte dell’Ammiraglio. Italy is the country with the most UNESCO World Heritage Sites,http://www.thesalmons.org/lynn/world.heritage.htmlhttps://top5ofanything.com/list/28beede4/Countries-with-the-Most-UNESCO-World-Heritage-Sites- and Sicily is the Italian region hosting the most.http://www.touringclub.it/notizie-di-viaggio/nuovi-siti-unesco-per-litalia-diventano-patrimonio-dellumanita-la-palermo-arabo
Demographics
In 2010, there were 1.2 million people living in the greater Palermo area, 655,875 of whom resided within the city boundaries; 47.4% were male and 52.6% were female. People under the age of 15 made up 15.6% of the population, while pensioners made up 17.2%, compared with the Italian averages of 14.1% and 20.2% respectively. The average age of a Palermo resident is 40.4, compared with the Italian average of 42.8. In the ten years between 2001 and 2010, the population of Palermo declined by 4.5%, while the population of Italy as a whole grew by 6.0%. The reason for Palermo's decline is a population flight to the suburbs and to Northern Italy. The current birth rate of Palermo is 10.2 births per 1,000 inhabitants, compared with the Italian average of 9.3 births.
As of 2006, 97.79% of the population was of Italian descent. The largest immigrant groups came from South Asia (mostly Sri Lanka): 0.80%; other European countries (mostly Albania, Romania, Serbia, Macedonia and Ukraine): 0.3%; and North Africa (mostly Tunisia): 0.28%.http://demo.istat.it/str2006/dati/Palermo.zip
The eight largest resident foreign-born groups in 2012, by country of birth, ranged in size from 4,111 down to 1,028 people.
Economy
As Sicily's administrative capital, Palermo is a centre for much of the region's finance, tourism and commerce. The city hosts an international airport, and its economic growth over the years has brought the opening of many new businesses. The economy mainly relies on tourism and services, but commerce, shipbuilding and agriculture are also important. The city still has high unemployment, widespread corruption, inefficient bureaucracy and a significant black market (Palermo being the home of the Sicilian Mafia). Nevertheless, the level of crime in Palermo has gone down dramatically, unemployment has been decreasing, and many new, profitable opportunities for growth, especially in tourism, have emerged, making the city safer and more liveable.
Education
The local university is the University of Palermo, the island's second oldest university. It was officially founded in 1806, although historical records indicate that medicine and law have been taught there since the late 15th century. The Orto botanico di Palermo (Palermo botanical gardens) is home to the university's Department of Botany and is also open to visitors.
Sports
Palermo hosts a professional football team, U.S. Città di Palermo, commonly referred to as simply Palermo, who compete in Serie A, the top division of Italian football.
The Targa Florio was an open-road endurance car race held near Palermo. Founded in 1906, it was one of the oldest sports car racing events until it was discontinued in 1977 due to safety concerns; it has since been run as a rallying event. Palermo was also home to the grand depart of the 2008 Giro d'Italia, whose initial stage was a team time trial (TTT).
The Internazionali Femminili di Palermo is an annual women's professional tennis tournament held in the city as part of the WTA Tour.
Infrastructure
Public transport
Palermo has a local railway called the Palermo metropolitan railway service.
thumb|Palermo, AMAT Tramway System Map|306x306px
Palermo's public bus system is operated by AMAT which covers a net area of . About 90 different routes reach every part of the city.See Transport net by AMAT
Palermo has a public tram system, completed in 2015 and operated by AMAT. There are four lines:
Roccella — Central Station
Borgo Nuovo — Notarbartolo Station
CEP — Notarbartolo Station
Corso Calatafimi — Notarbartolo Station
The local coach company, AST, operates 35 lines linking Palermo to all of the main cities in Sicily.
Roads
Palermo is a key junction on the Sicilian road network, where the eastward A19 motorway to Catania, the A20 to Messina and the westward A29 to the airport, Trapani and Mazara del Vallo meet. Palermo is one of the main cities on European route E90. The main national roads starting from Palermo are the SS113, SS121, SS186 and SS624.
Airports
Palermo International Airport, known as Falcone-Borsellino Airport (formerly Punta Raisi Airport), is located west of Palermo. It is dedicated to Giovanni Falcone and Paolo Borsellino, two anti-mafia judges killed by the mafia in the early 1990s.
The airport's rail facility, known as Punta Raisi railway station, can be reached from Palermo Centrale, Palermo Notarbartolo and Palermo Francia railway stations.
Palermo-Boccadifalco Airport is the second airport of the city.
Port
The port of Palermo, founded by the Phoenicians over 2,700 years ago, is, together with the port of Messina, the main port of Sicily. From here ferries link Palermo to Cagliari, Genoa, Livorno, Naples, Tunis and other cities, carrying a total of almost 2 million passengers annually. It is also an important port for cruise ships. Traffic also includes almost of cargo and 80,000 TEUs yearly.See table from assoporti.it The port also has links to minor Sicilian islands such as Ustica and the Aeolian Islands (via Cefalù in summer). Inside the Port of Palermo there is a section known as the "tourist marina" for sailing yachts and catamarans.
National rail
The main railway station of Palermo is Palermo Centrale, which links the city to the other cities of Sicily, including Agrigento, Trapani and Catania, and, through Messina and across the strait, to the rest of Italy. A rail service also connects the city to Palermo airport, with departures every thirty minutes.
Patron saints
The patron saint of Palermo is Saint Rosalia, who is widely revered.
On 14 July, people in Palermo celebrate the annual Festino, the most important religious event of the year. The Festino is a procession along the main street of Palermo commemorating the miracle attributed to Saint Rosalia who, it is believed, freed the city from the Black Death in 1624. Her remains were discovered in a cave on Monte Pellegrino and were carried around the city three times, banishing the plague. A sanctuary marks the spot where her remains were found; it can be reached via a scenic bus ride from the city.
Before 1624 Palermo had four patron saints, one for each of the four major parts of the city. They were Saint Agatha, Saint Christina, Saint Nympha and Saint Olivia.
Saint Lucy is also honoured with a peculiar celebration, during which the inhabitants of Palermo eat nothing made with flour, instead boiling wheat in its natural state and using it to prepare a special dish called cuccìa. This commemorates the saving of the city from famine through a miracle attributed to Saint Lucy: a ship full of grain mysteriously arrived in the city's harbour, and the hungry population, rather than taking the time to mill it into flour, ate the grain as it arrived.
Saint Benedict the Moor is the heavenly protector of the city of Palermo.
The ancient patron of the city was the Genius of Palermo, the genius loci and protective numen of the place, who became the secular patron of modern Palermo.Alberto Samonà. Il Genio di Palermo e il Monte Pellegrino. Retrieved 2 September 2010.
International relations
Twin towns and sister cities
Palermo is twinned with:
Bizerte, Tunisia
Bukavu, Democratic Republic of the Congo
Chengdu, China
Gdańsk, Poland
Hanoi, Vietnam
Khan Younis, Gaza, Palestine
Miami, Florida, United States
Monterey, California, United States
Ottawa, Ontario, Canada
Palermo, Colombia
Pistoia, Tuscany, Italy
Rijeka, Croatia
Samara, Russia
Santiago de Cuba, Cuba
Sestu, Sardinia, Italy
Tbilisi, Georgia
Timișoara, Romania
Utica, New York, United States
Vilnius, Lithuania
Yaroslavl, Russia
Zagreb, Croatia
See also
Arab-Norman Palermo and the Cathedral Churches of Cefalù and Monreale
List of mayors of Palermo
Hugo Falcandus
References
Sources
External links
Tourist Information Centre
Palermo Tourist Board
Palermo Coupon
Category:Populated coastal places in Italy
Category:Mediterranean port cities and towns in Italy
Category:Municipalities of the Province of Palermo
Category:Phoenician colonies in Sicily
Category:Populated places established in the 8th century BC
Category:Capitals of former nations
Category:University towns in Italy | 38,881 | 2017-01 |
Freemasonry | thumb|alt=Standard image of masonic square and compasses|The Masonic Square and Compasses (found with or without the letter G)
Freemasonry or Masonry consists of fraternal organisations that trace their origins to the local fraternities of stonemasons, which from the end of the fourteenth century regulated the qualifications of stonemasons and their interaction with authorities and clients. The degrees of freemasonry retain the three grades of medieval craft guilds, those of Apprentice, Journeyman or fellow (now called Fellowcraft), and Master Mason. These are the degrees offered by Craft (or Blue Lodge) Freemasonry. Members of these organisations are known as Freemasons or Masons. There are additional degrees, which vary with locality and jurisdiction, and are usually administered by different bodies than the craft degrees.
The basic, local organisational unit of Freemasonry is the Lodge. The Lodges are usually supervised and governed at the regional level (usually coterminous with either a state, province, or national border) by a Grand Lodge or Grand Orient. There is no international, worldwide Grand Lodge that supervises all of Freemasonry; each Grand Lodge is independent, and they do not necessarily recognise each other as being legitimate.
Modern Freemasonry broadly consists of two main recognition groups. Regular Freemasonry insists that a volume of scripture is open in a working lodge, that every member profess belief in a Deity, that no women are admitted, and that the discussion of religion and politics is banned. Continental Freemasonry is now the general term for the "liberal" jurisdictions who have removed some, or all, of these restrictions.
Masonic Lodge
thumb|alt=Italian lodge at Palazzo Roffia, Florence|Lodge in Palazzo Roffia, Florence, set out for French (Moderns) ritual
The Masonic Lodge is the basic organisational unit of Freemasonry. The Lodge meets regularly to conduct the usual formal business of any small organisation (pay bills, organise social and charitable events, elect new members, etc.). In addition to business, the meeting may perform a ceremony to confer a Masonic degree"Frequently Asked Questions" United Grand Lodge of England retrieved 30 October 2013 or receive a lecture, which is usually on some aspect of Masonic history or ritual."Materials: Papers and Speakers" Provincial Grand Lodge of East Lancashire, retrieved 30 October 2013 At the conclusion of the meeting, the Lodge might adjourn for a formal dinner, or festive board, sometimes involving toasting and song."Gentlemen, please be upstanding" Toasts for the festive board, Grand Lodge of British Columbia and Yukon retrieved 30 October 2013
The bulk of Masonic ritual consists of degree ceremonies. Candidates for Freemasonry are progressively initiated into Freemasonry, first in the degree of Entered Apprentice. Some time later, in a separate ceremony, they will be passed to the degree of Fellowcraft, and finally they will be raised to the degree of Master Mason. In all of these ceremonies, the candidate is entrusted with passwords, signs and grips peculiar to his new rank."Words, Grips and Signs" H. L. Haywood, Symbolical Masonry, 1923, Chapter XVIII, Sacred Texts website, retrieved 9 January 2014 Another ceremony is the annual installation of the Master and officers of the Lodge. In some jurisdictions Installed Master is valued as a separate rank, with its own secrets to distinguish its members."Past Master" Masonic Dictionary, retrieved 31 October 2013 In other jurisdictions, the grade is not recognised, and no inner ceremony conveys new secrets during the installation of a new Master of the Lodge."Maçon célèbre : le Maître Installé" GADLU blog Maçonnique, 3 March 2013, retrieved 2 November 2013
Most Lodges have some sort of social calendar, allowing Masons and their partners to meet in a less ritualised environment.For instance "Introduction into Freemasonry", Provincial Grand Lodge of Hertfordshire, retrieved 8 November 2013 Often coupled with these events is the obligation placed on every Mason to contribute to charity. This occurs at both Lodge and Grand Lodge level. Masonic charities contribute to many fields from education to disaster relief."Charitable work", UGLE, retrieved 8 November 2013(editors) John Hamill and Robert Gilbert, Freemasonry, Angus, 2004, pp 214–220
These private local Lodges form the backbone of Freemasonry, and a Freemason will necessarily have been initiated into one of these. There also exist specialist Lodges where Masons meet to celebrate anything from sport to Masonic research. The rank of Master Mason also entitles a Freemason to explore Masonry further through other degrees, administered separately from the Craft, or "Blue Lodge" degrees described here, but having a similar format to their meetings.Michael Johnstone, The Freemasons, Arcturus, 2005, pp 101–120
There is very little consistency in Freemasonry. Because each Masonic jurisdiction is independent, each sets its own procedures. The wording of the ritual, the number of officers present, the layout of the meeting room, etc. varies from jurisdiction to jurisdiction."Les Officiers de Loge" Maconnieke Encyclopedie, retrieved 31 October 2013
The officers of the Lodge are elected or appointed annually. Every Masonic Lodge has a Master, two Wardens, a secretary and a treasurer. There is also a Tyler, or outer guard, who is always present outside the door of a working Lodge. Other offices vary between jurisdictions.
Each Masonic Lodge exists and operates according to a set of ancient principles known as the Landmarks of Freemasonry. These principles have thus far eluded any universally accepted definition.Alain Bernheim, "My Approach to Masonic History", Pietre Stones, from address of 2011, retrieved 8 November 2013
Joining a Lodge
thumb|alt=Worshipful Master George Washington|Print from 1870 portraying George Washington as Master of his Lodge
Candidates for Freemasonry will have met most active members of the Lodge they are joining before they are initiated. The process varies between jurisdictions, but the candidate will typically have been introduced by a friend at a Lodge social function, or at some form of open evening in the Lodge. In modern times, interested people often track down a local Lodge through the Internet. The onus is on candidates to ask to join; while candidates may be encouraged to ask, they are never invited. Once the initial inquiry is made, an interview usually follows to determine the candidate's suitability. If the candidate decides to proceed from here, the Lodge ballots on the application before he (or she, depending on the Masonic Jurisdiction) can be accepted."How to become a Freemason", Masonic Lodge of Education, retrieved 20 November 2013
The absolute minimum requirement of any body of Freemasons is that the candidate must be free, and considered to be of good character."Comment devenir franc-maçon?", Grande Loge de Luxembourg, retrieved 23 November 2013 There is usually an age requirement, varying greatly between Grand Lodges, and (in some jurisdictions) capable of being overridden by a dispensation from the Grand Lodge. The underlying assumption is that the candidate should be a mature adult.
In addition, most Grand Lodges require the candidate to declare a belief in a Supreme Being. In a few cases, the candidate may be required to be of a specific religion. The form of Freemasonry most common in Scandinavia (known as the Swedish Rite), for example, accepts only Christians."Swedish Rite FAQ", Grand Lodge of British Columbia & Yukon, Accessed 19 November 2013 At the other end of the spectrum, "Liberal" or Continental Freemasonry, exemplified by the Grand Orient de France, does not require a declaration of belief in any deity, and accepts atheists (a cause of discord with the rest of Freemasonry)."Faut-il croire en Dieu?", Foire aux Questions, Grand Orient de France, Retrieved 23 November 2013Jack Buta, "The God Conspiracy, The Politics of Grand Lodge Foreign Relations", Pietre-Stones, retrieved 23 November 2013
During the ceremony of initiation, the candidate is expected to swear (usually on a volume of sacred text appropriate to his personal religious faith) to fulfil certain obligations as a Mason. In the course of three degrees, new masons will promise to keep the secrets of their degree from lower degrees and outsiders, and to support a fellow Mason in distress (as far as practicality and the law permit). There is instruction as to the duties of a Freemason, but on the whole, Freemasons are left to explore the craft in the manner they find most satisfying. Some will further explore the ritual and symbolism of the craft, others will focus their involvement on the social side of the Lodge, while still others will concentrate on the charitable functions of the lodge."Social events and activities", Hampshire Province, retrieved 20 November 2013"Who are Masons, and what do they do?", MasonicLodges.com, retrieved 20 November 2013
Organisation
Grand Lodges
thumb|alt=Freemason's Hall, London|Freemasons Hall, London, home of the United Grand Lodge of England
Grand Lodges and Grand Orients are independent and sovereign bodies that govern Masonry in a given country, state, or geographical area (termed a jurisdiction). There is no single overarching governing body that presides over worldwide Freemasonry; connections between different jurisdictions depend solely on mutual recognition.(editors) John Hamill and Robert Gilbert, Freemasonry, Angus, 2004, Glossary, p247"Difficult Questions; Is Freemasonry a Global Conspiracy?" MasterMason.com, retrieved 18 November 2013
Freemasonry, as it exists in various forms all over the world, has a membership estimated by the United Grand Lodge of England at around six million worldwide. The fraternity is administratively organised into independent Grand Lodges (or sometimes Grand Orients), each of which governs its own Masonic jurisdiction, which consists of subordinate (or constituent) Lodges. The largest single jurisdiction, in terms of membership, is the United Grand Lodge of England (with a membership estimated at around a quarter million). The Grand Lodge of Scotland and Grand Lodge of Ireland (taken together) have approximately 150,000 members. In the United States total membership is just under two million.Hodapp, Christopher. Freemasons for Dummies. Indianapolis: Wiley, 2005. p. 52.
Recognition, amity and regularity
Relations between Grand Lodges are determined by the concept of Recognition. Each Grand Lodge maintains a list of other Grand Lodges that it recognises. When two Grand Lodges recognise and are in Masonic communication with each other, they are said to be in amity, and the brethren of each may visit each other's Lodges and interact Masonically. When two Grand Lodges are not in amity, inter-visitation is not allowed. There are many reasons why one Grand Lodge will withhold or withdraw recognition from another, but the two most common are Exclusive Jurisdiction and Regularity.Jim Bantolo, "On Recognition" , Masonic Short Talk, Pilar lodge, 2007, retrieved 25 November 2013
Exclusive Jurisdiction
Exclusive Jurisdiction is a concept whereby only one Grand Lodge will be recognised in any geographical area. If two Grand Lodges claim jurisdiction over the same area, the other Grand Lodges will have to choose between them, and they may not all decide to recognise the same one. (In 1849, for example, the Grand Lodge of New York split into two rival factions, each claiming to be the legitimate Grand Lodge. Other Grand Lodges had to choose between them until the schism was healed.Ossian Lang, "History of Freemasonry in the State of New York" (pdf), 1922, pp135-140, Masonic Trowel eBooks) Exclusive Jurisdiction can be waived when the two over-lapping Grand Lodges are themselves in Amity and agree to share jurisdiction (for example, since the Grand Lodge of Connecticut is in Amity with the Prince Hall Grand Lodge of Connecticut, the principle of Exclusive Jurisdiction does not apply, and other Grand Lodges may recognise both)."Exclusive Jurisdiction", Paul M. Bessel, 1998, retrieved 25 November 2013
Regularity
thumb|240px|alt=First Freemason's Hall, 1809|Freemasons' Hall, London, c. 1809
Regularity is a concept based on adherence to Masonic Landmarks, the basic membership requirements, tenets and rituals of the craft. Each Grand Lodge sets its own definition of what these landmarks are, and thus what is Regular and what is Irregular (and the definitions do not necessarily agree between Grand Lodges). Essentially, every Grand Lodge will hold that its landmarks (its requirements, tenets and rituals) are Regular, and judge other Grand Lodges based on those. If the differences are significant, one Grand Lodge may declare the other "Irregular" and withdraw or withhold recognition."Regularity in Freemasonry and its Meaning", Grand Lodge of Latvia, retrieved 25 November 2013Tony Pope, "Regularity and Recognition", from Freemasonry Universal, by Kent Henderson & Tony Pope, 1998, Pietre Stones website, retrieved 25 November 2013
The most commonly shared rules for Recognition (based on Regularity) are those given by the United Grand Lodge of England in 1929:
The Grand Lodge should be established by an existing regular Grand Lodge, or by at least three regular Lodges.
A belief in a supreme being and scripture is a condition of membership.
Initiates should take their vows on that scripture.
Only men can be admitted, and no relationship exists with mixed Lodges.
The Grand Lodge has complete control over the first three degrees, and is not subject to another body.
All Lodges shall display a volume of scripture with the square and compasses while in session.
There is no discussion of politics or religion.
"Antient landmarks, customs and usages" observed.UGLE Book of Constitutions, "Basic Principles for Grand Lodge Recognition", any year since 1930, page numbers may vary.
Other degrees, orders and bodies
Blue Lodge Freemasonry offers only three traditional degrees, and in most jurisdictions, the rank of past or installed master. Master Masons are also able to extend their Masonic experience by taking further degrees, in appendant bodies approved by their own Grand Lodge.Robert L.D. Cooper, Cracking the Freemason's Code, Rider 2006, p229
The Ancient and Accepted Scottish Rite is a system of 33 degrees (including the three Blue Lodge degrees) administered by a local or national Supreme Council. This system is popular in North America and in Continental Europe. The York Rite, with a similar range, administers three orders of Masonry, namely the Royal Arch, Cryptic Masonry and Knights Templar.Michael Johnstone, The Freemasons, Arcturus, 2005, pp 95–98
In Britain, separate bodies administer each order. Freemasons are encouraged to join the Holy Royal Arch, which is linked to Mark Masonry in Scotland and Ireland, but separate in England. Templar and Cryptic Masonry also exist.J S M Ward, "The Higher Degrees Handbook", Pietre Stones, retrieved 11 November 2013
In the Nordic countries the Swedish Rite is dominant; a variation of it is also used in parts of Germany.
Ritual and symbolism
Freemasonry describes itself as "a beautiful system of morality, veiled in allegory and illustrated by symbols"."What is Freemasonry?" Grand Lodge of Alberta retrieved 7 November 2013 The symbolism is mainly, but not exclusively, drawn from the manual tools of stonemasons – the square and compasses, the level and plumb rule, and the trowel, among others. A moral lesson is attached to each of these tools, although the assignment is by no means consistent. The meaning of the symbolism is taught and explored through ritual.
All Freemasons begin their journey in the "craft" by being progressively initiated, passed and raised into the three degrees of Craft, or Blue Lodge Masonry. During these three rituals, the candidate is progressively taught the meanings of the Lodge symbols, and entrusted with grips, signs and words to signify to other Masons that he has been so initiated. The initiations are part allegory and part lecture, and revolve around the construction of the Temple of Solomon, and the artistry and death of his chief architect, Hiram Abiff. The degrees are those of Entered apprentice, Fellowcraft and Master Mason. While many different versions of these rituals exist, with at least two different lodge layouts and versions of the Hiram myth, each version is recognisable to any Freemason from any jurisdiction.
In some jurisdictions the main themes of each degree are illustrated by tracing boards. These painted depictions of Masonic themes are exhibited in the lodge according to which degree is being worked, and are explained to the candidate to illustrate the legend and symbolism of each degree.Mark S. Dwor, "Some thoughts on the history of the Tracing Boards", Grand Lodge of British Columbia and Yukon, 1999, retrieved 7 November 2013
The idea of Masonic brotherhood probably descends from a 16th-century legal definition of a brother as one who has taken an oath of mutual support to another. Accordingly, Masons swear at each degree to keep the contents of that degree secret, and to support and protect their brethren unless they have broken the law.Robert L.D. Cooper, Cracking the Freemason's Code, Rider 2006, p79 In most Lodges the oath or obligation is taken on a Volume of Sacred Law, whichever book of divine revelation is appropriate to the religious beliefs of the individual brother (usually the Bible in the Anglo-American tradition). In Progressive continental Freemasonry, books other than scripture are permissible, a cause of rupture between Grand Lodges."Masonic U.S. Recognition of French Grand Lodges in the 20th century", Paul M. Bessel. retrieved 8 November 2013
History
Origins
thumb|alt=Goose and Gridiron|Goose and Gridiron, where the Grand Lodge of England was founded
Since the middle of the 19th century, Masonic historians have sought the origins of the movement in a series of similar documents known as the Old Charges, dating from the Regius Poem in about 1425Andrew Prescott, "The Old Charges Revisited", from Transactions of the Lodge of Research No. 2429 (Leicester), 2006, Pietre-Stones Masonic Papers, retrieved 12 October 2013 to the beginning of the 18th century. Alluding to the membership of a lodge of operative masons, they relate a mythologised history of the craft, the duties of its grades, and the manner in which oaths of fidelity are to be taken on joining.A. F. A. Woodford, preface to William James Hughan, The Old Charges of British Freemasons, London, 1872 The fifteenth century also saw the first evidence of ceremonial regalia.
There is no clear mechanism by which these local trade organisations became today's Masonic Lodges, but the earliest rituals and passwords known, from operative lodges around the turn of the 17th–18th centuries, show continuity with the rituals developed in the later 18th century by accepted or speculative Masons, as those members who did not practice the physical craft came to be known.Robert L.D. Cooper, Cracking the Freemason's Code, Rider 2006, Chapter 4, p 53 The minutes of the Lodge of Edinburgh (Mary's Chapel) No. 1 in Scotland show a continuity from an operative lodge in 1598 to a modern speculative Lodge.David Murray Lyon, History of the Lodge of Edinburgh (Mary's Chapel) No 1, Blackwood 1873, Preface It is reputed to be the oldest Masonic Lodge in the world.
thumb|left|alt=Royal Arch Chapter in England, beginning of c20|View of room at the Masonic Hall, Bury St Edmunds, Suffolk, England, early 20th century, set up for a Holy Royal Arch convocation
The first Grand Lodge, the Grand Lodge of London and Westminster (later called the Grand Lodge of England (GLE)), was founded on 24 June 1717, when four existing London Lodges met for a joint dinner. Many English Lodges joined the new regulatory body, which itself entered a period of self-publicity and expansion. However, many Lodges could not endorse changes which some Lodges of the GLE made to the ritual (they came to be known as the Moderns), and a few of these formed a rival Grand Lodge on 17 July 1751, which they called the "Antient Grand Lodge of England." These two Grand Lodges vied for supremacy until the Moderns promised to return to the ancient ritual. They united on 27 December 1813 to form the United Grand Lodge of England (UGLE).I. R. Clarke, "The Formation of the Grand Lodge of the Antients" , Ars Quatuor Coronatorum, vol 79 (1966), p. 270-73, Grand Lodge of British Columbia and Yukon, retrieved 28 June 2012
The Grand Lodge of Ireland and the Grand Lodge of Scotland were formed in 1725 and 1736 respectively, although neither persuaded all of the existing lodges in their countries to join for many years.H. L. Haywood, "Various Grand Lodges", The Builder, vol X no 5, May 1924, Pietre Stones website, retrieved 9 January 2014Robert L.D. Cooper, Cracking the Freemason's Code, Rider 2006, Chapter 1, p 17
North America
The earliest known American lodges were in Pennsylvania. The Collector for the port of Pennsylvania, John Moore, wrote of attending lodges there in 1715, two years before the formation of the first Grand Lodge in London. The Premier Grand Lodge of England appointed a Provincial Grand Master for North America in 1731, based in Pennsylvania.Francis Vicente, An Overview of Early Freemasonry in Pennsylvania, Pietre-Stones, retrieved 15 November 2013 Other lodges in the colony obtained authorisations from the later Antient Grand Lodge of England, the Grand Lodge of Scotland, and the Grand Lodge of Ireland, which was particularly well represented in the travelling lodges of the British Army.Werner Hartmann, "History of St. John's Lodge No. 1", St. John's Lodge No. 1, A.Y.M., 2012, retrieved 16 November 2013M. Baigent and R. Leigh, The Temple and the Lodge, Arrow 1998, Appendix 2, pp360-362, "Masonic Field Lodges in Regiments in America", 1775–77 Many lodges came into existence with no warrant from any Grand Lodge, applying and paying for their authorisation only after they were confident of their own survival.Robert L.D. Cooper, Cracking the Freemason's Code, Rider 2006, p190
After the American Revolution, independent U.S. Grand Lodges formed themselves within each state. Some thought was briefly given to organising an overarching "Grand Lodge of the United States," with George Washington (who was a member of a Virginian lodge) as the first Grand Master, but the idea was short-lived. The various state Grand Lodges did not wish to diminish their own authority by agreeing to such a body.
Prince Hall Freemasonry
Prince Hall Freemasonry exists because of the refusal of early American lodges to admit African-Americans. In 1775, an African-American named Prince Hall, along with fourteen other African-Americans, was initiated into a British military lodge with a warrant from the Grand Lodge of Ireland, having failed to obtain admission from the other lodges in Boston. When the military Lodge left North America, those fifteen men were given the authority to meet as a Lodge, but not to initiate Masons. In 1784, these individuals obtained a Warrant from the Premier Grand Lodge of England (GLE) and formed African Lodge, Number 459. When the UGLE was formed in 1813, all U.S.-based Lodges were stricken from their rolls – due largely to the War of 1812. Thus, separated from both UGLE and any concordantly recognised U.S. Grand Lodge, African Lodge re-titled itself as the African Lodge, Number 1 – and became a de facto "Grand Lodge" (this Lodge is not to be confused with the various Grand Lodges on the Continent of Africa). As with the rest of U.S. Freemasonry, Prince Hall Freemasonry soon grew and organised on a Grand Lodge system for each state."Prince Hall History Education Class" by Raymond T. Coleman(pdf) retrieved 13 October 2013
Widespread segregation in 19th- and early 20th-century North America made it difficult for African-Americans to join Lodges outside of Prince Hall jurisdictions – and impossible for inter-jurisdiction recognition between the parallel U.S. Masonic authorities. By the 1980s, such discrimination was a thing of the past, and today most U.S. Grand Lodges recognise their Prince Hall counterparts, and the authorities of both traditions are working towards full recognition. The United Grand Lodge of England has no problem with recognising Prince Hall Grand Lodges."Foreign Grand Lodges", UGLE Website, retrieved 25 October 2013 While celebrating their heritage as lodges of black Americans, Prince Hall is open to all men regardless of race or religion."History of Prince Hall Masonry: What is Freemasonry", Most Worshipful Prince Hall Grand Lodge Free and Accepted Masons Jurisdiction of Pennsylvania, retrieved 25 October 2013
Emergence of Continental Freemasonry
thumb|alt=Masonic initiation, Paris, 1745|Masonic initiation, Paris, 1745
English Freemasonry spread to France in the 1720s, first as lodges of expatriates and exiled Jacobites, and then as distinctively French lodges which still follow the ritual of the Moderns. From France and England, Freemasonry spread to most of Continental Europe during the course of the 18th century. The Grande Loge de France formed under the Grand Mastership of the Duke of Clermont, who exercised only nominal authority. His successor, the Duke of Orléans, reconstituted the central body as the Grand Orient de France in 1773. Briefly eclipsed during the French Revolution, French Freemasonry continued to grow in the next century.Histoire de la Franc-maçonnerie, Grand Orient de France, retrieved 12 November 2013
Schism
The ritual form on which the Grand Orient of France was based was abolished in England in the events leading to the formation of the United Grand Lodge of England in 1813. However the two jurisdictions continued in amity (mutual recognition) until events of the 1860s and 1870s drove a seemingly permanent wedge between them. In 1868 the Supreme Council of the Ancient and Accepted Scottish Rite of the State of Louisiana appeared in the jurisdiction of the Grand Lodge of Louisiana, recognised by the Grand Orient de France, but regarded by the older body as an invasion of their jurisdiction. The new Scottish rite body admitted blacks, and the resolution of the Grand Orient the following year that neither colour, race, nor religion could disqualify a man from Masonry prompted the Grand Lodge to withdraw recognition, and it persuaded other American Grand Lodges to do the same.Paul Bessel, "U.S. Recognition of French Grand Lodges in the 1900s", from Heredom: The Transactions of the Scottish Rite Research Society, vol 5, 1996, pp 221–244, Paul Bessel website, retrieved 12 November 2013
A dispute during the Lausanne Congress of Supreme Councils of 1875 prompted the Grand Orient de France to commission a report by a Protestant pastor which concluded that, as Freemasonry was not a religion, it should not require a religious belief. The new constitutions read, "Its principles are absolute liberty of conscience and human solidarity", the existence of God and the immortality of the soul being struck out. It is possible that the immediate objections of the United Grand Lodge of England were at least partly motivated by the political tension between France and Britain at the time. The result was the withdrawal of recognition of the Grand Orient of France by the United Grand Lodge of England, a situation that continues today.
Not all French lodges agreed with the new wording. In 1894, lodges favouring the compulsory recognition of the Great Architect of the Universe formed the Grande Loge de France.Historique de la GLDF, Grande Loge de France, retrieved 14 November 2013 In 1913, the United Grand Lodge of England recognised a new Grand Lodge of Regular Freemasons, a Grand Lodge that follows a similar rite to Anglo-American Freemasonry with a mandatory belief in a deity.Alain Bernheim, "My approach to Masonic History", Manchester 2011, Pietre-Stones, retrieved 14 November 2013
There are now three strands of Freemasonry in France, which extend into the rest of Continental Europe:
Liberal (also adogmatic or progressive) – Principles of liberty of conscience, and laicity, particularly the separation of the Church and State."Liberal Grand Lodges", French Freemasonry, retrieved 14 November 2013
Traditional – Old French ritual with a requirement for a belief in a supreme being."Traditional Grand Lodges", French Freemasonry, retrieved 14 November 2013 (This strand is typified by the Grande Loge de France).
Regular – Standard Anglo-American ritual, mandatory belief in Supreme being."Regular Grand Lodges", French Freemasonry, retrieved 14 November 2013
The term Continental Freemasonry was used in Mackey's 1873 Encyclopedia of Freemasonry to "designate the Lodges on the Continent of Europe which retain many usages which have either been abandoned by, or never were observed in, the Lodges of England, Ireland, and Scotland, as well as the United States of America"."Continental Lodges",Mackey's Encyclopedia of Freemasonry, retrieved 30 November 2013 Today, it is frequently used to refer to only the Liberal jurisdictions typified by the Grand Orient de France.For instance "Women in Freemasonry, and Continental Freemasonry", Corn Wine and Oil, June 2009, retrieved 30 November 2013
The majority of Freemasonry considers the Liberal (Continental) strand to be Irregular, and thus withhold recognition. For the Continental lodges, however, having a different approach to Freemasonry was not a reason for severing masonic ties. In 1961, an umbrella organisation, Centre de Liaison et d'Information des Puissances maçonniques Signataires de l'Appel de Strasbourg (CLIPSAS) was set up, which today provides a forum for most of these Grand Lodges and Grand Orients worldwide. Included in the list of over 70 Grand Lodges and Grand Orients are representatives of all three of the above categories, including mixed and women's organisations. The United Grand Lodge of England does not communicate with any of these jurisdictions, and expects its allies to follow suit. This creates the distinction between Anglo-American and Continental Freemasonry.Tony Pope, "At a Perpertual Distance: Liberal and Adogmatic Grand Lodges", Presented to Waikato Lodge of Research No 445 at Rotorua, New Zealand, on 9 November 2004, as the annual Verrall Lecture, and subsequently published in the Transactions of the lodge, vol 14 #1, March 2005, Pietre-Stones, retrieved 13 November 2013"Current members" CLIPSAS, retrieved 14 November 2014
Freemasonry and women
The status of women in the old guilds and corporations of mediaeval masons remains uncertain. The principle of "femme sole" allowed a widow to continue the trade of her husband, but its application had wide local variations, ranging from full membership of a trade body to limited trade by deputation to approved members of that body.Antonia Frazer, The Weaker Vessel, Mandarin paperbacks, 1989, pp108-109 In masonry, the small available evidence points to the less empowered end of the scale.for example, see David Murray Lyon, History of the lodge of Edinburgh, Blackwood, Edinburgh, 1873, pp 121–123
At the dawn of the Grand Lodge era, during the 1720s, James Anderson composed the first printed constitutions for Freemasons, the basis for most subsequent constitutions, which specifically excluded women from Freemasonry. As Freemasonry spread, continental masons began to include their ladies in Lodges of Adoption, which worked three degrees with the same names as the men's but different content. The French officially abandoned the experiment in the early 19th century."Adoptive Freemasonry" Entry from Mackey's Lexicon of FreemasonryBarbara L. Thames, "A History of Women's Masonry", Phoenix Masonry, retrieved 5 March 2013 Later organisations with a similar aim emerged in the United States, but distinguished the names of the degrees from those of male masonry."Order of the Eastern Star" Masonic Dictionary, retrieved 9 January 2013
Maria Deraismes was initiated into Freemasonry in 1882, then resigned to allow her lodge to rejoin their Grand Lodge. Having failed to achieve acceptance from any masonic governing body, she and Georges Martin started a mixed masonic lodge that actually worked masonic ritual."Maria Deraismes (1828–1894)", Droit Humain, retrieved 5 March 2013. (French Language) Annie Besant spread the phenomenon to the English speaking world.Jeanne Heaslewood, "A Brief History of the Founding of Co-Freemasonry", 1999, Phoenix Masonry, retrieved 12 August 2013 Disagreements over ritual led to the formation of exclusively female bodies of Freemasons in England, which spread to other countries. Meanwhile, the French had re-invented Adoption as an all-female lodge in 1901, only to cast it aside again in 1935. The lodges, however, continued to meet, which gave rise, in 1959, to a body of women practising continental Freemasonry.
In general, Continental Freemasonry is sympathetic to Freemasonry amongst women, dating from the 1890s when French lodges assisted the emergent co-masonic movement by promoting enough of their members to the 33rd degree of the Ancient and Accepted Scottish Rite to allow them, in 1899, to form their own grand council, recognised by the other Continental Grand Councils of that Rite."Histoire du Droit Humain", Droit Humain, retrieved 12 August 2013 The United Grand Lodge of England issued a statement in 1999 recognising the two women's grand lodges there to be regular in all but the participants. While they were not, therefore, recognised as regular, they were part of Freemasonry "in general"."Text of UGLE statement", Honourable Fraternity of Ancient Freemasons, retrieved 12 August 2012 The attitude of most regular Anglo-American grand lodges remains that women Freemasons are not legitimate Masons.Karen Kidd, Haunted Chambers: the Lives of Early Women Freemasons, Cornerstone, 2009, pp204-205
Anti-Masonry
[Image: Masonic Temple of Santa Cruz de Tenerife, one of the few Masonic temples that survived the Franco dictatorship in Spain.]
Anti-Masonry (alternatively called Anti-Freemasonry) has been defined as "opposition to Freemasonry","Anti-Masonry" – Oxford English Dictionary (Compact Edition), Oxford University Press, 1979, p.369 but there is no homogeneous anti-Masonic movement. Anti-Masonry consists of widely differing criticisms from diverse (and often incompatible) groups who are hostile to Freemasonry in some form. Critics have included religious groups, political groups, and conspiracy theorists.
There have been many disclosures and exposés dating as far back as the 18th century. These often lack context, may be outdated for various reasons, or could be outright hoaxes on the part of the author, as in the case of the Taxil hoax.
These hoaxes and exposés have often become the basis for criticism of Masonry, often religious or political in nature or are based on suspicion of corrupt conspiracy of some form. The political opposition that arose after the "Morgan Affair" in 1826 gave rise to the term Anti-Masonry, which is still in use today, both by Masons in referring to their critics and as a self-descriptor by the critics themselves."Anti-mason" infoplease.com retrieved 9 January 2014
Religious opposition
Freemasonry has attracted criticism from theocratic states and organised religions for supposed competition with religion, or supposed heterodoxy within the fraternity itself, and has long been the target of conspiracy theories, which assert Freemasonry to be an occult and evil power.Morris, S. Brent; The Complete Idiot's Guide to Freemasonry, Alpha books, 2006, p,204.
Christianity and Freemasonry
Although members of various faiths cite objections, certain Christian denominations have had high-profile negative attitudes to Masonry, banning or discouraging their members from being Freemasons.
The denomination with the longest history of objection to Freemasonry is the Roman Catholic Church. The objections raised by the Roman Catholic Church are based on the allegation that Masonry teaches a naturalistic deistic religion which is in conflict with Church doctrine. A number of Papal pronouncements have been issued against Freemasonry. The first was Pope Clement XII's In eminenti apostolatus, 28 April 1738; the most recent was Pope Leo XIII's Ab apostolici, 15 October 1890. The 1917 Code of Canon Law explicitly declared that joining Freemasonry entailed automatic excommunication, and banned books favouring Freemasonry.Canon 2335, 1917 Code of Canon Law from
In 1983, the Church issued a new code of canon law. Unlike its predecessor, the 1983 Code of Canon Law did not explicitly name Masonic orders among the secret societies it condemns. It states: "A person who joins an association which plots against the Church is to be punished with a just penalty; one who promotes or takes office in such an association is to be punished with an interdict." This named omission of Masonic orders caused both Catholics and Freemasons to believe that the ban on Catholics becoming Freemasons may have been lifted, especially after the perceived liberalisation of Vatican II. However, the matter was clarified when Cardinal Joseph Ratzinger (later Pope Benedict XVI), as the Prefect of the Congregation for the Doctrine of the Faith, issued a Declaration on Masonic Associations, which states: "... the Church's negative judgment in regard to Masonic association remains unchanged since their principles have always been considered irreconcilable with the doctrine of the Church and therefore membership in them remains forbidden. The faithful who enrol in Masonic associations are in a state of grave sin and may not receive Holy Communion."Congregation of the Doctrine of the Faith, DECLARATION ON MASONIC ASSOCIATIONS , 26 November 1983, retrieved 26 November 2015 For its part, Freemasonry has never objected to Catholics joining their fraternity. Those Grand Lodges in amity with UGLE deny the Church's claims. The UGLE now states that "Freemasonry does not seek to replace a Mason's religion or provide a substitute for it."
In contrast to Catholic allegations of rationalism and naturalism, Protestant objections are more likely to be based on allegations of mysticism, occultism, and even Satanism. Masonic scholar Albert Pike is often quoted (in some cases misquoted) by Protestant anti-Masons as an authority for the position of Masonry on these issues. However, Pike, although undoubtedly learned, was not a spokesman for Freemasonry and was also controversial among Freemasons in general. His writings represented his personal opinion only, and furthermore an opinion grounded in the attitudes and understandings of late 19th century Southern Freemasonry of the USA. Notably, his book carries in the preface a form of disclaimer from his own Grand Lodge. No one voice has ever spoken for the whole of Freemasonry.
Free Methodist Church founder B.T. Roberts was a vocal opponent of Freemasonry in the mid 19th century. Roberts opposed the society on moral grounds and stated, "The god of the lodge is not the God of the Bible." Roberts believed Freemasonry was a "mystery" or "alternate" religion and encouraged his church not to support ministers who were Freemasons. Freedom from secret societies is one of the "frees" upon which the Free Methodist Church was founded.
Since the founding of Freemasonry, many Bishops of the Church of England have been Freemasons, such as Archbishop Geoffrey Fisher. In the past, few members of the Church of England would have seen any incongruity in concurrently adhering to Anglican Christianity and practising Freemasonry. In recent decades, however, reservations about Freemasonry have increased within Anglicanism, perhaps due to the increasing prominence of the evangelical wing of the church. The former Archbishop of Canterbury, Dr Rowan Williams, appeared to harbour some reservations about Masonic ritual, whilst being anxious to avoid causing offence to Freemasons inside and outside the Church of England. In 2003 he felt it necessary to apologise to British Freemasons after he said that their beliefs were incompatible with Christianity and that he had barred the appointment of Freemasons to senior posts in his diocese when he was Bishop of Monmouth.
In 1933, the Orthodox Church of Greece officially declared that being a Freemason constitutes an act of apostasy and thus, until he repents, the person involved with Freemasonry cannot partake of the Eucharist. This has been generally affirmed throughout the whole Eastern Orthodox Church. The Orthodox critique of Freemasonry agrees with both the Roman Catholic and Protestant versions: "Freemasonry cannot be at all compatible with Christianity as far as it is a secret organisation, acting and teaching in mystery and secret and deifying rationalism."
Regular Freemasonry has traditionally not responded to these claims, beyond the often repeated statement that those Grand Lodges in amity with UGLE explicitly adhere to the principle that "Freemasonry is not a religion, nor a substitute for religion. There is no separate 'Masonic deity,' and there is no separate proper name for a deity in Freemasonry."
Christian men, who were discouraged from joining the Freemasons by their Churches or who wanted a more religiocentric society, joined similar fraternal organisations, such as the Knights of Columbus for Catholic Christians, and the Loyal Orange Institution for Protestant Christians, although these fraternal organisations have been "organized in part on the style of and use many symbols of Freemasonry".
Islam and Freemasonry
Many Islamic anti-Masonic arguments are closely tied to both antisemitism and Anti-Zionism, though other criticisms are made such as linking Freemasonry to al-Masih ad-Dajjal (the false Messiah). Some Muslim anti-Masons argue that Freemasonry promotes the interests of the Jews around the world and that one of its aims is to destroy the Al-Aqsa Mosque in order to rebuild the Temple of Solomon in Jerusalem."Can a Muslim be a Freemason" Wake up from your slumber, 2007, retrieved 8 January 2014 In article 28 of its Covenant, Hamas states that Freemasonry, Rotary, and other similar groups "work in the interest of Zionism and according to its instructions ..."
Many countries with a significant Muslim population do not allow Masonic establishments within their jurisdictions. However, countries such as Turkey and Morocco have established Grand Lodges,Leyiktez, Celil. "Freemasonry in the Islamic World", Pietre-Stones Retrieved 2 October 2007. while in countries such as Malaysia"Home Page", District Grand Lodge of the Eastern Archipelago, retrieved 9 January 2014 and LebanonFreemasonry in Lebanon Lodges linked to the Grand Lodge of Scotland, retrieved 22 August 2013 there are District Grand Lodges operating under a warrant from an established Grand Lodge.
In Pakistan in 1972, Zulfiqar Ali Bhutto, then Prime Minister of Pakistan, placed a ban on Freemasonry. Lodge buildings were confiscated by the government.Peerzada Salman, "Masonic Mystique", December 2009, Dawn.com (News site), retrieved 3 January 2012
Masonic lodges existed in Iraq as early as 1917, when the first lodge under the United Grand Lodge of England (UGLE) was opened. Nine lodges under UGLE existed by the 1950s, and a Scottish lodge was formed in 1923. However, the position changed following the revolution, and all lodges were forced to close in 1965.Kent Henderson, "Freemasonry in Islamic Countries", 2007 paper, Pietre Stones, retrieved 4 January 2014 This position was later reinforced under Saddam Hussein; the death penalty was "prescribed" for those who "promote or acclaim Zionist principles, including freemasonry, or who associate [themselves] with Zionist organisations."
Political opposition
In 1799, English Freemasonry almost came to a halt due to Parliamentary proclamation. In the wake of the French Revolution, the Unlawful Societies Act banned any meetings of groups that required their members to take an oath or obligation.Andrew Prescott, "The Unlawful Societies Act", First published in M. D. J. Scanlan, ed., The Social Impact of Freemasonry on the Modern Western World, The Canonbury Papers I (London: Canonbury Masonic Research Centre, 2002), pp. 116–134, Pietre-Stones website, retrieved 9 January 2014
The Grand Masters of both the Moderns and the Antients Grand Lodges called on Prime Minister William Pitt (who was not a Freemason) and explained to him that Freemasonry was a supporter of the law and lawfully constituted authority and was much involved in charitable work. As a result, Freemasonry was specifically exempted from the terms of the Act, provided that each private lodge's Secretary placed with the local "Clerk of the Peace" a list of the members of his lodge once a year. This continued until 1967 when the obligation of the provision was rescinded by Parliament.
Freemasonry in the United States faced political pressure following the 1826 kidnapping of William Morgan by Freemasons and subsequent disappearance. Reports of the "Morgan Affair", together with opposition to Jacksonian democracy (Andrew Jackson was a prominent Mason) helped fuel an Anti-Masonic movement, culminating in the formation of a short lived Anti-Masonic Party which fielded candidates for the Presidential elections of 1828 and 1832."The Morgan Affair", Reprinted from The Short Talk Bulletin – Vol. XI, March 1933 No. 3, Grand Lodge of British Columbia and Yukon, retrieved 4 January 2014
[Image: Lodge in Erlangen, Germany. First meeting after World War II, with guests from the USA, France and Czechoslovakia, 1948.]
In Italy, Freemasonry has become linked to a scandal concerning the Propaganda Due lodge (a.k.a. P2). This lodge was chartered by the Grande Oriente d'Italia in 1877, as a lodge for visiting Masons unable to attend their own lodges. Under Licio Gelli's leadership, in the late 1970s, P2 became involved in the financial scandals that nearly bankrupted the Vatican Bank. However, by this time the lodge was operating independently and irregularly, as the Grand Orient had revoked its charter and expelled Gelli in 1976.
Conspiracy theorists have long associated Freemasonry with the New World Order and the Illuminati, and state that Freemasonry as an organisation is either bent on world domination or already secretly in control of world politics. Historically, Freemasonry has attracted criticism—and suppression—from both the politically far right (e.g., Nazi Germany) and the far left (e.g. the former Communist states in Eastern Europe).Michael Johnstone, The Freemasons, Arcturus, 2005, pp 73–75
Even in modern democracies, Freemasonry is sometimes viewed with distrust.Hodapp, Christopher. Freemasons for Dummies. Indianapolis: Wiley, 2005. p. 86. In the UK, Masons working in the justice system, such as judges and police officers, were from 1999 to 2009 required to disclose their membership.Bright, Martin (12 June 2005). "MPs told to declare links to Masons", The Guardian While a parliamentary inquiry found that there has been no evidence of wrongdoing, it was felt that any potential loyalties Masons might have, based on their vows to support fellow Masons, should be transparent to the public.Cusick, James (27 December 1996). Police want judges and MPs to reveal Masonic links too, The Independent The policy of requiring a declaration of masonic membership of applicants for judicial office (judges and magistrates) was ended in 2009 by Justice Secretary Jack Straw (who had initiated the requirement in the 1990s). Straw stated that the rule was considered disproportionate, since no impropriety or malpractice had been shown as a result of judges being Freemasons.
Freemasonry is both successful and controversial in France; membership is rising, but reporting in the popular media is often negative.
In some countries anti-Masonry is often related to antisemitism and anti-Zionism. For example, in 1980, the Iraqi legal and penal code was changed by Saddam Hussein's ruling Ba'ath Party, making it a felony to "promote or acclaim Zionist principles, including Freemasonry, or who associate [themselves] with Zionist organisations". Professor Andrew Prescott of the University of Sheffield writes: "Since at least the time of the Protocols of the Elders of Zion, antisemitism has gone hand in hand with anti-masonry, so it is not surprising that allegations that 11 September was a Zionist plot have been accompanied by suggestions that the attacks were inspired by a masonic world order".Prescott, pp. 13–14, 30, 33.
The Holocaust
[Image: Forget-me-not]
The preserved records of the Reichssicherheitshauptamt (the Reich Security Main Office) show the persecution of Freemasons during the Holocaust. RSHA Amt VII (Written Records) was overseen by Professor Franz Six and was responsible for "ideological" tasks, by which was meant the creation of antisemitic and anti-Masonic propaganda. While the number is not accurately known, it is estimated that between 80,000 and 200,000 Freemasons were killed under the Nazi regime.Freemasons for Dummies, by Christopher Hodapp, Wiley Publishing Inc., Indianapolis, 2005, p. 85, sec. Hitler and the Nazi Masonic concentration camp inmates were graded as political prisoners and wore an inverted red triangle. Hitler believed Freemasons had succumbed to the Jews conspiring against Germany.https://www.ushmm.org/wlc/en/article.php?ModuleId=10007186; http://freemasonry.bcy.ca/anti-masonry/hitler.html
The small blue forget-me-not flower was first used by the Grand Lodge Zur Sonne, in 1926, as a Masonic emblem at the annual convention in Bremen, Germany. In 1938 a forget-me-not badge—made by the same factory as the Masonic badge—was chosen for the annual Nazi Party Winterhilfswerk, the annual charity drive of the National Socialist People's Welfare, the welfare branch of the Nazi party. This coincidence enabled Freemasons to wear the forget-me-not badge as a secret sign of membership.
After World War II, the forget-me-not flower was again used as a Masonic emblem at the first Annual Convention of the United Grand Lodges of Germany in 1948. The badge is now worn in the coat lapel by Freemasons around the world to remember all who suffered in the name of Freemasonry, especially those during the Nazi era.
See also
List of Freemasons
Footnotes
External links
Web of Hiram at the University of Bradford. A database of donated Masonic material.
Masonic Books Online of the Pietre-Stones Review of Freemasonry
The Constitutions of the Free-Masons (1734), James Anderson, Benjamin Franklin, Paul Royster. Hosted by the Libraries at the University of Nebraska-Lincoln
The Mysteries of Free Masonry, by William Morgan, from Project Gutenberg
The United Grand Lodge of England's Library and Museum of Freemasonry, London
A page about Freemasonry – claiming to be the world's oldest Masonic website.
Articles on Judaism and Freemasonry
Anti-Masonry: Points of View – Edward L. King's Masonic website
Letter case

[Image: The lower-case "a" and upper-case "A" are the two case variants of the first letter in the alphabet.]
In orthography and typography, letter case (or just case) is the distinction between the letters that are in larger upper case (also uppercase, capital letters, capitals, caps, large letters, or more formally majuscule) and smaller lower case (also lowercase, small letters, or more formally minuscule) in the written representation of certain languages. Here is a comparison of the upper and lower case versions of each letter included in the English alphabet (the exact representation will vary according to the font used):
Upper case: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Lower case: a b c d e f g h i j k l m n o p q r s t u v w x y z
Typographically, the basic difference between the majuscules and minuscules is not that the majuscules are big and minuscules small, but that the majuscules generally have the same height. The height of the minuscules varies, as some of them have parts higher (ascenders) or lower (descenders) than the typical size. In Times New Roman, for instance, b, d, f, h, k, l and t are the letters with ascenders, and g, j, p, q and y are the ones with descenders. (In Roman Antiqua or other vertical fonts, the defunct initial or medial long s, ſ, would also have been an ascender; in italics, however, it would have been one of only two letters in the English or expanded Latin alphabet with both an ascender and a descender, the other being f.) In addition, with old-style numerals still used by some traditional or classical fonts, 6 and 8 make up the ascender set, and 3, 4, 5, 7 and 9 the descender set.
Letter case is often prescribed by the grammar of a language or by the conventions of a particular discipline. In orthography, the uppercase is primarily reserved for special purposes, such as the first letter of a sentence or of a proper noun, which makes the lowercase the more common variant in text. In mathematics, letter case may indicate the relationship between objects with uppercase letters often representing "superior" objects (e.g. X could be a set containing the generic member x). Engineering design drawings are typically labelled entirely in upper-case letters, which are easier to distinguish than lowercase, especially when space restrictions require that the lettering be small.
Terminology
[Image: Divided upper and lower type cases for movable type]
The terms upper case and lower case can be written as two consecutive words, connected with a hyphen (upper-case and lower-case), or as a single word (uppercase and lowercase). These terms originated from the common layouts of the shallow drawers called type cases used to hold the movable type for letterpress printing. Traditionally, the capital letters were stored in a separate case that was located above the case that held the small letters, and the name proved easy to remember since capital letters are taller.
Majuscule, for palaeographers, is technically any script in which the letters have very few or very short ascenders and descenders, or none at all (for example, the majuscule scripts used in the Codex Vaticanus Graecus 1209, or the Book of Kells). By virtue of their visual impact, this made the term majuscule an apt descriptor for what much later came to be more commonly referred to as uppercase letters.
Minuscule refers to lowercase letters. The word is often spelled miniscule, by association with the unrelated word miniature and the prefix mini-. This has traditionally been regarded as a spelling mistake (since minuscule is derived from the word minus), but is now so common that some dictionaries tend to accept it as a nonstandard or variant spelling. Miniscule is still less likely, however, to be used in reference to lower-case letters.
Bicameral script
[Image: Williamsburg 18th-century press letters]
Scripts using two separate cases are also called bicameral scripts. Languages that use the Latin, Cyrillic, Greek, Coptic, Armenian, Adlam, Varang Kshiti, Cherokee, and Osage scripts use letter cases in their written form as an aid to clarity. Other bicameral scripts that aren't used for any modern languages are Old Hungarian, Glagolitic, and Deseret. The Georgian alphabet has several variants, and there were attempts to use them as different cases, but the modern written Georgian language doesn't distinguish case.
Many other writing systems make no distinction between majuscules and minuscules – a system called unicameral script or unicase. This includes most syllabic and other non-alphabetic scripts.
If an alphabet has letter case, all or nearly all letters have both forms. Paired forms are considered variants of the same letter: they have the same name and pronunciation and will be treated identically when sorting in alphabetical order. The glyphs of lower-case letters can resemble smaller forms of the upper-case glyphs restricted to the base band (e.g. "C/c" and "S/s", cf. small caps) or can look hardly related (e.g. "D/d" and "G/g").
In scripts with a case distinction, lower case is generally used for the majority of text; capitals are used for capitalisation and emphasis. In addition, acronyms and initialisms are often written in all-caps, depending on various factors in English.
Capitalisation
Capitalisation is the writing of a word with its first letter in uppercase and the remaining letters in lowercase. Capitalisation rules vary by language and are often quite complex, but in most modern languages that have capitalisation, the first word of every sentence is capitalised, as are all proper nouns.
Capitalisation in English, in terms of the general orthographic rules independent of context (e.g. title vs. heading vs. text), is universally standardised for formal writing. (Informal communication, such as texting, instant messaging or a handwritten sticky note, may not bother, but that is because its users usually do not expect it to be formal. The same applies in other languages, e.g. in German they do not bother to capitalise nouns in these contexts.) In English, capital letters are used as the first letter of a sentence, a proper noun, or a proper adjective. There are a few pairs of words of different meanings whose only difference is capitalisation of the first letter. The names of the days of the week and the names of the months are also capitalised, as are the first-person pronoun "I" and the interjection "O" (although the latter is uncommon in modern usage, with "oh" being preferred). Other words normally start with a lower-case letter. There are, however, situations where further capitalisation may be used to give added emphasis, for example in headings and titles (see below). In some traditional forms of poetry, capitalisation has conventionally been used as a marker to indicate the beginning of a line of verse independent of any grammatical feature.
Other languages vary in their use of capitals. For example, in German all nouns are capitalised (this was previously common in English as well), while in Romance and most other European languages the names of the days of the week, the names of the months, and adjectives of nationality, religion and so on normally begin with a lower-case letter.
Exceptional letters and digraphs
The German letter "ß" orthographically only exists in lower case, as it never occurs at the beginning of a word. In all-caps style "ß" was historically replaced by the digraph "SS" until the introduction and standardization of a Capital ß form in the 2010s.
The Greek upper-case letter "Σ" has two different lower-case forms: "ς" in word-final position and "σ" elsewhere. In a similar manner, the Latin upper-case letter "S" used to have two different lower-case forms: "s" in word-final position and " ſ " elsewhere. The latter form, called the long s, fell out of general use before the middle of the 19th century, except for the countries that continued to use Blackletter typefaces such as Fraktur. When Blackletter type fell out of general use in the mid-20th century, even those countries dropped the long s.
Unlike most Latin-script languages, which link the dotless upper-case "I" with the dotted lower-case "i", Turkish has both a dotted and dotless I, each in both upper and lower case. Each of the two pairs ("İ/i" and "I/ı") represent a distinctive phoneme.
In some languages, specific digraphs may be regarded as a single letter. For example, in South Slavic languages whose orthography is coordinated between the Cyrillic and Latin scripts, the Latin digraphs "Lj/lj", "Nj/nj" and "Dž/dž" are each regarded as a single letter (like their Cyrillic equivalents "Љ/љ", "Њ/њ" and "Џ/џ", respectively), but even when capitalised, the second part resembles a lower-case letter (see discussion of "title case" below). Only in all-caps style should both parts resemble a capital letter (e.g. Ljiljan–LJILJAN, Njonja–NJONJA, Džidža–DŽIDŽA).
However in other languages, such as Welsh and Hungarian, various digraphs are regarded as single letters for collation purposes, but the second half of the digraph will still be written in lower case even if the first half is capitalised.
In Dutch, the digraph "IJ/ij" is capitalised as a single entity (for example, "IJsland" rather than "Ijsland").
In English, some families whose surname starts with F write it as "ff". For a fictional example, in the P. G. Wodehouse story "A Slice of Life" Wilfred Mulliner must circumvent the nasty Sir Jasper ffinch-ffarowmere to reach his love Angela.
Related phenomena
Similar orthographic and graphostylistic conventions are used for emphasis or following language-specific rules, including:
Font effects such as italic type or oblique type, boldface, and choice of serif vs. sans-serif.
Typographical conventions in mathematical formulae include the use of Greek letters and the use of Latin letters with special formatting such as blackboard bold and blackletter.
Letters of the Arabic alphabet and some jamo of the Korean hangul have different forms for initial or final placement, but these rules are strict and the different forms cannot be used for emphasis.
In Georgian, some authors use isolated letters from the ancient Asomtavruli alphabet within a text otherwise written in the modern Mkhedruli in a fashion that is reminiscent of the usage of upper-case letters in the Latin, Greek, and Cyrillic alphabets.
In the Japanese writing system, an author has the option of switching between kanji, hiragana, katakana, and rōmaji. In particular, every hiragana character has an equivalent katakana character, and vice versa. Because this resembles the Latin alphabet's two cases, romanised Japanese sometimes uses lowercase letters to represent words that would be written in hiragana, and uppercase letters to represent words that would be written in katakana. Some kana syllabograms can be written in smaller type when they modify or combine with the preceding sign (yōon and sokuon).
Stylistic or specialised usage
Case styles
[Image: Alternating all-caps and headline styles at the start of a New York Times report published in November 1919. (The event reported is Arthur Eddington's test of Einstein's theory of general relativity.)]
In English, a variety of case styles are used in various circumstances:
Sentence case "The quick brown fox jumps over the lazy dog." The standard case used in English prose. Only the first character of the sentence is capitalised, except for proper nouns and other words which are required by a more specific rule to be capitalised. Generally equivalent to the baseline universal standard of formal English orthography mentioned above.
Title case "The Quick Brown Fox Jumps Over The Lazy Dog." or "The Quick Brown Fox Jumps over the Lazy Dog."(depending on how the house style treats four-letter prepositions). Also known as "headline style" and "capital case". First character in all words capitalised, except for certain subsets defined by rules that are not universally standardised. The standardisation is only at the level of house styles and individual style manuals. (See further explanation below at .)
Start case (initial caps) "The Quick Brown Fox Jumps Over The Lazy Dog." A simplified variant of title case, start case capitalises all words, including articles, prepositions, and conjunctions.
All caps (all uppercase) "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG." Capital letters only. This style can be used in headings and special situations, such as for typographical emphasis in text made on a typewriter. With the advent of the Internet, all-caps is more often used for emphasis; however, it is considered poor netiquette by some to type in all capitals, and said to be tantamount to shouting.RFC 1855 "Netiquette Guidelines" Long spans of Latin-alphabet text in all upper-case are harder to read because of the absence of the ascenders and descenders found in lower-case letters, which can aid recognition.
Small caps "" Capital letters at the size of a lowercase "x". Slightly larger small-caps can be used in a fashion. Used for acronyms, names, mathematical entities, computer commands in printed text, business or personal printed stationery letterheads, and other situations where a given phrase needs to be distinguished from the main text.
All lowercase "the quick brown fox jumps over the lazy dog." No capital letters. This style is sometimes used for artistic effect, such as in poetry. Also commonly seen in computer commands, and in SMS language (avoiding the shift key, to type more quickly).
A comparison of various case styles (from most to least capitals used):
All-caps: "THE VITAMINS ARE IN MY FRESH CALIFORNIA RAISINS" – all letters uppercase.
Start case or initial caps: "The Vitamins Are In My Fresh California Raisins" – all words capitalised regardless of function.
Title case: "The Vitamins Are in My Fresh California Raisins" – first word and all other words capitalised except for articles, prepositions of fewer than 5 letters and conjunctions.
Title case, also excepting copulae (forms of "to be"): "The Vitamins are in My Fresh California Raisins".
Title case, excepting all closed-class words: "The Vitamins are in my Fresh California Raisins".
German-style sentence case: "The Vitamins are in my fresh California Raisins" – first word and all nouns capitalised.
German-style mid-sentence case: "the Vitamins are in my fresh California Raisins" – all nouns capitalised (but not the first word by default).
Sentence case: "The vitamins are in my fresh California raisins" – first word, proper nouns and some specified words capitalised.
Mid-sentence case: "the vitamins are in my fresh California raisins" – as above, but first word not capitalised by default.
Lowercase: "the vitamins are in my fresh california raisins" – all letters lowercase (unconventional in English).
Headings and publication titles
In English-language publications, various conventions are used for the capitalisation of words in publication titles and headlines, including chapter and section headings. The rules differ substantially between individual house styles.
The convention followed by many British publishers (including scientific publishers, like Nature, magazines, like The Economist and New Scientist, and newspapers, like The Guardian and The Times) and also U.S. newspapers, is sentence-style capitalisation in headlines, i.e. capitalisation follows the same rules that apply for sentences. This convention is usually called sentence case. It may also be applied to publication titles, especially in bibliographic references and library catalogues. An example of a global publisher whose English-language house style prescribes sentence-case titles and headings is the International Organization for Standardization.
For publication titles it is, however, a common typographic practice among both British and U.S. publishers to capitalise significant words (and in the United States, this is often applied to headings, too). This family of typographic conventions is usually called title case. For example, R. M. Ritter's Oxford Manual of Style (2002) suggests capitalising "the first word and all nouns, pronouns, adjectives, verbs and adverbs, but generally not articles, conjunctions and short prepositions". This is an old form of emphasis, similar to the more modern practice of using a larger or boldface font for titles. The rules which prescribe which words to capitalise are not based on any grammatically inherent correct/incorrect distinction and are not universally standardised; they differ between style guides, although most style guides tend to follow a few strong conventions, as follows:
Most styles capitalise all words except for short closed-class words (certain parts of speech, namely, articles, prepositions, and conjunctions); but the first word (always) and last word (in many styles) are also capitalised, regardless of their part of speech. Many styles capitalise longer prepositions such as "between" and "throughout", but not shorter ones such as "for" and "with". One possible rule is to count a word of three or fewer letters as "short".
A few styles capitalise all words in title case (the so-called start case), which has the advantage of being easy to implement and hard to get "wrong" (that is, "not edited to style"). Because of this rule's simplicity, software case-folding routines can handle 95% or more of the editing, especially if they are programmed for desired exceptions (such as "FBI" rather than "Fbi").
As for whether hyphenated words are capitalised not only at the beginning but also after the hyphen, there is no universal standard; variation occurs in the wild and among house styles (e.g., "The Letter-Case Rule in My Book"; "Short-term Follow-up Care for Burns"). Traditional copyediting makes a distinction between temporary compounds (such as many nonce [novel instance] compound modifiers), in which every part of the hyphenated word is capitalised (e.g. "How This Particular Author Chose to Style His Autumn-Apple-Picking Heading"), and permanent compounds, which are terms that, although compound and hyphenated, are so well established that dictionaries enter them as headwords (e.g., "Short-term Follow-up Care for Burns").
Title case is widely used in many English-language publications, especially in the United States. However, its conventions are sometimes not followed strictly—especially in informal writing.
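These conventions can be approximated programmatically. The following C sketch is illustrative only: the function name title_case and the particular minor-word list are assumptions made here rather than any published standard, the input is assumed to consist of lower-case words separated by single spaces, and real house styles differ in details such as whether the last word or longer prepositions are always capitalised.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Minor words left in lower case unless they begin the title; the exact
   list is a house-style choice, not a universal rule. */
static const char *minor_words[] = {"a", "an", "the", "and", "but", "or", "nor",
                                    "of", "in", "on", "at", "to", "for", "over", "with"};

static int is_minor(const char *word) {
    for (size_t i = 0; i < sizeof minor_words / sizeof minor_words[0]; i++)
        if (strcmp(word, minor_words[i]) == 0)
            return 1;
    return 0;
}

/* Title-case a lower-case string in place: capitalise the first word and
   every word that is not on the minor-word list. */
static void title_case(char *s) {
    char *word = s;
    int first = 1;
    while (word != NULL && *word != '\0') {
        char *space = strchr(word, ' ');
        if (space != NULL)
            *space = '\0';                  /* temporarily isolate the word */
        if (first || !is_minor(word))
            word[0] = (char)toupper((unsigned char)word[0]);
        first = 0;
        if (space != NULL) {
            *space = ' ';                   /* restore the separator */
            word = space + 1;
        } else {
            word = NULL;
        }
    }
}

int main(void) {
    char headline[] = "the quick brown fox jumps over the lazy dog";
    title_case(headline);
    printf("%s\n", headline);  /* "The Quick Brown Fox Jumps over the Lazy Dog" */
    return 0;
}

Run on the example sentence, this prints the second title-case variant shown above, in which the four-letter preposition "over" is left in lower case.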
In creative typography, such as music record covers and other artistic material, all styles are commonly encountered, including all-lowercase letters and special case styles, such as studly caps (see below). For example, in video-game wordmarks it is not uncommon to use stylised upper-case letters at the beginning and end of a title, with the intermediate letters in small caps or lower case (e.g., DmC).
Multi-word proper nouns
Single-word proper nouns are capitalised in formal written English, unless the name is intentionally stylised to break this rule (such as the first or last name of danah boyd).
Multi-word proper nouns include names of organisations, publications, and people. Often the rules for "title case" (described in the previous section) are applied to these names, so that non-initial articles, conjunctions, and short prepositions are lowercase, and all other words are uppercase. For example, the short preposition "of" and the article "the" are lowercase in "Steering Committee of the Finance Department". Usually only capitalised words are used to form an acronym variant of the name, though there is some variation in this.
With personal names, this practice can vary (sometimes all words are capitalised, regardless of length or function), but is not limited to English names. Examples include the English names Tamar of Georgia and Catherine the Great, "van" and "der" in Dutch names, "de", "los", and "y" in Spanish names, "de" or "d'" in French names, and "ibn" in Arabic names. Some surname prefixes also affect the capitalisation of the following internal letter, for example "Mac" in Celtic names and "Al" in Arabic names.
Special case styles
Some case styles are not used in standard English, but are common in computer programming, product branding, or other specialised fields:
CamelCase "TheQuickBrownFoxJumpsOverTheLazyDog" Spaces and punctuation are removed and the first letter of each word is capitalised. If this includes the first letter of the first word ("CamelCase", "PowerPoint", "TheQuick...", etc.), the case is sometimes called upper camel case (or, when written, "CamelCase"), Pascal case or bumpy case. When, otherwise, the first letter of the first word is lowercase ("camelCase", "iPod", "eBay", etc.), the case is usually known as camelCase and sometimes as lower camel case. This is the format that has become popular in the branding of information technology products.
snake_case "The_quick_brown_fox_jumps_over_the_lazy_dog" Punctuation is removed and spaces are replaced by single underscores. Normally the letters share the same case (e.g. "UPPER_CASE_EMBEDDED_UNDERSCORE" or "lower_case_embedded_underscore") but the case can be mixed. When all upper case, it may be referred to as "SCREAMING_SNAKE_CASE".
kebab-case e.g. "The-quick-brown-fox-jumps-over-the-lazy-dog" As per snake_case above, except hyphens rather than underscores are used to replace spaces. If every word is capitalised, the style is known as Train-Case.
StUdLyCaPs e.g. "tHeqUicKBrOWnFoXJUmpsoVeRThElAzydOG" Mixed case with no semantic or syntactic significance to the use of the capitals. Sometimes only vowels are upper-case, at other times upper and lower case are alternated, but often it is just random. The name comes from the sarcastic or ironic implication that it was used in an attempt by the writer to convey their own coolness. (It is also used to mock the violation of standard English case conventions by marketers in the naming of computer software packages, even when there is no technical requirement to do so, e.g. Sun Microsystems' naming of a windowing system NeWS.)
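Conversion between these conventions is a common small programming task. The following C sketch is an illustration rather than a standard routine: the function name snake_to_camel is an assumption made here, the input is assumed to be ASCII, and the caller must supply a sufficiently large output buffer.

#include <ctype.h>
#include <stdio.h>

/* Convert a snake_case identifier to upper CamelCase: drop the underscores
   and capitalise the first letter of each word. */
static void snake_to_camel(const char *snake, char *camel) {
    int start_of_word = 1;
    for (; *snake != '\0'; snake++) {
        if (*snake == '_') {
            start_of_word = 1;              /* underscore marks a word boundary */
        } else if (start_of_word) {
            *camel++ = (char)toupper((unsigned char)*snake);
            start_of_word = 0;
        } else {
            *camel++ = *snake;
        }
    }
    *camel = '\0';
}

int main(void) {
    char out[64];
    snake_to_camel("the_quick_brown_fox", out);
    printf("%s\n", out);                    /* prints "TheQuickBrownFox" */
    return 0;
}

The reverse conversion would instead insert an underscore before each upper-case letter and fold it to lower case.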
Metric system
[Image: Of the seven SI base-unit symbols, "A" (ampere for electric current) and "K" (kelvin for temperature) are always written in upper case, whereas "s" (second for time), "m" (metre for length), "kg" (kilogram for mass), "cd" (candela for luminous intensity), and "mol" (mole for amount of substance) are written in lower case. (The kelvin, second and kilogram are defined independently of any other units, but the rest depend on the definitions of other base units.)]
In the International System of Units (SI), a letter usually has different meanings in upper and lower case when used as a unit symbol. By default, a unit symbol is written in lower case, but if the name of the unit is derived from a proper noun, the first letter of the symbol is written in upper case (nevertheless, the name of the unit, if spelled out, is always considered a common noun and written accordingly):
1 s (one second) when used for the base unit of time.
1 S (one siemens) when used for the unit of electric conductance and admittance (named after Werner von Siemens).
1 Sv (one sievert), used for the unit of ionising radiation dose (named after Rolf Maximilian Sievert).
For the purpose of clarity, the symbol for litre can optionally be written in upper case even though the name is not derived from a proper noun:
1 l, the original form, where "one" and "lower-case L" look rather alike (in some typefaces). (Sometimes a different type face, such as a "curly" one, is used for the lower-case L.)
1 L, an alternative form, where "one" and "capital L" look different.
The letter case of a prefix symbol is determined independently of the unit symbol to which it is attached. Lower case is used for all submultiple prefix symbols and the small multiple prefix symbols up to "k" (for kilo, meaning 103 = 1000 multiplier), whereas upper case is used for larger multipliers:
1 ms, a small measure of time ("m" for milli, meaning 10−3 = 1/1000 multiplier).
1 Ms, a large measure of time ("M" for mega, meaning 106 = 1 000 000 multiplier).
1 mS, a small measure of electric conductance.
1 MS, a large measure of electric conductance.
1 mm, a small measure of length (the latter "m" for metre).
1 Mm, a large measure of length.
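Because the case of the prefix letter alone carries this distinction, software that parses unit strings has to treat "m" and "M" as unrelated symbols. The following C sketch illustrates the idea; the function name si_prefix is an assumption made here, and only a handful of prefixes are included.

#include <stdio.h>

/* Map a single SI prefix symbol to its multiplier; letter case alone
   distinguishes, for example, milli ("m") from mega ("M"). */
static double si_prefix(char symbol) {
    switch (symbol) {
        case 'G': return 1e9;    /* giga */
        case 'M': return 1e6;    /* mega */
        case 'k': return 1e3;    /* kilo (lower case, unlike the larger multipliers) */
        case 'm': return 1e-3;   /* milli */
        case 'u': return 1e-6;   /* micro; the symbol µ written as "u" in plain ASCII */
        case 'n': return 1e-9;   /* nano */
        default:  return 1.0;    /* no recognised prefix */
    }
}

int main(void) {
    printf("1 ms = %g s\n", si_prefix('m'));   /* 0.001 */
    printf("1 Ms = %g s\n", si_prefix('M'));   /* 1e+06 */
    return 0;
}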
Case folding
Case-insensitive operations are sometimes said to fold case, from the idea of folding the character code table so that upper- and lower-case letters coincide. The conversion of letter case in a string is common practice in computer applications, for instance to make case-insensitive comparisons. Many high-level programming languages provide simple methods for case folding, at least for the ASCII character set.
Methods in word processing
Most modern word processors provide automated case folding with a simple click or keystroke. For example, in Microsoft Office Word, there is a dialog box for toggling the selected text through UPPERCASE, then lowercase, then Title Case (actually start caps; exception words must be lowercased individually). The Shift+F3 keystroke does the same thing.
Methods in programming
In some forms of BASIC there are two methods for case folding:
UpperA$ = UCASE$("a")
LowerA$ = LCASE$("A")
C and C++, as well as any C-like language that conforms to its standard library, provide these functions in the file ctype.h:
char upperA = toupper('a');
char lowerA = tolower('A');
Case folding is different with different character sets. In ASCII or EBCDIC, case can be folded in the following way, in C:
#define toupper(c) (islower(c) ? (c) - 'a' + 'A' : (c))
#define tolower(c) (isupper(c) ? (c) - 'A' + 'a' : (c))
This only works because the letters of upper and lower cases are spaced out equally. In ASCII they are consecutive, whereas with EBCDIC they are not; nonetheless the upper-case letters are arranged in the same pattern and with the same gaps as are the lower-case letters, so the technique still works.
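Such folding is most often used to implement case-insensitive string comparison. The following C sketch handles only the ASCII case; the helper name strcicmp is an assumption made for illustration, as the standard C library does not define a case-insensitive comparison function (POSIX systems provide strcasecmp in strings.h).

#include <ctype.h>

/* Compare two NUL-terminated ASCII strings, ignoring case, by folding each
   character to lower case before comparing. Returns 0 when the strings are
   equal apart from case, otherwise the difference of the first mismatching
   (folded) pair. */
int strcicmp(const char *a, const char *b) {
    for (;; a++, b++) {
        int d = tolower((unsigned char)*a) - tolower((unsigned char)*b);
        if (d != 0 || *a == '\0')
            return d;
    }
}

With this definition, comparing "ASCII" with "ascii" yields 0, while comparing "case" with "cast" yields a non-zero value.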
Some computer programming languages offer facilities for converting text to a form in which all words are first-letter capitalised. Visual Basic calls this "proper case"; Python calls it "title case". This differs from usual title casing conventions, such as the English convention in which minor words are not capitalised.
Unicode case folding and script identification
Unicode defines case folding through the three case-mapping properties of each character: uppercase, lowercase and titlecase. These properties relate all characters in scripts with differing cases to the other case variants of the character.
As briefly discussed in Unicode Technical Note #26, "In terms of implementation issues, any attempt at a unification of Latin, Greek, and Cyrillic would wreak havoc [and] make casing operations an unholy mess, in effect making all casing operations context sensitive […]". In other words, while the shapes of letters like A, B, E, H, K, M, O, P, T, X, Y and so on are shared between the Latin, Greek, and Cyrillic alphabets (and small differences in their canonical forms may be considered to be of a merely typographical nature), it would still be problematic for a multilingual character set or a font to provide only a single codepoint for, say, uppercase letter B, as this would make it quite difficult for a wordprocessor to change that single uppercase letter to one of the three different choices for the lower-case letter, b (Latin), β (Greek), or в (Cyrillic). Without letter case, a "unified European alphabet" – such as ABБCГDΔΕZЄЗFΦGHIИJ…Z, with an appropriate subset for each language – is feasible; but considering letter case, it becomes very clear that these alphabets are rather distinct sets of symbols.
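The point can be illustrated with the C wide-character functions, which case-map one code point at a time using the C library's locale data. The sketch below is only an illustration and assumes an implementation whose case-mapping tables cover Greek and Cyrillic (for example, a UTF-8 locale under a common libc); it shows that the look-alike capitals U+0042 (Latin B), U+0392 (Greek Beta) and U+0412 (Cyrillic Ve) fold to three different lower-case code points.

#include <locale.h>
#include <stdio.h>
#include <wctype.h>

int main(void) {
    setlocale(LC_ALL, "");   /* results depend on the locale's case-mapping tables */
    wint_t capitals[] = {0x0042, 0x0392, 0x0412, 0};   /* Latin B, Greek Beta, Cyrillic Ve */
    for (int i = 0; capitals[i] != 0; i++)
        printf("U+%04X -> U+%04X\n",
               (unsigned)capitals[i], (unsigned)towlower(capitals[i]));
    return 0;
}

Under such a locale this would report U+0062, U+03B2 and U+0432 as the respective lower-case forms.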
History
Originally alphabets were written entirely in majuscule letters, spaced between well-defined upper and lower bounds. When written quickly with a pen, these tended to turn into rounder and much simpler forms. It is from these that the first minuscule hands developed, the half-uncials and cursive minuscule, which no longer stayed bound between a pair of lines. These in turn formed the foundations for the Carolingian minuscule script, developed by Alcuin for use in the court of Charlemagne, which quickly spread across Europe. The advantage of the minuscule over majuscule was improved, faster readability.
In Latin, papyri from Herculaneum dating before 79 AD (when it was destroyed) have been found that have been written in old Roman cursive, where the early forms of minuscule letters "d", "h" and "r", for example, can already be recognised. According to papyrologist Knut Kleve, "The theory, then, that the lower-case letters have been developed from the fifth century uncials and the ninth century Carolingian minuscules seems to be wrong." Both majuscule and minuscule letters existed, but the difference between the two variants was initially stylistic rather than orthographic and the writing system was still basically unicameral: a given handwritten document could use either one style or the other but these were not mixed. European languages, except for Ancient Greek and Latin, did not make the case distinction before about 1300.
The timeline of writing in Western Europe can be divided into four eras:
Greek majuscule (9th–3rd century BC) in contrast to the Greek uncial script (3rd century BC – 12th century AD) and the later Greek minuscule
Roman majuscule (7th century BC – 4th century AD) in contrast to the Roman uncial (4th–8th century AD), Roman Half Uncial, and minuscule
Carolingian majuscule (4th–8th century AD) in contrast to the Carolingian minuscule (around 780 – 12th century)
Gothic majuscule (13th and 14th century), in contrast to the early Gothic (end of 11th to 13th century), Gothic (14th century), and late Gothic (16th century) minuscules.
Traditionally, certain letters were rendered differently according to a set of rules. In particular, those letters that began sentences or nouns were made larger and often written in a distinct script. There was no fixed capitalisation system until the early 18th century. The English language eventually dropped the rule for nouns, while the German language kept it.
Similar developments have taken place in other alphabets. The lower-case script for the Greek alphabet has its origins in the 7th century and acquired its quadrilinear form in the 8th century. Over time, uncial letter forms were increasingly mixed into the script. The earliest dated Greek lower-case text is the Uspenski Gospels (MS 461) in the year 835. The modern practice of capitalising the first letter of every sentence seems to be imported (and is rarely used when printing Ancient Greek materials even today).
[Image: Simplified relationship between various scripts leading to the development of modern lower case of the standard Latin alphabet and that of the modern variants Fraktur (used, until recently, in Germany) and Gaelic (used in Ireland). Several scripts coexisted, such as half-uncial and uncial, which derive from Roman cursive and Greek uncial, and Visigothic, Merovingian (Luxeuil variant here) and Beneventan. The Carolingian script was the basis for blackletter and humanist minuscule. What is commonly called "Gothic writing" is technically called blackletter (here textualis quadrata) and is completely unrelated to Visigothic script. The letter j is i with a flourish; u and v are the same letter in early scripts and were used depending on their position in insular half-uncial and Caroline minuscule and later scripts; w is a ligature of vv; in insular scripts the rune wynn is used as a w (three other runes in use were the thorn (þ), ʻféʼ (ᚠ) as an abbreviation for cattle/goods and maðr (ᛘ) for man). The letters y and z were very rarely used; in particular, þ was written identically to y, so y was dotted to avoid confusion; the dot was adopted for i only after late-Caroline (protogothic); in Beneventan script the macron abbreviation featured a dot above. Lost variants such as r rotunda, ligatures and scribal abbreviation marks are omitted; long s is shown when no terminal s (surviving variant) is present. Humanist script was the basis for Venetian types, which changed little until today, such as Times New Roman (a serifed typeface).]
Type cases
The individual type blocks used in hand typesetting are stored in shallow wooden or metal drawers known as "type cases". Each is subdivided into a number of compartments ("boxes") for the storage of different individual letters.
The Oxford Universal Dictionary on Historical Principles (reprinted 1952) indicates that case in this sense (referring to the box or frame used by a compositor in the printing trade) was first used in English in 1588. Originally one large case was used for each typeface, then "divided cases", pairs of cases for majuscules and minuscules, were introduced in the region of today's Belgium by 1563, England by 1588, and France before 1723.
The terms upper and lower case originate from this division. By convention, when the two cases were taken out of the storage rack, and placed on a rack on the compositor's desk, the case containing the capitals and small capitals stood at a steeper angle at the back of the desk, with the case for the small letters, punctuation and spaces being more easily reached at a shallower angle below it to the front of the desk, hence upper and lower case.
Though pairs of cases were used in English-speaking countries and many European countries in the seventeenth century, in Germany and Scandinavia the single case continued in use.
Various patterns of cases are available, often with the compartments for lower-case letters varying in size according to the frequency of use of letters, so that the commonest letters are grouped together in larger boxes at the centre of the case. The compositor takes the letter blocks from the compartments and places them in a composing stick, working from left to right and placing the letters upside down with the nick to the top, then sets the assembled type in a galley.
See also
All caps
CamelCase
Capitalisation
Drop cap
Roman cursive
Roman square capitals
Shift key
Small caps
StudlyCaps
Text figures
Unicase
References
External links
Online Text Case Converter: Convert to Title Case, Sentence Case, Uppercase & Lowercase
Printing capitals worksheet
Codex Vaticanus B/03 Detailed description of Codex Vaticanus Graecus 1209 with many images.
All-caps is harder to read
Capitals: A Primer of Information about Capitalization with some Practical Typographic Hints as to the Use of Capitals by Frederick W. Hamilton, 1918, from Project Gutenberg
Lower Case Definition by The Linux Information Project; also includes information on lower case as it relates to computers.
Category:Alphabets
Category:Orthography
Category:Typography
Category:Capitalization
Communications in Somalia

[Image: The Hormuud Telecom building in Mogadishu.]
Communications in Somalia encompasses the communications services and capacity of Somalia. Telecommunications, internet, radio, print, television and postal services in the nation are largely concentrated in the private sector. Several of the telecom firms have begun expanding their activities abroad. The Federal government operates two official radio and television networks, which exist alongside a number of private and foreign stations. Print media in the country is also progressively giving way to news radio stations and online portals, as internet connectivity and access increases. Additionally, the national postal service is slated to be officially relaunched in 2013 after a long absence. In 2012, a National Communications Act was also approved by Cabinet members, which lays the foundation for the establishment of a National Communications regulator in the broadcasting and telecommunications sectors.
Telecommunications
General
After the start of the civil war, various new telecommunications companies began to spring up in the country and competed to provide missing infrastructure.Telecom Firms Thrive in Somalia Despite War, Shattered Economy – The Wall Street Journal Somalia now offers some of the most technologically advanced and competitively priced telecommunications and internet services in the world. Funded by Somali entrepreneurs and backed by expertise from China, Korea and Europe, these nascent telecommunications firms offer affordable mobile phone and internet services that are not available in many other parts of the continent. Customers can conduct money transfers (such as through the popular Dahabshiil) and other banking activities via mobile phones, as well as easily gain wireless Internet access.
thumb|right|Minister of Post and Telecommunications Mohamud Ibrihim Adan at the 2012 World Conference on International Telecommunications (WCIT) in Dubai.
After forming partnerships with multinational corporations such as Sprint, ITT and Telenor, these firms now offer the cheapest and clearest phone calls in Africa.Christopher J. Coyne, After war: the political economy of exporting democracy, (Stanford University Press, 2008), p. 154. These Somali telecommunication companies also provide services to every city, town and hamlet in Somalia. There are presently around 25 mainlines per 1,000 persons, and the local availability of telephone lines (tele-density) is higher than in neighboring countries; three times greater than in adjacent Ethiopia. Prominent Somali telecommunications companies include Somtel Network, Golis Telecom Group, Hormuud Telecom, Somafone, Nationlink, Netco, Telcom and Somali Telecom Group. Hormuud Telecom alone grosses about $40 million a year. Despite their rivalry, several of these companies signed an interconnectivity deal in 2005 that allows them to set prices, maintain and expand their networks, and ensure that competition does not get out of control.
In 2008, Dahabshiil Group acquired a majority stake in Somtel Network, a Hargeisa-based telecommunications firm specialising in high speed broadband, mobile internet, LTE services, mobile money transfer and mobile phone services.International Association of Money Transfer NetworksYahoo! Finance The acquisition provided Dahabshiil with the necessary platform for a subsequent expansion into mobile banking, a growth industry in the regional banking sector.TechChangeMonty Munford "Guest Post: Could Tiny Somaliland Become the First Cashless Society?", TechCrunch.com (5 September 2010). In 2014, Somalia's three largest telecommunication operators, Hormuud Telecom, NationLink and Somtel, also signed an interconnection agreement. The cooperative deal will see the firms establish the Somali Telecommunication Company (STC), which will allow their mobile clients to communicate across the three networks.
Investment in the telecom industry is held to be one of the clearest signs that Somalia's economy has continued to develop. The sector provides key communication services, and in the process facilitates job creation and income generation.
Regulation
On March 22, 2012, the Somali Cabinet unanimously approved the National Communications Act, which paves the way for the establishment of a National Communications regulator in the broadcasting and telecommunications sectors. The bill was passed following consultations between government representatives and communications, academic and civil society stakeholders. According to the Ministry of Information, Posts and Telecommunication, the Act is expected to create an environment conducive to investment and the certainty it provides will encourage further infrastructural development, resulting in more efficient service delivery.
Firms
Companies providing telecommunication services in Somalia include Somtel Network, Golis Telecom Group, Hormuud Telecom, Somafone, Nationlink, Netco, Telcom and Somali Telecom Group.
Mail
The Somali Postal Service (Somali Post) is the national postal service of the Federal Government of Somalia. It is part of the Ministry of Information, Posts and Telecommunication.Ministry of Information, Posts and Telecommunications, Government of Somalia, 2012. Retrieved 9 December 2012.
The national postal infrastructure was completely destroyed during the civil war. In order to fill the vacuum, Somali Post signed an agreement in 2003 with the United Arab Emirates' Emirates Post to process mail to and from Somalia. Emirates Post's mail transit hub at the Dubai International Airport was then used to forward mail from Somalia to the UAE and various Western destinations, including Italy, the Netherlands, the United Kingdom, Sweden, Switzerland and Canada.Emirates Post and Somali Post sign agreement to establish money transfer and mail services, AMEinfo.com, 30 June 2003. Retrieved 9 December 2012. Archived here.
Concurrently, the Somali Transitional Federal Government began preparations to revive the national postal service. The government's overall reconstruction plan for Somali Post is structured into three Phases spread out over a period of ten years. Phase I will see the reconstruction of the postal headquarters and General Post Office (GPO), as well as the establishment of 16 branch offices in the capital and 17 in regional bases. As of March 2012, the Somali authorities have re-established Somalia's membership with the Universal Postal Union (UPU), and taken part once again in the Union's affairs. They have also rehabilitated the GPO in Mogadishu, and appointed an official Postal Consultant to provide professional advice on the renovations. Phase II of the rehabilitation project involves the construction of 718 postal outlets from 2014 to 2016. Phase III is slated to begin in 2017, with the objective of creating 897 postal outlets by 2022.Reconstruction of Somalia Post, Presentation to the U.P.U., Berne, Somali Ministry of Information, Posts and Telecommunications, 2 March 2012. Retrieved 9 December 2012. Archived here.
On 1 November 2013, international postal services for Somalia officially resumed. The Universal Postal Union is now assisting the Somali Postal Service to develop its capacity, including providing technical assistance and basic mail processing equipment.
Radio
There are a number of radio news agencies based in Somalia. Established during the colonial period, Radio Mogadishu initially broadcast news items in both Somali and Italian.World radio TV handbook, (Billboard Publications., 1955), p.77. The station was modernized with Russian assistance following independence in 1960, and began offering home service in Somali, Amharic and Oromo.Thomas Lucien Vincent Blair, Africa: a market profile, (Praeger: 1965), p.126. After closing down operations in the early 1990s due to the civil war, the station was officially re-opened in the early 2000s by the Transitional National Government.SOMALIA: TNG launches “Radio Mogadishu” In the late 2000s, Radio Mogadishu also launched a complementary website of the same name, with news items in Somali, Arabic and English.Radio Muqdisho.net
Other radio stations based in Mogadishu include Radio Dalsan, the Mustaqbal Media Corporation and the Shabelle Media Network, the latter of which was in 2010 awarded the Media of the Year prize by the Paris-based journalism organisation, Reporters Without Borders (RSF). In total, one short-wave and about ten private FM radio stations broadcast from the capital, with several radio stations broadcasting from the central and southern regions.
The northeastern Puntland region has around six private radio stations, including Radio Garowe, Radio Daljir, Radio Codka-Nabbada and Radio Codka-Mudug. Radio Gaalkacyo, formerly known as Radio Free Somalia, operates from Galkayo in the north-central Mudug province. Additionally, the Somaliland region in the northwest has one government-operated radio station.
As of 2007, transmissions for two internationally based broadcasters were also available.
Television
thumb|right|News show on the Somali private channel Horn Cable Television.
The Mogadishu-based Somali National Television is the principal national public service broadcaster. On March 18, 2011, the Ministry of Information of the Transitional Federal Government began experimental broadcasts of the new TV channel. After a 20-year hiatus, the station was shortly thereafter officially re-launched on April 4, 2011.After 20 years, Somali president inaugurates national TV station SNTV broadcasts 24 hours a day, and can be viewed both within Somalia and abroad via terrestrial and satellite platforms.Somalia launches national TV
Additionally, Somalia has several private television networks, including Horn Cable Television and Universal TV. Two such TV stations re-broadcast Al-Jazeera and CNN. Eastern Television Network and SBC TV air from Bosaso, the commercial capital of Puntland. The Puntland and Somaliland regions also each have one government-run TV channel, Puntland TV and Radio and Somaliland National TV, respectively.
Print
In the early 2000s, print media in Somalia reached a peak in activity. Around 50 newspapers were published in Mogadishu alone during this period, including Qaran, Mogadishu Times, Sana'a, Shabelle Press, Ayaamaha, Mandeeq, Sky Sport, Goal, The Nation, Dalka, Panorama, Aayaha Nolosha, Codka Xuriyada and Xidigta Maanta. In 2003, as new free electronic media outlets started to proliferate, advertisers increasingly began switching over from print ads to radio and online commercials in order to reach more customers. A number of the broadsheets in circulation subsequently closed down operations, as they were no longer able to cover printing costs in the face of the electronic revolution. In 2012, the political Xog Doon and Xog Ogaal and Horyaal Sports were reportedly the last remaining newspapers printed in the capital. According to Issa Farah, a former editor with the Dalka broadsheet, newspaper publishing in Somalia is likely to experience a resurgence if the National Somali Printing Press is re-opened and the sector is given adequate public support.
Online news outlets covering Somalia include Garowe Online, Wardheernews, Horseedmedia, Calannka, Jowhar, Hiiraan, Boramanews and Puntland Post.
Telephone
To call in Somalia, the following formats are used (a brief illustrative sketch follows the list):
yxx xxxx, yy xxx xxx or yyy xxx xxx - Calls within Somalia
+252 yxx xxxx, +252 yy xxx xxx or +252 yyy xxx xxx - Calls from outside Somalia
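The dialling patterns above can be normalised mechanically. The following Python snippet is a minimal illustrative sketch only, not taken from any Somali operator's documentation: the function name, the assumption that domestic numbers contain seven to nine digits (matching the three patterns listed), and the sample number are all hypothetical.

import re

def to_international(domestic: str) -> str:
    # Keep digits only, e.g. "612 345 678" -> "612345678".
    digits = re.sub(r"\D", "", domestic)
    # The three domestic patterns above imply 7, 8 or 9 digits (assumed).
    if not 7 <= len(digits) <= 9:
        raise ValueError("unexpected Somali number length: %r" % domestic)
    # The international form simply prepends the +252 country code.
    return "+252 " + digits

print(to_international("612 345 678"))  # prints: +252 612345678

Run on a domestically formatted number, the helper strips the spacing and prefixes the +252 country code shown in the international forms above.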
As of the end of 2013, over 52% of Somalia's population used a cellphone.Somalia - Telecoms, Mobile and Broadband - Market Insights and Statistics
Internet
Internet users: 163,185 in 2014 (156th in the world) or 1.51% of the population (156th in the world).Internet Users by Country (2014) According to Global Internet, one of the largest Internet providers in central and southern Somalia, unofficial estimates on local Internet usage are higher, with 2.0% of the population estimated to have Internet access as of 2011.
Internet hosts: 186 hosts in 2012 (202nd in the world).
IPv4: 10,240 addresses allocated, less than 0.05% of the world total, 1.0 addresses per 1000 people (2012).Select Formats, Country IP Blocks. Accessed on 2 April 2012. Note: Site is said to be updated daily.Population, The World Factbook, United States Central Intelligence Agency. Accessed on 2 April 2012. Note: Data are mostly for 1 July 2012.
.so is the Internet top-level domain (ccTLD) for Somalia. After a long absence, the .so domain was officially relaunched in November 2010 by the .SO Registry. Regulated by the national Ministry of Posts and Telecommunication, the registrar offers several domain name spaces geared toward specific communities and interest groups, listed below with a short lookup sketch after the list:SO Registry
.so – General usage
com.so – Commercial enterprises
net.so – Networks
org.so – Non-profit organizations
gov.so – Government agencies
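As a rough sketch only, the list above can be expressed as a simple lookup table. The Python code below restates the name spaces given above; the function, its handling of unlisted labels and the sample domains are assumptions for illustration, not part of the .SO Registry's actual tooling.

# Second-level .so name spaces as described above, plus general .so usage.
SO_NAMESPACES = {
    "com.so": "Commercial enterprises",
    "net.so": "Networks",
    "org.so": "Non-profit organizations",
    "gov.so": "Government agencies",
}

def intended_use(domain: str) -> str:
    # Return the registrant group a .so domain name is geared toward.
    name = domain.lower().rstrip(".")
    for suffix, purpose in SO_NAMESPACES.items():
        if name == suffix or name.endswith("." + suffix):
            return purpose
    if name == "so" or name.endswith(".so"):
        return "General usage"
    raise ValueError("not a .so domain: %r" % domain)

print(intended_use("example.gov.so"))  # prints: Government agencies
print(intended_use("example.so"))      # prints: General usage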
According to the Centre for Law and Democracy (CLD) and the African Union/United Nations Information Support Team (IST), Somalia did not have systemic internet blocking or filtering as of December 2012. The application of content standards online was also unclear.
Somalia established its first ISP in 1999, making it one of the last countries in Africa to be connected to the Internet. According to the telecommunications resource Balancing Act, internet connectivity has since grown considerably, with around 53% of the entire nation covered as of 2009. Both internet commerce and telephony have consequently become among the quickest growing local businesses.
According to the Somali Economic Forum, the number of internet users in Somalia rose from only 200 in the year 2000 to 106,000 users in 2011, with the percentage continuing to rise. The number of mobile subscribers is similarly expected to rise from 512,682 in 2008 to around 6.1 million by 2015.
The Somali Telecommunication Association (STA), a watchdog organization that oversees the policy development and regulatory framework of Somalia's ICT sector, reported in 2006 that there were over half a million users of internet services within the territory. There were also 22 established ISPs and 234 cyber cafes, with an annual growth rate of 15.6%.
As of 2009, dial up, wireless and satellite services were available. Dial up internet services in Somalia were among the fastest growing on the continent, with an annual landline growth rate of over 12.5%. The increase in usage was largely due to innovative policy initiatives adopted by the various Somali telecom operators, including free local in-town calls, a flat rate of $10 per month for unlimited calls, a low charge of $0.005 per minute for Internet connections, and a one-time connection fee of $50. Global Internet Company, a firm jointly owned by the major Somali telecommunication networks Hormuud Telecom, Telcom Somalia and Nationlink, was the country's largest ISP. It was at the time the only provider of dial up services in Somalia's south-central regions. In the northern Puntland and Somaliland regions, online networks offered internet dial up services to their own group of subscribers. Among these firms was Golis Telecom Somalia in the northeast and Telesom in the northwest.
Broadband wireless services were offered by both dial up and non-dial up ISPs in major cities, such as Mogadishu, Bosaso, Hargeisa, Galkayo and Kismayo. Pricing ranged from $150 to $300 a month for unlimited internet access, with bandwidth rates of 64 kbit/s up and down. The main patrons of these wireless services were scholastic institutions, corporations, and UN, NGO and diplomatic missions. Mogadishu had the biggest subscriber base nationwide and was also the headquarters of the largest wireless internet services, among which were Dalkom (Wanaag HK), Orbit, Unitel and Webtel.
As of 2009, Internet via satellite had a steady growth rate of 10% to 15% per year. It was particularly in demand in remote areas that did not have either dialup or wireless online services. The local telecommunications company Dalkom Somalia provided internet over satellite, as well as premium routes for media operators and content providers, and international voice gateway services for global carriers. It also offered inexpensive bandwidth through its internet backbone, whereas bandwidth ordinarily cost customers from $2,500 to $3,000 per month through the major international bandwidth providers. The main clients of these local satellite services were internet cafes, money transfer firms and other companies, as well as international community representatives. In total, there were over 300 local satellite terminals available across the nation, which were linked to teleports in Europe and Asia. Demand for the satellite services gradually began to fall as broadband wireless access rose. However, it increased in rural areas, as the main client base for the satellite services extended their operations into more remote locales.
In December 2012, Hormuud Telecom launched its Tri-Band 3G service for internet and mobile clients. The first of its kind in the country, this third generation mobile telecommunications technology offers users a faster and more secure connection.
In November 2013, Somalia received its first fiber optic connection. The country previously had to rely on expensive satellite links due to the civil conflict, which limited internet usage. However, residents now have access to broadband internet cable for the first time after an agreement reached between Hormuud Telecom and Liquid Telecom. The deal will see Liquid Telecom link Hormuud to its 17,000 km (10,500 mile) network of terrestrial cables, which will deliver faster internet capacity. The fiber optic connection will also make online access more affordable to the average user. This in turn is expected to further increase the number of internet users. Dalkom Somalia reached a similar agreement with the West Indian Ocean Cable Company (WIOCC) Ltd, which it holds shares in. Effective the first quarter of 2014, the deal will establish fiber optic connectivity to and from Somalia via the EASSy cable. The new services are expected to reduce the cost of international bandwidth and to better optimize performance, thereby further broadening internet access. Dalkom Somalia is concurrently constructing a 1,000 square mile state-of-the-art data center in Mogadishu. The site will facilitate direct connection into the international fiber optic network by hosting equipment for all of the capital's ISPs and telecommunication companies.
See also
Media of Somalia
References
External links
Media and Telecommunications Landscape in Somalia, an infoasaid guide, January 2012, 92 pp.
Somalia
Exhibition game | thumb|right|333px|Sydney FC playing a friendly match against the Los Angeles Galaxy at ANZ Stadium in 2007.
An exhibition game (also known as a friendly, a scrimmage, a demonstration, a preseason game, a warmup match, or a preparation match, depending at least in part on the sport) is a sporting event whose prize money and impact on the player's or the team's rankings is either zero or otherwise greatly reduced. In team sports, matches of this type are often used to help coaches and managers select and condition players for the competitive matches of a league season or tournament. If the players usually play in different teams in other leagues, exhibition games offer an opportunity for the players to learn to work with each other. The games can be held between separate teams or between parts of the same team.
An exhibition game may also be used to settle a challenge, to provide professional entertainment, to promote the sport, to commemorate an anniversary or a famous player, or to raise money for charities. Several sports leagues hold all-star games to showcase their best players against each other, while other exhibition games may pit participants from two different leagues or countries to unofficially determine who would be the best in the world. International competitions like the Olympic Games may also hold exhibition games as part of a demonstration sport.
Association football
In the early days of association football, known simply as football or soccer, friendly matches (or "friendlies") were the most common type of match. However, since the development of The Football League in England in 1888, league tournaments have become established, in addition to lengthy derby and cup tournaments. By the year 2000, national leagues were established in almost every country throughout the world, as well as local or regional leagues for lower-level teams; the significance of friendlies has therefore declined seriously since the 19th century.
Club football
Since the introduction of league football, most club sides play a number of friendlies before the start of each season (called pre-season friendlies). Friendly football matches are considered to be non-competitive and are only used to "warm up" players for a new season/competitive match. There is generally nothing competitive at stake and some rules may be changed or experimented with (such as unlimited substitutions, which allow teams to play younger, less experienced, players, and no cards). Although most friendlies are simply one-off matches arranged by the clubs themselves, in which a certain amount is paid by the challenger club to the incumbent club, some teams do compete in short tournaments, such as the Emirates Cup, Teresa Herrera Trophy and the Amsterdam Tournament. Although these events may involve sponsorship deals and the awarding of a trophy and may even be broadcast on television, there is little prestige attached to them. Frequently such games take place between a large club and small clubs that play nearby, such as those between Newcastle United and Gateshead.
International football
International teams also play friendlies, generally in preparation for the qualifying or final stages of major tournaments. This is essential, since national squads generally have much less time together in which to prepare. The biggest difference between friendlies at the club and international levels is that international friendlies mostly take place during club league seasons, not between them. This has on occasion led to disagreement between national associations and clubs as to the availability of players, who could become injured or fatigued in a friendly.
International friendlies give team managers the opportunity to experiment with team selection and tactics before the tournament proper, and also allow them to assess the abilities of players they may potentially select for the tournament squad. Players can be booked in international friendlies, and can be suspended from future international matches based on red cards or accumulated yellows in a specified period. Caps and goals scored also count towards a player's career records. In 2004, FIFA ruled that substitutions by a team be limited to six per match in international friendlies in response to criticism that such matches were becoming increasingly farcical with managers making as many as 11 substitutions per match.
Fundraising game
In the UK and Ireland, "exhibition match" and "friendly match" refer to two different types of matches. The types described above as friendlies are not termed exhibition matches, while annual all-star matches such as those held in the US Major League Soccer or Japan's J.League are called exhibition matches rather than friendly matches. A one-off match for charitable fundraising, usually involving one or two all-star teams, or a match held in honor of a player for contribution to his/her club, may also be described as an exhibition match, but these are normally referred to as charity matches and testimonial matches respectively.
Bounce game
A bounce game is generally a non-competitive football match played between two sides usually as part of a training exerciseBounce game Free Online DictionaryBounce game provides striking solution BBC Blogs, 10 February 2010 or to give players match practice.Crawford takes part in bounce game Scottish Football League, 23 February 2011Guti, Alipio impress in Real Madrid bounce game Tribal Football, 2 December 2009 Managers may also use bounce games as an opportunity to observe a player in action before offering a contract.McShane bags a bounce game hat trick Paisley Daily Express, 18 August 2011Neil Lennon on the lookout Evening Times, 26 October 2011 Usually these games are played on a training groundDumbarton FC manager prepares for first bounce game Lennox Herald, 1 July 2011Mulgrew backs Lennon despite Celtic's disappointing start to SPL season Mail Online, 24 October 2011 rather than in a stadium with no spectators in attendance.ICT players fight to avoid axe North Star News, 1 December 2011
Boxing
Exhibition fights were once common in boxing. Jack Dempsey fought many exhibition bouts after retiring. Joe Louis fought a charity fight on his rematch with Buddy Baer, but this was not considered an exhibition as it was for Louis' world Heavyweight title. Muhammad Ali fought many exhibitions, including one with Lyle Alzado. In more modern times, Mike Tyson, Julio Cesar Chavez Sr. and Jorge Castro have been involved in exhibition fights.
Although they are not fought for profit, amateur bouts and sparring sessions are usually not considered to be exhibition fights.
Ice hockey
Prior to the 1917-18 NHL season, an exhibition game was played on 15 December between the Montreal Canadiens and the Montreal Wanderers. The game was played as a benefit to aid victims of the Halifax explosion.
Under the 1995–2004 National Hockey League collective bargaining agreement, teams were limited to nine preseason games. From 1975 to 1991, NHL teams sometimes played exhibition games against teams from the Soviet Union in the Super Series, and in 1978, played against World Hockey Association teams also in preseason training. Like the NFL, the NHL sometimes schedules exhibition games for cities without their own NHL teams, often at a club's minor league affiliate (e.g. Carolina Hurricanes games at Time Warner Cable Arena in Charlotte, home of their AHL affiliate; Los Angeles Kings games at Citizens Business Bank Arena in Ontario, California, home of their ECHL affiliate; Montreal Canadiens games at Colisée Pepsi in Quebec City, which has no pro hockey but used to have an NHL team until 1995; Washington Capitals at 1st Mariner Arena in the Baltimore Hockey Classic; various Western Canada teams at Credit Union Centre in Saskatoon, a potential NHL expansion venue). Since the 2000s, some preseason games have been played in Europe against European teams, as part of the NHL Challenge and NHL Premiere series. In addition to the standard preseason, there also exist prospect tournaments such as the Vancouver Canucks' YoungStars tournament and the Detroit Red Wings' training camp, in which NHL teams' younger prospects face off against each other under their parent club's banner.
In 1992, goaltender Manon Rhéaume played in a preseason game for the Tampa Bay Lightning, becoming the first woman to suit up for an all male pro sports team in North America.
The Flying Fathers, a Canadian group of Catholic priests, regularly toured North America playing exhibition hockey games for charity. One of the organization's founders, Les Costello, was a onetime NHL player who was ordained as a priest after retiring from professional hockey. Another prominent exhibition hockey team is the Buffalo Sabres Alumni Hockey Team, which is composed almost entirely of retired NHL players, the majority of whom (as the name suggests) played at least a portion of their career for the Buffalo Sabres.
American college hockey teams occasionally play exhibition games against Canadian college teams as well as against USA or Canadian national teams. (In men's hockey, the senior national teams are selected from NHL and other pro players, and college teams would be overmatched against those teams even if they were allowed to play them. However, the national under-18 teams are made up of amateurs, allowing college squads to play them.)
Baseball
The Major League Baseball's preseason is also known as spring training. All MLB teams maintain a spring-training base in Arizona or Florida. The teams in Arizona make up the Cactus League, while the teams in Florida play in the Grapefruit League. Each team plays about 30 preseason games against other MLB teams. They may also play exhibitions against a local college team or a minor-league team from their farm system. Some days feature the team playing two games with two different rosters evenly divided up, which are known as "split-squad" games.
Several MLB teams used to play regular exhibition games during the year against nearby teams in the other major league, but regular-season interleague play has made such games unnecessary. The two Canadian MLB teams, the Toronto Blue Jays of the American League and the Montreal Expos of the National League, met annually to play the Pearson Cup exhibition game; this tradition ended when the Expos moved to Washington DC for the 2005 season. Similarly, the New York Yankees played in the Mayor's Trophy Game against various local rivals from 1946 to 1983.Baseball Reference
It also used to be commonplace to have a team play an exhibition against Minor League affiliates during the regular season, but worries of injuries to players, along with travel issues, have made this very rare. Exhibitions between inter-city teams in different leagues, like Chicago's Crosstown Classic and New York's Subway Series which used to be played solely as exhibitions for bragging rights are now blended into interleague play. The annual MLB All-Star Game, played in July between players from AL teams and players from NL teams, was long considered an exhibition match, but as of 2003 this status was questioned because the league whose team wins the All-Star game has been awarded home field advantage for the upcoming World Series.
Another exhibition game, the Hall of Fame Game/Classic which was played in Cooperstown, New York on the weekend of inductions to the Baseball Hall of Fame, was also ended in 2008 due to interleague play and teams playing only substitutes.
Basketball
Professional basketball
National Basketball Association teams play eight preseason games per year. Today, NBA teams almost always play each other in the preseason, but often at neutral sites within their market areas in order to allow those who can't usually make a trip to a home team's arena during the regular season to see a game close to home; for instance, the Minnesota Timberwolves will play games in arenas in North Dakota and South Dakota, while the Phoenix Suns schedule one exhibition game outdoors at Indian Wells Tennis Garden in Indian Wells, California yearly, the only instance in which an NBA game takes place in an outdoor venue. Exhibition games have also been held on occasion outside the U.S. and Canada.
However, from 1971 to 1975, NBA teams played preseason exhibitions against American Basketball Association teams. In the early days of the NBA, league clubs sometimes challenged the legendary barnstorming Harlem Globetrotters, with mixed success. The NBA has played preseason games in Europe and Asia. In the 2006 and 2007 seasons, the NBA and the primary European club competition, the Euroleague, conducted a preseason tournament featuring two NBA teams and the finalists from that year's Euroleague. In the 1998-99 and 2011-12 seasons, teams were limited to only two preseason games due to lockouts.
The annual NBA All-Star Game is an exhibition game.
Women's National Basketball Association teams play up to three preseason games per year. WNBA teams play each other and also play women's national basketball teams. Most years, the WNBA also stages an All-Star Game, but this game is cancelled if pre-empted by major international competitions such as the Olympic Games.
College basketball
Traditionally, major college basketball teams began their seasons with a few exhibition games. They played travelling teams made up of former college players on teams such as Athletes in Action or a team sponsored by Marathon Oil. On occasion before 1992, when FIBA allowed professional players on foreign national teams, colleges played those teams in exhibitions. However, in 2003, the National Collegiate Athletic Association banned games with non-college teams. Some teams have begun scheduling exhibition games against teams in NCAA Division II and NCAA Division III, or even against colleges and universities located in Canada. Major college basketball teams still travel to other countries during the summer to play in exhibition games, although a college team is allowed one foreign tour every four years, and a maximum of ten games in each tour.
American football
Professional football
Compared to other team sports, the National Football League preseason is very structured. Every NFL team plays exactly four pre-season exhibition games a year, two at home and two away, with the exception of two teams each year who play a fifth game, the Pro Football Hall of Fame Game. These exhibition games, most of which are held in the month of August, are played for the purpose of helping coaches narrow down the roster from the offseason limit of 90 players to the regular-season limit of 53 players. While the scheduling formula is not as rigid for preseason games as they are for the regular season, there are numerous restrictions and traditions that limit the choices of preseason opponents; teams are also restricted on what days and times they can play these games. Split-squad games, a practice common in baseball and hockey, where a team that is scheduled to play two games on the same day splits their team into two squads, are prohibited. The NFL has played exhibition games in Europe, Japan, Canada, Australia (including the American Bowl in 1999) and Mexico to spread the league's popularity (a game of this type was proposed for China but, due to financial and logistical problems, was eventually canceled). The league has tacitly forbidden the playing of non-league opponents, with the last interleague game having come in 1972 and the last game against a team other than an NFL team (the all-NFL rookie College All-Stars) was held in 1976. Exhibition games are quite unpopular with many fans, who resent having to pay regular-season prices for two home exhibition games as part of a season-ticket package. Numerous lawsuits have been brought by fans and classes of fans against the NFL or its member teams regarding this practice, but none have been successful in halting it. The Pro Bowl, traditionally played after the end of the NFL season (since 2011 is played the week prior to the Super Bowl), is also considered an exhibition game.
The Arena Football League briefly had a two-game exhibition season in the early 2000s, a practice that ended in 2003 with a new television contract. Exhibition games outside of a structured season are relatively common among indoor American football leagues; because teams switch leagues frequently at that level of play, it is not uncommon to see some of the smaller leagues schedule exhibition games against teams that are from another league, about to join the league as a probational franchise, or a semi-pro outdoor team to fill holes in a schedule.
College and high school football
Many college football teams, particularly larger organizations, play a public intramural exhibition game in the spring mainly to promote the team and give new recruits an early chance at public game action. Many of these intramural games are nationally televised, though not to the same level of prominence as intercollegiate play. In college sports the commonly used term for the major scrimmage at the end of spring practice is the "Spring Game."
True exhibition games between opposing colleges at the highest level do not exist in college football; due to the importance of opinion polling in the top level of college football, even exhibition games would not truly be exhibitions because they could influence the opinions of those polled. Intramural games are possible because a team playing against itself leaves little ability for poll participants to make judgments, and at levels below the Football Bowl Subdivision (FBS), championships are decided by objective formulas and thus those teams can play non-league games without affecting their playoff hopes.
High school football teams frequently participate in controlled scrimmages with other teams during preseason practice, but full exhibition games are rare because of league rules and concerns about finances, travel and player injuries, along with enrollments not being registered until the early part of August in most school districts under the traditional September–June academic term. Some states hold pre-season events known as "jamborees" in which several pairs of high school football squads take turns playing one half (usually 24 minutes of game time) to give players some experience before the first official game. Another high school football exhibition contest is the all-star game, which usually brings together top players from a region. These games are typically played by graduating seniors after the regular season or in the summer. Many of these games, which include the U.S. Army All-American Bowl and Under Armour All-America Game, are used as showcases for players to be seen by colleges and increase their college recruiting profile.
Canadian football
Teams in the Canadian Football League play two exhibition games each year, in June. Exhibition games in the CFL have taken on great importance to coaching staff and players alike in that they are used as a final stage of training camp and regular season rosters are finalized after the exhibition games, which are generally referred to as "pre-season" play.
Rugby union
During the amateur era, there were no rugby union competitions between national teams. Therefore, matches between national teams are never considered "exhibitions", as they always have Test match status. However, Test matches held before the Rugby World Cup are considered warmup matches.
National teams sometimes play exhibition matches versus invitational teams like the Barbarian F.C. and Barbarian Rugby Club. Also, rugby union clubs sometimes play preseason matches.
Australian rules football
Australian rules football has been introduced to a wide range of places around Australia and the world since the code originated in Victoria in 1859. Much of this expansion can be directly attributed to exhibition matches by the major leagues in regions and countries where the code has been played as a demonstration sport.
Auto racing
Various auto racing organizations hold exhibition events; these events usually award no championship points to participants, but they do offer prize money to participants. The NASCAR Sprint Cup Series holds two exhibition events annually - the Sprint Unlimited, held at Daytona International Speedway at the start of the season, and the NASCAR Sprint All-Star Race, held at Charlotte Motor Speedway midway through the season. Both events carry a hefty purse of over USD $1,000,000. NASCAR has also held exhibition races at Suzuka Circuit and Twin Ring Motegi in Japan and Calder Park Thunderdome in Australia.
Other historical examples of non-championship races include the Marlboro Challenge in IndyCar racing and the TOCA Touring Car Shootout in the British Touring Car Championship. Until the mid-1980s there were a significant number of non-championship Formula One races.
The National Hot Rod Association's Pro Stock teams hold a pre-season drag meet, the Pro Stock Showdown, at The Strip at Las Vegas Motor Speedway before the traditional season opener in Pomona.
See also
Sparring
References
External links
All-Time ABA vs. NBA Exhibition Game Results Remember the ABA - article about NBA vs. ABA exhibitions
College Basketball Exhibitions: No Longer Open Season CollegeHoopsNet, 16 November 2004 - article about the 2003 NCAA ruling
Category:Sports terminology
Category:Association football terminology
Hard rock | Hard rock is a loosely defined subgenre of rock music that began in the mid-1960s, with the garage, psychedelic and blues rock movements. It is typified by heavy use of aggressive vocals, distorted electric guitars, bass guitar and drums, often accompanied by pianos and keyboards.
Hard rock developed into a major form of popular music in the 1970s, with bands such as Led Zeppelin, The Who, Queen, Black Sabbath, Deep Purple, Aerosmith, AC/DC and Van Halen. During the 1980s, some hard rock bands moved away from their hard rock roots and more towards pop rock,V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 903–5. while others began to return to a hard rock sound. Established bands made a comeback in the mid-1980s, and hard rock reached a commercial peak in that decade with glam metal bands like Bon Jovi and Def Leppard and the rawer sound of Guns N' Roses, which enjoyed great success in the later part of the decade. Hard rock began losing popularity with the commercial success of R&B, hip-hop, urban pop, grunge and later Britpop in the 1990s.
Despite this, many post-grunge bands adopted a hard rock sound, and the 2000s saw renewed interest in established bands, attempts at a revival, and new hard rock bands that emerged from the garage rock and post-punk revival scenes. In that decade, only a few hard rock bands from the 1970s and 1980s managed to sustain highly successful recording careers.
Definitions
thumb|left|300px|Drum notation for a back beat.D. Anger, "Introduction to the 'Chop'", Strad (0039-2049), 10 January 2006, vol. 117, issue 1398, pp. 72–7.
Hard rock is a form of loud, aggressive rock music. The electric guitar is often emphasised, used with distortion and other effects, both as a rhythm instrument using repetitive riffs with a varying degree of complexity, and as a solo lead instrument. Drumming characteristically focuses on driving rhythms, strong bass drum and a backbeat on snare, sometimes using cymbals for emphasis.R. Shuker, Popular Music: the Key Concepts, (Abingdon: Routledge, 2nd end., 2005), ISBN 0-415-34770-X, pp. 130–1. The bass guitar works in conjunction with the drums, occasionally playing riffs, but usually providing a backing for the rhythm and lead guitars. Vocals are often growling, raspy, or involve screaming or wailing, sometimes in a high range, or even falsetto voice.E. Macan, Rocking the Classics: English Progressive Rock and the Counterculture (Oxford: Oxford University Press, 1997), ISBN 0-19-509887-0, p. 39.
Hard rock has sometimes been labelled cock rock for its emphasis on overt masculinity and sexuality and because it has historically been predominantly performed and consumed by men: in the case of its audience, particularly white, working-class adolescents.
In the late 1960s, the term heavy metal was used interchangeably with hard rock, but gradually began to be used to describe music played with even more volume and intensity.P. Du Noyer, ed., The Illustrated Encyclopedia of Music (Flame Tree, 2003), ISBN 1-904041-70-1, p. 96. While hard rock maintained a bluesy rock and roll identity, including some swing in the back beat and riffs that tended to outline chord progressions in their hooks, heavy metal's riffs often functioned as stand-alone melodies and had no swing in them.. Heavy metal took on "darker" characteristics after Black Sabbath's breakthrough at the beginning of the 1970s. In the 1980s it developed a number of subgenres, often termed extreme metal, some of which were influenced by hardcore punk, and which further differentiated the two styles.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1332–3. Despite this differentiation, hard rock and heavy metal have existed side by side, with bands frequently standing on the boundary of, or crossing between, the genres.R. Walser, Running With the Devil: Power, Gender, and Madness in Heavy Metal Music (Middletown, CT: Wesleyan University Press, 1993), ISBN 0-8195-6260-2, p. 7.
History
The roots of hard rock can be traced back to the 1950s, particularly electric blues,Simon Frith, Will Straw, John Street, The Cambridge Companion to Pop and Rock, page 19, Cambridge University Press which laid the foundations for key elements such as a rough declamatory vocal style, heavy guitar riffs, string-bending blues-scale guitar solos, strong beat, thick riff-laden texture, and posturing performances.Michael Campbell & James Brody (2007), Rock and Roll: An Introduction, page 201 Electric blues guitarists began experimenting with hard rock elements such as driving rhythms, distorted guitar solos and power chords in the 1950s, evident in the work of Memphis blues guitarists such as Joe Hill Louis, Willie Johnson, and particularly Pat Hare, who captured a "grittier, nastier, more ferocious electric guitar sound" on records such as James Cotton's "Cotton Crop Blues" (1954).Robert Palmer, "Church of the Sonic Guitar", pp. 13–38 in Anthony DeCurtis, Present Tense (Durham NC: Duke University Press, 1992), ISBN 0-8223-1265-4, pp. 24–27. Other antecedents include Link Wray's instrumental "Rumble" in 1958,J. Simmonds, The Encyclopedia of Dead Rock Stars: Heroin, Handguns, and Ham Sandwiches (Chicago Il: Chicago Review Press, 2008), ISBN 1-55652-754-3, p. 559. and the surf rock instrumentals of Dick Dale, such as "Let's Go Trippin'" (1961) and "Misirlou" (1962).
Origins (1960s)
thumb|Cream, whose blues rock improvisation was a major factor in the development of the genre.
In the 1960s, American and British blues and rock bands began to modify rock and roll by adding harder sounds, heavier guitar riffs, bombastic drumming, and louder vocals, from electric blues. Early forms of hard rock can be heard in the work of Chicago blues musicians Elmore James, Muddy Waters, and Howlin' Wolf,Jane Beethoven, Carman Moore, Rock-It, page 37, Alfred Music The Kingsmen's version of "Louie Louie" (1963) which made it a garage rock standard,P. Buckley, The Rough Guide to Rock (London: Rough Guides, 2003), ISBN 1-84353-105-4, p. 1144. and the songs of rhythm and blues influenced British Invasion acts,R. Unterberger, "Early British R&B", in V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1315–6. including "You Really Got Me" by The Kinks (1964),[ "Review of 'You Really Got Me' "], Denise Sullivan, AllMusic, All Music.com "My Generation" by The Who (1965), "Shapes of Things" (1966) by The Yardbirds, "Inside Looking Out" (1966) by The Animals and "(I Can't Get No) Satisfaction" (1965) by The Rolling Stones.P. Prown and H. P. Newquist, Legends of Rock Guitar: the Essential Reference of Rock's Greatest Guitarists (Milwaukee, WI: Hal Leonard Corporation, 1997), ISBN 0-7935-4042-9, p. 29. From the late 1960s, it became common to divide mainstream rock music that emerged from psychedelia into soft and hard rock. Soft rock was often derived from folk rock, using acoustic instruments and putting more emphasis on melody and harmonies.J. M. Curtis, Rock Eras: Interpretations of Music and Society, 1954–1984 (Madison, WI: Popular Press, 1987), ISBN 0-87972-369-6, p. 447. In contrast, hard rock was most often derived from blues rock and was played louder and with more intensity.
Blues rock acts that pioneered the sound included Cream, The Jimi Hendrix Experience, and The Jeff Beck Group. Cream, in songs like "I Feel Free" (1966) combined blues rock with pop and psychedelia, particularly in the riffs and guitar solos of Eric Clapton.R. Unterberger, [ "Song Review: I Feel Free"], AllMusic, retrieved 22 February 2010. Jimi Hendrix produced a form of blues-influenced psychedelic rock, which combined elements of jazz, blues and rock and roll.D. Henderson, Scuse Me While I kiss the Sky: the Life of Jimi Hendrix (London: Omnibus Press, 2002), ISBN 0-7119-9432-3, p. 112. From 1967 Jeff Beck brought lead guitar to new heights of technical virtuosity and moved blues rock in the direction of heavy rock with his band, The Jeff Beck Group.V. Bogdanov, C. Woodstra, S. T. Erlewine, eds, All Music Guide to the Blues: The Definitive Guide to the Blues (Backbeat, 3rd edn., 2003), ISBN 0-87930-736-6, pp. 700–2. Dave Davies of The Kinks, Keith Richards of The Rolling Stones, Pete Townshend of The Who, Hendrix, Clapton and Beck all pioneered the use of new guitar effects like phasing, feedback and distortion.P. Prown and H. P. Newquist, Legends of Rock Guitar: the Essential Reference of Rock's Greatest Guitarists (Milwaukee, WI: Hal Leonard Corporation, 1997), ISBN 0-7935-4042-9, pp. 59–60. The Beatles began producing songs in the new hard rock style beginning with the White Album in 1968 and, with the track "Helter Skelter", attempted to create a greater level of noise than the Who. Stephen Thomas Erlewine of AllMusic has described the "proto-metal roar" of "Helter Skelter,"S. T. Erlewine, [ "Beatles: 'The White Album"], Allmusic, retrieved 3 August 2010. while Ian MacDonald calling it "ridiculous, with McCartney shrieking weedily against a massively tape-echoed backdrop of out-of-tune thrashing"I. Macdonald, Revolution in the Head: The Beatles Records and the Sixties (London: Vintage, 3rd edn., 2005), p. 298.
thumb|left|250px|left|Led Zeppelin live at Chicago Stadium, January 1975.
Groups that emerged from the American psychedelic scene about the same time included Iron Butterfly, MC5, Blue Cheer and Vanilla Fudge.R. Walser, Running With the Devil: Power, Gender, and Madness in Heavy Metal Music (Middletown, CT: Wesleyan University Press, 1993), ISBN 0-8195-6260-2, pp. 9–10. San Francisco band Blue Cheer released a crude and distorted cover of Eddie Cochran's classic "Summertime Blues", from their 1968 debut album Vincebus Eruptum, that outlined much of the later hard rock and heavy metal sound. The same month, Steppenwolf released its self-titled debut album, including "Born to Be Wild", which contained the first lyrical reference to heavy metal and helped popularise the style when it was used in the film Easy Rider (1969). Iron Butterfly's In-A-Gadda-Da-Vida (1968), with its 17-minute-long title track, using organs and with a lengthy drum solo, also prefigured later elements of the sound.
By the end of the decade a distinct genre of hard rock was emerging with bands like Led Zeppelin, who mixed the music of early rock bands with a more hard-edged form of blues rock and acid rock on their first two albums Led Zeppelin (1969) and Led Zeppelin II (1969), and Deep Purple, who began as a progressive rock group but achieved their commercial breakthrough with their fourth and distinctively heavier album, In Rock (1970). Also significant was Black Sabbath's Paranoid (1970), which combined guitar riffs with dissonance and more explicit references to the occult and elements of Gothic horror. All three of these bands have been seen as pivotal in the development of heavy metal, but where metal further accentuated the intensity of the music, with bands like Judas Priest following Sabbath's lead into territory that was often "darker and more menacing", hard rock tended to continue to remain the more exuberant, good-time music.
Expansion (1970s)
thumb|250px|right|The Who on stage in 1975.
In the early 1970s the Rolling Stones developed their hard rock sound with Exile on Main St. (1972). Initially receiving mixed reviews, according to critic Steve Erlewine it is now "generally regarded as the Rolling Stones' finest album".S. T. Erlewine, [ "Rolling Stones: Exile on Mainstreet"], Allmusic, retrieved 3 August 2010. They continued to pursue the riff-heavy sound on albums including It's Only Rock 'n' Roll (1974) and Black and Blue (1976).S. T. Erlewine, [ "The Rolling Stones"], Allmusic, retrieved 3 August 2010. Led Zeppelin began to mix elements of world and folk music into their hard rock from Led Zeppelin III (1970) and Led Zeppelin IV (1971). The latter included the track "Stairway to Heaven", which would become the most played song in the history of album-oriented radio.S. T. Erlewine, [ "Led Zeppelin"], Allmusic, retrieved 27 September 2010. Deep Purple continued to define hard rock, particularly with their album Machine Head (1972), which included the tracks "Highway Star" and "Smoke on the Water".R. Walser, Running With the Devil: Power, Gender, and Madness in Heavy Metal Music (Middletown, CT: Wesleyan University Press, 1993), ISBN 0-8195-6260-2, p. 64. In 1975 guitarist Ritchie Blackmore left, going on to form Rainbow and after the break-up of the band the next year, vocalist David Coverdale formed Whitesnake.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 292–3. 1970 saw The Who release Live at Leeds, often seen as the archetypal hard rock live album, and the following year they released their highly acclaimed album Who's Next, which mixed heavy rock with extensive use of synthesizers.C. Charlesworth and E. Hanel, The Who: the Complete Guide to Their Music (London: Omnibus Press, 2nd edn., 2004), ISBN 1-84449-428-4, p. 52. Subsequent albums, including Quadrophenia (1973), built on this sound before Who Are You (1978), their last album before the death of pioneering rock drummer Keith Moon later that year.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1220–2.
Emerging British acts included Free, who released their signature song "All Right Now" (1970), which has received extensive radio airplay in both the UK and US.Paul Rodgers: Biography, iTunes After the breakup of the band in 1973, vocalist Paul Rodgers joined supergroup Bad Company, whose eponymous first album (1974) was an international hit.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 52–3. The mixture of hard rock and progressive rock, evident in the works of Deep Purple, was pursued more directly by bands like Uriah Heep and Argent.E. Macan, Rocking the Classics: English Progressive Rock and the Counterculture (Oxford: Oxford University Press, 1997), ISBN 0-19-509887-0, pp. 138. Scottish band Nazareth released their self-titled début album in 1971, producing a blend of hard rock and pop that would culminate in their best selling, Hair of the Dog (1975), which contained the proto-power ballad "Love Hurts".V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 783–4. Having enjoyed some national success in the early 1970s, Queen, after the release of Sheer Heart Attack (1974) and A Night at the Opera (1975), gained international recognition with a sound that used layered vocals and guitars and mixed hard rock with heavy metal, progressive rock, and even opera.. The latter featured the single "Bohemian Rhapsody", which stayed at number one in the UK charts for nine weeks.
thumb|250px|left|Kiss onstage in Boston in 2004.
In the United States, shock-rock pioneer Alice Cooper achieved mainstream success with the top five album School's Out (1972), which was followed by the #1 album Billion Dollar Babies in 1973.R. Harris and J. D. Peters, Motor City Rock and Roll:: The 1960s and 1970s (Charleston CL., Arcadia Publishing, 2008), ISBN 0-7385-5236-4, p. 114. Also in 1973, blues rockers ZZ Top released their classic album Tres Hombres and Aerosmith produced their eponymous début, as did Southern rockers Lynyrd Skynyrd and proto-punk outfit New York Dolls, demonstrating the diverse directions being pursued in the genre.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 9–11, 681–2, 794 and 1271–2. Montrose, including the instrumental talent of Ronnie Montrose and vocals of Sammy Hagar and arguably the first all American hard rock band to challenge the British dominance of the genre, released their first album in 1973.E. Rivadavia, [ "Montrose"], Allmusic, retrieved 2 August 2010. Kiss built on the theatrics of Alice Cooper and the look of the New York Dolls to produce a unique band persona, achieving their commercial breakthrough with the double live album Alive! in 1975 and helping to take hard rock into the stadium rock era. In the mid-1970s Aerosmith achieved their commercial and artistic breakthrough with Toys in the Attic (1975), which reached number 11 in the American album chart, and Rocks (1976), which peaked at number three.S. T. Erlewine, [ "Aerosmith"], Allmusic, retrieved 27 September 2010. Blue Öyster Cult, formed in the late 60s, picked up on some of the elements introduced by Black Sabbath with their breakthrough live gold album On Your Feet or on Your Knees (1975), followed by their first platinum album, Agents of Fortune (1976), containing the hit single "(Don't Fear) The Reaper", which reached number 12 on the Billboard charts. Journey released their eponymous debut in 1975W. Ruhlmann, [ "Journey"], Allmusic, retrieved 20 June 2010. and the next year Boston released their highly successful début album.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, p. 132. In the same year, hard rock bands featuring women saw commercial success as Heart released Dreamboat Annie and The Runaways débuted with their self-titled album. While Heart had a more folk-oriented hard rock sound, the Runaways leaned more towards a mix of punk-influenced music and hard rock.M. J. Carson, T. Lewis and S. M. Shaw, Girls Rock!: Fifty Years of Women Making Music (University Press of Kentucky, 2004), ISBN 0-8131-2310-0, pp. 86–9. The Amboy Dukes, having emerged from the Detroit garage rock scene and most famous for their Top 20 psychedelic hit "Journey to the Center of the Mind" (1968), were dissolved by their guitarist Ted Nugent, who embarked on a solo career that resulted in four successive multi-platinum albums between Ted Nugent (1975) and his best selling Double Live Gonzo! (1978).RIAA Gold and Platinum Search for albums by Ted Nugent
thumb|250px|right|Rush on stage in Milan in mid-September 2004.
From outside the United Kingdom and the United States, the Canadian trio Rush released three distinctively hard rock albums in 1974–75 (Rush, Fly by Night and Caress of Steel) before moving toward a more progressive sound with the 1976 album 2112.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, p. 966.AllMusic [ Greg Prato on All the World's a Stage]. Retrieved December 14, 2007. The Irish band Thin Lizzy, which had formed in the late 1960s, made their most substantial commercial breakthrough in 1976 with the hard rock album Jailbreak and their worldwide hit "The Boys Are Back in Town", which reached number 8 in the UK and number 12 in the US. Their style, built around two duelling guitarists often playing leads in harmony, proved a major influence on later bands. They reached their commercial, and arguably artistic, peak with Black Rose: A Rock Legend (1979).V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1333–4. The arrival of Scorpions from Germany marked the geographical expansion of the subgenre.R. Walser, Running With the Devil: Power, Gender, and Madness in Heavy Metal Music (Middletown, CT: Wesleyan University Press, 1993), ISBN 0-8195-6260-2, p. 10. Australian-formed AC/DC, with a stripped-back, riff-heavy and abrasive style that also appealed to the punk generation, began to gain international attention from 1976, culminating in the release of their multi-platinum albums Let There Be Rock (1977) and Highway to Hell (1979).V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 3–5. Also influenced by a punk ethos were heavy metal bands like Motörhead, while Judas Priest abandoned the remaining elements of the blues in their music,V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 605–6. further differentiating the hard rock and heavy metal styles and helping to create the new wave of British heavy metal which was pursued by bands like Iron Maiden, Saxon and Venom.S. Waksman, This Ain't the Summer of Love: Conflict and Crossover in Heavy Metal and Punk (Berkeley CA: University of California Press, 2009), ISBN 0-520-25310-8, pp. 146–71.
With the rise of disco in the US and punk rock in the UK, hard rock's mainstream dominance was rivalled toward the latter part of the decade. Disco appealed to a more diverse group of people and punk seemed to take over the rebellious role that hard rock once held.R. Walser, Running With the Devil: Power, Gender, and Madness in Heavy Metal Music (Middletown, CT: Wesleyan University Press, 1993), ISBN 0-8195-6260-2, p. 11. Early punk bands like The Ramones explicitly rebelled against the drum solos and extended guitar solos that characterised stadium rock, with almost all of their songs clocking in at around two minutes and featuring no solos. However, new rock acts continued to emerge and record sales remained high into the 1980s. 1977 saw the début and rise to stardom of Foreigner, who went on to release several platinum albums through to the mid-1980s.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 425–6. Midwestern groups like Kansas, REO Speedwagon and Styx helped further cement heavy rock in the Midwest as a form of stadium rock.R. Kirkpatrick, The Words and Music of Bruce Springsteen (Greenwood Publishing Group, 2007), ISBN 0-275-98938-0, p. 51. In 1978, Van Halen emerged from the Los Angeles music scene with a sound based around the skills of lead guitarist Eddie Van Halen. He popularised a guitar-playing technique of two-handed hammer-ons and pull-offs called tapping, showcased on the song "Eruption" from the album Van Halen, which was highly influential in re-establishing hard rock as a popular genre after the punk and disco explosion, while also redefining and elevating the role of the electric guitar.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1182–3.
Glam metal era (1980s)
thumb|250px|left|Def Leppard onstage in Dublin in 2009
The opening years of the 1980s saw a number of changes in personnel and direction of established hard rock acts, including the deaths of Bon Scott, the lead singer of AC/DC, and John Bonham, drummer with Led Zeppelin.C. Smith, 101 Albums That Changed Popular Music (Oxford: Oxford University Press, 2009), ISBN 0-19-537371-5, p. 135. Whereas Zeppelin broke up almost immediately afterwards, AC/DC pressed on, recording the album Back in Black (1980) with their new lead singer, Brian Johnson. It became the fifth-highest-selling album of all time in the US and the second-highest-selling album in the world. Black Sabbath had split with original singer Ozzy Osbourne in 1979 and replaced him with Ronnie James Dio, formerly of Rainbow, giving the band a new sound and a period of creativity and popularity beginning with Heaven and Hell (1980). Osbourne embarked on a solo career with Blizzard of Ozz (1980), featuring American guitarist Randy Rhoads.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 105–6. Some bands, such as Queen, moved away from their hard rock roots and more towards pop rock, while others, including Rush with Moving Pictures (1981), began to return to a hard rock sound. The creation of thrash metal, which mixed heavy metal with elements of hardcore punk from about 1982, particularly by Metallica, Anthrax, Megadeth and Slayer, helped to create extreme metal and further remove the style from hard rock, although a number of these bands or their members would continue to record some songs closer to a hard rock sound.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, p. 1332.R. Walser, Running with the Devil: Power, Gender, and Madness in Heavy Metal Music (Wesleyan University Press, 2003), ISBN 0-8195-6260-2, pp. 11–14. Kiss moved away from their hard rock roots toward pop metal: firstly removing their makeup in 1983 for their Lick It Up album,S. T. Erlewine and G. Prato, [ "Kiss"], Allmusic, retrieved 18 September 2010. and then adopting the visual and sound of glam metal for their 1984 release, Animalize, both of which marked a return to commercial success.G. Prato, [ "Kiss: Animalize"], Allmusic, retrieved 18 September 2010. Pat Benatar was one of the first women to achieve commercial success in hard rock, with three successive Top 5 albums between 1980 and 1982.
Often categorised with the new wave of British heavy metal, in 1981 Def Leppard released their second album High 'n' Dry, mixing glam rock with heavy metal, and helping to define the sound of hard rock for the decade.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 293–4. The follow-up, Pyromania (1983), reached number two on the American charts and the singles "Photograph", "Rock of Ages" and "Foolin'", helped by the emergence of MTV, all reached the Top 40. It was widely emulated, particularly by the emerging Californian glam metal scene. This was followed by US acts like Mötley Crüe, with their albums Too Fast for Love (1981) and Shout at the Devil (1983) and, as the style grew, the arrival of bands such as Ratt,S. T. Erlewine & G. Prato, [ "Ratt"], Allmusic, retrieved 19 June 2010. White Lion,G. Prato, [ "White Lion"], Allmusic, retrieved 19 June 2010. Twisted Sister and Quiet Riot.R. Moore, Sells Like Teen Spirit: Music, Youth Culture, and Social Crisis (New York, NY: New York University Press, 2009), ISBN 0-8147-5748-0, p. 106. Quiet Riot's album Metal Health (1983) was the first glam metal album, and arguably the first heavy metal album of any kind, to reach number one in the Billboard music charts and helped open the doors for mainstream success by subsequent bands.E. Rivadavia, [ "Quiet Riot"], Allmusic, retrieved 7 July 2010.
thumb|250px|right|Poison, seen here in 2008, were among the most successful acts of the 1980s glam metal era.
Established bands made something of a comeback in the mid-1980s. After an 8-year separation, Deep Purple returned with the classic Machine Head line-up to produce Perfect Strangers (1984), which reached number five in the UK, hit the top five in five other countries, and was a platinum-seller in the US.Deep Purple Essential Collection – Planet Rock After somewhat slower sales of their fourth album, Fair Warning, Van Halen rebounded with the Top 3 album Diver Down in 1982, then reached their commercial pinnacle with 1984. It reached number two on the Billboard album chart and provided the track "Jump", which reached number one on the singles chart and remained there for several weeks. Heart, after floundering during the first half of the decade, made a comeback with their eponymous ninth studio album, which hit number one and contained four Top 10 singles, including their first number one hit. The new medium of video channels was used with considerable success by bands formed in previous decades. Among the first were ZZ Top, who mixed hard blues rock with new wave music to produce a series of highly successful singles, beginning with "Gimme All Your Lovin'" (1983), which helped their albums Eliminator (1983) and Afterburner (1985) achieve diamond and multi-platinum status respectively.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1271–2. Others found renewed success in the singles charts with power ballads, including REO Speedwagon with "Keep on Loving You" (1980) and "Can't Fight This Feeling" (1984), Journey with "Don't Stop Believin'" (1981) and "Open Arms" (1982), Foreigner's "I Want to Know What Love Is",S. Frith, "Pop Music" in S. Frith, W. Straw and J. Street, eds, The Cambridge Companion to Pop and Rock (Cambridge: Cambridge University Press), ISBN 0-521-55660-0, pp. 100–1. Scorpions' "Still Loving You" (both from 1984), Heart's "What About Love" (1985) and "These Dreams" (1986), and Boston's "Amanda" (1986).P. Buckley, The Rough Guide to Rock: the Definitive Guide to more than 1200 Artists and Bands (Rough Guides, 2003), ISBN 1-84353-105-4.
Bon Jovi's third album, Slippery When Wet (1986), mixed hard rock with a pop sensibility and spent a total of 8 weeks at the top of the Billboard 200 album chart, selling 12 million copies in the US while becoming the first hard rock album to spawn three Top 10 singles, two of which reached number one.L. Flick, "Bon Jovi bounce back from tragedy", Billboard, Sep 28, 2002, vol. 114, No. 39, ISSN 0006-2510, p. 81. The album has been credited with widening the audiences for the genre, particularly by appealing to women as well as the traditionally male-dominated audience, and opening the door to MTV and commercial success for other bands at the end of the decade.D. Nicholls, The Cambridge History of American Music (Cambridge: Cambridge University Press, 1998), ISBN 0-521-45429-8, p. 378. The anthemic The Final Countdown (1986) by Swedish group Europe was an international hit, reaching number eight on the US charts while hitting the Top 10 in nine other countries. This era also saw more glam-infused American hard rock bands come to the forefront, with both Poison and Cinderella releasing their multi-platinum début albums in 1986.B. Weber, [ "Poison"], Allmusic, retrieved 19 June 2010.W. Ruhlmann, [ "Cinderella"], Allmusic, retrieved 19 June 2010. Van Halen released 5150 (1986), their first album with Sammy Hagar on lead vocals, which was number one in the US for three weeks and sold over 6 million copies. By the second half of the decade, hard rock had become the most reliable form of commercial popular music in the United States."The Pop Life" – New York Times By Stephen Holden. Published: Wednesday, December 27, 1989. Retrieved October 25, 2009.
thumb|225px|left|Original member Izzy Stradlin on stage with Guns N' Roses in 2006.
Established acts benefited from the new commercial climate, with Whitesnake's self-titled album (1987) selling over 17 million copies, outperforming anything in frontman David Coverdale's or Deep Purple's catalogue before or since. It featured the rock anthem "Here I Go Again '87" as one of four UK Top 20 singles. The follow-up, Slip of the Tongue (1989), went platinum, but according to critics Steve Erlewine and Greg Prato, "it was a considerable disappointment after the across-the-board success of Whitesnake".S. T. Erlewine and G. Prato, [ "Whitesnake"], Allmusic, retrieved 27 September 2010. Aerosmith's comeback album Permanent Vacation (1987) would begin a decade-long revival of their popularity. Crazy Nights (1987) by Kiss was the band's highest-charting release in the US since 1979 and the highest of their career in the UK.J. Tobler, M. St. Michael and A. Doe, Kiss: Live! (London: Omnibus Press, 1996), ISBN 0-7119-6008-9. Mötley Crüe with Girls, Girls, Girls (1987) continued their commercial successV. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 767–8. and Def Leppard with Hysteria (1987) hit their commercial peak, the latter producing seven hit singles (a record for a hard rock act). Guns N' Roses released the best-selling début of all time, Appetite for Destruction (1987). With a "grittier" and "rawer" sound than most glam metal, it produced three Top 10 hits, including the number one "Sweet Child O' Mine".V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 494–5. Some of the glam rock bands that formed in the mid-1980s, such as White Lion and Cinderella, experienced their biggest success during this period with their respective albums Pride (1987) and Long Cold Winter (1988) both going multi-platinum and launching a series of hit singles. In the last years of the decade, the most notable successes were New Jersey (1988) by Bon Jovi,S. T. Erlewine, [ "Bon Jovi"], Allmusic, retrieved 20 June 2010. OU812 (1988) by Van Halen, Open Up and Say... Ahh! (1988) by Poison, Pump (1989) by Aerosmith, and Mötley Crüe's most commercially successful album Dr. Feelgood (1989). New Jersey spawned five Top 10 singles, a record for a hard rock act. In 1988, from 25 June to 5 November, the number one spot on the Billboard 200 album chart was held by a hard rock album for 18 out of 20 consecutive weeks; the albums were OU812, Hysteria, Appetite for Destruction, and New Jersey. A final wave of glam rock bands arrived in the late 1980s, and experienced success with multi-platinum albums and hit singles from 1989 until the early 1990s, among them Extreme,S. T. Erlewine, "Extreme", Allmusic, retrieved 10 February 2011. WarrantS. T. Erlewine, "Warrant", Allmusic, retrieved 10 February 2011. SlaughterS. Huey, "Slaughter", Allmusic, retrieved 10 February 2011. and FireHouse.S. T. Erlewine, "Firehouse", Allmusic, retrieved 10 February 2011. Skid Row also released their eponymous début (1989), reaching number six on the Billboard 200, but they were to be one of the last major bands that emerged in the glam rock era.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1018–9.
Grunge and Britpop (1990s)
Hard rock entered the 1990s as one of the dominant forms of commercial music. The multi-platinum releases of AC/DC's The Razors Edge (1990), Guns N' Roses' Use Your Illusion I and Use Your Illusion II (both in 1991), Ozzy Osbourne's No More Tears (1991), and Van Halen's For Unlawful Carnal Knowledge (1991) showcased this popularity. Additionally, The Black Crowes released their debut album, Shake Your Money Maker (1990), which contained a bluesy classic rock sound and sold five million copies.S. T. Erlewine, "The Black Crowes Shake Your Money Maker", Allmusic, retrieved 13 February 2011. In 1992, Def Leppard followed up 1987's Hysteria with Adrenalize, which went multi-platinum, spawned four Top 40 singles and held the number one spot on the US album chart for five weeks."Def Leppard – the Band" BBC h2g2, retrieved 18 June 2010.
thumb|250px|right|Nirvana were at the forefront of the 1990s grunge era.
While these few hard rock bands managed to maintain success and popularity in the early part of the decade, alternative forms of hard rock achieved mainstream success in the form of grunge in the US and Britpop in the UK. This was particularly evident after the success of Nirvana's Nevermind (1991), which combined elements of hardcore punk and heavy metal into a "dirty" sound that made use of heavy guitar distortion, fuzz and feedback, along with darker lyrical themes than their "hair band" predecessors.[ "Grunge"], Allmusic, retrieved 18 June 2010. Although most grunge bands had a sound that sharply contrasted with mainstream hard rock, several, including Pearl Jam,S. T. Erlewine, [ "Pearl Jam"], Allmusic, retrieved 23 June 2010. Alice in Chains, Mother Love Bone and Soundgarden, were more strongly influenced by 1970s and 1980s rock and metal, while Stone Temple Pilots managed to turn alternative rock into a form of stadium rock.A. Budofsky, The Drummer: 100 Years of Rhythmic Power and Invention (Milwaukee, WI: Hal Leonard Corporation, 2006), ISBN 1-4234-0567-6, p. 148.S. T. Erlewine, [ "Stone Temple Pilots"], Allmusic, retrieved 20 June 2010. However, all grunge bands shunned the macho, anthemic and fashion-focused aesthetics particularly associated with glam metal. In the UK, Oasis were unusual among the Britpop bands of the mid-1990s in incorporating a hard rock sound.
In the new commercial climate, glam metal bands like Europe, Ratt, White Lion and Cinderella broke up, Whitesnake went on hiatus in 1991, and while many of these bands would reunite in the late 1990s or early 2000s, they never reached the commercial success they saw in the 1980s or early 1990s.[ "Hair metal"], Allmusic, retrieved 14 June 2010. Other bands such as Mötley Crüe and Poison saw personnel changes which impacted those bands' commercial viability during the decade. In 1995 Van Halen released Balance, a multi-platinum seller that would be the band's last with Sammy Hagar on vocals. In 1996 David Lee Roth returned briefly; his replacement, former Extreme singer Gary Cherone, was fired soon after the release of the commercially unsuccessful 1998 album Van Halen III, and Van Halen would not tour or record again until 2004. Guns N' Roses' original lineup was whittled away throughout the decade. Drummer Steven Adler was fired in 1990, and guitarist Izzy Stradlin left in late 1991 after recording Use Your Illusion I and II with the band. Tensions between the other band members and lead singer Axl Rose continued after the release of the 1993 covers album The Spaghetti Incident? Guitarist Slash left in 1996, followed by bassist Duff McKagan in 1997. Axl Rose, the only remaining original member, worked with a constantly changing lineup in recording an album that would take over fifteen years to complete.S. T. Erlewine and G. Prato, [ "Guns N' Roses"], Allmusic, retrieved 19 June 2010.
thumb|250px|left|Foo Fighters performing an acoustic show in 2007.
Some established acts continued to enjoy commercial success, such as Aerosmith, with their number one multi-platinum albums: Get a Grip (1993), which produced four Top 40 singles and became the band's best-selling album worldwide (going on to sell over 10 million copies), and Nine Lives (1997). In 1998, Aerosmith released the number one hit "I Don't Want to Miss a Thing", which remains the only single by a hard rock band to debut at number one. AC/DC produced the double platinum Ballbreaker (1995).S. T. Erlewine, [ "AC/DC"], Allmusic, retrieved 20 July 2010. Bon Jovi appealed to their hard rock audience with songs such as "Keep the Faith" (1992), but also achieved success in adult contemporary radio, with the Top 10 ballads "Bed of Roses" (1993) and "Always" (1994). Bon Jovi's 1995 album These Days was a bigger hit in Europe than it was in the United States, spawning four Top 10 singles on the UK Singles Chart. Metallica's Load (1996) and ReLoad (1997) each sold in excess of 4 million copies in the US and saw the band develop a more melodic and blues rock sound.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Milwaukee, WI: Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 729–30. As the initial impetus of grunge bands faltered in the middle years of the decade, post-grunge bands emerged. They emulated the attitudes and music of grunge, particularly thick, distorted guitars, but with a more radio-friendly commercially oriented sound that drew more directly on traditional hard rock.[ "Post-grunge"], Allmusic, retrieved 17 January 2010. Among the most successful acts were the Foo Fighters, Candlebox, Live, Collective Soul, Australia's Silverchair and England's Bush, who all cemented post-grunge as one of the most commercially viable subgenres by the late 1990s.V. Bogdanov, C. Woodstra and S. T. Erlewine, All Music Guide to Rock: the Definitive Guide to Rock, Pop, and Soul (Backbeat Books, 3rd edn., 2002), ISBN 0-87930-653-X, pp. 1344–7. Similarly, some post-Britpop bands that followed in the wake of Oasis, including Feeder and Stereophonics, adopted a hard rock or "pop-metal" sound.J. Ankeny, [ "Feeder"], Allmusic, retrieved 20 June 2010.J. Damas, [ "Stereophonics: Performance and Cocktails"], Allmusic, retrieved 20 June 2010.
Survivals and revivals (2000s)
thumb|right|250px|Aerosmith performing at Quilmes Rock in Buenos Aires, Argentina on April 15, 2007
A few hard rock bands from the 1970s and 1980s managed to sustain highly successful recording careers. Bon Jovi were still able to achieve a commercial hit with "It's My Life" from their double platinum-certified album Crush (2000), and AC/DC released the platinum-certified Stiff Upper Lip (2000). Aerosmith released a number two platinum album, Just Push Play (2001), which saw the band foray further into pop with the Top 10 hit "Jaded", and a blues cover album, Honkin' on Bobo, which reached number five in 2004. Heart achieved their first Top 10 album since the early 1990s with Red Velvet Car in 2010, becoming the first female-led hard rock band to earn Top 10 albums spanning five decades. There were reunions and subsequent tours from Van Halen (with Hagar in 2004 and then Roth in 2007),S. T. Erlewine and G. Prato, [ "Van Halen"], Allmusic, retrieved 20 June 2010. The Who (delayed from 2002 until 2006 by the death of bassist John Entwistle)B. Eder and S. T. Erlewine, [ "The Who"], Allmusic, retrieved 20 June 2010. and Black Sabbath (with Osbourne 1997–2006 and Dio 2006–2010)W. Ruhlmann, [ "Black Sabbath"], Allmusic, retrieved 20 June 2010. and even a one-off performance by Led Zeppelin (2007),H. MacBain, "Led Zeppelin reunion: the review" New Musical Express, 10 December 2007, retrieved 20 June 2010. renewing interest in previous eras. Additionally, hard rock supergroups, such as Audioslave (with former members of Rage Against the Machine and Soundgarden) and Velvet Revolver (with former members of Guns N' Roses, punk band Wasted Youth and Stone Temple Pilots singer Scott Weiland), emerged and experienced some success. However, these bands were short-lived, ending in 2007 and 2008, respectively.M. Wilson, [ "Audioslave"], Allmusic, retrieved 20 June 2010.J. Loftus, [ "Velvet Revolver"], Allmusic, retrieved 20 June 2010. The long-awaited Guns N' Roses album Chinese Democracy was finally released in 2008, but only went platinum and failed to come close to the success of the band's late 1980s and early 1990s material. More successfully, AC/DC released the double platinum-certified Black Ice (2008). Bon Jovi continued to enjoy success, branching into country music with "Who Says You Can't Go Home", which reached number one on the Hot Country Singles chart in 2006, and the rock/country album Lost Highway, which reached number one in 2007. In 2009, Bon Jovi released another number one album, The Circle, which marked a return to their hard rock sound.
thumb|250px|left|Wolfmother, 2007
The term "retro-metal" has been applied to such bands as Texas based The Sword, California's High on Fire, Sweden's Witchcraft and Australia's Wolfmother.E. Rivadavia, [ "The Sword: 'Age of Winters'"], Allmusic, retrieved 11 June 2007. Wolfmother's self-titled 2005 debut album combined elements of the sounds of Deep Purple and Led Zeppelin.E. Rivadavia, [ "'Wolfmother: 'Cosmic Egg'"], Allmusic, retrieved 11 June 2007. Fellow Australians Airbourne's début album Runnin' Wild (2007) followed in the hard riffing tradition of AC/DC.J. Macgregor, [ "Airbourne"], Allmusic, retrieved 19 June 2010. England's The Darkness' Permission to Land (2003), described as an "eerily realistic simulation of '80s metal and '70s glam",H. Phares, [ The Darkness], Allmusic, retrieved 11 June 2007. topped the UK charts, going quintuple platinum. The follow-up, One Way Ticket to Hell... and Back (2005), reached number 11, before the band broke up in 2006."Chart Stats: The Darkness", Chart Stats, retrieved 17 June 2008. Los Angeles band Steel Panther managed to gain a following by sending up 80s glam metal.J. Lymangrover, [ "Steel Panther"], Allmusic, retrieved 19 June 2010. A more serious attempt to revive glam metal was made by bands of the sleaze metal movement in Sweden, including Vains of Jenna,M. Brown, [ "Vains of Jenna"], Allmusic, retrieved 19 June 2010. Hardcore SuperstarS. Huey, [ "Hardcore Superstar"], Allmusic, retrieved 19 June 2010. and Crashdïet.K. R. Hoffman, [ "Crashdïet"], Allmusic, retrieved 19 June 2010.
Although Foo Fighters continued to be one of the most successful rock acts, with albums like In Your Honor (2005) reaching number two in the US and UK, many of the first wave of post-grunge bands began to fade in popularity. Acts like Creed, Staind, Puddle of Mudd and Nickelback took the genre into the 2000s with considerable commercial success, abandoning most of the angst and anger of the original movement for more conventional anthems, narratives and romantic songs. They were followed in this vein by new acts including Shinedown and Seether.T. Grierson, "Post-Grunge: A History of Post-Grunge Rock", About.com, retrieved 1 January 2010. Acts with more conventional hard rock sounds included Andrew W.K.,H. Phares, [ "Andrew W.K."], Allmusic, retrieved 19 June 2010. Beautiful CreaturesJ. Loftus, [ "Beautiful Creatures"], Allmusic, retrieved 20 June 2010. and Buckcherry, whose breakthrough album 15 (2006) went platinum and spawned the single "Sorry" (2007), which made the Top 10 of the Billboard Hot 100.J. Loftus, [ "Buckcherry"], Allmusic, retrieved 19 June 2010. These were joined by bands with hard rock leanings that emerged in the mid-2000s from the garage rock or post punk revival, including Black Rebel Motorcycle Club and Kings of Leon,S. J. Blackman, Chilling out: the Cultural Politics of Substance Consumption, Youth and Drug Policy (McGraw-Hill International, 2004), ISBN 0-335-20072-9, p. 90. and Queens of the Stone AgeJ. Ankeny and G. Prato, [ "Queens of the Stone Age"], Allmusic, retrieved 19 June 2010. from the US, Three Days Grace from Canada,M. Sutton, [ "Three Days Grace"], Allmusic, retrieved 19 June 2010. Jet from AustraliaP. Smitz, C. Bain, S. Bao, S. Farfor, Australia (Footscray Victoria: Lonely Planet, 14th edn., 2005), ISBN 1-74059-740-0, p. 58. and The Datsuns from New Zealand.C. Rawlings-Way, Lonely Planet New Zealand (Footscray Victoria: Lonely Planet, 14th edn., 2008), ISBN 1-74104-816-8, p. 52. In 2009 Them Crooked Vultures, a supergroup that brought together Foo Fighters' Dave Grohl, Queens of the Stone Age's Josh Homme and Led Zeppelin bass player John Paul Jones attracted attention as a live act and released a self-titled debut album that reached the top 20 in the US and UK and the top ten in several other countries.H. Phares, [ "Them Crooked Vultures"], Allmusic, retrieved 2 October 2010."Them Crooked Vultures – Them Crooked Vultures", Acharts.us, retrieved 2 October 2010.
See also
List of hard rock musicians (A–M)
List of hard rock musicians (N–Z)
Timeline of heavy metal and hard rock music
References
Further reading
Nicolas Bénard, La culture Hard Rock, Paris, Dilecta, 2008.
Nicolas Bénard, Métalorama, ethnologie d'une culture contemporaine, 1983–2010, Rosières-en-Haye, Camion Blanc, 2011.
Fast, Susan (2001). In the Houses of the Holy: Led Zeppelin and the Power of Rock Music. Oxford University Press. ISBN 0-19-511756-5
Fast, Susan (2005). "Led Zeppelin and the Construction of Masculinity," in Music Cultures in the United States, ed. Ellen Koskoff. Routledge. ISBN 0-415-96588-8
Guibert, Gérôme, and Fabien Hein (ed.) (2007), "Les Scènes Metal. Sciences sociales et pratiques culturelles radicales", Volume! La revue des musiques populaires, n°5-2, Bordeaux: Éditions Mélanie Seteun. ISBN 978-2-913169-24-1
Kahn-Harris, Keith, Extreme Metal: Music and Culture on the Edge, Oxford: Berg, 2007, ISBN 1-84520-399-2
Kahn-Harris, Keith and Fabien Hein (2007), "Metal studies: a bibliography", Volume! La revue des musiques populaires, n°5-2, Bordeaux: Éditions Mélanie Seteun. ISBN 978-2-913169-24-1
Weinstein, Deena (1991). Heavy Metal: A Cultural Sociology. Lexington. ISBN 0-669-21837-5. Revised edition: (2000). Heavy Metal: The Music and its Culture. Da Capo. ISBN 0-306-80970-2.
External links
Category:Rock music genres | 124,802 | 2017-01 |
Somalis |
Somalis are an ethnic group inhabiting the Horn of Africa (Somali Peninsula). The overwhelming majority of Somalis speak the Somali language, which is part of the Cushitic branch of the Afro-Asiatic family. They are predominantly Sunni Muslim. Ethnic Somalis number around 16–20 million and are principally concentrated in Somalia (around 9 million),CIA World Factbook: Somalia, people and Map of the Somalia Ethnic groups (CIA, according to the Perry-Castañeda Library Map Collection). The first gives 15% non-Somalis and the second 6%. Used 85% of current population of Somalia. Ethiopia (4.6 million), Kenya (2.4 million), and Djibouti (524,000). – Ethnologue.com Expatriate Somalis are also found in parts of the Middle East, North America, Oceania and Europe.
Etymology
Samaale, the oldest common ancestor of several Somali clans, is generally regarded as the source of the ethnonym Somali. The name "Somali" is, in turn, held to be derived from the words soo and maal, which together mean "go and milk" — a reference to the ubiquitous pastoralism of the Somali people.I. M. Lewis, A pastoral democracy: a study of pastoralism and politics among the Northern Somali of the Horn of Africa, (Oxford University Press : 1963), p.12. Another plausible etymology proposes that the term Somali is derived from the Arabic for "wealthy" (dhawamaal), again referring to Somali riches in livestock.
An ancient Chinese document from the 9th century CE referred to the northern Somalia coast — which was then part of a broader region in Northeast Africa known as Barbara, in reference to the area's Berber (Hamitic) inhabitantsDavid D. Laitin, Said S. Samatar, Somalia: Nation in Search of a State, (Westview Press: 1987), p. 5. — as Po-pa-li.Lee V. Cassanelli, The shaping of Somali society: reconstructing the history of a pastoral people, 1600-1900, (University of Pennsylvania Press: 1982), p.9.Nagendra Kr Singh, International encyclopaedia of Islamic dynasties, (Anmol Publications PVT. LTD., 2002), p.524. The first clear written reference of the sobriquet Somali, however, dates back to the 15th century. During the wars between the Sultanate of Ifat based at Zeila and the Solomonic Dynasty, the Abyssinian Emperor had one of his court officials compose a hymn celebrating a military victory over the Sultan of Ifat's eponymous troops.I.M. Lewis, A modern history of the Somali: nation and state in the Horn of Africa, 4, illustrated edition, (James Currey: 2002), p.25.
History
thumb|left|Ruins of the Adal Sultanate in Zeila, a kingdom led in the 16th century by Imam Ahmad ibn Ibrahim al-Ghazi (Ahmed Gurey).
Ancient rock paintings, which date back 5000 years, have been found in the northern part of Somalia; these depict early life in the territory. The most famous of these is the Laas Geel complex, which contains some of the earliest known rock art on the African continent and features many elaborate pastoralist sketches of animal and human figures. In other places, such as the northern Dhambalin region, a depiction of a man on a horse is postulated as being one of the earliest known examples of a mounted huntsman.
Inscriptions have been found beneath many of the rock paintings, but archaeologists have so far been unable to decipher this form of ancient writing.Susan M. Hassig, Zawiah Abdul Latif, Somalia, (Marshall Cavendish: 2007), p.22 During the Stone Age, the Doian and Hargeisan cultures flourished here with their respective industries and factories.pg 105 - A History of African archaeology By Peter Robertshaw
The oldest evidence of burial customs in the Horn of Africa comes from cemeteries in Somalia dating back to the 4th millennium BC.pg 40 - Early Holocene Mortuary Practices and Hunter-Gatherer Adaptations in Southern Somalia, by Steven A. Brandt World Archaeology © 1988 The stone implements from the Jalelo site in northern Somalia are said to be the most important link in evidence of the universality of Palaeolithic culture between the East and the West.Prehistoric Implements from Somaliland by H. W. Seton-Karr pg 183
thumb|The Citadel of Gondershe was an important site in the medieval Ajuran Empire.
In antiquity, the ancestors of the Somali people were an important link in the Horn of Africa connecting the region's commerce with the rest of the ancient world. Somali sailors and merchants were the main suppliers of frankincense, myrrh and spices, items which were considered valuable luxuries by the Ancient Egyptians, Phoenicians, Mycenaeans and Babylonians.Phoenicia pg 199The Aromatherapy Book by Jeanne Rose and John Hulburd pg 94
According to most scholars, the ancient Land of Punt and its inhabitants formed part of the ethnogenesis of the Somali people.Egypt: 3000 Years of Civilization Brought to Life By Christine El MahdyAncient perspectives on Egypt By Roger Matthews, Cornelia Roemer, University College, London.Africa's legacies of urbanization: unfolding saga of a continent By Stefan GoodwinCivilizations: Culture, Ambition, and the Transformation of Nature By Felipe Armesto Fernandez The ancient Puntites were a nation of people that had close relations with Pharaonic Egypt during the times of Pharaoh Sahure and Queen Hatshepsut. The pyramidal structures, temples and ancient houses of dressed stone littered around Somalia are said to date from this period.Man, God and Civilization pg 216
In the classical era, several ancient city-states, such as Opone, Essina, Sarapion, Nikon, Malao, Damo and Mosylon near Cape Guardafui, which competed with the Sabaeans, Parthians and Axumites for the wealthy Indo-Greco-Roman trade, also flourished in Somalia.Oman in history By Peter Vine Page 324
thumb|left|The Ifat Sultanate's realm in the 14th century.
The birth of Islam on the opposite side of Somalia's Red Sea coast meant that Somali merchants, sailors and expatriates living in the Arabian Peninsula gradually came under the influence of the new religion through their converted Arab Muslim trading partners. With the migration of fleeing Muslim families from the Islamic world to Somalia in the early centuries of Islam, and the peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merca, which were part of the Berberi civilization. The city of Mogadishu came to be known as the City of Islam,Society, security, sovereignty and the state in Somalia - Page 116 and controlled the East African gold trade for several centuries.East Africa: Its Peoples and Resources - Page 18
The Sultanate of Ifat, led by the Walashma dynasty with its capital at Zeila, ruled over parts of what is now eastern Ethiopia, Djibouti, and northern Somalia. The historian al-Umari records that Ifat was situated near the Red Sea coast, and states its size as 15 days' travel by 20 days' travel. Its army numbered 15,000 horsemen and 20,000 foot soldiers. Al-Umari also credits Ifat with seven "mother cities": Belqulzar, Kuljura, Shimi, Shewa, Adal, Jamme and Laboo.G.W.B. Huntingford, The Glorious Victories of Ameda Seyon, King of Ethiopia (Oxford: University Press, 1965), p. 20.
thumb|Sultan Ali Yusuf Kenadid of the Hobyo Sultanate.
In the Middle Ages, several powerful Somali empires dominated the regional trade, including the Ajuran Sultanate, which excelled in hydraulic engineering and fortress building,Shaping of Somali society Lee Cassanelli pg.92 the Sultanate of Adal, whose general Ahmad ibn Ibrahim al-Ghazi (Ahmed Gurey) was the first commander to use cannon warfare on the continent during Adal's conquest of the Ethiopian Empire,Futuh Al Habash Shibab ad Din and the Sultanate of the Geledi, whose military dominance forced governors of the Omani Empire north of the city of Lamu to pay tribute to the Somali Sultan Ahmed Yusuf.Sudan Notes and Records - Page 147
In the late 19th century, after the Berlin Conference had ended, European empires sailed with their armies to the Horn of Africa. The imperial designs on Somalia alarmed the Dervish leader Mohammed Abdullah Hassan, who gathered Somali soldiers from across the Horn of Africa and began one of the longest anti-colonial wars in history. The Dervish State successfully repulsed the British Empire four times and forced it to retreat to the coastal region.Encyclopedia of African history - Page 1406 As a result of its successes against the British, the Dervish State received support from the Ottoman and German empires. The Turks also named Hassan Emir of the Somali nation,I.M. Lewis, The modern history of Somaliland: from nation to state, (Weidenfeld & Nicolson: 1965), p. 78 and the Germans promised to officially recognize any territories the Dervishes were to acquire.Thomas P. Ofcansky, Historical dictionary of Ethiopia, (The Scarecrow Press, Inc.: 2004), p.405 After a quarter of a century of holding the British at bay, the Dervishes were finally defeated in 1920, when Britain, for the first time in Africa, used airplanes to bomb the Dervish capital of Taleex. As a result of this bombardment, former Dervish territories were turned into a protectorate of Britain. Italy faced similar opposition from Somali Sultans and armies and did not acquire full control of parts of modern Somalia until the Fascist era in late 1927. This occupation lasted until 1941, when it was replaced by a British military administration.
left|thumb|Mohamoud Ali Shire, a prominent Somali anti-imperialist leader and the 26th Sultan of the Warsangali Sultanate.
Following World War II, Britain retained control of both British Somaliland and Italian Somaliland as protectorates. In 1945, during the Potsdam Conference, the United Nations granted Italy trusteeship of Italian Somaliland, but only under close supervision and on the condition — first proposed by the Somali Youth League (SYL) and other nascent Somali political organizations, such as Hizbia Digil Mirifle Somali (HDMS) and the Somali National League (SNL) — that Somalia achieve independence within ten years.Gates, Henry Louis, Africana: The Encyclopedia of the African and African American Experience, (Oxford University Press: 1999), p.1749 British Somaliland remained a protectorate of Britain until 1960.Tripodi, Paolo. The Colonial Legacy in Somalia p. 68 New York, 1999.
thumb|Lieutenant Colonel Salaad Gabeyre Kediye, the "Father of the Revolution" initiated by the Supreme Revolutionary Council.
To the extent that Italy held the territory by UN mandate, the trusteeship provisions gave the Somalis the opportunity to gain experience in political education and self-government. These were advantages that British Somaliland, which was to be incorporated into the new Somali state, did not have. Although in the 1950s British colonial officials attempted, through various administrative development efforts, to make up for past neglect, the protectorate stagnated. The disparity between the two territories in economic development and political experience would cause serious difficulties when it came time to integrate the two parts.Helen Chapin Metz, ed. Somalia: A Country Study. Washington: GPO for the Library of Congress, 1992. countrystudies.us
Meanwhile, in 1948, under pressure from their World War II allies and to the dismay of the Somalis,Federal Research Division, Somalia: A Country Study, (Kessinger Publishing, LLC: 2004), p.38 the British "returned" the Haud (an important Somali grazing area that was presumably 'protected' by British treaties with the Somalis in 1884 and 1886) and the Ogaden to Ethiopia, based on a treaty they signed in 1897 in which the British ceded Somali territory to the Ethiopian Emperor Menelik in exchange for his help against plundering by Somali clans.David D. Laitin, Politics, Language, and Thought: The Somali Experience, (University Of Chicago Press: 1977), p.73 Britain included the proviso that the Somali nomads would retain their autonomy, but Ethiopia immediately claimed sovereignty over them.Zolberg, Aristide R., et al., Escape from Violence: Conflict and the Refugee Crisis in the Developing World, (Oxford University Press: 1992), p.106 This prompted an unsuccessful bid by Britain in 1956 to buy back the Somali lands it had turned over. Britain also granted administration of the almost exclusively Somali-inhabitedFrancis Vallat, First report on succession of states in respect of treaties: International Law Commission twenty-sixth session 6 May-26 July 1974, (United Nations: 1974), p.20 Northern Frontier District (NFD) to Kenyan nationalists despite an informal plebiscite demonstrating the overwhelming desire of the region's population to join the newly formed Somali Republic.David D. Laitin, Politics, Language, and Thought: The Somali Experience, (University Of Chicago Press: 1977), p.75
A referendum was held in neighboring Djibouti (then known as French Somaliland) in 1958, on the eve of Somalia's independence in 1960, to decide whether or not to join the Somali Republic or to remain with France. The referendum turned out in favour of a continued association with France, largely due to a combined yes vote by the sizable Afar ethnic group and resident Europeans. There was also widespread vote rigging, with the French expelling thousands of Somalis before the referendum reached the polls.Kevin Shillington, Encyclopedia of African history, (CRC Press: 2005), p.360. The majority of those who voted no were Somalis who were strongly in favour of joining a united Somalia, as had been proposed by Mahmoud Harbi, Vice President of the Government Council. Harbi was killed in a plane crash two years later.Barrington, Lowell, After Independence: Making and Protecting the Nation in Postcolonial and Postcommunist States, (University of Michigan Press: 2006), p.115 Djibouti finally gained its independence from France in 1977, and Hassan Gouled Aptidon, a Somali who had campaigned for a yes vote in the referendum of 1958, eventually wound up as Djibouti's first president (1977–1991).
British Somaliland became independent on 26 June 1960 as the State of Somaliland, and the Trust Territory of Somalia (the former Italian Somaliland) followed suit five days later.Encyclopædia Britannica, The New Encyclopædia Britannica, (Encyclopædia Britannica: 2002), p.835 On 1 July 1960, the two territories united to form the Somali Republic, albeit within boundaries drawn up by Italy and Britain. A government was formed by Abdullahi Issa Mohamud, Muhammad Haji Ibrahim Egal and other members of the trusteeship and protectorate governments, with Haji Bashir Ismail Yusuf as President of the Somali National Assembly, Aden Abdullah Osman Daar as the President of the Somali Republic and Abdirashid Ali Shermarke as Prime Minister (later to become President from 1967 to 1969). On 20 July 1961, through a popular referendum, the people of Somalia ratified a new constitution, which had first been drafted in 1960.Greystone Press Staff, The Illustrated Library of The World and Its Peoples: Africa, North and East, (Greystone Press: 1967), p.338 In 1967, Muhammad Haji Ibrahim Egal became Prime Minister, a position to which he was appointed by Shermarke. Egal would later become the President of the autonomous Somaliland region in northwestern Somalia.
On 15 October 1969, while paying a visit to the northern town of Las Anod, Somalia's then President Abdirashid Ali Shermarke was shot dead by one of his own bodyguards. His assassination was quickly followed by a military coup d'état on 21 October 1969 (the day after his funeral), in which the Somali Army seized power without encountering armed opposition — essentially a bloodless takeover. The putsch was spearheaded by Major General Mohamed Siad Barre, who at the time commanded the army.Moshe Y. Sachs, Worldmark Encyclopedia of the Nations, Volume 2, (Worldmark Press: 1988), p.290.
Alongside Barre, the Supreme Revolutionary Council (SRC) that assumed power after President Sharmarke's assassination was led by Lieutenant Colonel Salaad Gabeyre Kediye and Chief of Police Jama Korshel. The SRC subsequently renamed the country the Somali Democratic Republic,J. D. Fage, Roland Anthony Oliver, The Cambridge history of Africa, Volume 8, (Cambridge University Press: 1985), p.478.The Encyclopedia Americana: complete in thirty volumes. Skin to Sumac, Volume 25, (Grolier: 1995), p.214. dissolved the parliament and the Supreme Court, and suspended the constitution.Peter John de la Fosse Wiles, The New Communist Third World: an essay in political economy, (Taylor & Francis: 1982), p.279.
The revolutionary army established large-scale public works programs and successfully implemented an urban and rural literacy campaign, which helped dramatically increase the literacy rate. In addition to a nationalization program of industry and land, the new regime's foreign policy placed an emphasis on Somalia's traditional and religious links with the Arab world, eventually joining the Arab League (AL) in 1974.Benjamin Frankel, The Cold War, 1945–1991: Leaders and other important figures in the Soviet Union, Eastern Europe, China, and the Third World, (Gale Research: 1992), p.306. That same year, Barre also served as chairman of the Organization of African Unity (OAU), the predecessor of the African Union (AU).Oihe Yang, Africa South of the Sahara 2001, 30th Ed., (Taylor and Francis: 2000), p.1025.
Pan-Somalism
Somali nationalism is centered on the notion that Somalis in Greater Somalia share a common language, religion, culture and ethnicity, and as such constitute a nation unto themselves. The ideology's earliest manifestations are often traced back to the resistance movement led by Mohamed Abdullah Hassan's Dervish State at the turn of the 20th century.Mohamed Diriye Abdullahi. Culture and Customs of Somalia. Westport, Connecticut, Greenwood Publishing Group, Inc, 2001. p. 24. In northwestern present-day Somalia, the first Somali nationalist political organization to be formed was the Somali National League (SNL), established in 1935 in the former British Somaliland protectorate. In the country's northeastern, central and southern regions, the similarly-oriented Somali Youth Club (SYC) was founded in 1943 in Italian Somaliland, just prior to the trusteeship period. The SYC was later renamed the Somali Youth League (SYL) in 1947. It became the most influential political party in the early years of post-independence Somalia.Mohamed Diriye Abdullahi. Culture and Customs of Somalia. Westport, Connecticut, Greenwood Publishing Group, Inc, 2001. p. 25.
Notable Pan-Somalists
thumb|right|Former President of the Somali National Assembly Haji Bashir Ismail Yusuf, one of several prominent pan-Somalists that emerged from the Somali Youth League's leadership ranks.
Mohammed Abdullah Hassan (7 April 1856 – 21 December 1920) – Somali nationalist and religious leader that established the Dervish State during the Scramble for Africa.
Mohamoud Ali Shire – 26th Sultan of the Warsangali Sultanate (1897–1960).
Hasna Doreh – Early 20th century Somali female commander of the Dervish State who frequently joined battles against the imperial powers during the Scramble for Africa.
Hawo Tako (d. 1948) – Early 20th century Somali female nationalist whose sacrifice became a symbol for Pan-Somalism.
Bashir Yussuf (1905–1945) – Somali nationalist and religious leader.
Abdullahi Issa (1922–1988) – First Prime Minister of Somalia.
Aden Abdullah Osman Daar (in office 7 January 1960 – 10 June 1967) – First President of Somalia.
Abdirashid Ali Shermarke (in office 10 June 1967 – 15 October 1969) – Second President of Somalia.
Siad Barre (1919 – 2 January 1995) – Third President of Somalia.
Jama Korshel – Somali National Army General, former Head of Somali Police, and commander in the Supreme Revolutionary Council.
Daud Abdulle Hirsi (1925–1965) – Prominent Somali General considered the Father of the Somali Military.
Mahmoud Harbi – active Pan-Somalist who came close to uniting Djibouti (then French Somaliland) with Somalia at the time of the 1958 referendum.
Salaad Gabeyre Kediye – Major General in the Somali military and a revolutionary.
Haji Dirie Hirsi (1905–1975) – Somali businessman actively supporting Pan-Somalist aspirations in the 1950s.
Abdirizak Haji Hussein – Former Prime Minister of Somalia (1964–1967) and Somali Youth League leader.
Sheikh Mukhtar Mohamed Hussein – Speaker of Parliament from 1965 to 1969 and interim President of Somalia before the coup d'état in 1969.
Abdullahi Ahmed Irro – General in the Somali National Army; established the National Academy for Strategy.
Ali Matan Hashi – Brigadier General and politician; first Somali Air Force pilot, the father of Somali Air Force and a prominent member of the Supreme Revolutionary Council.
Abdirahman Jama Barre – Former Minister of Foreign Affairs and Minister of Finance of Somalia.
Haji Bashir Ismail Yusuf – First President of the Somali National Assembly and prominent Somali Youth League member.
Osman Haji Mohamed – Prominent Somali Youth League member and parliamentarian.
Abdullahi Yusuf Ahmed – President of Somalia, Colonel in Somali National Army, and commander during Ogaden campaign.
Religion
The history of Islam in Somalia is as old as the religion itself. The early persecuted Muslims fled to various places in the region, including the city of Zeila in modern-day northern Somalia, so as to seek protection from the Quraysh. Somalis were among the first populations on the continent to embrace Islam. With very few exceptions, Somalis are Muslims, the majority belonging to the Sunni branch of Islam and the Shafi`i school of Islamic jurisprudence,Middle East Policy Council - Muslim Populations Worldwide although a few are also adherents of the Shia Muslim denomination.Mohamed Diriye Abdullahi, Culture and Customs of Somalia, (Greenwood Press: 2001), p.1
thumb|The whitewashed coral stone city of Merca is an ancient Islamic center in Somalia.
Qur'anic schools (also known as dugsi) remain the basic system of traditional religious instruction in Somalia. They provide Islamic education for children, thereby filling a clear religious and social role in the country. Known as the most stable local, non-formal system of education providing basic religious and moral instruction, their strength rests on community support and their use of locally made and widely available teaching materials. The Qur'anic system, which teaches the greatest number of students relative to other educational sub-sectors, is often the only system accessible to Somalis in nomadic areas, as compared with urban ones. A study from 1993 found, among other things, that "unlike in primary schools where gender disparity is enormous, around 40 per cent of Qur'anic school pupils are girls; but the teaching staff have minimum or no qualification necessary to ensure intellectual development of children." To address these concerns, the Somali government subsequently established the Ministry of Endowment and Islamic Affairs, under which Qur'anic education is now regulated.Koranic School Project
In the Somali diaspora, multiple Islamic fundraising events are held every year in cities like Birmingham, London, Toronto and Minneapolis, where Somali scholars and professionals give lectures and answer questions from the audience. The purpose of these events is usually to raise money for new schools or universities in Somalia, to help Somalis that have suffered as a consequence of floods and/or droughts, or to gather funds for the creation of new mosques like the Abuubakar-As-Saddique Mosque, which is currently undergoing construction in the Twin Cities.
In addition, the Somali community has produced numerous important Muslim figures over the centuries, many of whom have significantly shaped the course of Islamic learning and practice in the Horn of Africa, the Arabian Peninsula and well beyond.
Important Islamic figures
thumb|Sheikh Abadir Umar ar-Rida, patron saint of Harar.
thumb|Sheikh Ali Ayanle Samatar, a prominent Somali Islamic scholar.
Abdirahman bin Isma'il al-Jabarti – 10th century Islamic leader in northern Somalia.
Yusuf bin Ahmad al-Kawneyn – 13th century scholar, philosopher and saint. Associated with the development of Wadaad writing.
Abadir Umar ar-Rida – 13th century Sheikh and patron saint of Harar.
Uthman bin Ali Zayla'i – 14th century Somali theologian and jurist who wrote the single most authoritative text on the Hanafi school of Islam, consisting of four volumes known as the Tabayin al-Haqa’iq li Sharh Kanz al-Daqa’iq.
Sa'id of Mogadishu – 14th century Somali scholar and traveler. His reputation as a scholar earned him audiences with the Emirs of Mecca and Medina. He travelled across the Muslim world and visited Bengal and China.
Ahmad ibn Ibrahim al-Ghazi (c. 1507 – 21 February 1543) – 16th century Imam and military leader that led the Conquest of Abyssinia.
Nur ibn Mujahid – 16th century Somali Emir and patron saint of Harar.
Ali al-Jabarti (d. 1492) – 15th century Somali scholar and politician in the Mamluk Empire.
Hassan al-Jabarti (d. 1774) – Somali mathematician, theologian, astronomer and philosopher; considered one of the great scholars of the 18th century.
Abd al-Rahman al-Jabarti (1753–1825) – Somali scholar living in Cairo who recorded the Napoleonic invasion of Egypt.
Abd al Aziz al-Amawi (1832–1896) – 19th century influential Somali diplomat, historian, poet, jurist and scholar living in the Sultanate of Zanzibar.
Shaykh Abd Al-Rahman bin Ahmad al-Zayla'i (1820–1882) – Somali scholar who played a crucial role in the spread of the Qadiriyyah movement in Somalia and East Africa.
Shaykh Sufi (1829–1904) – 19th century Somali scholar, poet, reformist and astrologer.
Sheikh Uways Al-Barawi (1847–1909) – Somali scholar credited with reviving Islam in 19th century East Africa and with followers in Yemen and Indonesia.
Abdallah al-Qutbi (1879–1952) – Somali polemicist theologian and philosopher; best known for his five-part Al-Majmu'at al-mubaraka ("The Blessed Collection"), published in Cairo.
Sheikh Muhammad al-Sumali (1910-2005) – Somali scholar and teacher in the Masjid Al-Haram in Mecca. He influenced many of the prominent Islamic scholars of today.
Clan, family and social stratification
Clans
thumb|Tomb of Sheikh Darod in Haylaan.
Somalis are ethnically of Hamitic ancestral stock, but have genealogical traditions asserting descent from various Arabian patriarchs. They are segmented into various clan groupings, which are important kinship units that play a central part in Somali culture and politics. Clan families are patrilineal, and are divided into clans, primary lineages or subclans, and dia-paying kinship groups. The lineage terms qabiil, qolo, jilib and reer are often used interchangeably to indicate the different segmentation levels. The clan represents the highest kinship level. It owns territorial properties and is typically led by a clan-head or Sultan. Primary lineages are immediately descended from the clans, and are exogamous political units with no formally installed leader. They comprise the segmentation level that an individual usually indicates he or she belongs to, with the founding patriarch reckoned to lie between six and ten generations back.
The five major clan families are the traditionally nomadic pastoralist Darod, Dir, Hawiye and Isaaq, and the sedentary agropastoralist Rahanweyn. Minor Somali clans include the Benadiri.
thumb|Sheikh Isaaq's tomb in Maydh.
The Dir, Hawiye, Gardere (Gaalje'el, Degodia), Hawadle and Garre trace agnatic descent to the patriarch Samaale, whose lineage is in turn traced to Arabian Banu Hashim origins through Aqiil ibn Abu Talib ibn Abd al-Muttalib. The Darod have separate paternal traditions of descent through Abdirahman bin Isma'il al-Jabarti (Sheikh Darod), who is said to have arrived at a later date from the Arabian peninsula, in the 10th or 11th centuries.I.M. Lewis, A Modern History of the Somali, fourth edition (Oxford: James Currey, 2002), p. 22 Sheikh Darod is, in turn, asserted to have married a woman from the Dir, thus establishing matrilateral ties with the Samaale main stem. Although often recognized as a sub-clan of the Dir, the Isaaq clan claims paternal descent from one Shaykh Ishaq ibn Ahmad al-Hashimi (Sheikh Isaaq). The Rahanweyn or Sab trace their stirp to the patriarch Sab. Both Samaale and Sab are supposed to have ultimately descended from a common lineage originating in the Arabian peninsula. These traditions of descent from elite Arab forefathers, who settled on the littoral, are debated, although they are based on early Arab documents and northern oral folklore.
The tombs of the founders of the Darod, Dir and Isaaq major clans, as well as the Abgaal subclan of the Hawiye are all located in northern Somalia. Tradition holds this general area as an ancestral homeland of the Somali people.
Kinship
The traditional political unit among the Somali people has been the kinship group. Dia-paying groups are groupings of a few small lineages, each of which consists of a few hundred to a few thousand members. They trace their foundation to between four and eight generations back. Members are socially contracted to support each other in jural and political duties, including paying or receiving dia or blood compensation (mag in Somali). Compensation is obligatory in regard to actions committed by or against a dia-paying group, including blood-compensation in the event of damage, injury or death.
Social stratification
thumb|Traditional distribution of ethnic Somali clans.
Within traditional Somali society, like the other ethnic groups in the Horn of Africa region, there has been social stratification.Beatrice Akua-Sakyiwah (2016), Education as Cultural Capital and its Effect on the Transitional Issues Faced by Migrant Women in the Diaspora, Journal of International Migration and Integration, Volume 17, Number 4, pages 1125-1142, Quote: "This caste stratification is a daily reality in Somali society". According to the historian Donald Levine, these comprised high-ranking clans, low-ranking clans, caste groups, and slaves. This rigid hierarchy and concepts of lineal purity contrast with the relative egalitarianism in clan leadership and political control., Quote: "The social organization of Somali society accommodated ideological conceptions of inferiority through investing clan membership with definitions of lineal purity. Somali clans, while fiercely egalitarian with regards to leadership and political control, contain divisions of unequal status".
Nobles constituted the upper tier and were known as bilis. They consist of individuals of ethnic Somali ancestral origin, and have been endogamous. The nobles are distinguished by Europid physical features, different from those of negro Africans. They believe with great pride that they are of Arabian ancestry, and trace their stirp to Muhammad's lineage of Quraysh and those of his companions. Although they do not consider themselves culturally Arabs, except for the shared religion, their presumed noble Arabian origins genealogically unite them.
The lower tier was designated as Sab, and was distinguished by its heterogeneous constitution and agropastoral lifestyle as well as some linguistic and cultural differences. A third Somali caste stratum was made up of artisanal groups, which were endogamous and hereditary. Among the caste groups, the Midgan were traditionally hunters and circumcision performers.E. de Larajasse (1972), Somali-English and English-Somali Dictionary, Trubner, pages 108, 119, 134, 145, 178 The Tumal (also spelled Tomal) were smiths and leatherworkers, and the Yibir (also spelled Yebir) were the tanners and magicians., Quote: "Many of these items were not made by nomads but by a caste of artisans called the Saab, considered subservient (...) The Yebir, also members of the Saab caste, were responsible for crafting amulets (hardas), prayer mats, and saddles, and for performing rituals designed to protect nomads from snakes and scorpions, illnesses and harm during marriage and childbirth".
According to the anthropologist Virginia Luling, the artisanal caste groups of the north closely resembled their higher caste kinsmen, being generally Caucasoid like other ethnic Somalis. Although ethnically indistinguishable from each other, state Mohamed Eno and Abdi Kusow, upper castes have stigmatized the lower ones.Mohamed A. Eno and Abdi M. Kusow (2014), Racial and Caste Prejudice in Somalia, Journal of Somali Studies, Iowa State University Press, Volume 1, Issue 2, page 95, Quote: "Unlike that of the Somali Jareer Bantu, the history, social, and ethnic formation of the Somali caste communities is hardly distinguishable from that of other Somalis. The difference is that these communities are stigmatized because mythical narratives claim that (a) they are of unholy origin, and (b) they engage in denigrated occupations."
Outside of the Somali caste system were slaves of Bantu origin and physiognomy (known as jareer or adoon). Their distinct physical features and occupations differentiated them from Somalis and positioned them as inferior within the social hierarchy.Mohamed A. Eno and Abdi M. Kusow (2014), Racial and Caste Prejudice in Somalia, Journal of Somali Studies, Iowa State University Press, Volume 1, Issue 2, pages 91-92, 95-96, 108-112
Marriage
thumb|A traditional Somali wedding basket.
Among Somali clans, in order to strengthen alliance ties, marriage is often to another ethnic Somali from a different clan. According to I. M. Lewis, of 89 marriages initiated by men of the Dhulbahante clan, 55 (62%) were with women of Dhulbahante subclans other than those of their husbands; 30 (33.7%) were with women of adjacent clans of other clan families (Isaaq, 28; Hawiye, 3); and 3 (4.3%) were with women of other clans of the Darod clan family (Majerteen 2, Ogaden 1). Such exogamy is always observed by the dia-paying group and usually adhered to by the primary lineage, whereas marriage to lineal kin falls within the prohibited range. These traditional strictures against consanguineous marriage rule out the patrilateral cousin marriages that are favored by Arab Bedouins, and which are practiced to a limited degree by certain northern Somali subclans. The endogamous tradition within Somali clans intensified after contact with Arab society, with an increasing preference for cousin marriages. In southern Somalia, endogamous marriages also served as a means of ensuring clan solidarity in uncertain socio-political circumstances.
In 1975, the most prominent government reforms regarding family law in a Muslim country were set in motion in the Somali Democratic Republic, which put women and men, including husbands and wives, on complete equal footing.Pg.115 - Women in Muslim family law by John L. Esposito, Natana J. DeLong-Bas The 1975 Somali Family Law gave men and women equal division of property between the husband and wife upon divorce and the exclusive right to control by each spouse over his or her personal property.Pg.75 - Generating employment and incomes in Somalia: report of an inter-disciplinary employment and project-identification mission to Somalia financed by the United Nations Development Programme and executed by ILO/JASPA
Language
thumb|Old Somali stone tablet: After Somali had lost its ancient writing script,Ministry of Information and National Guidance, Somalia, The writing of the Somali language, (Ministry of Information and National Guidance: 1974), p.5 Somali scholars over the following centuries developed a writing system known as Wadaad writing to transcribe the language.
The Somali language (Af-Somali) is a member of the Cushitic branch of the Afroasiatic (Hamitic-Semitic) family. Its nearest relatives are the Afar and Saho languages.I. M. Lewis, Peoples of the Horn of Africa: Somali, Afar and Saho, (Red Sea Press: 1998), p.11. Somali is the best documented of the Cushitic languages, with academic studies of it dating from before 1900.
thumb|left|Speech sample in Standard Somali.
The exact number of speakers of Somali is unknown. One source estimates that there are 7.78 million speakers of Somali in Somalia itself and 12.65 million speakers globally. The Somali language is spoken by ethnic Somalis in Greater Somalia and the Somali diaspora.
thumb|left|Somali language books on display.
Somali dialects are divided into three main groups: Northern, Benaadir and Maay. Northern Somali (or Northern-Central Somali) forms the basis for Standard Somali. Benaadir (also known as Coastal Somali) is spoken on the Benadir coast from Adale to south of Merca, including Mogadishu, as well as in the immediate hinterland. The coastal dialects have additional phonemes which do not exist in Standard Somali. Maay is principally spoken by the Digil and Mirifle (Rahanweyn) clans in the southern areas of Somalia.Andrew Dalby, Dictionary of languages: the definitive reference to more than 400 languages, (Columbia University Press: 1998), p.571.
A number of writing systems have been used over the years for transcribing the language. Of these, the Somali Latin alphabet is the most widely used, and has been the official writing script in Somalia since the government of former President of Somalia Mohamed Siad Barre formally introduced it in October 1972.Economist Intelligence Unit (Great Britain), Middle East annual review, (1975), p.229 The script was developed by the Somali linguist Shire Jama Ahmed specifically for the Somali language. It uses all letters of the English Latin alphabet, except p, v and z. Besides Ahmed's Latin script, other orthographies that have been used for centuries for writing Somali include the long-established Arabic script and the Wadaad writing. Other writing systems developed in the twentieth century include the Osmanya, Borama and Kaddare scripts, which were invented by Osman Yusuf Kenadid, Abdurahman Sheikh Nuur and Hussein Sheikh Ahmed Kaddare, respectively.David D. Laitin, Politics, Language, and Thought: The Somali Experience, (University Of Chicago Press: 1977), pp.86-87
In addition to Somali, Arabic, which is also an Afro-Asiatic tongue, is an official national language in both Somalia and Djibouti. Many Somalis speak it due to centuries-old ties with the Arab world, the far-reaching influence of the Arabic media, and religious education.Helena Dubnov, A grammatical sketch of Somali, (Köppe: 2003), pp. 70–71. Somalia and Djibouti are also both members of the Arab League.CIA World Factbook - Djibouti - People and Society; *N.B. ~60% of 774,389 total pop.
Culture
thumb|left|Somali young women and men performing the traditional dhaanto.
The culture of Somalia is an amalgamation of traditions developed independently and through interaction with neighbouring and far away civilizations, such as other parts of Northeast Africa, the Arabian Peninsula, India and Southeast Asia.Mohamed Diriye Abdullahi, Culture and Customs of Somalia, (Greenwood Press: 2001), p.155.
The textile-making communities in Somalia are a continuation of an ancient textile industry, as is the culture of wood carving, pottery and monumental architecture that dominates Somali interiors and landscapes. The cultural diffusion of Somali commercial enterprise can be detected in its cuisine, which contains Southeast Asian influences. Due to the Somali people's passionate love for and facility with poetry, Somalia has often been referred to as a "Nation of Poets" and a "Nation of Bards" by scholars and writers, including the Canadian novelist Margaret Laurence.Diriye, p.75
All of these traditions, including festivals, martial arts, dress, literature, sport and games such as Shax, have immensely contributed to the enrichment of Somali heritage.
Music
Somalis have a rich musical heritage centered on traditional Somali folklore. Most Somali songs are pentatonic; that is, they use only five pitches per octave, in contrast to a heptatonic (seven-note) scale such as the major scale (a major pentatonic scale on C, for example, uses only the notes C, D, E, G and A). At first listen, Somali music might be mistaken for the sounds of nearby regions such as Ethiopia, Sudan or Arabia, but it is ultimately recognizable by its own unique tunes and styles. Somali songs are usually the product of collaboration between lyricists (midho), songwriters (lahan) and singers ('odka or "voice").Diriye, pp.170-171
Musicians and bands
thumb|right|Somali singer Saado Ali Warsame.
Aar Maanta – UK-based Somali singer, composer, writer and music producer.
Abdi Sinimo – prominent Somali artist and inventor of the Balwo musical style.
Abdullahi Qarshe – Somali musician, poet and playwright known for his innovative styles of music, which included a wide variety of musical instruments such as the guitar, piano and oud.
Ali Feiruz – Somali musician from Djibouti; part of the Radio Hargeisa generation of Somali artists.
Dur-Dur – Somali band active during the 1980s and 1990s in Somalia, Djibouti and Ethiopia.
Hasan Adan Samatar – popular male artist during the 1970s and 80s.
Jonis Bashir – Somali-Italian actor and singer.
Khadija Qalanjo – popular Somali singer in the 1970s and 1980s.
K'naan – award-winning Somali-Canadian hip hop artist.
Magool (May 2, 1948 – March 19, 2004) – prominent Somali singer considered in Somalia as one of the greatest entertainers of all time.
Maryam Mursal (born 1950) – famous musician from Somalia; composer and vocalist whose work has been produced by the record label Real World.
Mohammed Mooge – Somali artist from the Radio Hargeisa generation.
Poly Styrene – Somali-British punk rock singer; best known as the lead singer of X-Ray Spex.
Saado Ali Warsame – Somali singer-songwriter and modern qaraami exponent.
Waaberi – Somalia's foremost musical group that toured through several countries in Northeast Africa and Asia, including Egypt, Sudan and China.
Waayaha Cusub – Somali music collective. Organized the international Reconciliation Music Festival in 2013 in Mogadishu.
Cinema and theatre
thumb|upright|Somali film producer and director Ali Said Hassan.
Growing out of the Somali people's rich storytelling tradition, the first few feature-length Somali films and cinematic festivals emerged in the early 1960s, immediately after independence. Following the creation of the Somali Film Agency (SFA) regulatory body in 1975, the local film scene began to expand rapidly. The Somali filmmaker Ali Said Hassan concurrently served as the SFA's representative in Rome. In the 1970s and early 1980s, popular musicals known as riwaayado were the main driving force behind the Somali movie industry. Epic and period films as well as international co-productions followed suit, facilitated by the proliferation of video technology and national television networks. Said Salah Ahmed during this period directed his first feature film, The Somali Darwish (The Somalia Dervishes), devoted to the Dervish State. In the 1990s and 2000s, a new wave of more entertainment-oriented movies emerged. Referred to as Somaliwood, this upstart, youth-based cinematic movement has energized the Somali film industry and in the process introduced innovative storylines, marketing strategies and production techniques. The young directors Abdisalam Aato of Olol Films and Abdi Malik Isak are at the forefront of this quiet revolution.
Art
thumb|left|A Somali woman with kohl eyes.
Somalis have old visual art traditions, which include pottery, jewelry and wood carving. In the medieval period, affluent urbanites commissioned local wood and marble carvers to work on their interiors and houses. Intricate patterns also adorn the mihrabs and pillars of ancient Somali mosques. Artistic carving was considered the province of men, whereas the textile industry was mainly that of women. Among the nomads, carving, especially woodwork, was widespread and could be found on the most basic objects such as spoons, combs and bowls. It also included more complex structures, such as the portable nomadic house, the aqal. In the last several decades, the traditional carving of windows, doors and furniture has given way to workshops employing electrical machinery, which deliver the same results in a far shorter time period.
Additionally, henna is an important part of Somali culture. It is worn by Somali women on their hands, arms, feet and neck during wedding ceremonies, Eid, Ramadan and other festive occasions. Somali henna designs are similar to those in the Arabian peninsula, often featuring flower motifs and triangular shapes. The palm is also frequently decorated with a dot of henna and the fingertips are dipped in the dye. Henna parties are usually held before the wedding takes place. Somali women have likewise traditionally applied kohl (kuul) to their eyes.Katheryne S. Loughran, Somalia in word and image, (Foundation for Cross Cultural Understanding: 1986), p.166. Usage of the eye cosmetic in the Horn region is believed to date to the ancient Land of Punt.Studies in Ancient Technology, Volume III, (Brill Archive), p.18.
Sports
thumb|Flag of the Somali Youth League (SYL), Somalia's first political party.
Football is the most popular sport amongst Somalis. Important competitions are the Somalia League and Somalia Cup. The Ocean Stars is Somalia's multi-ethnic national team.
thumb|left|Olympic and world champion distance runner Mo Farah.
Basketball is also played in the country. The FIBA Africa Championship 1981 was hosted in Mogadishu from 15 to 23 December 1981, during which the national basketball team received the bronze medal. The squad also takes part in the basketball event at the Pan Arab Games. Other team sports include badminton, baseball, table tennis, and volleyball.
In the martial arts, Faisal Jeylani Aweys and Mohamed Deq Abdulle took home a silver medal and fourth place, respectively, at the 2013 Open World Taekwondo Challenge Cup in Tongeren. The Somali National Olympic Committee has devised a special support program to ensure continued success in future tournaments. Additionally, Mohamed Jama has won both world and European titles in K1 and Thai Boxing. Other individual sports include judo, boxing, athletics, weight lifting, swimming, rowing, fencing and wrestling.
Attire
thumb|Somali man wearing a macawis sarong.
When not dressed in Westernized clothing such as jeans and t-shirts, Somali men typically wear the macawis. It is a sarong that is worn around the waist. On their heads, they often wrap a colorful turban or wear the koofiyad, which is an embroidered fez.
Due to Somalia's proximity to and close ties with the Arabian Peninsula, many Somali men also wear the jellabiya (jellabiyad or qamiis). The costume is a long white garment common in the Arab world.Michigan State University. Northeast African Studies Committee, Northeast African Studies, Volume 8, (African Studies Center, Michigan State University: 2001), p.66.
thumb|left|upright|Somali woman in traditional garbasaar and shash.
During regular, day-to-day activities, Somali women usually wear the guntiino. It is a long stretch of cloth tied over the shoulder and draped around the waist. The cloth is usually made out of alandi, which is a textile that is common in the Horn region and some parts of North Africa. The garment can be worn in different styles. It can also be made with other fabrics, including white cloth with gold borders. For more formal settings, such as at weddings or religious celebrations like Eid, women wear the dirac. It is a long, light, diaphanous voile dress made of silk, chiffon, taffeta or saree fabric. The gown is worn over a full-length half-slip and a brassiere. Known as the gorgorad, the underskirt is made out of silk and serves as a key part of the overall outfit. The dirac is usually sparkly and very colorful, the most popular styles being those with gilded borders or threads.
Married women tend to sport headscarves referred to as shaash. They also often cover their upper body with a shawl, which is known as garbasaar. Unmarried or young women, however, do not always cover their heads. Traditional Arabian garb, such as the jilbab and abaya, is also commonly worn.
Additionally, Somali women have a long tradition of wearing gold jewelry, particularly bangles. During weddings, the bride is frequently adorned in gold. Many Somali women by tradition also wear gold necklaces and anklets.
Ethnic flag
thumb|upright|Somali woman wearing a Somali flag dress.
The Somali flag is an ethnic flag conceived to represent ethnic Somalis. It was created in 1954 by the Somali scholar Mohammed Awale Liban, after he had been selected by the labour trade union of the Trust Territory of Somalia to come up with a design. Upon independence in 1960, the flag was adopted as the national flag of the nascent Somali Republic. The five-pointed Star of Unity in the flag's center represents the Somali ethnic group inhabiting the five territories in Greater Somalia.
Cuisine
thumb|left|Canjeero, a subtle version of injera, is a staple of Somali cuisine.
Somali cuisine varies from region to region and consists of a fusion of diverse culinary influences. It is the product of Somalia's rich tradition of trade and commerce. Despite the variety, there remains one thing that unites the various regional cuisines: all food is served halal. There are therefore no pork dishes, alcohol is not served, nothing that died on its own is eaten, and no blood is incorporated.
Breakfast (quraac) is an important meal for Somalis, who often start the day with some style of tea (shahie) or coffee (qaxwa). The tea is often in the form of haleeb shai (Yemeni milk tea) in the north. The main dish is typically a pancake-like bread (canjeero or canjeelo) similar to Ethiopian injera, but smaller and thinner. It might also be eaten with a stew (maraqe) or soup.Abdullahi, pp.111-114. Qado or lunch is often elaborate. Varieties of bariis (rice), the most popular probably being basmati, usually serve as the main dish alongside goat or lamb. Spices like cumin, cardamom, cloves, cinnamon, and garden sage are used to aromatize these different rice delicacies. Somalis eat dinner as late as 9 pm. During Ramadan, supper is often served after Tarawih prayers; sometimes as late as 11 pm.
Xalwo (halva) is a popular confection eaten during festive occasions, such as Eid celebrations or wedding receptions. It is made from sugar, corn starch, cardamom powder, nutmeg powder and ghee. Peanuts are also sometimes added to enhance texture and flavor.Barlin Ali, Somali Cuisine, (AuthorHouse: 2007), p.79 After meals, homes are traditionally perfumed using frankincense (lubaan) or incense (cuunsi), which is prepared inside an incense burner referred to as a dabqaad.
Literature
thumb|Award-winning author Nuruddin Farah.
Somali scholars have for centuries produced many notable examples of Islamic literature ranging from poetry to Hadith. With the adoption of the Latin alphabet in 1972 to transcribe the Somali language, numerous contemporary Somali authors have also released novels, some of which have gone on to receive worldwide acclaim. Most of the early Somali literature is in the Arabic script and Wadaad writing. This usage was limited to Somali clerics and their associates, as sheikhs preferred to write in the liturgical Arabic language. Various such historical manuscripts in Somali nonetheless exist, which mainly consist of Islamic poems (qasidas), recitations and chants. Among these texts are the Somali poems by Sheikh Uways and Sheikh Ismaaciil Faarah. The rest of the existing historical literature in Somali principally consists of translations of documents from Arabic.
Authors and poets
Mohamed Ibrahim Warsame 'Hadrawi' – songwriter, philosopher, and Somali Poet Laureate; also dubbed the Somali Shakespeare.
Nuruddin Farah (born 1943) – Somali writer and winner of the 1998 Neustadt International Prize for Literature.
Abdillahi Suldaan Mohammed Timacade (1920–1973) – prominent Somali poet known for his nationalist poems such as Kana siib Kana Saar.
Mohamud Siad Togane (born 1943) – Somali-Canadian poet, professor, and political activist.
Maxamed Daahir Afrax – Somali novelist and playwright. Afrax has published several novels and short stories in Somali and Arabic, and has also written two plays, the first being Durbaan Been ah ("A Deceptive Drum"), which was staged in Somalia in 1979. His major contribution in the field of theatre criticism is Somali Drama: Historical and Critical Study (1987).
Nadifa Mohamed – Somali novelist. Winner of the 2010 Betty Trask Prize.
Farah Mohamed Jama Awl – famous Somali author best known for his historical fiction novels.
Diriye Osman – Somali writer and visual artist. Winner of the 2014 Polari First Book Prize.
Sofia Samatar – Somali professor and writer. Winner of the 2014 World Fantasy Award.
Law
thumb|The guurti (court) within the Xeer customary law was traditionally formed beneath an acacia tree.
Somalis for centuries have practiced a form of customary law, which they call Xeer. Xeer is a polycentric legal system where there is no monopolistic agent that determines what the law should be or how it should be interpreted. It is assumed to have developed exclusively in the Horn of Africa since approximately the 7th century. Given the dearth of loan words from foreign languages within the xeer's nomenclature, the customary law appears to have evolved in situ.
Xeer is defined by a few fundamental tenets that are immutable and which closely approximate the principle of jus cogens in international law: payment of blood money (locally referred to as diya or mag); assuring good inter-clan relations by treating women justly, negotiating with "peace emissaries" in good faith, and sparing the lives of socially protected groups (e.g. children, women, the pious, poets and guests); family obligations such as the payment of dowry and sanctions for eloping; rules pertaining to the management of resources such as the use of pasture land, water, and other natural resources; providing financial support to married female relatives and newlyweds; and donating livestock and other assets to the poor. The Xeer legal system also requires a certain amount of specialization of different functions within the legal framework. Thus, one can find odayal (judges), xeer boggeyaal (jurists), guurtiyaal (detectives), garxajiyaal (attorneys), murkhaatiyal (witnesses) and waranle (police officers) to enforce the law.
Architecture
Somali architecture is a rich and diverse tradition of engineering and design. It involves many different construction types, such as stone cities, castles, citadels, fortresses, mosques, mausoleums, towers, tombs, tumuli, cairns, megaliths, menhirs, stelae, dolmens, stone circles, monuments, temples, enclosures, cisterns, aqueducts, and lighthouses. Spanning the ancient, medieval and early modern periods in Greater Somalia, it also includes the fusion of Somali architecture with Western designs in contemporary times.
In ancient Somalia, pyramidal structures known in Somali as taalo were a popular burial style. Hundreds of these dry stone monuments are found around the country today. Houses were built of dressed stone similar to the ones in Ancient Egypt. There are also examples of courtyards and large stone walls enclosing settlements, such as the Wargaade Wall.
The peaceful introduction of Islam in the early medieval era of Somalia's history brought Islamic architectural influences from Arabia and Persia. This had the effect of stimulating a shift in construction from drystone and other related materials to coral stone, sundried bricks, and the widespread use of limestone in Somali architecture. Many of the new architectural designs, such as mosques, were built on the ruins of older structures. This practice continued throughout the following centuries.Diriye, p.102
Geographic distribution
thumb|A Somali-owned grocery in Columbus, Ohio.
Somalis constitute the largest ethnic group in Somalia, at approximately 85% of the nation's inhabitants. They also comprise around 60% of the inhabitants in Djibouti.
Civil strife in the early 1990s greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for the Middle East, Europe and North America.Somali Diaspora - Inner City Press In Canada, the cities of Toronto, Ottawa, Calgary, Edmonton, Montreal, Vancouver, Winnipeg and Hamilton all harbor Somali populations. Statistics Canada's 2006 census ranks people of Somali descent as the 69th largest ethnic group in Canada.
thumb|left|Somali women at a political function in Dubai, United Arab Emirates.
While the distribution of Somalis per country in Europe is hard to measure because the Somali community on the continent has grown so quickly in recent years, the Office for National Statistics estimates that 114,000 people born in Somalia were living in the United Kingdom in 2015. Somalis in Britain are largely concentrated in the cities of London, Sheffield, Bristol, Birmingham, Cardiff, Liverpool, Manchester, Leeds, and Leicester, with London alone accounting for roughly 78% of Britain's Somali population in 2001. There are also significant Somali communities in Sweden: 57,906 (2014); the Netherlands: 37,432 (2014); Norway: 38,413 (2015); Denmark: 18,645 (2014); and Finland: 16,721 (2014).
In the United States, the cities with the largest Somali populations include Minneapolis, Saint Paul, Columbus, San Diego, Seattle, Washington, D.C., Houston, Atlanta, Los Angeles, Portland (Oregon), Denver, Nashville, Green Bay, Lewiston and Portland (Maine), and Cedar Rapids.
thumb|Sign on Somali Road in the London Borough of Camden.
An estimated 20,000 Somalis emigrated to the U.S. state of Minnesota some ten years ago and the Twin Cities (Minneapolis and Saint Paul) now have the highest population of Somalis in North America.Mosedale, Mike (18 February 2004), "The Mall of Somalia", City Pages The city of Minneapolis hosts hundreds of Somali-owned and operated businesses offering a variety of products, including leather shoes, jewelry and other fashion items, halal meat, and hawala or money transfer services. Community-based video rental stores likewise carry the latest Somali films and music."Talking Point" by M. M. Afrah Minneapolis, Minnesota (USA) Aug., 12. 2004. The number of Somalis has especially surged in the Cedar-Riverside area of Minneapolis.
thumb|left|A Somali high school student in Cairo, Egypt.
There is a sizable Somali community in the United Arab Emirates. Somali-owned businesses line the streets of Deira, the Dubai city centre, with only Iranians exporting more products from the city at large. Internet cafés, hotels, coffee shops, restaurants and import-export businesses are all testimony to the Somalis' entrepreneurial spirit. Star African Air is also one of three Somali-owned airlines which are based in Dubai.
Besides their traditional areas of inhabitation in Greater Somalia, a Somali community mainly consisting of entrepreneurs, academics, and students also exists in Egypt.Somalia: How is the fate of the Somalis in Egypt? In addition, there is an historical Somali community in the general Sudan area. Primarily concentrated in the north and Khartoum, the expatriate community mainly consists of students as well as some businesspeople.The History of Somali Communities in the Sudan since the First World War More recently, Somali entrepreneurs have established themselves in Kenya, investing over $1.5 billion in the Somali enclave of Eastleigh alone.Help Locals Rebuild Their Country By Ensuring World Attention And Peace In South Africa, Somali businesspeople also provide most of the retail trade in informal settlements around the Western Cape province.
Notable individuals of the diaspora
thumb|Politician and diplomat Yusuf Hassan Abdi.
Abdi Yusuf Hassan – Somali politician, diplomat and journalist. Former Director of IRIN and UNHCR Head of External and Media Relations in Southwest and Central Asia.
Abdirahim Hussein Mohamed – Somali politician. Elected Chairman of the Helsinki Centre Youth in 2007 and Chairman of the Moniheli cooperation network for multicultural organizations.
Abdirashid Duale – award-winning Somali entrepreneur, philanthropist, and the CEO of the multinational enterprise Dahabshiil.
Abdulqawi Yusuf – Prominent Somali international lawyer and judge with the International Court of Justice.
Adan Mohammed – Somali banker, entrepreneur and politician. He previously served as the Managing Director of Barclays Bank in East and West Africa and is currently the Cabinet Secretary for Industrialization of Kenya.
Ahmed Hussen – Somali lawyer. President of the Canadian Somali Congress.
Ali Said Faqi – Somali scientist and the leading researcher on the design and interpretation of toxicology studies at the MPI research center in Mattawan, Michigan.
Amina Moghe Hersi – Award-winning Somali entrepreneur who has launched several multimillion-dollar projects in Kampala, Uganda, such as the Oasis Centre luxury mall and the Laburnam Courts. She also runs Kingstone Enterprises Limited, one of the largest distributors of cement and other hardware materials in Kampala.
thumb|International lawyer Amina Mohamed.
Amina Mohamed – Somali lawyer and politician. Former Chairman of the International Organization for Migration and the World Trade Organisation's General Council, and current Secretary for Foreign Affairs of Kenya.
Ayaan and Idyl Mohallim – Somali twin fashion designers and owners of the Mataano brand.
Ayaan Hirsi Ali – Feminist and atheist activist, writer and politician known for her views critical of Islam and female circumcision.
Ayub Daud – Somali international footballer who plays as a forward/attacking midfielder for FC Crotone on loan from Juventus.
Faisal Hawar – Somali engineer and entrepreneur. Chairman of the International Somalia Development Foundation and the Maakhir Resource Company.
Halima Ahmed – Somali political activist with the Youth Rehabilitation Center and prospective candidate in the Federal Parliament of Somalia.
Hanan Ibrahim – Somali social activist. Received the Queen's Award for Voluntary Service in 2004 and was made an MBE in 2010.
Hassan Abdillahi – Somali journalist. President of Ogaal Radio, the largest Somali community station in Canada.
Hawa Ahmed – Somali-Swedish fashion model and winner of Cycle 4 of Sweden's Next Top Model.
thumb|Designers Ayaan and Idyl Mohallim.
Hibaaq Osman – Somali political strategist. Founder and Chairperson of the ThinkTank for Arab Women, the Dignity Fund, and Karama.
Hodan Ahmed – Somali political activist and Senior Program Officer at the National Democratic Institute.
Hodan Nalayeh – Somali media executive and entrepreneur. President of the Cultural Integration Agency and the Vice President of Sales & Programming Development of Cameraworks Productions International.
Idil Ibrahim – Somali-American film director, writer and producer. Founder of Zeila Films.
Iman Mohamed Abdulmajid – international fashion icon, supermodel, actress and entrepreneur; professionally known as Iman.
Jawahir Ahmed – Somali-American model. Served as Miss Somalia in 2013 Miss United Nations USA pageant.
Leila Abukar – Somali-Australian political activist. Recipient of the Centenary Medal.
Mohamed Abdullahi Mohamed (Farmajo) – Somali politician and diplomat. Former Prime Minister of Somalia and founder of the Tayo Political Party.
Musse Olol – Somali-American social activist. Recipient of the 2011 Director's Community Leadership Award.
Mustafa Mohamed – Somali-Swedish long-distance runner who mainly competes in the 3,000-meter steeplechase. Won gold at the 2006 Nordic Cross Country Championships and at the 1st SPAR European Team Championships in Leiria, Portugal, in 2009. Broke the 31-year-old Swedish record in 2007.
thumb|Entrepreneur Faisal Hawar.
Nathif Jama Adam – Somali banker and politician. Former Senior Vice President and the Head of the Sharjah Islamic Bank's Investments & International Banking Division, and Governor of Garissa County.
Omar Abdi Ali – Somali entrepreneur, accountant, financial consultant, philanthropist, and specialist on Islamic finance. Was formerly CEO of Dar al-Maal al-Islami (DMI Trust), which under his management increased its assets from $1.6 billion to $4.0 billion. He is currently the chairman and founder of the multinational real estate corporation Integrated Property Investments Limited and its sister company Quadron investments.
Rageh Omaar – Somali-British television news presenter and writer. Formerly a BBC news correspondent, in 2009 he moved to a new post at Al Jazeera English, where he currently presents the nightly weekday documentary series Witness.
Sulekha Ali – Somali-Canadian musician.
Waris Dirie – Somali model, author, actress, and social activist. UN Special Ambassador from 1997 to 2003.
Yasmin Warsame – Somali-Canadian model who was named "The Most Alluring Canadian" in a poll by Fashion magazine.
Zahra Abdulla – Somali politician in Finland and member of the Helsinki City Council representing the Green League.
Genetics
Y-DNA
thumb|A Somali man in a traditional taqiyah.
According to Y chromosome studies by Sanchez et al. (2005) and Cruciani et al. (2004, 2007), the Somalis are paternally closely related to other Afro-Asiatic-speaking groups in Northeast Africa.Sanchez et al., High frequencies of Y chromosome lineages characterized by E3b1, DYS19-11, DYS392-12 in Somali males, Eu J of Hum Genet (2005) 13, 856–866Cruciani et al., "Phylogeographic Analysis of Haplogroup E3b (E-M215) Y Chromosomes Reveals Multiple Migratory Events Within and Out Of Africa", Am J Hum Genet. 2004 May; 74(5): 1014–1022 Also see Supplementary Data. Besides comprising the majority of the Y-DNA in Somalis, the E1b1b (formerly E3b) haplogroup also makes up a significant proportion of the paternal DNA of Ethiopians, Sudanese, Egyptians, Berbers, North African Arabs, as well as many Mediterranean populations. Sanchez et al. (2005) observed the M78 (E1b1b1a1) subclade of E1b1b in about 77.6% of their Somali male samples. According to Cruciani et al. (2007), the presence of this subhaplogroup in the Horn region may represent the traces of an ancient migration from Egypt/Libya.
After haplogroup E1b1b, the second most frequently occurring Y-DNA haplogroup among Somalis is the West Asian haplogroup T (M184). The clade is observed in more than 10% of Somali males generally, with a frequency peak of 82.4% among Somalis in Dire Dawa. Haplogroup T, like haplogroup E1b1b, is also typically found among other populations of Northeast Africa, the Maghreb, the Near East and the Mediterranean.
mtDNA
thumb|right|A Somali schoolgirl.
According to mtDNA studies by Holden (2005) and Richards et al. (2006), a significant proportion of the maternal lineages of Somalis consists of the M1 haplogroup.Hans-Jürgen Bandelt, Vincent Macaulay, Martin Richards, Human mitochondrial DNA and the evolution of Homo sapiens, Volume 18 of Nucleic acids and molecular biology, (Springer Japan: 2006), p.235.AD. Holden (2005), MtDNA variation in North, East, and Central African populations gives clues to a possible back-migration from the Middle East, Program of the Seventy-Fourth Annual Meeting of the American Association of Physical Anthropologists (2005) This mitochondrial clade is common among Ethiopians and North Africans, particularly Egyptians and Algerians. M1 is believed to have originated in Asia,Gonzalez et al., Mitochondrial lineage M1 traces an early human backflow to Africa, BMC Genomics 2007, 8:223 where its parent M clade represents the majority of mtDNA lineages.Ghezzi et al. (2005), Mitochondrial DNA haplogroup K is associated with a lower risk of Parkinson's disease in Italians, European Journal of Human Genetics (2005) 13, 748–752. This haplogroup is also thought to possibly correlate with the Afro-Asiatic language family:
"We analysed mtDNA variation in ~250 persons from Libya, Somalia, and Congo/Zambia, as representatives of the three regions of interest. Our initial results indicate a sharp cline in M1 frequencies that generally does not extend into sub-Saharan Africa. While our North and especially East African samples contained frequencies of M1 over 20%, our sub-Saharan samples consisted almost entirely of the L1 or L2 haplogroups only. In addition, there existed a significant amount of homogeneity within the M1 haplogroup. This sharp cline indicates a history of little admixture between these regions. This could imply a more recent ancestry for M1 in Africa, as older lineages are more diverse and widespread by nature, and may be an indication of a back-migration into Africa from the Middle East."
Autosomal DNA
thumb|A young Somali man.
According to an autosomal DNA study by Hodgson et al. (2014), the Afro-Asiatic languages were likely spread across Africa and the Near East by an ancestral population(s) carrying a newly identified non-African genetic component, which the researchers dub the "Ethio-Somali". This Ethio-Somali component is today most common among Afro-Asiatic-speaking populations in the Horn of Africa. It reaches a frequency peak among ethnic Somalis, representing the majority of their ancestry. The Ethio-Somali component is most closely related to the Maghrebi non-African genetic component, and is believed to have diverged from all other non-African ancestries at least 23,000 years ago. On this basis, the researchers suggest that the original Ethio-Somali carrying population(s) probably arrived in the pre-agricultural period from the Near East, having crossed over into northeastern Africa via the Sinai Peninsula. The population then likely split into two branches, with one group heading westward toward the Maghreb and the other moving south into the Horn.Jason A. Hodgson, Connie J. Mulligan, Ali Al-Meeri, Ryan L. Raaum. Early Back-to-Africa Migration into the Horn of Africa. PLoS Genetics, 12 June 2014. DOI: 10.1371/journal.pgen.1004393 Ancient DNA analysis indicates that this foundational ancestry in the Horn region is akin to that of the Neolithic farmers of the southern Levant.
HLA antigens
The analysis of HLA antigens has also helped clarify the possible background of the Somali people, as the distribution of haplotype frequencies varies among population groups.Zachary et al., The Frequencies of HLA Alleles and Haplotypes and Their Distribution Among Donors and Renal Patients in the UNOS Registry 1, Transplantation: 27 July 1996 - Volume 62 - Issue 2 - pp 272-283, Immunogenetics, Histocompatibility, and Tissue Antigens. According to Mohamoud et al. (2006):A. M. Mohamoud, P52 Characteristics of HLA Class I and Class II Antigens of the Somali Population, Transfusion Medicine, Volume 16, Issue Supplement s1, page 47, October 2006
"HLA antigens of the Somali population are not categorised as well as those of other international ethnic groups. We analysed the HLA antigens of 76 unrelated Somalis who lived in the west of England. HLA -A, -B, -C and DRB1 typing was performed by polymerase chain reaction using sequence-specific oligonucleotide probes (PCR-SSOP) at a low-intermediate resolution level. Phenotype frequency, gene frequency and haplotype frequency were used to study the relationship between Somalis and other relevant populations. The antigens with highest frequencies were HLA -A1, A2, and A30; B7, B51 and B39; Cw7, Cw16, Cw17, Cw15 and Cw18; DR 13, DR17, DR8 and DR1. HLA haplotypes with high significance and characteristics of the Somali population are B7-Cw7, B39-Cw12, B51-Cw16, B57-Cw18. The result of HLA class I and class II antigen frequencies show that the Somali population appear more similar to Arab or Caucasoid than to African populations. The results are consistent with hypothesis, supported by cultural and historical evidence, of common origin of the Somali population."
Somali studies
thumb|right|Pioneering Somali Studies scholar, Osman Yusuf Kenadid.
Research concerning Somalis and Greater Somalia is known as Somali Studies. It consists of several disciplines such as anthropology, sociology, linguistics, historiography and archaeology. The field draws from old Somali chronicles, records and oral literature, in addition to written accounts and traditions about Somalis from explorers and geographers in the Horn of Africa and the Middle East. Since 1980, prominent Somalist scholars from around the world have also gathered annually to hold the International Congress of Somali Studies.
Somalist scholars
Osman Yusuf Kenadid – Pioneering scholar and writer on Somali history and science. Inventor of the Osmanya script and author of several textbooks on Somali language, astronomy, geography and philosophy.
Musa Haji Ismail Galal – Somali writer, scholar and linguist. One of the foremost historical authorities on the Somali astronomical, astrological, meteorological and calendrical systems.
Said Sheikh Samatar – Somali scholar and writer. Main areas of interest are linguistics and sociology.
Mohamed Haji Mukhtar – Somali Professor of African & Middle Eastern History at Savannah State University. Has written extensively on the history of Somalia and the Somali language.
Mohamed Diriye Abdullahi – Somali scholar, linguist and writer. Published on Somali culture, history, language and ethnogenesis.
Ali Jimale Ahmed – Somali poet, essayist, scholar, and short story writer. Published on Somali history and linguistics.
Abdi Mohamed Kusow – Somali Associate Professor of Sociology at Iowa State in Ames, Iowa. Has written extensively on Somali sociology and anthropology. He is listed in Marquis Who's Who in America.
Ahmed Ismail Samatar – Somali professor and dean of the Institute for Global Citizenship at Macalester College. He is the editor of Bildhaan: An International Journal of Somali Studies.
See also
Culture of Somalia
Demographics of Somalia
Greater Somalia
References
External links
Ethnologue population estimates for Somali speakers
US Library of Congress Country Study of Somalia
Category:Cushitic-speaking peoples
Category:Ethnic groups in Djibouti
Category:Ethnic groups in Ethiopia
Category:Ethnic groups in the Arab League
Category:Ethnic groups in Africa
Category:Muslim communities in Africa | 1,571,696 | 2017-01 |
University | thumb|300px|Degree ceremony at the University of Oxford. The Pro-Vice-Chancellor in MA gown and hood, Proctor in official dress and new Doctors of Philosophy in scarlet full dress. Behind them, a bedel, a Doctor and Bachelors of Arts and Medicine graduate.
A university (, "a whole", "a corporation"Fortescue, J. 8 Mod. 163) is an institution of higher (or tertiary) education and research which grants academic degrees in various subjects.
Universities typically provide undergraduate education and postgraduate education.
The word "university" is derived from the Latin universitas magistrorum et scholarium, which roughly means "community of teachers and scholars." Universities were created in Italy and evolved from Cathedral schools for the clergy during the High Middle Ages.Charles H. Haskins. “The Life of Medieval Students as Illustrated by their Letters”. The American Historical Review 3.2 (1898): 203–229.
History
Definition
thumb|right|300px|The University of Bologna in Italy, founded in 1088, is the oldest university in the world, the word university (Latin: universitas) having been coined at its foundation.
The original Latin word "universitas" refers in general to "a number of persons associated into one body, a society, company, community, guild, corporation, etc." At the time of the emergence of urban town life and medieval guilds, specialized "associations of students and teachers with collective legal rights usually guaranteed by charters issued by princes, prelates, or the towns in which they were located" came to be denominated by this general term. Like other guilds, they were self-regulating and determined the qualifications of their members.Marcia L. Colish, Medieval Foundations of the Western Intellectual Tradition, 400-1400, (New Haven: Yale Univ. Pr., 1997), p. 267.
In modern usage the word has come to mean "An institution of higher education offering tuition in mainly non-vocational subjects and typically having the power to confer degrees," with the earlier emphasis on its corporate organization considered as applying historically to Medieval universities.
The original Latin word referred to degree-granting institutions of learning in Western and Central Europe, where this form of legal organisation was prevalent, and from where the institution spread around the world.
Academic freedom
An important idea in the definition of a university is the notion of academic freedom. The first documentary evidence of this comes from early in the life of the first university. The University of Bologna adopted an academic charter, the Constitutio Habita,Malagola, C. (1888), Statuti delle Università e dei Collegi dello Studio Bolognese. Bologna: Zanichelli. in 1158 or 1155,Rüegg, W. (2003), Mythologies and Historiography of the Beginnings, pp 4-34 in H. De Ridder-Symoens, editor, A History of the University in Europe; Vol 1, Cambridge University Press. which guaranteed the right of a traveling scholar to unhindered passage in the interests of education. Today this is claimed as the origin of "academic freedom".Watson, P. (2005), Ideas. London: Weidenfeld and Nicolson, page 373 This is now widely recognised internationally - on 18 September 1988, 430 university rectors signed the Magna Charta Universitatum, marking the 900th anniversary of Bologna's foundation. The number of universities signing the Magna Charta Universitatum continues to grow, drawing from all parts of the world.
Medieval universities
European higher education took place for hundreds of years in Christian cathedral schools or monastic schools (scholae monasticae), in which monks and nuns taught classes; evidence of these immediate forerunners of the later university at many places dates back to the 6th century.Riché, Pierre (1978): "Education and Culture in the Barbarian West: From the Sixth through the Eighth Century", Columbia: University of South Carolina Press, ISBN 0-87249-376-8, pp. 126-7, 282-98 The earliest universities were developed under the aegis of the Latin Church by papal bull as studia generalia and perhaps from cathedral schools. It is possible, however, that the development of cathedral schools into universities was quite rare, with the University of Paris being an exception.Gordon Leff, Paris and Oxford Universities in the Thirteenth and Fourteenth Centuries. An Institutional and Intellectual History, Wiley, 1968. Later they were also founded by Kings (University of Naples Federico II, Charles University in Prague, Jagiellonian University in Kraków) or municipal administrations (University of Cologne, University of Erfurt). In the early medieval period, most new universities were founded from pre-existing schools, usually when these schools were deemed to have become primarily sites of higher education. Many historians state that universities and cathedral schools were a continuation of the interest in learning promoted by monasteries.Johnson, P. (2000). The Renaissance : a short history. Modern Library chronicles (Modern Library ed.). New York: Modern Library, p. 9.
The first universities in Europe with a form of corporate/guild structure were the University of Bologna (1088), the University of Paris (c.1150, later associated with the Sorbonne), and the University of Oxford (1167).
The University of Bologna began as a law school teaching the ius gentium or Roman law of peoples which was in demand across Europe for those defending the right of incipient nations against empire and church. Bologna's special claim to Alma Mater Studiorum is based on its autonomy, its awarding of degrees, and other structural arrangements, making it the oldest continuously operating institution independent of kings, emperors or any kind of direct religious authority.Makdisi, G. (1981), Rise of Colleges: Institutions of Learning in Islam and the West. Edinburgh: Edinburgh University Press.Daun, H. and Arjmand, R. (2005), Islamic Education, pp 377-388 in J. Zajda, editor, International Handbook of Globalisation, Education and Policy Research. Netherlands: Springer.
thumb|left|Meeting of doctors at the University of Paris. From a medieval manuscript.
The conventional date of 1088, or 1087 according to some,Huff, T. (2003), The Rise of Early Modern Science. Cambridge University Press, p. 122 records when Irnerius commenced teaching Emperor Justinian's 6th century codification of Roman law, the Corpus Iuris Civilis, recently discovered at Pisa. Lay students arrived in the city from many lands, entering into a contract to gain this knowledge, organising themselves into 'Nationes', divided between that of the Cismontanes and that of the Ultramontanes. The students "had all the power … and dominated the masters".Kerr, C. (2001), The Uses of the University. Harvard University Press, pp. 16 and 145Rüegg, W. (2003), Mythologies and Historiography of the Beginnings, pp 4-34 in H. De Ridder-Symoens, editor, A History of the University in Europe; Vol 1, Cambridge University Press, p. 12
In Europe, young men proceeded to university when they had completed their study of the trivium–the preparatory arts of grammar, rhetoric and dialectic or logic–and the quadrivium: arithmetic, geometry, music, and astronomy.
All over Europe rulers and city governments began to create universities to satisfy a European thirst for knowledge, and the belief that society would benefit from the scholarly expertise generated from these institutions. Princes and leaders of city governments perceived the potential benefits of having a scholarly expertise develop with the ability to address difficult problems and achieve desired ends. The emergence of humanism was essential to this understanding of the possible utility of universities as well as the revival of interest in knowledge gained from ancient Greek texts.Grendler, P. F. (2004). "The universities of the Renaissance and Reformation". Renaissance Quarterly, 57, pp. 2.
The rediscovery of Aristotle's works–more than 3,000 pages of them would eventually be translated–fuelled a spirit of inquiry into natural processes that had already begun to emerge in the 12th century. Some scholars believe that these works represented one of the most important document discoveries in Western intellectual history.Rubenstein, R. E. (2003). Aristotle's children: how Christians, Muslims, and Jews rediscovered ancient wisdom and illuminated the dark ages (1st ed.). Orlando, Florida: Harcourt, pp. 16-17. Richard Dales, for instance, calls the discovery of Aristotle's works "a turning point in the history of Western thought."Dales, R. C. (1990). Medieval discussions of the eternity of the world (Vol. 18). Brill Archive, p. 144. After Aristotle re-emerged, a community of scholars, primarily communicating in Latin, accelerated the process and practice of attempting to reconcile the thoughts of Greek antiquity, and especially ideas related to understanding the natural world, with those of the church. The efforts of this "scholasticism" were focused on applying Aristotelian logic and thoughts about natural processes to biblical passages and attempting to prove the viability of those passages through reason. This became the primary mission of lecturers, and the expectation of students.
The university culture developed differently in northern Europe than it did in the south, although the northern (primarily Germany, France and Great Britain) and southern universities (primarily Italy) did have many elements in common. Latin was the language of the university, used for all texts, lectures, disputations and examinations. Professors lectured on the books of Aristotle for logic, natural philosophy, and metaphysics; while Hippocrates, Galen, and Avicenna were used for medicine. Outside of these commonalities, great differences separated north and south, primarily in subject matter. Italian universities focused on law and medicine, while the northern universities focused on the arts and theology. There were distinct differences in the quality of instruction in these areas which were congruent with their focus, so scholars would travel north or south based on their interests and means. There was also a difference in the types of degrees awarded at these universities. English, French and German universities usually awarded bachelor's degrees, with the exception of degrees in theology, for which the doctorate was more common. Italian universities awarded primarily doctorates. The distinction can be attributed to the intent of the degree holder after graduation – in the north the focus tended to be on acquiring teaching positions, while in the south students often went on to professional positions.Grendler, P. F. (2004). "The universities of the Renaissance and Reformation". Renaissance Quarterly, 57, pp. 2-8. The structure of northern universities tended to be modeled after the system of faculty governance developed at the University of Paris. Southern universities tended to be patterned after the student-controlled model begun at the University of Bologna.Scott, J. C. (2006). The mission of the university: Medieval to Postmodern transformations. Journal of Higher Education, 77(1), p. 6. Among the southern universities, a further distinction has been noted between those of northern Italy, which followed the pattern of Bologna as a "self-regulating, independent corporation of scholars" and those of southern Italy and Iberia, which were "founded by royal and imperial charter to serve the needs of government."
Their endowment by a prince or monarch and their role in training government officials made these Mediterranean universities similar to Islamic madrasas, although madrasas were generally smaller and individual teachers, rather than the madrasa itself, granted the license or degree. Scholars like Arnold H. Green and Hossein Nasr have argued that starting in the 10th century, some medieval Islamic madrasahs became universities. George Makdisi and others, however, argue that the European university has no parallel in the medieval Islamic world.George Makdisi: "Madrasa and University in the Middle Ages", Studia Islamica, No. 32 (1970), pp. 255-264 (264): Toby Huff, Rise of Early Modern Science: Islam, China and the West, 2nd ed., Cambridge 2003, ISBN 0-521-52994-8, p. 133-139, 149-159, 179-189; Encyclopaedia of Islam has an entry on the "madrasa" but lacks notably one for a medieval Muslim "university" (Pedersen, J.; Rahman, Munibur; Hillenbrand, R. "Madrasa." Encyclopaedia of Islam, Second Edition. Edited by: P. Bearman , Th. Bianquis , C.E. Bosworth , E. van Donzel and W.P. Heinrichs. Brill, 2010, retrieved 21 March 2010) Other scholars regard the university as uniquely European in origin and characteristics.Rüegg, Walter: "Foreword. The University as a European Institution", in: A History of the University in Europe. Vol. 1: Universities in the Middle Ages, Cambridge University Press, 1992, ISBN 0-521-36105-2, pp. XIX–XX
Heidelberg University is the oldest university in Germany and among Europe's best ranked.Rankings: Universität Heidelberg in International Comparison - Top Position in Germany, Leading Role in Europe (Heidelberg University) It was established in 1386.
Many scholars (including Makdisi) have argued that early medieval universities were influenced by the religious madrasahs in Al-Andalus, the Emirate of Sicily, and the Middle East (during the Crusades). Other scholars see this argument as overstated. Lowe and Yasuhara have recently drawn on the well-documented influences of scholarship from the Islamic world on the universities of Western Europe to call for a reconsideration of the development of higher education, turning away from a concern with local institutional structures to a broader consideration within a global context.
Early modern universities
During the Early Modern period (approximately late 15th century to 1800), the universities of Europe would see a tremendous amount of growth, productivity and innovative research. At the end of the Middle Ages, about 400 years after the first university was founded, there were twenty-nine universities spread throughout Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.Grendler, P. F. (2004). The universities of the Renaissance and Reformation. Renaissance Quarterly, 57, pp. 1-3. This pace continued until by the end of the 18th century there were approximately 143 universities in Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and Spain (23) – nearly a fivefold increase over the number of universities at the end of the Middle Ages. This number does not include the numerous universities that disappeared, or institutions that merged with other universities during this time.Frijhoff, W. (1996). Patterns. In H. D. Ridder-Symoens (Ed.), Universities in early modern Europe, 1500-1800, A history of the university in Europe. Cambridge [England]: Cambridge University Press, p. 75. The identification of a university was not necessarily obvious during the Early Modern period, as the term came to be applied to a burgeoning number of institutions. In fact, the term "university" was not always used to designate a higher education institution. In Mediterranean countries, the term studium generale was still often used, while "Academy" was common in Northern European countries.Frijhoff, W. (1996). Patterns. In H. D. Ridder-Symoens (Ed.), Universities in early modern Europe, 1500-1800, A history of the university in Europe. Cambridge [England]: Cambridge University Press, p. 47.
17th-century classroom at the University of Salamanca
The propagation of universities was not necessarily a steady progression, as the 17th century was rife with events that adversely affected university expansion. Many wars, and especially the Thirty Years' War, disrupted the university landscape throughout Europe at different times. War, plague, famine, regicide, and changes in religious power and structure often adversely affected the societies that provided support for universities. Internal strife within the universities themselves, such as student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements in science and the arts.Grendler, P. F. (2004). The universities of the Renaissance and Reformation. Renaissance Quarterly, 57, p. 23. This era was also affected by the rise of the nation-state. As universities increasingly came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University of Paris) became more and more prominent. Although the older student-controlled universities still existed, they slowly started to move toward this structural organization. Control of universities still tended to be independent, although university leadership was increasingly appointed by the state.Scott, J. C. (2006). The mission of the university: Medieval to Postmodern transformations. Journal of Higher Education, 77(1), pp. 10-13.
Although the structural model provided by the University of Paris, where student members are controlled by faculty "masters," provided a standard for universities, the application of this model took at least three different forms. There were universities that had a system of faculties whose teaching addressed a very specific curriculum; this model tended to train specialists. There was a collegiate or tutorial model based on the system at University of Oxford where teaching and organization was decentralized and knowledge was more of a generalist nature. There were also universities that combined these models, using the collegiate model but having a centralized organization.Frijhoff, W. (1996). Patterns. In H. D. Ridder-Symoens (Ed.), Universities in early modern Europe, 1500-1800, A history of the university in Europe. Cambridge [England]: Cambridge University Press, p. 65.
Early Modern universities initially continued the curriculum and research of the Middle Ages: natural philosophy, logic, medicine, theology, mathematics, astronomy (and astrology), law, grammar and rhetoric. Aristotle was prevalent throughout the curriculum, while medicine also depended on Galen and Arabic scholarship. The importance of humanism in changing this state of affairs cannot be overstated.Ruegg, W. (1992). Epilogue: the rise of humanism. In H. D. Ridder-Symoens (Ed.), Universities in the Middle Ages, A history of the university in Europe. Cambridge [England]: Cambridge University Press. Once humanist professors joined the university faculty, they began to transform the study of grammar and rhetoric through the studia humanitatis. Humanist professors focused on the ability of students to write and speak with distinction, to translate and interpret classical texts, and to live honorable lives.Grendler, P. F. (2002). The universities of the Italian renaissance. Baltimore: Johns Hopkins University Press, p. 223. Other scholars within the university were affected by the humanist approaches to learning and their linguistic expertise in relation to ancient texts, as well as the ideology that advocated the ultimate importance of those texts.Grendler, P. F. (2002). The universities of the Italian renaissance. Baltimore: Johns Hopkins University Press, p. 197. Professors of medicine such as Niccolò Leoniceno, Thomas Linacre and William Cop were often trained in and taught from a humanist perspective, and also translated important ancient medical texts. The critical mindset imparted by humanism was crucial for changes in universities and scholarship. For instance, Andreas Vesalius was educated in a humanist fashion before producing a translation of Galen, whose ideas he verified through his own dissections. In law, Andreas Alciatus infused the Corpus Juris with a humanist perspective, while Jacques Cujas's humanist writings were paramount to his reputation as a jurist. Philipp Melanchthon cited the works of Erasmus as a highly influential guide for connecting theology back to original texts, which was important for the reform at Protestant universities.Ruegg, W. (1996). Themes. In H. D. Ridder-Symoens (Ed.), Universities in Early Modern Europe, 1500-1800, A history of the university in Europe. Cambridge [England]: Cambridge University Press, pp. 33-39. Galileo Galilei, who taught at the Universities of Pisa and Padua, and Martin Luther, who taught at the University of Wittenberg (as did Melanchthon), also had humanist training. The task of the humanists was to slowly permeate the university; to increase the humanist presence in professorships and chairs, syllabi and textbooks so that published works would demonstrate the humanistic ideal of science and scholarship.Grendler, P. F. (2004). The universities of the Renaissance and Reformation. Renaissance Quarterly, 57, pp. 12-13.
Although the initial focus of the humanist scholars in the university was the discovery, exposition and insertion of ancient texts and languages into the university, and the ideas of those texts into society generally, their influence was ultimately quite progressive. The emergence of classical texts brought new ideas and led to a more creative university climate (as the notable list of scholars above attests). A focus on knowledge coming from self, from the human, had direct implications for new forms of scholarship and instruction, and was the foundation for what is commonly known as the humanities. This disposition toward knowledge manifested itself not simply in the translation and propagation of ancient texts, but also in their adaptation and expansion. For instance, Vesalius was instrumental in advocating the use of Galen, but he also invigorated this text with experimentation, disagreements and further research.Bylebyl, J. J. (2009). Disputation and description in the renaissance pulse controversy. In A. Wear, R. K. French, & I. M. Lonie (Eds.), The medical renaissance of the sixteenth century (1st ed., pp. 223-245). Cambridge University Press. The propagation of these texts, especially within the universities, was greatly aided by the emergence of the printing press and the beginning of the use of the vernacular, which allowed for the printing of relatively large texts at reasonable prices.Füssel, S. (2005). Gutenberg and the Impact of Printing (English ed.). Aldershot, Hampshire: Ashgate Pub., p. 145.
Examining the influence of humanism on scholars in medicine, mathematics, astronomy and physics may suggest that humanism and universities were a strong impetus for the scientific revolution. Although the connection between humanism and the scientific discovery may very well have begun within the confines of the university, the connection has been commonly perceived as having been severed by the changing nature of science during the scientific revolution. Historians such as Richard S. Westfall have argued that the overt traditionalism of universities inhibited attempts to re-conceptualize nature and knowledge and caused an indelible tension between universities and scientists.Westfall, R. S. (1977). The construction of modern science: mechanisms and mechanics. Cambridge: Cambridge University Press, p. 105. This resistance to changes in science may have been a significant factor in driving many scientists away from the university and toward private benefactors, usually in princely courts, and associations with newly forming scientific societies.Ornstein, M. (1928). The role of scientific societies in the seventeenth century. Chicago, IL: University of Chicago Press.
Other historians find incongruity in the proposition that the very place where the vast majority of the scholars who influenced the scientific revolution received their education should also be the place that inhibited their research and the advancement of science. In fact, more than 80% of the European scientists between 1450 and 1650 included in the Dictionary of Scientific Biography were university trained, of which approximately 45% held university posts.Gascoigne, J. (1990). A reappraisal of the role of the universities in the Scientific Revolution. In D. C. Lindberg & R. S. Westman (Eds.), Reappraisals of the Scientific Revolution, pp. 208-209. The academic foundations remaining from the Middle Ages were stable, and they provided an environment that fostered considerable growth and development. There was considerable reluctance on the part of universities to relinquish the symmetry and comprehensiveness provided by the Aristotelian system, which was effective as a coherent system for understanding and interpreting the world. However, university professors still exercised some autonomy, at least in the sciences, to choose epistemological foundations and methods. For instance, Melanchthon and his disciples at the University of Wittenberg were instrumental in integrating Copernican mathematical constructs into astronomical debate and instruction. Another example was the short-lived but fairly rapid adoption of Cartesian epistemology and methodology in European universities, and the debates surrounding that adoption, which led to more mechanistic approaches to scientific problems and demonstrated an openness to change. There are many examples which belie the commonly perceived intransigence of universities.Gascoigne, J. (1990). A reappraisal of the role of the universities in the Scientific Revolution. In D. C. Lindberg & R. S. Westman (Eds.), Reappraisals of the Scientific Revolution, pp. 210-229. Although universities may have been slow to accept new sciences and methodologies as they emerged, when they did accept new ideas it helped to convey legitimacy and respectability, and supported the scientific changes by providing a stable environment for instruction and material resources.Gascoigne, J. (1990). A reappraisal of the role of the universities in the Scientific Revolution. In D. C. Lindberg & R. S. Westman (Eds.), Reappraisals of the Scientific Revolution, pp. 245-248.
Regardless of the way the tension between universities, individual scientists, and the scientific revolution itself is perceived, there was a discernible impact on the way that university education was constructed. Aristotelian epistemology provided a coherent framework not simply for knowledge and knowledge construction, but also for the training of scholars within the higher education setting. The creation of new scientific constructs during the scientific revolution, and the epistemological challenges that were inherent within this creation, initiated the idea of both the autonomy of science and the hierarchy of the disciplines. Instead of entering higher education to become a "general scholar" proficient in the entire curriculum, there emerged a type of scholar who put science first and viewed it as a vocation in itself. The divergence between those focused on science and those still entrenched in the idea of a general scholar exacerbated the epistemological tensions that were already beginning to emerge.Feingold, M. (1991). Tradition vs novelty: universities and scientific societies in the early modern period. In P. Barker & R. Ariew (Eds.), Revolution and continuity: essays in the history and philosophy of early modern science, Studies in philosophy and the history of philosophy. Washington, D.C: Catholic University of America Press, pp. 53-54.
The epistemological tensions between scientists and universities were also heightened by the economic realities of research during this time, as individual scientists, associations and universities were vying for limited resources. There was also competition from the formation of new colleges funded by private benefactors and designed to provide free education to the public, or established by local governments to provide a knowledge-hungry populace with an alternative to traditional universities.Feingold, M. (1991). Tradition vs novelty: universities and scientific societies in the early modern period. In P. Barker & R. Ariew (Eds.), Revolution and continuity: essays in the history and philosophy of early modern science, Studies in philosophy and the history of philosophy. Washington, D.C: Catholic University of America Press, pp. 46-50. Even when universities supported new scientific endeavors and provided foundational training and authority for the research and its conclusions, they could not compete with the resources available through private benefactors.
Universities in northern Europe were more willing to accept the ideas of the Enlightenment and were often greatly influenced by them. For instance, the historical ensemble of the University of Tartu in Estonia, which was erected around that time, is now included in the European Heritage Label list as an example of a university in the Age of EnlightenmentCulture: Nine European historical sites now on the European Heritage Label list European Commission, February 8, 2016
By the end of the early modern period, the structure and orientation of higher education had changed in ways that are eminently recognizable in the modern context. Aristotle was no longer a force providing the epistemological and methodological focus for universities, and a more mechanistic orientation was emerging. The hierarchical place of theological knowledge had for the most part been displaced, the humanities had become a fixture, and a new openness was beginning to take hold in the construction and dissemination of knowledge that would prove essential for the formation of the modern state.
Modern universities
Karlsruhe Institute of Technology, a German technical university, founded in the 19th century
By the 18th century, universities published their own research journals and by the 19th century, the German and the French university models had arisen. The German, or Humboldtian model, was conceived by Wilhelm von Humboldt and based on Friedrich Schleiermacher's liberal ideas pertaining to the importance of freedom, seminars, and laboratories in universities. The French university model involved strict discipline and control over every aspect of the university.
Until the 19th century, religion played a significant role in university curriculum; however, the role of religion in research universities decreased in the 19th century, and by the end of the 19th century, the German university model had spread around the world. Universities concentrated on science in the 19th and 20th centuries and became increasingly accessible to the masses. In the United States, the Johns Hopkins University was the first to adopt the (German) research university model; most other American universities subsequently followed its example. In Britain, the move from the Industrial Revolution to modernity saw the arrival of new civic universities with an emphasis on science and engineering, a movement initiated in 1960 by Sir Keith Murray (chairman of the University Grants Committee) and Sir Samuel Curran, with the formation of the University of Strathclyde. The British also established universities worldwide, and higher education became available to the masses beyond Europe as well.
In 1963, the Robbins Report on universities in the United Kingdom concluded that such institutions should have four main "objectives essential to any properly balanced system: instruction in skills; the promotion of the general powers of the mind so as to produce not mere specialists but rather cultivated men and women; to maintain research in balance with teaching, since teaching should not be separated from the advancement of learning and the search for truth; and to transmit a common culture and common standards of citizenship."
In the early 21st century, concerns were raised over the increasing managerialisation and standardisation of universities worldwide. Neo-liberal management models have in this sense been critiqued for creating "corporate universities (where) power is transferred from faculty to managers, economic justifications dominate, and the familiar 'bottom line' eclipses pedagogical or intellectual concerns".Maggie Berg & Barbara Seeber. The Slow Professor: Challenging the Culture of Speed in the Academy, p. x. Toronto: Toronto University Press. 2016. Academics' understanding of time, pedagogical pleasure, vocation, and collegiality have been cited as possible ways of alleviating such problems.Maggie Berg & Barbara Seeber. The Slow Professor: Challenging the Culture of Speed in the Academy. Toronto: Toronto University Press. 2016. (passim)
National universities
A national university is generally a university created or run by a national state, but at the same time an autonomous state institution which functions as a largely independent body within that state. Some national universities are closely associated with national cultural, religious or political aspirations; for instance, the National University of Ireland formed partly from the Catholic University of Ireland, which was created almost immediately and specifically in answer to the non-denominational universities which had been set up in Ireland in 1850. In the years leading up to the Easter Rising, and in no small part as a result of the Gaelic Romantic revivalists, the NUI collected a large amount of information on the Irish language and Irish culture. In Argentina, the University Revolution of 1918 and its subsequent reforms incorporated values that sought a more egalitarian and secular higher education system.
Intergovernmental universities
Campus universities, with most buildings clustered closely together, have become especially widespread since the 19th century (Cornell University).
Universities created by bilateral or multilateral treaties between states are intergovernmental. An example is the Academy of European Law, which offers training in European law to lawyers, judges, barristers, solicitors, in-house counsel and academics. EUCLID (Pôle Universitaire Euclide, Euclid University) is chartered as a university and umbrella organisation dedicated to sustainable development in signatory countries, and the United Nations University engages in efforts to resolve the pressing global problems that are of concern to the United Nations, its peoples and member states. The European University Institute, a post-graduate university specialised in the social sciences, is officially an intergovernmental organisation, set up by the member states of the European Union.
Organization
The University of Sydney is Australia's oldest university.
Although each institution is organized differently, nearly all universities have a board of trustees; a president, chancellor, or rector; at least one vice president, vice-chancellor, or vice-rector; and deans of various divisions. Universities are generally divided into a number of academic departments, schools or faculties. Public university systems are governed by government-run higher education boards. They review financial requests and budget proposals and then allocate funds for each university in the system. They also approve new programs of instruction and cancel or make changes in existing programs. In addition, they plan for the further coordinated growth and development of the various institutions of higher education in the state or country. However, many public universities in the world have a considerable degree of financial, research and pedagogical autonomy. Private universities are privately funded and generally have broader independence from state policies. However, they may have less independence from business corporations depending on the source of their finances.
Around the world
The funding and organization of universities varies widely between different countries around the world. In some countries universities are predominantly funded by the state, while in others funding may come from donors or from fees which students attending the university must pay. In some countries the vast majority of students attend university in their local town, while in other countries universities attract students from all over the world, and may provide university accommodation for their students.
Classification
The definition of a university varies widely, even within some countries. Where the term is formally defined, the definition is usually set by a government agency. For example:
In Australia, the Tertiary Education Quality and Standards Agency (TEQSA) is Australia's independent national regulator of the higher education sector. Students' rights within universities are also protected by the Education Services for Overseas Students Act (ESOS).
In the United States there is no nationally standardized definition for the term university, although the term has traditionally been used to designate research institutions and was once reserved for doctorate-granting research institutions. Some states, such as Massachusetts, will only grant a school "university status" if it grants at least two doctoral degrees.
In the United Kingdom, the Privy Council is responsible for approving the use of the word university in the name of an institution, under the terms of the Further and Higher Education Act 1992.
In India, a new designation deemed universities has been created for institutions of higher education that are not universities, but work at a very high standard in a specific area of study ("An Institution of Higher Education, other than universities, working at a very high standard in specific area of study, can be declared by the Central Government on the advice of the UGC as an Institution 'Deemed-to-be-university'"). Institutions that are 'deemed-to-be-university' enjoy the academic status and the privileges of a university. Through this provision, many commercially oriented institutions established merely to exploit the demand for higher education have also sprung up.
In Canada, college generally refers to a two-year, non-degree-granting institution, while university connotes a four-year, degree-granting institution. Universities may be sub-classified (as in the Maclean's rankings) into large research universities with many PhD-granting programs and medical schools (for example, McGill University); "comprehensive" universities that have some PhDs but are not geared toward research (such as Waterloo); and smaller, primarily undergraduate universities (such as St. Francis Xavier).
Colloquial usage
Colloquially, the term university may be used to describe a phase in one's life: "When I was at university..." (in the United States and Ireland, college is often used instead: "When I was in college..."). In Australia, Canada, New Zealand, the United Kingdom, Nigeria, the Netherlands, Spain and the German-speaking countries university is often contracted to uni. In Ghana, New Zealand and in South Africa it is sometimes called "varsity" (although this has become uncommon in New Zealand in recent years). "Varsity" was also common usage in the UK in the 19th century. "Varsity" is still in common usage in Scotland.
Cost
Comenius University in Bratislava, the largest public university in Slovakia
University of Helsinki, the oldest and largest public university in Finland, founded in 1640.
In many countries, students are required to pay tuition fees.
Many students seek 'student grants' to cover the cost of university. In 2012, the average outstanding student loan balance per borrower in the United States was US$23,300. In many U.S. states, costs are anticipated to rise for students as a result of decreased state funding given to public universities.
There are several major exceptions to tuition fees. In many European countries, it is possible to study without tuition fees. Public universities in Nordic countries were entirely without tuition fees until around 2005. Denmark, Sweden and Finland then moved to put in place tuition fees for foreign students. Citizens of EU and EEA member states and citizens from Switzerland remain exempt from tuition fees, and the public grants awarded to promising foreign students were increased to offset some of the impact."Studieavgifter i högskolan" SOU 2006:7
See also
Alternative university
Alumni
Ancient higher-learning institutions
Catholic university
College and university rankings
Corporate university
International university
Land-grant university
Liberal arts college
List of academic disciplines
Lists of universities and colleges
Pontifical university
School and university in literature
UnCollege
University student retention
University system
Urban university
Notes
Further reading
External links
Category:Educational stages
Category:Higher education
Category:Types of university or college
Category:Youth
Pacific War | The Pacific War, sometimes called the Asia-Pacific War,Williamson Murray, Allan R. Millett A War to be Won: Fighting the Second World War, Harvard University Press, 2001, p. 143 was the theater of World War II that was fought in the Pacific and East Asia. It was fought over a vast area that included the Pacific Ocean and its islands, the South West Pacific, South-East Asia, and China (including the 1945 Soviet–Japanese conflict).
The Second Sino-Japanese War between the Empire of Japan and the Republic of China had been in progress since 7 July 1937, with hostilities dating back as far as 19 September 1931 with the Japanese invasion of Manchuria.Roy M. MacLeod, Science and the Pacific War: Science and Survival in the Pacific, 1939–1945, Kluwer Academic Publishing, p. 1, 1999 However, it is more widely acceptedYouli Sun, China and the Origins of the Pacific War, 1931–41, Palgrave MacMillan, p. 11 that the Pacific War itself began on 7/8 December 1941, when Japan invaded Thailand and attacked the British possessions of Malaya, Singapore, and Hong Kong as well as the United States military bases in Hawaii, Wake Island, Guam and the Philippines.John Costello, The Pacific War: 1941–1945, Harper Perennial, 1982Japan Economic Foundation, Journal of Japanese Trade & Industry, Volume 16, 1997
The Pacific War saw the Allied powers pitted against the Empire of Japan, the latter briefly aided by Thailand and to a much lesser extent by its Axis allies, Germany and Italy. The war culminated in the atomic bombings of Hiroshima and Nagasaki, and other large aerial bomb attacks by the United States Army Air Forces, accompanied by the Soviet invasion of Manchuria on 8 August 1945, resulting in the Japanese announcement of intent to surrender on 15 August 1945. The formal and official surrender of Japan took place aboard the battleship USS Missouri in Tokyo Bay on 2 September 1945. Following its defeat, Japan's Shinto Emperor stepped down as the divine leader through the Shinto Directive, because the Allied Powers believed this was the major political cause of Japan's military aggression, and reconstruction soon took place to give the Japanese public a new liberal-democratic constitution, the current Constitution of Japan.
Overview
Names for the war
Generalissimo Chiang Kai-shek, Allied Commander-in-Chief in the China theatre from 1942 to 1945.
In Allied countries during the war, "The Pacific War" was not usually distinguished from World War II in general, or was known simply as the War against Japan. In the United States, the term Pacific Theater was widely used, although this was a misnomer in relation to the British campaign in Burma, the war in China and other activities within the Southeast Asian Theater.
Japan used the name Greater East Asia War (Dai Tō-A Sensō), as chosen by a cabinet decision on 10 December 1941, to refer to both the war with the Western Allies and the ongoing war in China. This name was released to the public on 12 December, with an explanation that it involved Asian nations achieving their independence from the Western powers through armed forces of the Greater East Asia Co-Prosperity Sphere. Japanese officials integrated what they called the Japan–China Incident into the Greater East Asia War.
During the American military occupation of Japan (1945–52), these Japanese terms were prohibited in official documents, although their informal usage continued, and the war became officially known as the Pacific War (Taiheiyō Sensō). In Japan, the term Fifteen Years' War (Jūgonen Sensō) is also used, referring to the period from the Mukden Incident of 1931 through 1945.
Participants
Political map of the Asia-Pacific region, 1939.
Generalissimo Chiang Kai-shek and General Joseph Stilwell, Allied Commander-in-Chief in the China theatre from 1942 to 1945.
The Axis states which assisted Japan included the authoritarian government of Thailand, which quickly formed a temporary alliance with the Japanese in 1941, as Japanese forces were already invading the peninsula of southern Thailand. The Phayap Army sent troops to invade and occupy northeastern Burma, which was former Thai territory that had been annexed by Britain much earlier. Also involved were the Japanese puppet states of Manchukuo and Mengjiang (consisting of most of Manchuria and parts of Inner Mongolia respectively), and the collaborationist Wang Jingwei regime (which controlled the coastal regions of China).
The official policy of the U.S. Government is that Thailand was not an ally of the Axis, and that the United States was not at war with Thailand. The policy of the U.S. Government ever since 1945 has been to treat Thailand not as a former enemy, but rather as a country which had been forced into certain actions by Japanese blackmail, before being occupied by Japanese troops. Thailand has been treated by the United States in the same way as such other Axis-occupied countries as Belgium, Czechoslovakia, Denmark, Greece, Norway, Poland, and the Netherlands.
Japan conscripted many soldiers from its colonies of Korea and Formosa (Taiwan). To a small extent, some Vichy French, Indian National Army, and Burmese National Army forces were active in the area of the Pacific War. Collaborationist units from Hong Kong (reformed ex-colonial police), the Philippines, the Dutch East Indies (the PETA) and Dutch New Guinea, British Malaya and British Borneo, Inner Mongolia and former French Indochina (after the overthrow of the Vichy French regime), as well as Timorese militia, also assisted Japanese war efforts.
Germany and Italy both had limited involvement in the Pacific War. The German and the Italian navies operated submarines and raiding ships in the Indian and Pacific Oceans. The Italians had access to "concession territory" naval bases in China, while the Germans did not. After Japan's attack on Pearl Harbor and the subsequent declarations of war, both navies had access to Japanese naval facilities.
The major Allied participants were the United States, the Republic of China, the United Kingdom (including the armed forces of British India, the Fiji Islands, Samoa, etc.), Australia, the Commonwealth of the Philippines, the Netherlands (as the possessor of the Dutch East Indies and the western part of New Guinea), New Zealand, and Canada, all of whom were members of the Pacific War Council. Mexico, Free France and many other countries also took part, especially forces from other British colonies.
The Soviet Union fought two short, undeclared border conflicts with Japan in 1938 and 1939, then remained neutral until August 1945, when it joined the Allies and invaded the territory of Manchukuo, the Republic of China, Inner Mongolia, the Japanese protectorate of Korea and Japanese-claimed islands such as Sakhalin, in operations coordinated notably between the Red Banner Pacific Fleet and the US Navy's Task Force 38.
Theaters
The Pacific War Council as photographed on 12 October 1942. Pictured are representatives from the United States (seated), China, the United Kingdom, Australia, Canada, the Netherlands, New Zealand, and the Philippine Commonwealth.
Between 1942 and 1945, there were four main areas of conflict in the Pacific War: China, the Central Pacific, South East Asia and the South West Pacific. U.S. sources refer to two theaters within the Pacific War: the Pacific theater and the China Burma India Theater (CBI). However these were not operational commands.
In the Pacific, the Allies divided operational control of their forces between two supreme commands, known as Pacific Ocean Areas and Southwest Pacific Area. In 1945, for a brief period just before the Japanese surrender, the Soviet Union and its Mongolian ally engaged Japanese forces in Manchuria and northeast China.
Historical background
Conflict between China and Japan
Chinese casualties of a mass panic during a June 1941 Japanese aerial bombing of Chongqing.
By 1937, Japan controlled Manchuria and was ready to move deeper into China. The Marco Polo Bridge Incident on 7 July 1937 provoked full-scale war between China and Japan. The Nationalist and Communist Chinese suspended their civil war to form a nominal alliance against Japan, and the Soviet Union quickly lent support by providing large amounts of materiel to Chinese troops. In August 1937, Generalissimo Chiang Kai-shek deployed his best army to fight about 300,000 Japanese troops in Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937 and committing what became known as the Nanking Massacre. In March 1938, Nationalist forces won their first victory at Taierzhuang, but the city of Xuzhou was then taken by the Japanese in May. In June 1938, Japan deployed about 350,000 troops to invade Wuhan and captured it in October. The Japanese achieved major military victories, but world opinion—in particular in the United States—condemned Japan, especially after the Panay incident.
In 1939, Japanese forces tried to push into the Soviet Far East from Manchuria. They were soundly defeated in the Battle of Khalkhin Gol by a mixed Soviet and Mongolian force led by Georgy Zhukov. This stopped Japanese expansion to the north, and Soviet aid to China ended as a result of the signing of the Soviet–Japanese Neutrality Pact at the beginning of its war against Nazi Germany.Edward J. Drea, Nomonhan: Japanese-Soviet Tactical Combat, 1939 (2005)
In September 1940, Japan decided to cut China's only land line to the outside world by seizing Indochina, which was controlled at the time by Vichy France. Japanese forces broke their agreement with the Vichy administration and fighting broke out, ending in a Japanese victory. On 27 September Japan signed a military alliance with Germany and Italy, becoming one of the three Axis Powers. In practice, there was little coordination between Japan and Germany until 1944, by which time the U.S. was deciphering their secret diplomatic correspondence.Boyd, Carl. Hitler's Japanese confidant: General Ōshima Hiroshi and MAGIC intelligence, 1941–1945 (1993)
The war entered a new phase with the unprecedented defeat of the Japanese at the Battle of Suixian–Zaoyang and the 1st Battle of Changsha. After these victories, Chinese nationalist forces launched a large-scale counter-offensive in early 1940; however, due to China's low military-industrial capacity, it was repulsed by the Japanese army in late March 1940. In August 1940, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted the "Three Alls Policy" ("Kill all, Burn all, Loot all") in occupied areas to reduce human and material resources for the communists.Chinese-Soviet Relations, 1937–1945; Garver, John W.; p. 120. By 1941 the conflict had become a stalemate. Although Japan had occupied much of northern, central, and coastal China, the Nationalist Government had retreated to the interior with a provisional capital set up at Chungking, while the Chinese communists remained in control of base areas in Shaanxi. In addition, Japanese control of northern and central China was somewhat tenuous, in that Japan was usually able to control railroads and the major cities ("points and lines"), but did not have a major military or administrative presence in the vast Chinese countryside. The Japanese found that their advance against the retreating and regrouping Chinese army was stalled by the mountainous terrain in southwestern China, while the Communists organised widespread guerrilla and sabotage activities in northern and eastern China behind the Japanese front line.
Japan sponsored several puppet governments, one of which was headed by Wang Jingwei. However, its policies of brutality toward the Chinese population, of not yielding any real power to these regimes, and of supporting several rival governments failed to make any of them a viable alternative to the Nationalist government led by Chiang Kai-shek. Conflicts between Chinese communist and nationalist forces vying for territory control behind enemy lines culminated in a major armed clash in January 1941, effectively ending their co-operation.
Japanese strategic bombing efforts mostly targeted large Chinese cities such as Shanghai, Wuhan, and Chongqing, with around 5,000 raids from February 1938 to August 1943 in the latter case. Japan's strategic bombing campaigns devastated Chinese cities extensively, killing 260,000–350,934 non-combatants.Lind Jennifer M. (2010). "Sorry States: Apologies in International Politics". Cornell University Press, p.28. ISBN 0-8014-7628-3
Tensions between Japan and the West
From as early as 1935 Japanese military strategists had concluded the Dutch East Indies were, because of their oil reserves, of considerable importance to Japan. By 1940 they had expanded this to include Indo-China, Malaya, and the Philippines within their concept of the Greater East Asia Co-Prosperity Sphere. Japanese troop build-ups in Hainan, Taiwan, and Haiphong were noted, Japanese Army officers were openly talking about an inevitable war, and Admiral Sankichi Takahashi was reported as saying a showdown with the United States was necessary.
In an effort to discourage Japanese militarism, Western powers including Australia, the United States, Britain, and the Dutch government in exile, which controlled the petroleum-rich Dutch East Indies, stopped selling oil, iron ore, and steel to Japan, denying it the raw materials needed to continue its activities in China and French Indochina. In Japan, the government and nationalists viewed these embargos as acts of aggression; imported oil made up about 80% of domestic consumption, without which Japan's economy, let alone its military, would grind to a halt. The Japanese media, influenced by military propagandists, began to refer to the embargoes as the "ABCD ("American-British-Chinese-Dutch") encirclement" or "ABCD line".
Faced with a choice between economic collapse and withdrawal from its recent conquests (with its attendant loss of face), the Japanese Imperial General Headquarters began planning for a war with the western powers in April or May 1941.
Japanese preparations
Japan's key objective during the initial part of the conflict was to seize economic resources in the Dutch East Indies and Malaya which offered Japan a way to escape the effects of the Allied embargo. This was known as the Southern Plan. It was also decided, because of the close relationship between the UK and United States and the belief that the US would inevitably become involved, that Japan would also require taking the Philippines, Wake and Guam.Peattie & Evans, Kaigun; Willmott, Barrier and the Javelin (Annapolis: Naval Institute Press, 1983).
Japanese planning was for fighting a limited war where Japan would seize key objectives and then establish a defensive perimeter to defeat Allied counterattacks, which in turn would lead to a negotiated peace.
The attack on the US Pacific Fleet at Pearl Harbor, Hawaii, with carrier-based aircraft of the Combined Fleet was to give the Japanese time to complete a perimeter.
The initial period of the war was divided into two operational phases. The First Operational Phase was further divided into three separate parts in which the major objectives of the Philippines, British Malaya, Borneo, Burma, Rabaul and the Dutch East Indies would be occupied. The Second Operational Phase called for further expansion into the South Pacific by seizing eastern New Guinea, New Britain, Fiji, Samoa, and strategic points in the Australian area. In the Central Pacific, Midway was targeted as were the Aleutian Islands in the North Pacific. Seizure of these key areas would provide defensive depth and deny the Allies staging areas from which to mount a counteroffensive.
By November these plans were essentially complete, and were modified only slightly over the next month. Japanese military planners' expectation of success rested on the United Kingdom and the Soviet Union being unable to effectively respond to a Japanese attack because of the threat posed to each by Germany; the Soviet Union was even seen as unlikely to commence hostilities.
The Japanese leadership was aware that a total military victory in a traditional sense against the USA was impossible; the alternative would be negotiating for peace after their initial victories, which would recognize Japanese hegemony in Asia.Boog et al (2006) "Germany and the Second World War: The Global War", p. 175 In fact, the Imperial GHQ noted, should acceptable negotiations be reached with the Americans, the attacks were to be canceled—even if the order to attack had already been given. The Japanese leadership looked to base the conduct of the war against America on the historical experiences of the successful wars against China (1894–95) and Russia (1904–05), in both of which a strong continental power was defeated by reaching limited military objectives, not by total conquest.
They also planned, should the U.S. transfer its Pacific Fleet to the Philippines, to intercept and attack this fleet en route with the Combined Fleet, in keeping with all Japanese Navy prewar planning and doctrine. If the United States or Britain attacked first, the plans further stipulated the military were to hold their positions and wait for orders from GHQ. The planners noted attacking the Philippines and Malaya still had possibilities of success, even in the worst case of a combined preemptive attack including Soviet forces.
Japanese offensives, 1941–42
On 7 December 1941, Japan attacked the American bases in Hawaii. The same day (8 December on the other side of the International Date Line), Japanese forces attacked Guam, Wake Island and the British crown colony of Hong Kong while other Japanese units invaded the Philippines, Thailand and Malaya.
Attack on Pearl Harbor
USS Arizona burned for two days after being hit by a Japanese bomb in the attack on Pearl Harbor.
In the early hours of 7 December (Hawaiian time), Japan launched a major surprise carrier-based air strike on Pearl Harbor without explicit warning, which crippled the U.S. Pacific Fleet, leaving eight American battleships out of action, 188 American aircraft destroyed, and 2,403 American citizens dead. At the time of the attack, the U.S. was not officially at war anywhere in the world, as the Japanese embassy had failed to decipher and deliver the Japanese ultimatum to the American government before noon on 7 December (Washington time); the people killed and the property destroyed at Pearl Harbor by the Japanese attack therefore had non-combatant status. The Japanese had gambled that the United States, when faced with such a sudden and massive blow, would agree to a negotiated settlement and allow Japan free rein in Asia. This gamble did not pay off. American losses were less serious than initially thought: the American aircraft carriers, which would prove to be more important than battleships, were at sea, and vital naval infrastructure (fuel oil tanks, shipyard facilities, and a power station), the submarine base, and signals intelligence units were unscathed. Japan's fallback strategy, relying on a war of attrition to make the U.S. come to terms, was beyond the IJN's capabilities.Parillo, Mark P. Japanese Merchant Marine in World War II. (United States Naval Institute Press, 1993).
Before the attack on Pearl Harbor, the 800,000-member America First Committee vehemently opposed any American intervention in the European conflict, even as America sold military aid to Britain and the Soviet Union through the Lend-Lease program. Opposition to war in the U.S. vanished after the attack. On 8 December, the United States, the United Kingdom, Canada, and the Netherlands declared war on Japan, followed by China and Australia the next day. Four days after Pearl Harbor, Nazi Germany and Fascist Italy declared war on the United States, drawing the country into a two-theater war. This is widely agreed to have been a grand strategic blunder, as it abrogated both the benefit Germany gained from Japan's distraction of the U.S. (predicted months before in a memo by Commander Arthur McCollum) and the reduction in aid to Britain that would otherwise have resulted, bringing about the open war with the United States that both Congress and Hitler had managed to avoid during over a year of mutual provocation.
Attacks on Southeast Asia
HMS Prince of Wales (left, front) and HMS Repulse (left, rear) under attack by Japanese aircraft. A destroyer is in the foreground.
British, Australian, and Dutch forces, already drained of personnel and matériel by two years of war with Germany, and heavily committed in the Middle East, North Africa, and elsewhere, were unable to provide much more than token resistance to the battle-hardened Japanese. The Allies suffered many disastrous defeats in the first six months of the war. Two major British warships, and , were sunk by a Japanese air attack off Malaya on 10 December 1941.
Thailand, with its territory already serving as a springboard for the Malayan campaign, surrendered within 24 hours of the Japanese invasion. The government of Thailand formally allied with Japan on 21 December.
Hong Kong was attacked on 8 December and fell on 25 December 1941, with Canadian forces and the Royal Hong Kong Volunteers playing an important part in the defense. American bases on Guam and Wake Island were lost at around the same time.
Following the Declaration by United Nations (the first official use of the term United Nations) on 1 January 1942, the Allied governments appointed the British General Sir Archibald Wavell to American-British-Dutch-Australian Command (ABDACOM), a supreme command for Allied forces in Southeast Asia. This gave Wavell nominal control of a huge force, albeit thinly spread over an area from Burma to the Philippines to northern Australia. Other areas, including India, Hawaii, and the rest of Australia remained under separate local commands. On 15 January, Wavell moved to Bandung in Java to assume control of ABDACOM.
Japanese battleships , and (more distant)
In January, Japan invaded Burma, the Dutch East Indies, New Guinea, the Solomon Islands and captured Manila, Kuala Lumpur and Rabaul. After being driven out of Malaya, Allied forces in Singapore attempted to resist the Japanese during the Battle of Singapore, but were forced to surrender to the Japanese on 15 February 1942; about 130,000 Indian, British, Australian and Dutch personnel became prisoners of war. The pace of conquest was rapid: Bali and Timor also fell in February. The rapid collapse of Allied resistance left the "ABDA area" split in two. Wavell resigned from ABDACOM on 25 February, handing control of the ABDA Area to local commanders and returning to the post of Commander-in-Chief, India.
The Bombing of Darwin, Australia, 19 February 1942
Meanwhile, Japanese aircraft had all but eliminated Allied air power in Southeast Asia and were making attacks on northern Australia, beginning with a psychologically devastating but militarily insignificant attack on the city of Darwin on 19 February, which killed at least 243 people.
At the Battle of the Java Sea in late February and early March, the Imperial Japanese Navy (IJN) inflicted a resounding defeat on the main ABDA naval force, under Admiral Karel Doorman. The Dutch East Indies campaign subsequently ended with the surrender of Allied forces on Java and Sumatra.
In March and April, a powerful IJN carrier force launched a raid into the Indian Ocean. British Royal Navy bases in Ceylon were hit and the aircraft carrier HMS Hermes and other Allied ships were sunk. The attack forced the Royal Navy to withdraw to the western part of the Indian Ocean. This paved the way for a Japanese assault on Burma and India.
Surrender of U.S. forces at Corregidor, Philippines, May 1942
In Burma, the British, under intense pressure, made a fighting retreat from Rangoon to the Indo-Burmese border. This cut the Burma Road, which was the western Allies' supply line to the Chinese Nationalists. In March 1942, the Chinese Expeditionary Force started to attack Japanese forces in northern Burma. On 16 April, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division, led by Sun Li-jen. Cooperation between the Chinese Nationalists and the Communists had waned from its zenith at the Battle of Wuhan, and the relationship between the two had gone sour as both attempted to expand their areas of operation in occupied territories. Most of the Nationalist guerrilla areas were eventually taken over by the Communists. On the other hand, some Nationalist units were deployed to blockade the Communists and not the Japanese. Furthermore, many of the forces of the Chinese Nationalists were controlled by warlords allied to Chiang Kai-Shek, but not directly under his command. "Of the 1,200,000 troops under Chiang's control, only 650,000 were directly controlled by his generals, and another 550,000 controlled by warlords who claimed loyalty to his government; the strongest force was the Szechuan army of 320,000 men. The defeat of this army would do much to end Chiang's power." The Japanese exploited this lack of unity to press ahead in their offensives.
Filipino and U.S. forces resisted in the Philippines until 8 May 1942, when more than 80,000 soldiers were ordered to surrender. By this time, General Douglas MacArthur, who had been appointed Supreme Allied Commander South West Pacific, had been withdrawn to Australia. The U.S. Navy, under Admiral Chester Nimitz, had responsibility for the rest of the Pacific Ocean. This divided command had unfortunate consequences for the commerce war,Blair, Silent Victory and consequently, the war itself.
Threat to Australia
In late 1941, as the Japanese struck at Pearl Harbor, most of Australia's best forces were committed to the fight against Hitler in the Mediterranean Theatre. Australia was ill-prepared for an attack, lacking armaments, modern fighter aircraft, heavy bombers, and aircraft carriers. While still calling for reinforcements from Churchill, the Australian Prime Minister John Curtin called for American support with a historic announcement on 27 December 1941:Cited in Frank Crowley (1973) Vol 2, p.51
thumb|left|Dutch and Australian PoWs at Tarsau, in Thailand in 1943. 22,000 Australians were captured by the Japanese; 8,000 died as prisoners of war.
Australia had been shocked by the speedy collapse of British Malaya and the Fall of Singapore, in which around 15,000 Australian soldiers became prisoners of war. Curtin predicted that the "battle for Australia" would now follow. The Japanese established a major base in the Australian Territory of New Guinea in early 1942. On 19 February, Darwin suffered a devastating air raid, the first time the Australian mainland had been attacked. Over the following 19 months, Australia was attacked from the air almost 100 times.
right|thumb|U.S. General Douglas MacArthur, Commander of Allied forces in the South-West Pacific Area, with Australian Prime Minister John Curtin
Two battle-hardened Australian divisions were steaming from the Middle East for Singapore. Churchill wanted them diverted to Burma, but Curtin insisted on a return to Australia. In early 1942 elements of the Imperial Japanese Navy proposed an invasion of Australia. The Japanese Army opposed the plan, and it was rejected in favour of a policy of isolating Australia from the United States via blockade by advancing through the South Pacific. The Japanese decided upon a seaborne invasion of Port Moresby, capital of the Australian Territory of Papua, which would put northern Australia within range of Japanese bomber aircraft.
President Franklin Roosevelt ordered General Douglas MacArthur in the Philippines to formulate a Pacific defence plan with Australia in March 1942. Curtin agreed to place Australian forces under the command of MacArthur who became Supreme Commander, South West Pacific. MacArthur moved his headquarters to Melbourne in March 1942 and American troops began massing in Australia. Enemy naval activity reached Sydney in late May 1942, when Japanese midget submarines launched a daring raid on Sydney Harbour. On 8 June 1942, two Japanese submarines briefly shelled Sydney's eastern suburbs and the city of Newcastle.
Allies re-group, 1942–43
In early 1942, the governments of smaller powers began to push for an inter-governmental Asia-Pacific war council, based in Washington, D.C. A council was established in London, with a subsidiary body in Washington. However, the smaller powers continued to push for an American-based body. The Pacific War Council was formed in Washington on 1 April 1942, with President Franklin D. Roosevelt, his key advisor Harry Hopkins, and representatives from Britain, China, Australia, the Netherlands, New Zealand, and Canada. Representatives from India and the Philippines were later added. The council never had any direct operational control, and any decisions it made were referred to the U.S.-UK Combined Chiefs of Staff, which was also in Washington. Allied resistance, at first symbolic, gradually began to stiffen. Australian and Dutch forces led civilians in a prolonged guerrilla campaign in Portuguese Timor.
The Doolittle Raid in April 1942, in which American carrier-launched bombers struck Japan, did minimal material damage but was a huge morale boost for the United States, and it had major psychological repercussions, exposing the vulnerabilities of the Japanese homeland. The greatest effect of the raid, however, was that it caused the Japanese to launch the ultimately catastrophic assault on Midway.Wilmott, Barrier and the Javelin
Coral Sea and Midway: the turning point
thumb|right|Lexington on fire at the Coral Sea
By mid-1942, the Japanese found themselves holding a vast area from the Indian Ocean to the Central Pacific, but lacking the resources to defend or sustain it. Moreover, Combined Fleet doctrine was inadequate to execute the proposed "barrier" defense. Instead, Japan decided on additional attacks in both the south and central Pacific. However, the element of surprise, present at Pearl Harbor, was now lost due to the success of Allied codebreakers, who had discovered the next attack would be against Port Moresby. If it fell, Japan would control the seas to the north and west of Australia and could isolate the country. The carriers Lexington and Yorktown, under the overall command of Admiral Fletcher, joined an American-Australian task force to stop the Japanese advance. The resulting Battle of the Coral Sea, fought in May 1942, was the first naval battle in which the ships involved never sighted each other and only aircraft were used to attack the opposing forces. Although Lexington was sunk and Yorktown seriously damaged, the Japanese lost a light carrier and suffered extensive damage to one fleet carrier and heavy losses to the air wing of another, both of which missed the operation against Midway the following month. Although Allied losses were heavier than those of the Japanese, the attack on Port Moresby was thwarted and the Japanese invasion force turned back in a strategic victory for the Allies. The Japanese were subsequently forced to abandon their attempts to isolate Australia. Moreover, Japan lacked the capacity to replace its losses in ships, planes and trained pilots.
thumb|Japanese advance until mid-1942
After Coral Sea, Yamamoto had four fleet carriers operational—Akagi, Kaga, Sōryū, and Hiryū—and believed Nimitz had a maximum of two—Enterprise and Hornet. Another American carrier was out of action, undergoing repair after a torpedo attack, while Yorktown had been damaged at Coral Sea and was believed by Japanese navy intelligence to have been sunk. She would, in fact, sortie for Midway after just three days of repairs to her flight deck, with civilian work crews still aboard, in time for the next decisive engagement.
In May, Allied codebreakers again discovered Yamamoto's next move: an attack on Midway Atoll. Yamamoto hoped the attack would lure the American carriers into a trap, leading to the destruction of United States strategic power in the Pacific. He also intended to occupy Midway as part of an overall plan to extend Japan's defensive perimeter in response to the Doolittle Raid. The atoll would then be turned into a major airbase, giving Japan control of the central Pacific.
Initially, a Japanese force was sent north to attack the Aleutian Islands as a diversion. The next stage of the plan called for the capture of Midway, which would give Yamamoto an opportunity to destroy Nimitz's remaining carriers. Admiral Nagumo was again in tactical command but was focused on the invasion of Midway; Yamamoto's complex plan had no provision for intervention by Nimitz before the Japanese expected him. Planned surveillance of the U.S. fleet by long-range seaplane did not take place (as a result of an abortive identical operation in March), so Fletcher's carriers were able to proceed to a flanking position without being detected. Nagumo had 272 planes operating from his four carriers, the U.S. 348 (115 of them land-based).
As anticipated by Nimitz, the Japanese fleet arrived off Midway on 4 June and was spotted by PBY patrol aircraft. Nagumo executed a first strike against Midway, while Fletcher launched his aircraft, bound for Nagumo's carriers. At 09:20 the first U.S. carrier aircraft arrived, TBD Devastator torpedo bombers from Hornet, but their attacks were poorly coordinated and ineffectual; thanks in part to faulty aerial torpedoes, they failed to score a single hit and all 15 were wiped out by defending Zero fighters. At 09:35, 15 additional TBDs from Enterprise attacked, of which 14 were lost, again with no hits. Thus far, Fletcher's attacks had been disorganized and seemingly ineffectual, but they succeeded in drawing Nagumo's defensive fighters down to sea level, where they expended much of their fuel and ammunition repulsing the two waves of torpedo bombers. As a result, when U.S. dive bombers arrived at high altitude, the Zeros were poorly positioned to defend. To make matters worse, Nagumo's four carriers had drifted out of formation in their efforts to avoid torpedoes, reducing the concentration of their anti-aircraft fire. Nagumo's indecision had also created confusion aboard his carriers. Alerted to the need for a second strike on Midway, but also wary of the need to deal with the American carriers that he now knew were in the vicinity, Nagumo twice changed the arming orders for his aircraft. As a result, the American dive bombers found the Japanese carriers with their decks cluttered with munitions as the crews worked hastily to properly re-arm their air groups.
thumb|A Japanese carrier under attack by B-17 Flying Fortress heavy bombers
With the Japanese CAP out of position and the carriers at their most vulnerable, SBD Dauntlesses from Enterprise and Yorktown appeared overhead and commenced their attack, quickly dealing fatal blows to three fleet carriers: Sōryū, Kaga, and Akagi. Within minutes, all three were ablaze and had to be abandoned with great loss of life. Hiryū managed to survive the wave of dive bombers and launched a counter-attack against the American carriers which caused severe damage to Yorktown (which was later finished off by a Japanese submarine). However, a second attack from the U.S. carriers a few hours later found and destroyed Hiryū, the last remaining fleet carrier available to Nagumo. With his carriers lost and the Americans withdrawn out of range of his powerful battleships, Yamamoto was forced to call off the operation, leaving Midway in American hands. The battle proved to be a decisive victory for the Allies. For the second time, Japanese expansion had been checked and its formidable Combined Fleet was significantly weakened by the loss of four fleet carriers and many highly trained, virtually irreplaceable, personnel. Japan would be largely on the defensive for the rest of the war.
New Guinea and the Solomons
Japanese land forces continued to advance in the Solomon Islands and New Guinea. From July 1942, a few Australian reserve battalions, many of them very young and untrained, fought a stubborn rearguard action in New Guinea, against a Japanese advance along the Kokoda Track, towards Port Moresby, over the rugged Owen Stanley Ranges. The militia, worn out and severely depleted by casualties, were relieved in late August by regular troops from the Second Australian Imperial Force, returning from action in the Mediterranean theater. In early September 1942 Japanese marines attacked a strategic Royal Australian Air Force base at Milne Bay, near the eastern tip of New Guinea. They were beaten back by Allied (primarily Australian Army) forces.
Guadalcanal
thumb|U.S. Marines rest in the field during the Guadalcanal campaign in November 1942
At the same time as major battles raged in New Guinea, Allied forces identified a Japanese airfield under construction at Guadalcanal. Sixteen thousand Allied infantry, primarily U.S. Marines, made an amphibious landing to capture the airfield in August.
With Japanese and Allied forces occupying various parts of the island, over the following six months both sides poured resources into an escalating battle of attrition on land, at sea, and in the sky. Most of the Japanese aircraft based in the South Pacific were redeployed to the defense of Guadalcanal. Many were lost in numerous engagements with the Allied air forces based at Henderson Field as well as carrier-based aircraft. Meanwhile, Japanese ground forces launched repeated attacks on heavily defended US positions around Henderson Field, in which they suffered appalling casualties. To sustain these offensives, resupply was carried out by Japanese convoys, termed the "Tokyo Express" by the Allies. The convoys often faced night battles with enemy naval forces in which they expended destroyers that the IJN could ill afford to lose. Later fleet battles involving heavier ships and even daytime carrier battles resulted in a stretch of water near Guadalcanal becoming known as "Ironbottom Sound" from the multitude of ships sunk on both sides. However, the Allies were much better able to replace these losses. Finally recognizing that the campaign to recapture Henderson Field and secure Guadalcanal had simply become too costly to continue, the Japanese evacuated the island and withdrew in February 1943. In the six-month war of attrition, the Japanese had lost the campaign as a result of their failure to commit enough forces in sufficient time.
Allied advances in New Guinea and the Solomons
thumb|Australian commandos in New Guinea during July 1943
By late 1942, Japanese headquarters decided to make Guadalcanal their priority. They ordered the Japanese on the Kokoda Track, within sight of the lights of Port Moresby, to retreat to the northeastern coast of New Guinea. Australian and U.S. forces attacked their fortified positions and after more than two months of fighting in the Buna–Gona area finally captured the key Japanese beachhead in early 1943.
In June 1943, the Allies launched Operation Cartwheel, which defined their offensive strategy in the South Pacific. The operation was aimed at isolating the major Japanese forward base at Rabaul and cutting its supply and communication lines. This prepared the way for Nimitz's island-hopping campaign towards Japan.
Stalemate in China and Southeast Asia
China 1942–1943
thumb|right|Chinese troops during the Battle of Changde in November 1943.
In mainland China, the Japanese 3rd, 6th, and 40th Divisions, a grand total of around 120,000 troops, massed at Yueyang and advanced southward in three columns, crossing the Xinqiang River and attempting once again to cross the Miluo River to reach Changsha. In January 1942, Chinese forces scored a victory at Changsha, the first Allied success against Japan.
After the Doolittle Raid, the Japanese army conducted a massive sweep through the Chinese provinces of Zhejiang and Jiangxi, now known as the Zhejiang-Jiangxi Campaign, with the goal of searching out the surviving American airmen, exacting retribution on the Chinese who had aided them and destroying air bases. This operation started on 15 May 1942 with 40 infantry battalions and 15–16 artillery battalions but was repelled by Chinese forces in September. During this campaign, the Imperial Japanese Army left behind a trail of devastation and also spread cholera, typhoid, plague and dysentery pathogens. Chinese estimates put the death toll at 250,000 civilians. Around 1,700 Japanese troops died, out of a total of 10,000 who fell ill with disease, when their own biological weapons attack rebounded on their forces.Yuki Tanaka, Hidden Horrors, Westview Press, 1996, p.138Chevrier & Chomiczewski & Garrigue 2004, p.19.Croddy & Wirtz 2005, p. 171. On 2 November 1943, Isamu Yokoyama, commander of the Imperial Japanese 11th Army, deployed the 39th, 58th, 13th, 3rd, 116th and 68th divisions, a grand total of around 100,000 troops, to attack Changde. During the seven-week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition. Although the Japanese army initially captured the city, the Chinese 57th Division was able to pin it down long enough for reinforcements to arrive and encircle the Japanese. The Chinese army then cut off the Japanese supply lines, forcing them into retreat, whereupon the Chinese pursued their enemy. During the battle, in an act of desperation, Japan used chemical weapons.Agar, Jon Science in the 20th Century and Beyond, p.281
Burma 1942–1943
In the aftermath of the Japanese conquest of Burma, there was widespread disorder and pro-independence agitation in eastern India and a disastrous famine in Bengal, which ultimately caused up to 3 million deaths. In spite of these problems, and of inadequate lines of communication, British and Indian forces attempted limited counter-attacks in Burma in early 1943. An offensive in Arakan failed, ignominiously in the view of some senior officers, while a long-distance raid mounted by the Chindits under Brigadier Orde Wingate suffered heavy losses but was publicized to bolster Allied morale. It also provoked the Japanese to mount major offensives themselves the following year.
In August 1943 the Allies formed a new South East Asia Command (SEAC) to take over strategic responsibilities for Burma and India from the British India Command, under Wavell. In October 1943 Winston Churchill appointed Admiral Lord Louis Mountbatten as its Supreme Commander. The British and Indian Fourteenth Army was formed to face the Japanese in Burma. Under Lieutenant General William Slim, its training, morale and health greatly improved. The American General Joseph Stilwell, who also was deputy commander to Mountbatten and commanded U.S. forces in the China Burma India Theater, directed aid to China and prepared to construct the Ledo Road to link India and China by land.
Cairo Conference
On 22 November 1943 U.S. President Franklin D. Roosevelt, British Prime Minister Winston Churchill, and ROC Generalissimo Chiang Kai-shek met in Cairo, Egypt, to discuss a strategy to defeat Japan. The meeting, known as the Cairo Conference, concluded with the Cairo Declaration.
Allied offensives, 1943–44
thumb|left|The Allied leaders of the Asian and Pacific Theaters: Generalissimo Chiang Kai-shek, Franklin D. Roosevelt, and Winston Churchill meeting at the Cairo Conference in 1943
Midway proved to be the last great naval battle for two years. The United States used the ensuing period to turn its vast industrial potential into increased numbers of ships, planes, and trained aircrew. At the same time, Japan, lacking an adequate industrial base or technological strategy, a good aircrew training program, or adequate naval resources and commerce defense, fell further and further behind. In strategic terms the Allies began a long movement across the Pacific, seizing one island base after another. Not every Japanese stronghold had to be captured; some, like Truk, Rabaul, and Formosa, were neutralized by air attack and bypassed. The goal was to get close to Japan itself, then launch massive strategic air attacks, improve the submarine blockade, and finally (only if necessary) execute an invasion.
In November 1943 U.S. Marines sustained high casualties when they overwhelmed the 4,500-strong garrison at Tarawa. This helped the Allies to improve the techniques of amphibious landings, learning from their mistakes and implementing changes such as thorough pre-emptive bombings and bombardment, more careful planning regarding tides and landing craft schedules, and better overall coordination.
The U.S. Navy did not seek out the Japanese fleet for a decisive battle, as Mahanian doctrine would suggest (and as Japan hoped); the Allied advance could only be stopped by a Japanese naval attack, which oil shortages (induced by submarine attack) made impossible.
Submarine warfare
U.S. submarines, as well as some British and Dutch vessels, operating from bases at Cavite in the Philippines (1941–42); Fremantle and Brisbane, Australia; Pearl Harbor; Trincomalee, Ceylon; Midway; and later Guam, played a major role in defeating Japan, even though submarines made up a small proportion of the Allied navies—less than two percent in the case of the US Navy.Theodore Roscoe, United States Submarine Operations in World War II (US Naval Institute Press, 1949). Submarines strangled Japan by sinking its merchant fleet, intercepting many troop transports, and cutting off nearly all the oil imports essential to weapons production and military operations. By early 1945, Japanese oil supplies were so limited that its fleet was virtually stranded.
The Japanese military claimed its defenses sank 468 Allied submarines during the war.Prange et al. Pearl Harbor Papers In reality, only 42 American submarines were sunk in the Pacific due to hostile action, with 10 others lost in accidents or as the result of friendly fire.Roscoe, Theodore. Pig Boats (Bantam Books, 1958); Blair, Silent Victory, pp.991–2. The Dutch lost five submarines due to Japanese attack or minefields,"Boats," www.dutchsubmarines.com and the British lost three.
thumb|right|A torpedoed Japanese ship, seen through the periscope of an American submarine, June 1942
American submarines accounted for 56% of the Japanese merchantmen sunk; mines or aircraft destroyed most of the rest. American submariners also claimed 28% of Japanese warships destroyed.Larry Kimmett and Margaret Regis, U.S. Submarines in World War II Furthermore, they played important reconnaissance roles, as at the battles of the Philippine Sea (June 1944) and Leyte Gulf (October 1944) (and, coincidentally, at Midway in June 1942), when they gave accurate and timely warning of the approach of the Japanese fleet. Submarines also rescued hundreds of downed fliers, including future U.S. president George H. W. Bush.
Allied submarines did not adopt a defensive posture and wait for the enemy to attack. Within hours of the Pearl Harbor attack, in retribution against Japan, Roosevelt promulgated a new doctrine: unrestricted submarine warfare against Japan. This meant sinking any warship, commercial vessel, or passenger ship in Axis-controlled waters, without warning and without aiding survivors. At the outbreak of the war in the Pacific, the Dutch admiral in charge of the naval defense of the East Indies, Conrad Helfrich, gave instructions to wage war aggressively. His small force of submarines sank more Japanese ships in the first weeks of the war than the entire British and US navies together, an exploit which earned him the nickname "Ship-a-day Helfrich". The Dutch force was in fact the first to sink an enemy warship: on 24 December 1941, HNLMS K XVI torpedoed and sank the Japanese destroyer Sagiri.
While Japan had a large number of submarines, they did not make a significant impact on the war. In 1942, the Japanese fleet submarines performed well, knocking out or damaging many Allied warships. However, Imperial Japanese Navy (and pre-war U.S.) doctrine stipulated that only fleet battles, not guerre de course (commerce raiding), could win naval campaigns. So, while the US had an unusually long supply line between its west coast and frontline areas, leaving it vulnerable to submarine attack, Japan used its submarines primarily for long-range reconnaissance and only occasionally attacked US supply lines. The Japanese submarine offensive against Australia in 1942 and 1943 also achieved little.David Stevens. Japanese submarine operations against Australia 1942–1944. Retrieved 18 June 2007.
As the war turned against Japan, IJN submarines increasingly served to resupply strongholds which had been cut off, such as Truk and Rabaul. In addition, Japan honored its neutrality treaty with the Soviet Union and ignored American freighters shipping millions of tons of military supplies from San Francisco to Vladivostok,Carl Boyd, "The Japanese Submarine Force and the Legacy of Strategic and Operational Doctrine Developed Between the World Wars", in Larry Addington ed. Selected Papers from the Citadel Conference on War and Diplomacy: 1978 (Charleston, 1979) 27–40; Clark G. Reynolds, Command of the Sea: The History and Strategy of Maritime Empires (1974) 512. much to the consternation of its German ally.
thumb|right|The largest non-nuclear submarines ever constructed.
The US Navy, by contrast, relied on commerce raiding from the outset. However, the problem of Allied forces surrounded in the Philippines during the early part of 1942 led to the diversion of boats to "guerrilla submarine" missions. In addition, basing boats in Australia placed them under Japanese aerial threat while en route to their patrol areas, reducing their effectiveness, and Nimitz relied on submarines for close surveillance of enemy bases. Furthermore, the standard-issue Mark 14 torpedo and its Mark VI exploder both proved defective, problems which were not corrected until September 1943. Worst of all, before the war, an uninformed US Customs officer had seized a copy of the Japanese merchant marine code (called the "maru code" in the USN), not knowing that the Office of Naval Intelligence (ONI) had broken it.Farago, Ladislas. Broken Seal. The Japanese promptly changed the code, and the new version was not broken again by OP-20-G until 1943.
Thus, only in 1944 did the US Navy begin to use its 150 submarines to maximum effect: installing effective shipboard radar, replacing commanders deemed lacking in aggression, and fixing the faults in the torpedoes. Japanese commerce protection was "shiftless beyond description," and convoys were poorly organized and defended compared to Allied ones, a product of flawed IJN doctrine and training – errors concealed by American faults as much as Japanese overconfidence. The number of American submarine patrols (and sinkings) rose steeply: 350 patrols (180 ships sunk) in 1942, 350 (335) in 1943, and 520 (603) in 1944.Blair, Silent Victory, pp.359–60, 551–2, & 816. By 1945, sinkings of Japanese vessels had decreased because so few targets dared to venture out on the high seas. In all, Allied submarines destroyed 1,200 merchant ships – about five million tons of shipping. Most were small cargo carriers, but 124 were tankers bringing desperately needed oil from the East Indies. Another 320 were passenger ships and troop transports. At critical stages of the Guadalcanal, Saipan, and Leyte campaigns, thousands of Japanese troops were killed or diverted from where they were needed. Over 200 warships were sunk, ranging from many auxiliaries and destroyers to one battleship and no fewer than eight carriers.
Underwater warfare was especially dangerous; of the 16,000 Americans who went out on patrol, 3,500 (22%) never returned, the highest casualty rate of any American force in World War II. The Joint Army–Navy Assessment Committee assessed U.S. submarine credits.Roscoe, op. cit. The Japanese losses, 130 submarines in all,Blair, p.877. were even higher.
Japanese counteroffensives in China, 1944
In mid-1944 Japan mobilized over 500,000 men and launched a massive operation across China under the code name Operation Ichi-Go, its largest offensive of World War II, with the goal of connecting Japanese-controlled territory in China and French Indochina and capturing airbases in southeastern China where American bombers were based.Davison, John The Pacific War: Day By Day, pg. 37, 106 During this time, about 250,000 of the newest American-trained Chinese troops under Joseph Stilwell, together with the Chinese Expeditionary Force, were locked into the Burmese theater by the terms of the Lend-Lease Agreement. Though Japan suffered about 100,000 casualties,新聞記者が語りつぐ戦争 16 中国慰霊 読売新聞社 (1983/2) P187 these attacks, the biggest in several years, gained much ground for Japan before Chinese forces stopped the incursions in Guangxi. Despite major tactical victories, the operation overall failed to provide Japan with any significant strategic gains. A great majority of the Chinese forces were able to retreat out of the area and later return to attack Japanese positions, as at the Battle of West Hunan. Japan was no closer to defeating China after this operation, and the constant defeats the Japanese suffered in the Pacific meant that Japan never got the time and resources needed to achieve final victory over China. Operation Ichi-go created a great sense of social confusion in the areas of China that it affected. Chinese Communist guerrillas were able to exploit this confusion to gain influence and control of greater areas of the countryside in the aftermath of Ichi-go.China at War: An Encyclopedia. Ed. Li Xiaobing. United States of America: ABC-CLIO. 2012. ISBN 978-1-59884-415-3. Retrieved 21 May 2012. p.163.
Japanese offensive in India, 1944
thumb|Chinese forces on M3A3 Stuart tanks on the Ledo Road
thumb|British Indian troops during the Battle of Imphal
After the Allied setbacks in 1943, the South East Asia command prepared to launch offensives into Burma on several fronts. In the first months of 1944, the Chinese and American troops of the Northern Combat Area Command (NCAC), commanded by the American Joseph Stilwell, began extending the Ledo Road from India into northern Burma, while the XV Corps began an advance along the coast in the Arakan Province. In February 1944 the Japanese mounted a local counter-attack in the Arakan. After early Japanese success, this counter-attack was defeated when the Indian divisions of XV Corps stood firm, relying on aircraft to drop supplies to isolated forward units until reserve divisions could relieve them.
The Japanese responded to the Allied attacks by launching an offensive of their own into India in the middle of March, across the mountainous and densely forested frontier. This attack, codenamed Operation U-Go, was advocated by Lieutenant General Renya Mutaguchi, the recently promoted commander of the Japanese Fifteenth Army; Imperial General Headquarters permitted it to proceed, despite misgivings at several intervening headquarters. Although several units of the British Fourteenth Army had to fight their way out of encirclement, by early April they had concentrated around Imphal in Manipur state. A Japanese division which had advanced to Kohima in Nagaland cut the main road to Imphal, but failed to capture the whole of the defences at Kohima. During April, the Japanese attacks against Imphal failed, while fresh Allied formations drove the Japanese from the positions they had captured at Kohima.
As many Japanese had feared, Japan's supply arrangements could not maintain her forces. Once Mutaguchi's hopes for an early victory were thwarted, his troops, particularly those at Kohima, starved. During May, while Mutaguchi continued to order attacks, the Allies advanced southwards from Kohima and northwards from Imphal. The two Allied attacks met on 22 June, breaking the Japanese siege of Imphal. The Japanese finally broke off the operation on 3 July. They had lost over 50,000 troops, mainly to starvation and disease. This represented the worst defeat suffered by the Japanese Army to that date.
Although the advance in the Arakan had been halted to release troops and aircraft for the Battle of Imphal, the Americans and Chinese had continued to advance in northern Burma, aided by the Chindits operating against the Japanese lines of communication. In the middle of 1944 the Chinese Expeditionary Force invaded northern Burma from Yunnan. They captured a fortified position at Mount Song. By the time campaigning ceased during the monsoon rains, the NCAC had secured a vital airfield at Myitkyina (August 1944), which eased the problems of air resupply from India to China over "The Hump".
Beginning of the end in the Pacific, 1944
Saipan and Philippine Sea
thumb|The Japanese aircraft carrier Zuikaku and two destroyers under attack in the Battle of the Philippine Sea.
On 15 June 1944, 535 ships began landing 128,000 U.S. Army and Marine personnel on the island of Saipan. The Allied objective was the creation of airfields within B-29 range of Tokyo. The ability to plan and execute such a complex operation in the space of 90 days was indicative of Allied logistical superiority.
It was imperative for Japanese commanders to hold Saipan. The only way to do this was to destroy the U.S. Fifth Fleet, which had 15 fleet carriers and 956 planes, 7 battleships, 28 submarines, and 69 destroyers, as well as several light and heavy cruisers. Vice Admiral Jisaburō Ozawa attacked with nine-tenths of Japan's fighting fleet, which included nine carriers with 473 planes, 5 battleships, several cruisers, and 28 destroyers. Ozawa's pilots were outnumbered 2:1 and their aircraft were becoming or were already obsolete. The Japanese had considerable antiaircraft defenses but lacked proximity fuzes or good radar. With the odds against him, Ozawa devised an appropriate strategy. His planes had greater range because they were not weighed down with protective armor; they could attack at about 480 km (300 mi) and could search a radius of 900 km (560 mi), while U.S. Navy Hellcat fighters could attack and search only over considerably shorter distances. Ozawa planned to use this advantage by keeping his fleet beyond American range. The Japanese planes would hit the U.S. carriers, land at Guam to refuel, then hit the enemy again when returning to their carriers. Ozawa also counted on about 500 land-based planes at Guam and other islands.
Admiral Raymond A. Spruance was in overall command of Fifth Fleet. The Japanese plan would have failed if the much larger U.S. fleet had closed on Ozawa and attacked aggressively; Ozawa correctly inferred Spruance would not attack. U.S. Admiral Marc Mitscher, in tactical command of Task Force 58, with its 15 carriers, was aggressive but Spruance vetoed Mitscher's plan to hunt down Ozawa because Spruance's orders made protecting the landings on Saipan his first priority.
thumb|Marines fire captured mountain gun during the attack on Garapan, Saipan, 21 June 1944.
The forces converged in the largest sea battle of World War II up to that point. Over the previous month American destroyers had destroyed 17 of 25 submarines out of Ozawa's screening force.Blair, Clay, Jr. Silent Victory (New York: Bantam, 1976).Morison, S. E. U.S. Navy in World War Two. Repeated U.S. raids destroyed the Japanese land-based planes. Ozawa's main attack lacked coordination, with the Japanese planes arriving at their targets in a staggered sequence. Following a directive from Nimitz, the U.S. carriers all had combat information centers, which interpreted the flow of radar data and radioed interception orders to the Hellcats. The result was later dubbed the Great Marianas Turkey Shoot. The few attackers to reach the U.S. fleet encountered massive AA fire with proximity fuzes. Only one American warship was slightly damaged.
On the second day, U.S. reconnaissance planes located Ozawa's fleet, and submarines sank two Japanese carriers. Mitscher launched 230 torpedo planes and dive bombers. He then discovered that the enemy fleet was actually farther off, beyond his aircraft's range for a round-trip flight. Mitscher decided this chance to destroy the Japanese fleet was worth the risk of aircraft losses due to running out of fuel on the return flight. Overall, the U.S. lost 130 planes and 76 aircrew; however, Japan lost 450 planes, three carriers, and 445 aircrew. The Imperial Japanese Navy's carrier force was effectively destroyed.
Leyte Gulf, 1944
The Battle of Leyte Gulf was the largest naval battle of World War II and arguably the largest naval battle in history. It was a series of four distinct engagements fought off the Philippine island of Leyte from 23 to 26 October 1944. Leyte Gulf featured the largest battleships ever built, was the last time in history that battleships engaged each other, and was also notable as the first battle in which kamikaze aircraft were used. The earlier victory in the Philippine Sea had established Allied air and sea superiority in the western Pacific. Nimitz favored blockading the Philippines and landing on Formosa. This would give the Allies control of the sea routes to Japan from southern Asia, cutting off substantial Japanese garrisons. MacArthur favored an invasion of the Philippines, which also lay across the supply lines to Japan. Roosevelt adjudicated in favor of the Philippines. Meanwhile, Japanese Combined Fleet Chief Toyoda Soemu prepared four plans to cover all Allied offensive scenarios. On 12 October Nimitz launched a carrier raid against Formosa to make sure that planes based there could not intervene in the landings on Leyte. Toyoda put Plan Sho-2 into effect, launching a series of air attacks against the U.S. carriers; however, the Japanese lost 600 planes in three days, leaving them without air cover.
right|thumbnail|The four engagements in the battle of Leyte Gulf
Sho-1 called for V. Adm. Jisaburō Ozawa's force to use an apparently vulnerable carrier force to lure the U.S. 3rd Fleet away from Leyte and remove air cover from the Allied landing forces, which would then be attacked from the west by three Japanese forces: V. Adm. Takeo Kurita's force would enter Leyte Gulf and attack the landing forces; R. Adm. Shōji Nishimura's force and V. Adm. Kiyohide Shima's force would act as mobile strike forces. The plan was likely to result in the destruction of one or more of the Japanese forces, but Toyoda justified it by saying that there would be no sense in saving the fleet and losing the Philippines.
Kurita's "Center Force" consisted of five battleships, 12 cruisers and 13 destroyers. It included the two largest battleships ever built: and . As they passed Palawan Island after midnight on 23 October the force was spotted, and U.S. submarines sank two cruisers. On 24 October, as Kurita's force entered the Sibuyan Sea, and launched 260 planes, which scored hits on several ships. A second wave of planes scored many direct hits on Musashi. A third wave, from and hit Musashi with 11 bombs and eight torpedoes. Kurita retreated but in the evening turned around to head for San Bernardino Strait. Musashi sank at about 19:30.
Meanwhile, V. Adm. Onishi Takijiro had directed his First Air Fleet of 80 land-based planes against the U.S. carriers whose planes were attacking airfields on Luzon. The carrier Princeton was hit by an armor-piercing bomb and suffered a major explosion which killed 108 of her crew (out of 1,569) and 233 aboard the cruiser Birmingham, which was fire-fighting alongside. Princeton sank, and Birmingham was forced to retire.
Nishimura's force consisted of two battleships, one cruiser and four destroyers. Because they were observing radio silence, Nishimura was unable to synchronize with Shima and Kurita. Nishimura and Shima had failed to even coordinate their plans before the attacks – they were long-time rivals and neither wished to have anything to do with the other. When Nishimura entered the narrow Surigao Strait at about 02:00, Shima was 22 miles (40 km) behind him, and Kurita was still in the Sibuyan Sea, several hours from the beaches at Leyte. As they passed Panaon Island, Nishimura's force ran into a trap set for them by the U.S.-Australian 7th Fleet Support Force. R. Adm. Jesse Oldendorf had six battleships, four heavy cruisers, four light cruisers, 29 destroyers and 39 PT boats. To pass the strait and reach the landings, Nishimura had to run this gauntlet. At about 03:00 the Japanese battleship Fusō and three destroyers were hit by torpedoes, and Fusō broke in two. At 03:50 the U.S. battleships opened fire. Radar fire control meant they could hit targets from a much greater distance than the Japanese. The battleship Yamashiro, a cruiser and a destroyer were crippled by 16-inch (406 mm) shells; Yamashiro sank at 04:19. Only one of Nishimura's force of seven ships survived the engagement. At 04:25 Shima's force of two cruisers and eight destroyers reached the battle. Seeing the two halves of Fusō and believing them to be the wrecks of two battleships, Shima ordered a retreat, ending the last battleship-versus-battleship action in history.
Ozawa's "Northern Force" had four aircraft carriers, two obsolete battleships partly converted to carriers, three cruisers and nine destroyers. The carriers had only 108 planes. The force was not spotted by the Allies until 16:40 on 24 October. At 20:00 Toyoda ordered all remaining Japanese forces to attack. Halsey saw an opportunity to destroy the remnants of the Japanese carrier force. The U.S. Third Fleet was formidable – nine large carriers, eight light carriers, six battleships, 17 cruisers, 63 destroyers and 1,000 planes – and completely outgunned Ozawa's force. Halsey's ships set out in pursuit of Ozawa just after midnight. U.S. commanders ignored reports that Kurita had turned back towards San Bernardino Strait. They had taken the bait set by Ozawa. On the morning of 25 October Ozawa launched 75 planes. Most were shot down by U.S. fighter patrols. By 08:00 U.S. fighters had destroyed the screen of Japanese fighters and were hitting ships. By evening, they had sunk the carriers , , and , and a destroyer. The fourth carrier, , and a cruiser were disabled and later sank.
thumb|Japanese aircraft carriers come under attack by dive bombers early in the battle off Cape Engaño.
Kurita passed through San Bernardino Strait at 03:00 on 25 October and headed along the coast of Samar. The only forces standing in his path were three escort carrier groups (Taffy 1, 2 and 3) of the Seventh Fleet, commanded by Admiral Thomas Kinkaid. Each group had six escort carriers, with a total of more than 500 planes, and seven or eight destroyers or destroyer escorts (DE). Kinkaid still believed that a battleship force under Vice Admiral Willis Lee was guarding the strait to the north, so the Japanese had the element of surprise when they attacked Taffy 3 at 06:45. Kurita mistook the Taffy carriers for large fleet carriers and thought he had the whole Third Fleet in his sights. Since escort carriers stood little chance against a battleship, Adm. Clifton Sprague directed the carriers of Taffy 3 to turn and flee eastward, hoping that bad visibility would reduce the accuracy of Japanese gunfire, and used his destroyers to divert the Japanese battleships. The destroyers made harassing torpedo attacks against the Japanese. For ten minutes Yamato was caught up in evasive action. Two U.S. destroyers and a DE were sunk, but they had bought enough time for the Taffy groups to launch planes. Taffy 3 turned and fled south, with shells scoring hits on some of its carriers and sinking one of them. The superior speed of the Japanese force allowed it to draw closer and fire on the other two Taffy groups. However, at 09:20 Kurita suddenly turned and retreated north. Signals had disabused him of the notion that he was attacking the Third Fleet, and the longer Kurita continued to engage, the greater the risk of major air strikes. Destroyer attacks had broken the Japanese formations, shattering tactical control. Three of Kurita's heavy cruisers had been sunk and another was too damaged to continue the fight. The Japanese retreated through the San Bernardino Strait, under continuous air attack. The Battle of Leyte Gulf was over, and a large part of the Japanese surface fleet had been destroyed.
The battle secured the beachheads of the U.S. Sixth Army on Leyte against attack from the sea, broke the back of Japanese naval power and opened the way for an advance to the Ryukyu Islands in 1945. The only significant Japanese naval operation afterwards was the disastrous Operation Ten-Go in April 1945. Kurita's force had begun the battle with five battleships; when he returned to Japan, only Yamato was combat-worthy. Nishimura's sunken Yamashiro was the last battleship in history to engage another in combat.
Philippines, 1944–45
thumb|General Douglas MacArthur wading ashore at Leyte
On 20 October 1944 the U.S. Sixth Army, supported by naval and air bombardment, landed on the favorable eastern shore of Leyte, north of Mindanao. The U.S. Sixth Army continued its advance from the east, as the Japanese rushed reinforcements to the Ormoc Bay area on the western side of the island. While the Sixth Army was reinforced successfully, the U.S. Fifth Air Force was able to devastate the Japanese attempts to resupply. In torrential rains and over difficult terrain, the advance continued across Leyte and the neighboring island of Samar to the north. On 7 December U.S. Army units landed at Ormoc Bay and, after a major land and air battle, cut off the Japanese ability to reinforce and supply Leyte. Although fierce fighting continued on Leyte for months, the U.S. Army was in control.
On 15 December 1944 landings against minimal resistance were made on the southern beaches of the island of Mindoro, a key location for the planned Lingayen Gulf operations, in support of major landings scheduled on Luzon. On 9 January 1945, on the south shore of Lingayen Gulf on the western coast of Luzon, General Krueger's Sixth Army landed its first units. Almost 175,000 men followed across the twenty-mile (32 km) beachhead within a few days. With heavy air support, Army units pushed inland, taking Clark Field, northwest of Manila, in the last week of January.
thumb|U.S. troops approaching Japanese positions near Baguio, Luzon, 23 March 1945
Two more major landings followed, one to cut off the Bataan Peninsula, and another, that included a parachute drop, south of Manila. Pincers closed on the city and, on 3 February 1945, elements of the 1st Cavalry Division pushed into the northern outskirts of Manila and the 8th Cavalry passed through the northern suburbs and into the city itself.
As the advance on Manila continued from the north and the south, the Bataan Peninsula was rapidly secured. On 16 February paratroopers and amphibious units assaulted the island fortress of Corregidor, and resistance ended there on 27 February.
In all, ten U.S. divisions and five independent regiments battled on Luzon, making it the largest campaign of the Pacific war, involving more troops than the United States had used in North Africa, Italy, or southern France. Of the 250,000 Japanese troops defending Luzon, 80 percent died. The last Japanese soldier in the Philippines to surrender was Hiroo Onoda on 9 March 1974.Powers, D. (2011): Japan: No Surrender in World War Two BBC History (17 February 2011).
Palawan Island, between Borneo and Mindoro, the fifth-largest and westernmost Philippine island, was invaded on 28 February with landings by the Eighth Army at Puerto Princesa. The Japanese put up little direct defense of Palawan, but mopping up pockets of Japanese resistance lasted until late April, as the Japanese used their common tactic of withdrawing into the mountain jungles, dispersed as small units. Throughout the Philippines, U.S. forces were aided by Filipino guerrillas in finding and dispatching the holdouts.
The U.S. Eighth Army then moved on to its first landing on Mindanao (17 April), the last of the major Philippine Islands to be taken. Mindanao was followed by invasion and occupation of Panay, Cebu, Negros and several islands in the Sulu Archipelago. These islands provided bases for the U.S. Fifth and Thirteenth Air Forces to attack targets throughout the Philippines and the South China Sea.
Final stages
thumb|Iwo Jima Location Map
Iwo Jima, February 1945
The battle of Iwo Jima ("Operation Detachment") in February 1945 was one of the bloodiest battles fought by the Americans in the Pacific War. Iwo Jima is an 8 sq mile (21 km2) island situated halfway between Tokyo and the Mariana Islands. Holland Smith, the commander of the invasion force, aimed to capture the island, and utilize its three airfields as bases to carry out air attacks against the Home Islands. Lt. General Tadamichi Kuribayashi, the commander of the island's defense, knew that he could not win the battle, but he hoped to make the Americans suffer far more than they could endure.
From early 1944 until the days leading up to the invasion, Kuribayashi transformed the island into a massive network of bunkers, hidden guns, and 11 mi (18 km) of underground tunnels. The heavy American naval and air bombardment did little but drive the Japanese further underground, making their positions impervious to enemy fire. Their pillboxes and bunkers were all connected so that if one was knocked out, it could be reoccupied again. The network of bunkers and pillboxes greatly favored the defender.
Starting in mid-June 1944, Iwo Jima came under sustained aerial bombardment and naval artillery fire. However, Kuribayashi's hidden guns and defenses survived the constant bombardment virtually unscathed. On 19 February 1945, some 30,000 men of the 3rd, 4th, and 5th Marine Divisions landed on the southeast coast of Iwo Jima, just under Mount Suribachi, where most of the island's defenses were concentrated. For some time, they did not come under fire. This was part of Kuribayashi's plan to hold fire until the landing beaches were full. As soon as the Marines pushed inland to a line of enemy bunkers, they came under devastating machine gun and artillery fire which cut down many of the men. By the end of the day, the Marines reached the west coast of the island, but their losses were appalling: almost 2,000 men killed or wounded.
On 23 February, the 28th Marine Regiment reached the summit of Suribachi, prompting the now famous Raising the Flag on Iwo Jima picture. Navy Secretary James Forrestal, upon seeing the flag, remarked "there will be a Marine Corps for the next 500 years". The flag raising is often cited as the most reproduced photograph of all time and became the archetypal representation not only of that battle, but of the entire Pacific War. For the rest of February, the Americans pushed north, and by 1 March, had taken two-thirds of the island. But it was not until 26 March that the island was finally secured. The Japanese fought to the last man, killing 6,800 Marines and wounding nearly 20,000 more. The Japanese losses totaled well over 20,000 men killed, and only 1,083 prisoners were taken. Historians debate whether it was strategically worth the casualties sustained.Robert S. Burrell, "Breaking the Cycle of Iwo Jima Mythology: A Strategic Study of Operation Detachment," Journal of Military History Volume 68, Number 4, October 2004, pp. 1143–1186 and rebuttal in Project MUSE
Allied offensives in Burma, 1944–45
right|thumb|British Royal Marines land at Ramree
In late 1944 and early 1945, the Allied South East Asia Command launched offensives into Burma, intending to recover most of the country, including Rangoon, the capital, before the onset of the monsoon in May.
The Indian XV Corps advanced along the coast in Arakan province, at last capturing Akyab Island after failures in the two previous years. They then landed troops behind the retreating Japanese, inflicting heavy casualties, and captured Ramree Island and Cheduba Island off the coast, establishing airfields on them which were used to support the offensive into Central Burma.
The Chinese Expeditionary Force captured Mong-Yu and Lashio, while the Chinese and American Northern Combat Area Command resumed its advance in northern Burma. In late January 1945, these two forces linked up with each other at Hsipaw. The Ledo Road was completed, linking India and China, but too late in the war to have any significant effect.
The Japanese Burma Area Army attempted to forestall the main Allied attack on the central part of the front by withdrawing their troops behind the Irrawaddy River. Lieutenant General Heitarō Kimura, the new Japanese commander in Burma, hoped that the Allies' lines of communications would be overstretched trying to cross this obstacle. However, the advancing British Fourteenth Army under Lieutenant General William Slim switched its axis of advance to outflank the main Japanese armies.
During February, Fourteenth Army secured bridgeheads across the Irrawaddy on a broad front. On 1 March, units of IV Corps captured the supply centre of Meiktila, throwing the Japanese into disarray. While the Japanese attempted to recapture Meiktila, XXXIII Corps captured Mandalay. The Japanese armies were heavily defeated, and with the capture of Mandalay, the Burmese population and the Burma National Army (which the Japanese had raised) turned against the Japanese.
During April, Fourteenth Army advanced south towards Rangoon, the capital and principal port of Burma, but was delayed by Japanese rearguards north of Rangoon at the end of the month. Slim feared that the Japanese would defend Rangoon house-to-house during the monsoon, placing his army in a disastrous supply situation, and in March he had asked that a plan to capture Rangoon by an amphibious force, Operation Dracula, which had been abandoned earlier, be reinstated. Dracula was launched on 1 May, but Rangoon was found to have been abandoned. The troops which occupied Rangoon linked up with Fourteenth Army five days later, securing the Allies' lines of communication.
The Japanese forces which had been bypassed by the Allied advances attempted to break out across the Sittaung River during June and July to rejoin the Burma Area Army which had regrouped in Tenasserim in southern Burma. They suffered 14,000 casualties, half their strength. Overall, the Japanese lost some 150,000 men in Burma. Only 1,700 prisoners were taken.
The Allies were preparing to make amphibious landings in Malaya when word of the Japanese surrender arrived.
Liberation of Borneo
thumb|U.S. LVTs land Australian soldiers at Balikpapan on 7 July 1945
The Borneo Campaign of 1945 was the last major campaign in the South West Pacific Area. In a series of amphibious assaults between 1 May and 21 July, the Australian I Corps, under General Leslie Morshead, attacked Japanese forces occupying the island. Allied naval and air forces, centered on the U.S. 7th Fleet under Admiral Thomas Kinkaid, the Australian First Tactical Air Force and the U.S. Thirteenth Air Force also played important roles in the campaign.
The campaign opened with a landing on the small island of Tarakan on 1 May. This was followed on 1 June by simultaneous assaults in the north west, on the island of Labuan and the coast of Brunei. A week later the Australians attacked Japanese positions in North Borneo. The attention of the Allies then switched back to the central east coast, with the last major amphibious assault of World War II, at Balikpapan on 1 July.
Although the campaign was criticized in Australia at the time, and in subsequent years, as pointless or a "waste" of the lives of soldiers, it did achieve a number of objectives, such as increasing the isolation of significant Japanese forces occupying the main part of the Dutch East Indies, capturing major oil supplies and freeing Allied prisoners of war, who were being held in deteriorating conditions.. Pages 184–186. At one of the very worst sites, around Sandakan in Borneo, only six of some 2,500 British and Australian prisoners survived.
China, 1945
By April 1945, China had already been at war with Japan for more than seven years. Both nations were exhausted by years of battles, bombings and blockades. After its victories in Operation Ichi-Go, Japan was losing the war in Burma and facing constant attacks from Chinese Nationalist forces and Communist guerrillas in the countryside. The Japanese army began preparations for the Battle of West Hunan in March 1945, mobilizing the 34th, 47th, 64th, 68th and 116th Divisions, as well as the 86th Independent Brigade, for a total of 80,000 men to seize Chinese airfields and secure railroads in West Hunan by early April.Wilson, Dick. When Tigers Fight. New York, NY: The Viking Press, 1982. pp. 248 In response, the Chinese National Military Council dispatched the 4th Front Army and the 10th and 27th Army Groups, with He Yingqin as commander-in-chief. At the same time, it airlifted the entire Chinese New 6th Corps, an American-equipped corps of veterans of the Burma Expeditionary Force, from Kunming to Zhijiang. Chinese forces totaled 110,000 men in 20 divisions. They were supported by about 400 aircraft from the Chinese and American air forces."National Revolutionary Army Order of Battle for the Battle of West Hunan". China Whampoa Academy Net. 11 September 2007 <http://www.hoplite.cn/Templates/hpjh0106.htm>. Chinese forces achieved a decisive victory and launched a large counterattack in this campaign. Concurrently, the Chinese managed to repel a Japanese offensive in Henan and Hubei. Afterwards, Chinese forces retook Hunan and Hubei provinces in South China and launched a counter-offensive to retake Guangxi, the last major Japanese stronghold in South China. In August 1945, Chinese forces successfully retook Guangxi.
Okinawa
thumb|A U.S. Navy ship burns after being hit by two kamikazes. At Okinawa the kamikazes caused 4,900 American deaths.
The largest and bloodiest American battle came at Okinawa, as the U.S. sought airbases for 3,000 B-29 bombers and 240 squadrons of B-17 bombers for the intense bombardment of Japan's home islands in preparation for a full-scale invasion in late 1945. The Japanese, with 115,000 troops augmented by thousands of civilians on the heavily populated island, did not resist on the beaches—their strategy was to maximize the number of soldier and Marine casualties, and naval losses from kamikaze attacks. After an intense bombardment the Americans landed on 1 April 1945 and declared victory on 21 June.Joseph H. Alexander, The final campaign: Marines in the victory on Okinawa (1996) short official history online The supporting naval forces were the targets for 4,000 sorties, many by kamikaze suicide planes. U.S. losses totaled 38 ships of all types sunk and 368 damaged, with 4,900 sailors killed. The Americans suffered 75,000 casualties on the ground; 94% of the Japanese soldiers died along with many civilians.Hiromichi Yahara, The Battle For Okinawa (1997), Japanese perspective excerpt and text search
The British Pacific Fleet operated as a separate unit from the American task forces in the Okinawa operation. Its objective was to strike airfields on the chain of islands between Formosa and Okinawa, to prevent the Japanese reinforcing the defences of Okinawa from that direction.
Landings in the Japanese home islands
Hard-fought battles on the Japanese home islands of Iwo Jima, Okinawa, and others resulted in horrific casualties on both sides but finally produced a Japanese defeat. Of the 117,000 Japanese troops defending Okinawa, 94 percent died."Creating military power: the sources of military effectiveness". Risa Brooks, Elizabeth A. Stanley (2007). Stanford University Press. p.41. ISBN 0-8047-5399-7 Faced with the loss of most of their experienced pilots, the Japanese increased their use of kamikaze tactics in an attempt to create unacceptably high casualties for the Allies. The U.S. Navy proposed to force a Japanese surrender through a total naval blockade and air raids.Skates, James. Invasion of Japan.
thumb|upright|left|The mushroom cloud from the nuclear explosion over Nagasaki rising 60,000 feet (18 km) into the air on the morning of 9 August 1945.
Towards the end of the war as the role of strategic bombing became more important, a new command for the U.S. Strategic Air Forces in the Pacific was created to oversee all U.S. strategic bombing in the hemisphere, under United States Army Air Forces General Curtis LeMay. Japanese industrial production plunged as nearly half of the built-up areas of 67 cities were destroyed by B-29 firebombing raids. On 9–10 March 1945 alone, about 100,000 people were killed in a conflagration caused by an incendiary attack on Tokyo. LeMay also oversaw Operation Starvation, in which the inland waterways of Japan were extensively mined by air, which disrupted the small amount of remaining Japanese coastal sea traffic. On 26 July 1945, the President of the United States Harry S. Truman, the President of the Nationalist Government of China Chiang Kai-shek and the Prime Minister of Great Britain Winston Churchill issued the Potsdam Declaration, which outlined the terms of surrender for the Empire of Japan as agreed upon at the Potsdam Conference. This ultimatum stated that, if Japan did not surrender, it would face "prompt and utter destruction."
The atomic bomb
On 6 August 1945, the U.S. dropped an atomic bomb on the Japanese city of Hiroshima in the first nuclear attack in history. In a press release issued after the atomic bombing of Hiroshima, Truman warned Japan to surrender or "...expect a rain of ruin from the air, the like of which has never been seen on this earth." Three days later, on 9 August, the U.S. dropped another atomic bomb, on Nagasaki; it remains the last nuclear attack in history. Between 140,000 and 240,000 people died as a direct result of these two bombings.Professor Duncan Anderson, 2005,"Nuclear Power: The End of the War Against Japan" (World War Two, BBC History website) Access date: 11 September 2007. The necessity of the atomic bombings has long been debated, with detractors claiming that a naval blockade and aerial bombing campaign had already made invasion, and hence the atomic bomb, unnecessary.See, for example, Alperowitz, G., The Decision to Use the Atomic Bomb (1995; New York, Knopf; ISBN 0-679-44331-2) for this argument. However, other scholars have argued that the bombings shocked the Japanese government into surrender, with Emperor Hirohito finally indicating his wish to stop the war. Another argument in favor of the atomic bombs is that they helped avoid Operation Downfall, or a prolonged blockade and bombing campaign, any of which would have exacted much higher casualties among Japanese civilians. Historian Richard B. Frank wrote that a Soviet invasion of Japan was never likely because the Soviets had insufficient naval capability to mount an amphibious invasion of Hokkaidō.
Soviet invasion of Manchuria
thumb|right|Pacific Fleet marines of the Soviet Navy hoisting the Soviet naval ensign in Port Arthur, on 1 October 1945.
On 3 February 1945 the Soviet Union agreed with Roosevelt to enter the Pacific conflict. It promised to act 90 days after the war ended in Europe and did so exactly on schedule on 9 August by invading Manchuria. A battle-hardened, one million-strong Soviet force, transferred from Europe, attacked Japanese forces in Manchuria and landed a heavy blow against the Japanese Kantōgun (Kwantung Army).Raymond L. Garthoff. The Soviet Manchurian Campaign, August 1945. Military Affairs, Vol. 33, No. 2 (Oct. 1969), pp. 312–336
The Manchurian Strategic Offensive Operation began on 9 August 1945 with the Soviet invasion of the Japanese puppet state of Manchukuo. It was the last campaign of the Second World War and the largest operation of the 1945 Soviet–Japanese War, which resumed hostilities between the Soviet Union and the Empire of Japan after almost six years of peace. Soviet gains on the continent were Manchukuo, Mengjiang (Inner Mongolia) and northern Korea. The USSR's early entry into the war was a significant factor in the Japanese decision to surrender, as it became apparent the Soviets were no longer willing to act as an intermediary for a negotiated settlement on favorable terms.
Surrender
thumb|Douglas MacArthur signs the formal Japanese Instrument of Surrender aboard USS Missouri, 2 September 1945.
The effects of the "Twin Shocks"—the Soviet entry and the atomic bombing—were profound. On 10 August the "sacred decision" was made by the Japanese Cabinet to accept the Potsdam terms on one condition: the "prerogative of His Majesty as a Sovereign Ruler". At noon on 15 August, after the American government's intentionally ambiguous reply, stating that the "authority" of the emperor "shall be subject to the Supreme Commander of the Allied Powers", the Emperor broadcast to the nation and to the world at large the rescript of surrender,Sadao Asada. "The Shock of the Atomic Bomb and Japan's Decision to Surrender: A Reconsideration". The Pacific Historical Review, Vol. 67, No. 4 (Nov. 1998), pp. 477–512. ending the Second World War.
In Japan, 14 August is considered to be the day that the Pacific War ended. However, as Imperial Japan actually surrendered on 15 August, this day became known in the English-speaking countries as "V-J Day" (Victory in Japan). The formal Japanese Instrument of Surrender was signed on 2 September 1945, on the battleship USS Missouri, in Tokyo Bay. The surrender was accepted by General Douglas MacArthur as Supreme Commander for the Allied Powers, with representatives of several Allied nations, from a Japanese delegation led by Mamoru Shigemitsu and Yoshijirō Umezu.
Following this period, MacArthur went to Tokyo to oversee the postwar development of the country. This period in Japanese history is known as the occupation.
War crimes
thumb|upright|Australian POW moments before his execution
On 7 December 1941, 2,403 non-combatants (2,335 neutral military personnel and 68 civilians) were killed and 1,247 wounded during the Japanese surprise attack on Pearl Harbor. Because the attack happened without a declaration of war and without explicit warning, it was judged by the Tokyo Trials to be a war crime.
During the Pacific War, Japanese soldiers killed millions of non-combatants, including prisoners of war, from surrounding nations. At least 20 million Chinese died during the Sino-Japanese War (1937–1945).
Unit 731, where experiments were performed on thousands of Chinese civilians and Allied prisoners of war, was one example of wartime atrocities committed against a civilian population during World War II. In its military campaigns, the Japanese Army used biological and chemical weapons on the Chinese, killing around 400,000 civilians. The Rape of Nanking is another example of an atrocity committed by Japanese soldiers against a civilian population.
thumb|Chinese corpses in a ditch after being killed by the Japanese Army, Hsuchow
According to the findings of the Tokyo Tribunal, the death rate of Western prisoners was 27.1%, seven times that of POWs under the Germans and Italians."Japanese prisoners of war". Philip Towle, Margaret Kosuge, Yōichi Kibata (2000). Continuum International Publishing Group. pp.47–48. ISBN 1-85285-192-9. The most notorious use of forced labour was in the construction of the Burma–Thailand Death Railway. Around 1,536 U.S. civilians were killed or otherwise died of abuse and mistreatment in Japanese internment camps in the Far East; in comparison, only 883 U.S. civilians died in German internment camps in Europe.U.S. Prisoners of War and Civilian American Citizens Captured and Interned by Japan in World War II: The Issue of Compensation by Japan.
A widely publicised example of institutionalised sexual slavery is the "comfort women", a euphemism for the 200,000 women, mostly from Korea and China, who served in the Japanese army's camps during World War II. Some 35 Dutch comfort women brought a successful case before the Batavia Military Tribunal in 1948. In 1993, Chief Cabinet Secretary Yōhei Kōno said that women were coerced into brothels run by Japan's wartime military. Other Japanese leaders have apologized, including former Prime Minister Junichiro Koizumi in 2001. In 2007, then-Prime Minister Shinzō Abe asserted: "The fact is, there is no evidence to prove there was coercion.""No government coercion in war's sex slavery: Abe", The Japan Times, 2 March 2007.
The Three Alls Policy (Sankō Sakusen) was a Japanese scorched earth policy adopted in China, the three alls being: "Kill All, Burn All and Loot All". Initiated in 1940 by Ryūkichi Tanaka, the Sankō Sakusen was implemented in full scale in 1942 in north China by Yasuji Okamura. According to historian Mitsuyoshi Himeta, the scorched earth campaign was responsible for the deaths of "more than 2.7 million" Chinese civilians.Himeta, Mitsuyoshi (姫田光義) (日本軍による『三光政策・三光作戦をめぐって』) (Concerning the Three Alls Strategy/Three Alls Policy By the Japanese Forces), Iwanami Bukkuretto, 1996, Bix, Hirohito and the Making of Modern Japan, 2000.
The collection of skulls and other remains of Japanese soldiers by Allied soldiers was shown by several studies to have been widespread enough to be commented upon by Allied military authorities and the U.S. wartime press.Simon Harrison, Dark Trophies: Hunting and the Enemy Body in Modern War, Berghahn Books, 2012
Following the defeat of Japan, the International Military Tribunal for the Far East took place in Ichigaya, Tokyo from 29 April 1946 to 12 November 1948 to try those accused of the most serious war crimes. Meanwhile, military tribunals were also held by the returning powers throughout Asia and the Pacific for lesser figures.Dennis et al. 2008, pp. 576–577.McGibbon 2000, pp. 580–581.
See also
European theatre of World War II
Pacific War campaigns
Japanese holdout
Operation Downfall
Timeline WW II – Pacific Theater
Yasukuni Shrine
Notes
References
Citations
Sources
Eric M. Bergerud, Fire in the Sky: The Air War in the South Pacific (2000)
Blair, Jr., Clay. Silent Victory. Philadelphia: Lippincott, 1975 (submarine war).
Buell, Thomas. Master of Seapower: A Biography of Admiral Ernest J. King Naval Institute Press, 1976.
Buell, Thomas. The Quiet Warrior: A Biography of Admiral Raymond Spruance. 1974.
Channel 4 (UK). Hell in the Pacific (television documentary series). 2001.
Costello, John. The Pacific War. 1982, overview
Craven, Wesley, and James Cate, eds. The Army Air Forces in World War II. Vol. 1, Plans and Early Operations, January 1939 to August 1942. University of Chicago Press, 1958. Official history; Vol. 4, The Pacific: Guadalcanal to Saipan, August 1942 to July 1944. 1950; Vol. 5, The Pacific: Matterhorn to Nagasaki. 1953.
Dunnigan, James F., and Albert A. Nofi. The Pacific War Encyclopedia. Facts on File, 1998. 2 vols. 772p.
Gailey, Harry A. The War in the Pacific: From Pearl Harbor to Tokyo Bay (1995) online
Gordon, David M. "The China-Japan War, 1931–1945" Journal of Military History (January 2006) v 70#1, pp 137–82. Historiographical overview of major books
Seki, Eiji. (2006). Mrs. Ferguson's Tea-Set, Japan and the Second World War: The Global Consequences Following Germany's Sinking of the SS Automedon in 1940. London: Global Oriental. ISBN 978-1-905246-28-1 (cloth) (reprinted by University of Hawaii Press), Honolulu, 2007. previously announced as Sinking of the SS Automedon and the Role of the Japanese Navy: A New Interpretation.
Saburo Hayashi and Alvin Coox. Kogun: The Japanese Army in the Pacific War. Quantico, Virginia: Marine Corps Assoc., 1959.
Hsiung, James C. and Steven I. Levine, eds. China's Bitter Victory: The War with Japan, 1937–1945 M. E. Sharpe, 1992
Hsi-sheng, Ch'i. Nationalist China at War: Military Defeats and Political Collapse, 1937–1945 University of Michigan Press, 1982
Hsu Long-hsuen and Chang Ming-kai, History of The Sino-Japanese War (1937–1945), 2nd Ed., 1971. Translated by Wen Ha-hsiung, Chung Wu Publishing; 33, 140th Lane, Tung-hwa Street, Taipei, Taiwan Republic of China.
Inoguchi, Rikihei, Tadashi Nakajima, and Robert Pineau. The Divine Wind. Ballantine, 1958. Kamikaze.
James, D. Clayton. The Years of MacArthur. Vol. 2. Houghton Mifflin, 1972.
Kirby, S. Woodburn The War Against Japan. 4 vols. London: H.M.S.O., 1957–1965. Official Royal Navy history.
Leary, William M. We Shall Return: MacArthur's Commanders and the Defeat of Japan. University Press of Kentucky, 1988.
Maurice Matloff and Edwin M. Snell Strategic Planning for Coalition Warfare 1941–1942, United States Army Center of Military History, Washington, D. C., 1990
Samuel Eliot Morison, History of United States Naval Operations in World War II. Vol. 3, The Rising Sun in the Pacific. Boston: Little, Brown, 1961; Vol. 4, Coral Sea, Midway and Submarine Actions. 1949; Vol. 5, The Struggle for Guadalcanal. 1949; Vol. 6, Breaking the Bismarcks Barrier. 1950; Vol. 7, Aleutians, Gilberts, and Marshalls. 1951; Vol. 8, New Guinea and the Marianas. 1962; Vol. 12, Leyte. 1958; vol. 13, The Liberation of the Philippines: Luzon, Mindanao, the Visayas. 1959; Vol. 14, Victory in the Pacific. 1961.
Masatake Okumiya, and Mitsuo Fuchida. Midway: The Battle That Doomed Japan. Naval Institute Press, 1955.
E. B. Potter, and Chester W. Nimitz. Triumph in the Pacific. Prentice Hall, 1963. Naval battles
E. B. Potter, Bull Halsey Naval Institute Press, 1985.
E. B. Potter, Nimitz. Annapolis, Maryland: Naval Institute Press, 1976.
John D. Potter, Yamamoto 1967.
Gordon W. Prange, Donald Goldstein, and Katherine Dillon. At Dawn We Slept. Penguin, 1982. Pearl Harbor
——, et al. Miracle at Midway. Penguin, 1982.
——, et al. Pearl Harbor: The Verdict of History.
Henry Shaw, and Douglas Kane. History of U.S. Marine Corps Operations in World War II. Vol. 2, Isolation of Rabaul. Washington, D.C.: Headquarters, U.S. Marine Corps, 1963
Henry Shaw, Bernard Nalty, and Edwin Turnbladh. History of U.S. Marine Corps Operations in World War II. Vol. 3, Central Pacific Drive. Washington, D.C.: Office of the Chief of Military History, 1953.
E.B. Sledge, With the Old Breed: At Peleliu and Okinawa. Presidio, 1981. Memoir.
J. Douglas Smith, and Richard Jensen. World War II on the Web: A Guide to the Very Best Sites. (2002)
Ronald Spector, Eagle Against the Sun: The American War with Japan Free Press, 1985.
John Toland, The Rising Sun. 2 vols. Random House, 1970. Japan's war.
Ian W. Toll. Pacific Crucible: War at Sea in the Pacific, 1941–1942 (2011)
H. P. Willmott. Empires in the Balance. Annapolis: United States Naval Institute Press, 1982.
H. P. Willmott. The Barrier and the Javelin. Annapolis: United States Naval Institute Press, 1983.
Gerhard L. Weinberg, A World at Arms: A Global History of World War II, Cambridge University Press. ISBN 0-521-44317-2. (2005).
William Y'Blood, Red Sun Setting: The Battle of the Philippine Sea. Annapolis, Maryland: Naval Institute Press, 1980.
Further reading
External links
"The Pacific War Online Encyclopedia" compiled by Kent G. Budge, 4000 short articles
Film Footage of the Pacific War
Animated History of the Pacific War
The Pacific War Series – at The War Times Journal
Morinoske: Japanese Pilot testimonials – and more
Imperial Japanese Navy Page
Category:World War II theatres involving the United Kingdom | 342,641 | 2017-01 |
San Diego | San Diego (Spanish for "Saint Didacus") is a major city in California, United States. It is in San Diego County, on the coast of the Pacific Ocean in Southern California, approximately south of Los Angeles and immediately adjacent to the border with Mexico.
With an estimated population of 1,394,928 as of July 1, 2015, San Diego is the eighth-largest city in the United States and second-largest in California. It is part of the San Diego–Tijuana conurbation, the second-largest transborder agglomeration between the US and a bordering country after Detroit–Windsor, with a population of 4,922,723 people. San Diego has been called "the birthplace of California". It is known for its mild year-round climate, natural deep-water harbor, extensive beaches, long association with the United States Navy, and recent emergence as a healthcare and biotechnology development center.
Historically home to the Kumeyaay people, San Diego was the first site visited by Europeans on what is now the West Coast of the United States. Upon landing in San Diego Bay in 1542, Juan Rodríguez Cabrillo claimed the area for Spain, forming the basis for the settlement of Alta California 200 years later. The Presidio and Mission San Diego de Alcalá, founded in 1769, formed the first European settlement in what is now California. In 1821, San Diego became part of the newly independent Mexico, which reformed as the First Mexican Republic two years later. In 1850, California became part of the United States following the Mexican–American War and the admission of California to the union.
The city is the seat of San Diego County and is the economic center of the region as well as the San Diego–Tijuana metropolitan area. San Diego's main economic engines are military and defense-related activities, tourism, international trade, and manufacturing. The presence of the University of California, San Diego (UCSD), with the affiliated UCSD Medical Center, has helped make the area a center of research in biotechnology.
History
thumb|left|upright|alt=Full length portrait of a man in his thirties wearing a long robe, woman and child visible behind him and dog to his left|Kumeyaay people lived in San Diego before Europeans settled there.
thumb|left|alt=Man in his twenties or thirties standing transfixed in front of a cross his height, five onlookers|Namesake of the city, Didacus of Alcalá: Saint Didacus in Ecstasy Before the Cross by Murillo (Musée des Augustins)
thumb|left|Mission San Diego de Alcalá
Pre-colonial period
The original inhabitants of the region are now known as the San Dieguito and La Jolla people.Gallegos, Dennis R. (editor). 1987. San Dieguito-La Jolla: Chronology and Controversy. San Diego County Archaeological Society, Research Paper No. 1. The area of San Diego has long been inhabited by the Kumeyaay people.
Spanish period
The first European to visit the region was Portuguese-born explorer Juan Rodríguez Cabrillo sailing under the flag of Castile. Sailing his flagship San Salvador from Navidad, New Spain, Cabrillo claimed the bay for the Spanish Empire in 1542, and named the site 'San Miguel'. In November 1602, Sebastián Vizcaíno was sent to map the California coast. Arriving on his flagship San Diego, Vizcaíno surveyed the harbor and what are now Mission Bay and Point Loma and named the area for the Catholic Saint Didacus, a Spaniard more commonly known as San Diego de Alcalá. On November 12, 1602, the first Christian religious service of record in Alta California was conducted by Friar Antonio de la Ascensión, a member of Vizcaíno's expedition, to celebrate the feast day of San Diego.
In May 1769, Gaspar de Portolà established the Fort Presidio of San Diego on a hill near the San Diego River. It was the first settlement by Europeans in what is now the state of California. In July of the same year, Mission San Diego de Alcalá was founded by Franciscan friars under Junípero Serra. By 1797, the mission boasted the largest native population in Alta California, with over 1,400 neophytes living in and around the mission proper. Mission San Diego was the southern anchor in California of the historic mission trail El Camino Real. Both the Presidio and the Mission are National Historic Landmarks.
Mexican period
In 1821, Mexico won its independence from Spain, and San Diego became part of the Mexican territory of Alta California. In 1822, Mexico began attempting to extend its authority over the coastal territory of Alta California. The fort on Presidio Hill was gradually abandoned, while the town of San Diego grew up on the level land below Presidio Hill. The Mission was secularized by the Mexican government in 1833, and most of the Mission lands were sold to wealthy Californio settlers. The 432 residents of the town petitioned the governor to form a pueblo, and Juan María Osuna was elected the first alcalde ("municipal magistrate"), defeating Pío Pico in the vote. (See, List of pre-statehood mayors of San Diego.) However, San Diego had been losing population throughout the 1830s, and in 1838 the town lost its pueblo status because its size dropped to an estimated 100 to 150 residents. Beyond the town, Mexican land grants expanded the number of California ranchos, which modestly added to the local economy.
In 1846, the United States went to war against Mexico and sent a naval and land expedition to conquer Alta California. At first they had an easy time of it, capturing the major ports including San Diego, but the Californios in southern Alta California struck back. Following the successful revolt in Los Angeles, the American garrison at San Diego was driven out without firing a shot in early October 1846. Mexican partisans held San Diego for three weeks until October 24, 1846, when the Americans recaptured it. For the next several months the Americans were blockaded inside the pueblo. Skirmishes occurred daily and snipers shot into the town every night. The Californios drove cattle away from the pueblo hoping to starve the Americans and their Californio supporters out. On December 1 the American garrison learned that the dragoons of General Stephen W. Kearny were at Warner's Ranch. Commodore Robert F. Stockton sent a mounted force of fifty under Captain Archibald Gillespie to march north to meet him. Their joint command of 150 men, returning to San Diego, encountered about 93 Californios under Andrés Pico. In the ensuing Battle of San Pasqual, fought in the San Pasqual Valley which is now part of the city of San Diego, the Americans suffered their worst losses in the campaign. Subsequently, a column led by Lieutenant Gray arrived from San Diego, rescuing Kearny's battered and blockaded command.
Stockton and Kearny went on to recover Los Angeles and force the capitulation of Alta California with the "Treaty of Cahuenga" on January 13, 1847. As a result of the Mexican–American War of 1846–48, the territory of Alta California, including San Diego, was ceded to the United States by Mexico, under the terms of the Treaty of Guadalupe Hidalgo in 1848. The Mexican negotiators of that treaty tried to retain San Diego as part of Mexico, but the Americans insisted that San Diego was "for every commercial purpose of nearly equal importance to us with that of San Francisco," and the Mexican–American border was eventually established to be one league south of the southernmost point of San Diego Bay, so as to include the entire bay within the United States.
American period
thumb|left|upright|alt=Oval, black and white shoulder-height portrait of a man in his forties or fifties, slightly balding wearing a suit|Namesake of Horton Plaza, Alonzo Horton developed "New Town" which became Downtown San Diego.
The state of California was admitted to the United States in 1850. That same year San Diego was designated the seat of the newly established San Diego County and was incorporated as a city. Joshua H. Bean, the last alcalde of San Diego, was elected the first mayor. Two years later the city was bankrupt; the California legislature revoked the city's charter and placed it under control of a board of trustees, where it remained until 1889. A city charter was re-established in 1889 and today's city charter was adopted in 1931.
The original town of San Diego was located at the foot of Presidio Hill, in the area which is now Old Town San Diego State Historic Park. The location was not ideal, being several miles away from navigable water. In 1850, William Heath Davis promoted a new development by the Bay shore called "New San Diego", several miles south of the original settlement; however, for several decades the new development consisted of only a few houses, a pier and an Army depot. In the late 1860s, Alonzo Horton promoted a move to the bayside area, which he called "New Town" and which became Downtown San Diego. Horton promoted the area heavily, and people and businesses began to relocate to New Town because of its location on San Diego Bay convenient to shipping. New Town soon eclipsed the original settlement, known to this day as Old Town, and became the economic and governmental heart of the city. Still, San Diego remained a relative backwater town until the arrival of a railroad connection in 1878.
thumb|right|upright|alt=Hand drawn illustration of Balboa Park|Balboa Park on the cover of a guidebook for the World Exposition of 1915
In the early part of the 20th century, San Diego hosted two World's Fairs: the Panama-California Exposition in 1915 and the California Pacific International Exposition in 1935. Both expositions were held in Balboa Park, and many of the Spanish/Baroque-style buildings that were built for those expositions remain to this day as central features of the park. The buildings were intended to be temporary structures, but most remained in continuous use until they progressively fell into disrepair. Most were eventually rebuilt, using castings of the original façades to retain the architectural style. The menagerie of exotic animals featured at the 1915 exposition provided the basis for the San Diego Zoo. During the 1950s there was a citywide festival called Fiesta del Pacifico highlighting the area's Spanish and Mexican past. In the 2010s there was a proposal for a large-scale celebration of the 100th anniversary of Balboa Park, but the plans were abandoned when the organization tasked with putting on the celebration went out of business.
The southern portion of the Point Loma peninsula was set aside for military purposes as early as 1852. Over the next several decades the Army set up a series of coastal artillery batteries and named the area Fort Rosecrans. Significant U.S. Navy presence began in 1901 with the establishment of the Navy Coaling Station in Point Loma, and expanded greatly during the 1920s.University of San Diego: Military Bases in San Diego By 1930, the city was host to Naval Base San Diego, Naval Training Center San Diego, San Diego Naval Hospital, Camp Matthews, and Camp Kearny (now Marine Corps Air Station Miramar). The city was also an early center for aviation: as early as World War I, San Diego was proclaiming itself "The Air Capital of the West". The city was home to important airplane developers and manufacturers like Ryan Airlines (later Ryan Aeronautical), founded in 1925, and Consolidated Aircraft (later Convair), founded in 1923. Charles A. Lindbergh's plane The Spirit of St. Louis was built in San Diego in 1927 by Ryan Airlines.
During World War II, San Diego became a major hub of military and defense activity, due to the presence of so many military installations and defense manufacturers. The city's population grew rapidly during and after World War II, more than doubling between 1930 (147,995) and 1950 (333,865).Moffatt, Riley. Population History of Western U.S. Cities & Towns, 1850–1990. Lanham: Scarecrow, 1996, 54. During the final months of the war, the Japanese had a plan to target multiple U.S. cities for biological attack, starting with San Diego. The plan was called "Operation Cherry Blossoms at Night" and called for kamikaze planes filled with fleas infected with plague (Yersinia pestis) to crash into civilian population centers in the city, hoping to spread plague in the city and effectively kill tens of thousands of civilians. The plan was scheduled to launch on September 22, 1945, but was not carried out because Japan surrendered five weeks earlier.Naomi Baumslag, Murderous Medicine: Nazi Doctors, Human Experimentation, and Typhus, 2005, p.207
After World War II, the military continued to play a major role in the local economy, but post-Cold War cutbacks took a heavy toll on the local defense and aerospace industries. The resulting downturn led San Diego leaders to seek to diversify the city's economy by focusing on research and science, as well as tourism.
From the start of the 20th century through the 1970s, the American tuna fishing fleet and tuna canning industry were based in San Diego, "the tuna capital of the world". San Diego's first tuna cannery was founded in 1911, and by the mid-1930s the canneries employed more than 1,000 people. A large fishing fleet supported the canneries, mostly staffed by immigrant fishermen from Japan, and later from the Portuguese Azores and Italy whose influence is still felt in neighborhoods like Little Italy and Point Loma. Due to rising costs and foreign competition, the last of the canneries closed in the early 1980s.
Downtown San Diego was in decline during the 1960s and 1970s, but has experienced urban renewal since the early 1980s, including the opening of Horton Plaza, the revival of the Gaslamp Quarter, and the construction of the San Diego Convention Center; Petco Park opened in 2004.
Geography
thumb|left|Urban aerial of San Diego and Tijuana, Mexico
According to SDSU professor emeritus Monte Marshall, San Diego Bay is "the surface expression of a north-south-trending, nested graben". The Rose Canyon and Point Loma fault zones are part of the San Andreas Fault system. About east of the bay are the Laguna Mountains in the Peninsular Ranges, which are part of the backbone of the American continents.
The city lies on approximately 200 deep canyons and hills separating its mesas, creating small pockets of natural open space scattered throughout the city and giving it a hilly geography. Traditionally, San Diegans have built their homes and businesses on the mesas, while leaving the urban canyons relatively wild. Thus, the canyons give parts of the city a segmented feel, creating gaps between otherwise proximate neighborhoods and contributing to a low-density, car-centered environment. The San Diego River runs through the middle of San Diego from east to west, creating a river valley which serves to divide the city into northern and southern segments. The river used to flow into San Diego Bay and its fresh water was the focus of the earliest Spanish explorers. Several reservoirs and Mission Trails Regional Park also lie between and separate developed areas of the city.
Notable peaks within the city limits include Cowles Mountain, the highest point in the city at ; Black Mountain at ; and Mount Soledad at . The Cuyamaca Mountains and Laguna Mountains rise to the east of the city, and beyond the mountains are desert areas. The Cleveland National Forest is a half-hour drive from downtown San Diego. Numerous farms are found in the valleys northeast and southeast of the city.
In its 2013 ParkScore ranking, The Trust for Public Land reported that San Diego had the 9th-best park system among the 50 most populous U.S. cities."Report: San Diego has 9th best parks among survey of 50 U.S. cities" June 6, 2013. ABC 10 News. Retrieved on July 18, 2013. ParkScore ranks city park systems by a formula that analyzes acreage, access, and service and investment.
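The Trust for Public Land's exact weighting is not given here, so the following is only an illustrative sketch of how a composite park index of this kind can be computed; the component names, equal weights, and sample figures are hypothetical assumptions, not ParkScore's actual methodology.

# Hypothetical composite park index; weights and sample data are illustrative
# and are NOT the Trust for Public Land's real ParkScore formula.
WEIGHTS = {"acreage": 1 / 3, "access": 1 / 3, "investment": 1 / 3}  # assumed equal weights

def composite_score(components):
    # Each component is assumed to be pre-normalized to a 0-100 scale.
    return sum(WEIGHTS[name] * value for name, value in components.items())

sample_cities = {  # made-up example values for two unnamed cities
    "City A": {"acreage": 70.0, "access": 85.0, "investment": 60.0},
    "City B": {"acreage": 65.0, "access": 90.0, "investment": 75.0},
}
ranking = sorted(sample_cities, key=lambda c: composite_score(sample_cities[c]), reverse=True)
print(ranking)  # cities ordered from highest to lowest composite score

Under such a scheme, a ninth-place finish like San Diego's simply means eight of the 50 evaluated park systems earned a higher composite score.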
Communities and neighborhoods
thumb|right|Normal Heights, a neighborhood
The city of San Diego recognizes 52 individual areas as Community Planning Areas. Within a given planning area there may be several distinct neighborhoods. Altogether the city contains more than 100 identified neighborhoods.
Downtown San Diego is located on San Diego Bay. Balboa Park encompasses several mesas and canyons to the northeast, surrounded by older, dense urban communities including Hillcrest and North Park. To the east and southeast lie City Heights, the College Area, and Southeast San Diego. To the north lies Mission Valley and Interstate 8. The communities north of the valley and freeway, and south of Marine Corps Air Station Miramar, include Clairemont, Kearny Mesa, Tierrasanta, and Navajo. Stretching north from Miramar are the northern suburbs of Mira Mesa, Scripps Ranch, Rancho Peñasquitos, and Rancho Bernardo. The far northeast portion of the city encompasses Lake Hodges and the San Pasqual Valley, which holds an agricultural preserve. Carmel Valley and Del Mar Heights occupy the northwest corner of the city. To their south are Torrey Pines State Reserve and the business center of the Golden Triangle. Further south are the beach and coastal communities of La Jolla, Pacific Beach, Mission Beach, and Ocean Beach. Point Loma occupies the peninsula across San Diego Bay from downtown. The communities of South San Diego, such as San Ysidro and Otay Mesa, are located next to the Mexico–United States border, and are physically separated from the rest of the city by the cities of National City and Chula Vista. A narrow strip of land at the bottom of San Diego Bay connects these southern neighborhoods with the rest of the city.
For the most part, San Diego neighborhood boundaries tend to be understood by its residents based on geographical boundaries like canyons and street patterns. The city recognized the importance of its neighborhoods when it organized its 2008 General Plan around the concept of a "City of Villages".
Cityscape
San Diego was originally centered on the Old Town district, but by the late 1860s the focus had shifted to the Bayfront, in the belief that this new location would increase trade. As the "New Town" – present-day Downtown – waterfront location quickly developed, it eclipsed Old Town as the center of San Diego.
The development of skyscrapers over in San Diego is attributed to the construction of the El Cortez Hotel in 1927, the tallest building in the city from 1927 to 1963. As time went on multiple buildings claimed the title of San Diego's tallest skyscraper, including the Union Bank of California Building and Symphony Towers. Currently the tallest building in San Diego is One America Plaza, standing tall, which was completed in 1991. The downtown skyline contains no super-talls, as a regulation put in place by the Federal Aviation Administration in the 1970s set a limit on the height of buildings within a radius of the San Diego International Airport. The skyline has famously been described as resembling the tools in a toolbox.
Climate
thumb|left|A surfer at Black's Beach
San Diego has one of the top ten best climates according to the Farmers' Almanac and one of the two best summer climates in America as scored by The Weather Channel. Under the Köppen–Geiger climate classification system, the San Diego area has been variously categorized as having either a semi-arid climate (BSh in the original classification and BSkn in modified Köppen classification)Atlas of the Biodiversity of California. California Department of Fish and Game. p.15. or a Mediterranean climateFrancisco Pugnaire and Fernando Valladares eds. Functional Plant Ecology. 2d ed. 2007. p.287. (Csa and Csb).Michael Allaby, Martyn Bramwell, Jamie Stokes, eds. Weather and Climate: An Illustrated Guide to Science. 2006. p.182. San Diego's climate is characterized by warm, dry summers and mild winters, with most of the annual precipitation falling between December and March. The city has a mild climate year-round,Michalski, Greg et al. First Measurements and Modeling of ∆17O in atmospheric nitrate. Geophysical Research Letters, Vol. 30, No. 16. p.3. 2003. with an average of 201 days above and low rainfall ( annually).
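The competing semi-arid and Mediterranean labels reflect how close the city sits to the Köppen aridity threshold. The sketch below applies one widely used formulation of that threshold test; the station figures (roughly 17.8 °C mean annual temperature and about 260 mm of mostly winter rainfall) are approximate assumptions for illustration, not official climate normals.

# Köppen dryness check: mat_c = mean annual temperature (deg C),
# map_mm = mean annual precipitation (mm), winter_frac = share of rain falling in winter.
def koppen_group(mat_c, map_mm, winter_frac):
    if winter_frac >= 0.70:
        p_threshold = 2 * mat_c        # winter-dominated rainfall
    elif winter_frac <= 0.30:
        p_threshold = 2 * mat_c + 28   # summer-dominated rainfall
    else:
        p_threshold = 2 * mat_c + 14   # no strong seasonality
    if map_mm < 10 * p_threshold:      # arid "B" group
        band = "BW (desert)" if map_mm < 5 * p_threshold else "BS (steppe)"
        return band + (", hot (h)" if mat_c >= 18 else ", cold (k)")
    return "temperate C group (Csa/Csb on a dry-summer coast)"

print(koppen_group(17.8, 260, 0.9))  # assumed San Diego-like inputs -> 'BS (steppe), cold (k)'

With these assumed inputs the result falls just inside the semi-arid BS band and sits almost exactly on the 18 °C hot/cold divide, so a slightly wetter or warmer station record tips the classification toward BSh or Csa/Csb, which is why the sources cited above disagree.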
The climate in San Diego, like most of Southern California, often varies significantly over short geographical distances resulting in microclimates. In San Diego, this is mostly because of the city's topography (the Bay, and the numerous hills, mountains, and canyons). Frequently, particularly during the "May gray/June gloom" period, a thick "marine layer" cloud cover keeps the air cool and damp within a few miles of the coast, but yields to bright cloudless sunshine approximately inland. Sometimes the June gloom lasts into July, causing cloudy skies over most of San Diego for the entire day. Even in the absence of June gloom, inland areas experience much more significant temperature variations than coastal areas, where the ocean serves as a moderating influence. Thus, for example, downtown San Diego averages January lows of and August highs of . The city of El Cajon, just inland from downtown San Diego, averages January lows of and August highs of .
In a sign of global warming, scientists at Scripps Institution of Oceanography say the average surface temperature of the water at Scripps Pier, in the California Current, has increased by almost 3 degrees since 1950.
thumb|right|alt=Several people, some wearing full length suits and carrying surf boards, on a beachfront with houses visible above them|Surfers in Pacific Beach
Annual rainfall along the coast averages and the median is . The months of December through March supply most of the rain, with February the only month averaging or more. The months of May through September tend to be almost completely dry. Although there are few wet days per month during the rainy period, rainfall can be heavy when it does fall. Rainfall is usually greater in the higher elevations of San Diego; some of the higher areas can receive per year. Variability from year to year can be dramatic: in the wettest years of 1883/1884 and 1940/1941 more than fell, whilst in the driest years as little as . The wettest month on record is December 1921 with .
Snow in the city is so rare that it has been observed only five times in the century-and-a-half that records have been kept. In 1949 and 1967, snow stayed on the ground for a few hours in higher locations like Point Loma and La Jolla. The other three occasions, in 1882, 1946, and 1987, involved flurries but no accumulation.
Ecology
thumb|left|alt=Torrey Pines State Park Valley|Coastal canyon in Torrey Pines State Reserve
Like most of southern California, the majority of San Diego's current area was originally occupied by chaparral, a plant community made up mostly of drought-resistant shrubs. The endangered Torrey pine has the bulk of its population in San Diego in a stretch of protected chaparral along the coast. The steep and varied topography and proximity to the ocean create a number of different habitats within the city limits, including tidal marsh and canyons. The chaparral and coastal sage scrub habitats in low elevations along the coast are prone to wildfire, and the rates of fire have increased in the 20th century, due primarily to fires starting near the borders of urban and wild areas.
San Diego's broad city limits encompass a number of large nature preserves, including Torrey Pines State Reserve, Los Peñasquitos Canyon Preserve, and Mission Trails Regional Park. Torrey Pines State Reserve and a coastal strip continuing to the north constitute the only location where the rare species of Torrey pine, P. torreyana torreyana, is found. Due to the steep topography that prevents or discourages building, along with some efforts for preservation, there are also a large number of canyons within the city limits that serve as nature preserves, including Switzer Canyon, Tecolote Canyon Natural Park, and Marian Bear Memorial Park in the San Clemente Canyon, as well as a number of small parks and preserves.
thumb|right|alt=San Diego against Witch Creek Fire smoke|San Diego viewed against the Witch Creek Fire smoke
San Diego County has one of the highest counts of animal and plant species that appear on the endangered list of counties in the United States. Because of its diversity of habitat and its position on the Pacific Flyway, San Diego County has recorded 492 different bird species, more than any other region in the country. San Diego always scores highly in the number of bird species observed in the annual Christmas Bird Count, sponsored by the Audubon Society, and it is known as one of the "birdiest" areas in the United States.
San Diego and its backcountry suffer from periodic wildfires. In October 2003, San Diego was the site of the Cedar Fire, called the largest wildfire in California over the past century. The fire burned , killed 15 people, and destroyed more than 2,200 homes. In addition to damage caused by the fire, smoke resulted in a significant increase in emergency room visits due to asthma, respiratory problems, eye irritation, and smoke inhalation; the poor air quality caused San Diego County schools to close for a week. Wildfires four years later destroyed some areas, particularly within the communities of Rancho Bernardo, Rancho Santa Fe, and Ramona.
Demographics
Racial composition | 2010 | 1990 | 1970 | 1940
White | 58.9% | 67.1% | 88.9% | 96.9%
—Non-Hispanic | 45.1% | 58.7% | 78.9% (from 15% sample) | n/a
Black or African American | 6.7% | 9.4% | 7.6% | 2.0%
Hispanic or Latino (of any race) | 28.8% | 20.7% | 10.7% | n/a
Asian | 15.9% | 11.8% | 2.2% | 1.0%
The city had a population of 1,307,402 according to the 2010 census, distributed over a land area of . The urban area of San Diego extends beyond the administrative city limits and had a total population of 2,956,746, making it the third-largest urban area in the state, after the Los Angeles and San Francisco urban areas. Those two areas, along with Riverside–San Bernardino, are the only metropolitan areas in California larger than the San Diego metropolitan area, which had a total population of 3,095,313 at the 2010 census.
As of the Census of 2010, there were 1,307,402 people living in the city of San Diego. That represents a population increase of just under 7% from the 1,223,400 people, 450,691 households, and 271,315 families reported in 2000. The estimated city population in 2009 was 1,306,300. The population density was . The racial makeup of San Diego was 45.1% White, 6.7% African American, 0.6% Native American, 15.9% Asian (5.9% Filipino, 2.7% Chinese, 2.5% Vietnamese, 1.3% Indian, 1.0% Korean, 0.7% Japanese, 0.4% Laotian, 0.3% Cambodian, 0.1% Thai), 0.5% Pacific Islander (0.2% Guamanian, 0.1% Samoan, 0.1% Native Hawaiian), 12.3% from other races, and 5.1% from two or more races. The ethnic makeup of the city was 28.8% Hispanic or Latino (of any race); 24.9% of the total population were Mexican American, and 0.6% were Puerto Rican.
San Diego has the third-largest homeless population in the United States, and the city's homeless population has the largest percentage of homeless veterans in the nation. The population of homeless veterans in San Diego was reduced to 1,150 people in 2016, from 2,100 in 2009.
thumb|left|A U.S. Navy vice admiral and an intelligence specialist celebrating Hispanic American Heritage Month in San Diego
As of January 1, 2008 estimates by the San Diego Association of Governments revealed that the household median income for San Diego rose to $66,715, up from $45,733, and that the city population rose to 1,336,865, up 9.3% from 2000. The population was 45.3% non-Hispanic whites, down from 78.9% in 1970, 27.7% Hispanics, 15.6% Asians/Pacific Islanders, 7.1% blacks, 0.4% American Indians, and 3.9% from other races. Median age of Hispanics was 27.5 years, compared to 35.1 years overall and 41.6 years among non-Hispanic whites; Hispanics were the largest group in all ages under 18, and non-Hispanic whites constituted 63.1% of population 55 and older.
In 2000 there were 451,126 households, out of which 30.2% had children under the age of 18 living with them, 44.6% were married couples living together, 11.4% had a female householder with no husband present, and 39.8% were non-families. Households made up of individuals accounted for 28.0%, and 7.4% had someone living alone who was 65 years of age or older. The average household size was 2.61 and the average family size was 3.30.
The U.S. Census Bureau reported that in 2000, 24.0% of San Diego residents were under 18, and 10.5% were 65 and over. The median age was 35.6; more than a quarter of residents were under age 20 and 11% were over age 65. Millennials (ages 18 through 34) constitute 27.1% of San Diego's population, the second-highest percentage in a major U.S. city. The San Diego County regional planning agency, SANDAG, provides tables and graphs breaking down the city population into 5-year age groups.
In 2000, the median income for a household in the city was $45,733, and the median income for a family was $53,060. Males had a median income of $36,984 versus $31,076 for females. The per capita income for the city was $23,609. According to Forbes, in 2005 San Diego was the fifth-wealthiest U.S. city, but about 10.6% of families and 14.6% of the population were below the poverty line, including 20.0% of those under age 18 and 7.6% of those age 65 or over. Nonetheless, San Diego was rated the fifth-best place to live in the United States in 2006 by Money magazine.
San Diego was named the ninth-most LGBT-friendly city in the U.S. in 2013. The city also has the seventh-highest percentage of gay residents in the U.S. Additionally in 2013, San Diego State University (SDSU), one of the city's prominent universities, was named one of the top LGBT-friendly campuses in the nation.
According to a 2014 study by the Pew Research Center, 68% of the population of the city identified themselves as Christians, with 22% professing attendance at a variety of churches that could be considered Protestant and 32% professing Roman Catholic beliefs, while 27% claimed no religious affiliation.Major U.S. metropolitan areas differ in their religious profiles, Pew Research Center The same study says that other religions (including Judaism, Buddhism, Islam, and Hinduism) collectively make up about 5% of the population.
Economy
The largest sectors of San Diego's economy are defense/military, tourism, international trade, and research/manufacturing, respectively. In 2014, San Diego was designated by a Forbes columnist as the best city in the country to launch a small business or startup company.
Defense and military
thumb|right|F/A-18 Hornet flying over San Diego and the USS John C. Stennis
The economy of San Diego is influenced by its deepwater port, which includes the only major submarine and shipbuilding yards on the West Coast. Several major national defense contractors were started and are headquartered in San Diego, including General Atomics, Cubic, and NASSCO.
San Diego hosts the largest naval fleet in the world: In 2008 it was home to 53 ships, over 120 tenant commands, and more than 35,000 sailors, soldiers, Department of Defense civilian employees and contractors. About 5 percent of all civilian jobs in the county are military-related, and 15,000 businesses in San Diego County rely on Department of Defense contracts.
Military bases in San Diego include US Navy facilities, Marine Corps bases, and Coast Guard stations.
The city is "home to the majority of the U.S. Pacific Fleet's surface combatants, all of the Navy's West Coast amphibious ships and a variety of Coast Guard and Military Sealift Command vessels".
Tourism
thumb|upright|Downtown San Diego
Tourism is a major industry owing to the city's climate, beaches, and tourist attractions such as Balboa Park, Belmont amusement park, San Diego Zoo, San Diego Zoo Safari Park, and SeaWorld San Diego. San Diego's Spanish and Mexican heritage is reflected in many historic sites across the city, such as Mission San Diego de Alcala and Old Town San Diego State Historic Park. Also, the local craft brewing industry attracts an increasing number of visitors for "beer tours" and the annual San Diego Beer Week in November; San Diego has been called "America's Craft Beer Capital."
San Diego County hosted more than 32 million visitors in 2012; collectively they spent an estimated $8 billion. The visitor industry provides employment for more than 160,000 people.
San Diego's cruise ship industry used to be the second-largest in California. Numerous cruise lines operate out of San Diego. However, cruise ship business has been in decline since 2008, when the Port hosted over 250 ship calls and more than 900,000 passengers. By 2011 the number of ship calls had fallen to 103 (estimated).
Local sight-seeing cruises are offered in San Diego Bay and Mission Bay, as well as whale-watching cruises to observe the migration of gray whales, peaking in mid-January. Sport fishing is another popular tourist attraction; San Diego is home to Southern California's biggest sport fishing fleet.
International trade
San Diego's commercial port and its location on the United States–Mexico border make international trade an important factor in the city's economy. The city is authorized by the United States government to operate as a Foreign Trade Zone.
The city shares a border with Mexico that includes two border crossings. San Diego hosts the busiest international border crossing in the world, in the San Ysidro neighborhood at the San Ysidro Port of Entry. A second, primarily commercial border crossing operates in the Otay Mesa area; it is the largest commercial crossing on the California-Baja California border and handles the third-highest volume of trucks and dollar value of trade among all United States-Mexico land crossings.
One of the Port of San Diego's two cargo facilities is located in Downtown San Diego at the Tenth Avenue Marine Terminal. This terminal has facilities for containers, bulk cargo, and refrigerated and frozen storage, so that it can handle the import and export of many commodities. In 2009 the Port of San Diego handled 1,137,054 short tons of total trade; foreign trade accounted for 956,637 short tons while domestic trade amounted to 180,417 short tons.
Historically tuna fishing and canning was one of San Diego's major industries, and although the American tuna fishing fleet is no longer based in San Diego, seafood companies Bumble Bee Foods and Chicken of the Sea are still headquartered there.
Companies
thumb|alt=Modern five-story office building|Qualcomm corporate headquarters
San Diego hosts several major producers of wireless cellular technology. Qualcomm was founded and is headquartered in San Diego, and is one of the largest private-sector employers in the city. Other wireless industry manufacturers headquartered here include Nokia, LG Electronics, Kyocera International, Cricket Communications and Novatel Wireless. The largest software company in San Diego is security software company Websense Inc. San Diego also has the U.S. headquarters for the Slovakian security company ESET. San Diego has been designated as an iHub Innovation Center for potential collaboration between the wireless and life sciences sectors.
The University of California, San Diego and other research institutions have helped to fuel the growth of biotechnology. In 2013, San Diego had the second-largest biotech cluster in the United States, below the Boston area and above the San Francisco Bay Area. There are more than 400 biotechnology companies in the area. In particular, the La Jolla and nearby Sorrento Valley areas are home to offices and research facilities for numerous biotechnology companies. Major biotechnology companies like Illumina and Neurocrine Biosciences are headquartered in San Diego, while many other biotech and pharmaceutical companies have offices or research facilities in San Diego. San Diego is also home to more than 140 contract research organizations (CROs) that provide contract services for pharmaceutical and biotechnology companies.Bigelow, Bruce V. "San Diego's Life Sciences CROs—The Map of Clinical Research Organizations", "Xconomy", San Diego, January 27, 2010.
Top employers
According to the City's 2015 Comprehensive Annual Financial Report,City of San Diego, California Comprehensive Annual Financial Report, for the Year ended June 30, 2014, page 321 the top employers in the city are:
Employer | Employees
United States Navy | 29,948
University of California, San Diego | 28,459
Sharp HealthCare | 16,896
San Diego County | 16,427
Qualcomm | 13,725
San Diego Unified School District | 13,446
City of San Diego | 10,968
Dexcom | 10,540
Kaiser Permanente | 7,549
Scripps Health | 6,111
Real estate
450px|thumb|Skyline view of the Village of La Jolla in San Diego
San Diego has high real estate prices. As of May 2015 the median price of a house was $520,000. However, since February 2016 the median home price has dropped to $455,000. The San Diego metropolitan area has had some of the worst housing affordability of all metropolitan areas in the United States.
Consequently, San Diego has experienced negative net migration since 2004. A significant number of people moved to adjacent Riverside County, commuting daily to jobs in San Diego, while others are leaving the region altogether and moving to more affordable regions.
San Diego home prices peaked in 2005, and then declined along with the national trend. As of December 2010, prices were down 36 percent from the peak. The median home price declined by more than $200,000 between 2005 and 2010.
Culture
upright|thumb|right|The Museum of Man
Many popular museums, such as the San Diego Museum of Art, the San Diego Natural History Museum, the San Diego Museum of Man, the Museum of Photographic Arts, and the San Diego Air & Space Museum are located in Balboa Park, which is also the location of the San Diego Zoo. The Museum of Contemporary Art San Diego (MCASD) is located in La Jolla and has a branch located at the Santa Fe Depot downtown. The downtown branch consists of two buildings on opposite streets. The Columbia district downtown is home to historic ship exhibits belonging to the San Diego Maritime Museum, headlined by the Star of India, as well as the unrelated San Diego Aircraft Carrier Museum featuring the USS Midway aircraft carrier.
The San Diego Symphony at Symphony Towers performs on a regular basis and is directed by Jahja Ling. The San Diego Opera at Civic Center Plaza, directed by Ian Campbell, was ranked by Opera America as one of the top 10 opera companies in the United States. Old Globe Theatre at Balboa Park produces about 15 plays and musicals annually. The La Jolla Playhouse at UCSD is directed by Christopher Ashley. Both the Old Globe Theatre and the La Jolla Playhouse have produced the world premieres of plays and musicals that have gone on to win Tony Awards or nominations on Broadway. The Joan B. Kroc Theatre at Kroc Center's Performing Arts Center is a 600-seat state-of-the-art theatre that hosts music, dance, and theatre performances. The San Diego Repertory Theatre at the Lyceum Theatres in Horton Plaza produces a variety of plays and musicals. Hundreds of movies and a dozen TV shows have been filmed in San Diego, a tradition going back as far as 1898.
Sports
Club | Sport | Since | League | Venue (capacity) | Attendance
San Diego Padres | Baseball | 1969 | Major League Baseball | Petco Park (41,200) | 27,103
San Diego Gulls | Ice hockey | 2015 | American Hockey League | Valley View Casino Center (13,000) | 8,541
San Diego Breakers | Rugby union | 2016 | PRO Rugby | Torero Stadium (6,000) | —
thumb|right|alt=Full stands, both teams on the field, cheerleaders and lots of people milling around|Qualcomm Stadium hosts a Chargers game against the St. Louis Rams.
San Diego is home to one major professional team—Major League Baseball's San Diego Padres, who play at Petco Park.
From 1961 to the 2016 season, the team hosted a National Football League franchise, the San Diego Chargers. In 2017, they moved to Los Angeles and became the Los Angeles Chargers.
In two separate stints, the National Basketball Association had a franchise in San Diego, the San Diego Rockets from 1967 to 1971 and the San Diego Clippers from 1978 to 1984. The franchises moved to Houston and Los Angeles respectively.
From 1972 to 1975, San Diego was home to an American Basketball Association team. First named the Conquistadors (aka "The Q's"), the team was renamed the San Diego Sails for the 1975–76 season, but it folded before completing that campaign.
San Diego hosts three NCAA universities. NCAA Division I San Diego State Aztecs men's and women's basketball games are played at Viejas Arena. Other prominent Aztec sports include college football, as well as soccer, basketball and volleyball. The San Diego State Aztecs (MWC) and the San Diego Toreros (WCC) are NCAA Division I teams. The UCSD Tritons are members of NCAA Division II.
San Diego has hosted several sports events. Three NFL Super Bowl championships have been held at Qualcomm Stadium. Two of college football's annual bowl games are also held at Qualcomm Stadium: the Holiday Bowl and the Poinsettia Bowl. Parts of the World Baseball Classic were played at Petco Park in 2006 and 2009.
thumb|left|Petco Park in 2006
Qualcomm Stadium also hosts international soccer games and supercross events. Soccer, American football, and track and field are also played in Balboa Stadium, the city's first stadium, constructed in 1914.
Rugby union is a developing sport in the city. The San Diego Breakers began play in the PRO Rugby competition at Torero Stadium in 2016. The USA Sevens, a major international rugby event, was held there from 2007 through 2009. San Diego is represented by Old Mission Beach Athletic Club RFC, the former home club of former USA Rugby captain Todd Clever. San Diego was also set to participate in the Western American National Rugby League, which began in 2011.
The San Diego Surf of the American Basketball Association is located in the city. The annual Farmers Insurance Open golf tournament (formerly the Buick Invitational) on the PGA Tour is played at Torrey Pines Golf Course, which was also the site of the 2008 U.S. Open Golf Championship. The San Diego Yacht Club hosted the America's Cup yacht races three times between 1988 and 1995. The amateur beach sport Over-the-line was invented in San Diego, and the annual world Over-the-line championships are held at Mission Bay.
Government
Local government
thumb|left|upright|Mayor Kevin Faulconer
The city is governed by a mayor and a nine-member city council. In 2006, its government changed from a council–manager government to a strong mayor government, as decided by a citywide vote in 2004. The mayor is in effect the chief executive officer of the city, while the council is the legislative body. The City of San Diego is responsible for police, public safety, streets, water and sewer service, planning and zoning, and similar services within its borders. San Diego is a sanctuary city; however, San Diego County participates in the Secure Communities program. The city had one employee for every 137 residents, with a payroll greater than $733 million.
thumb|right|alt=Wood paneling floor to ceiling with seats for 8 members and support staff|San Diego City Council chambers
The members of the city council are each elected from single-member districts within the city. The mayor and city attorney are elected directly by the voters of the entire city. The mayor, city attorney, and council members are elected to four-year terms, with a two-term limit. Elections are held on a non-partisan basis per California state law; nevertheless, most officeholders identify themselves as either Democrats or Republicans. In 2007, registered Democrats outnumbered Republicans by about 7 to 6 in the city, and Democrats currently hold a 5–4 majority in the city council. The current mayor, Kevin Faulconer, is a Republican.
San Diego is part of San Diego County and includes all or part of the 1st, 2nd, 3rd and 4th supervisorial districts of the San Diego County Board of Supervisors. Other county officers elected in part by city residents include the Sheriff, District Attorney, Assessor/Recorder/County Clerk, and Treasurer/Tax Collector.
Areas of the city immediately adjacent to San Diego Bay ("tidelands") are administered by the Port of San Diego, a quasi-governmental agency which owns all the property in the tidelands and is responsible for its land use planning, policing, and similar functions. San Diego is a member of the regional planning agency San Diego Association of Governments (SANDAG). Public schools within the city are managed and funded by independent school districts (see above).
State and federal representation
In the California State Senate, San Diego covers the 38th, 39th and 40th districts.
In the California State Assembly, San Diego covers the 77th, 78th, 79th, and 80th districts.
In the United States House of Representatives, San Diego covers California's 49th, 50th, 51st, 52nd, and 53rd congressional districts.
Major scandals
San Diego was the site of the 1912 San Diego free speech fight, in which the city restricted speech, vigilantes brutalized and tortured anarchists, and the San Diego Police Department killed an IWW member.
In 1916, rainmaker Charles Hatfield was blamed for $4 million in damages and accused of causing San Diego's worst flood, during which about 20 Japanese American farmers died.
Then-mayor Roger Hedgecock was forced to resign his post in 1985, after he was found guilty of one count of conspiracy and twelve counts of perjury, related to the alleged failure to report all campaign contributions. After a series of appeals, the twelve perjury counts were dismissed in 1990 based on claims of juror misconduct; the remaining conspiracy count was reduced to a misdemeanor and then dismissed.
A 2002 scheme to underfund pensions for city employees led to the San Diego pension scandal. This resulted in the resignation of newly re-elected Mayor Dick Murphy and the criminal indictment of six pension board members.Strumpf, Daniel (June 15, 2005) San Diego's Pension Scandal for Dummies, San Diego City Beat via Internet Archive. Retrieved April 3, 2011. Those charges were finally dismissed by a federal judge in 2010.
On November 28, 2005, U.S. Congressman Randy "Duke" Cunningham resigned after being convicted on federal bribery charges. He had represented California's 50th congressional district, which includes much of the northern portion of the city of San Diego. In 2006, Cunningham was sentenced to 100 months in prison.
In 2005 two city council members, Ralph Inzunza and Deputy Mayor Michael Zucchet – who briefly took over as acting mayor when Murphy resigned – were convicted of extortion, wire fraud, and conspiracy to commit wire fraud for taking campaign contributions from a strip club owner and his associates, allegedly in exchange for trying to repeal the city's "no touch" laws at strip clubs. Both subsequently resigned. Inzunza was sentenced to 21 months in prison. In 2009, a judge acquitted Zucchet on seven out of the nine counts against him, and granted his petition for a new trial on the other two charges; the remaining charges were eventually dropped.
In July 2013, three former supporters of mayor Bob Filner asked him to resign because of allegations of repeated sexual harassment.Filner apologizes, gets professional help, San Diego Union Tribune, July 11, 2013 Over the ensuing six weeks, 18 women came forward to publicly claim that Filner had sexually harassed them, and multiple individuals and groups called for him to resign. Filner agreed to resign effective August 30, 2013, subsequently pleaded guilty to one felony count of false imprisonment and two misdemeanor battery charges, and was sentenced to house arrest and probation.
Crime
thumb|right|San Diego Police Department car in the city center
San Diego was ranked as the 20th-safest city in America in 2013 by Business Insider.Safe Cities In America. Business Insider (July 25, 2013). Retrieved on September 6, 2013. According to Forbes magazine, San Diego was the ninth-safest city in the top 10 list of safest cities in the U.S. in 2010. Like most major cities, San Diego had a declining crime rate from 1990 to 2000. Crime in San Diego increased in the early 2000s. In 2004, San Diego had the sixth lowest crime rate of any U.S. city with over half a million residents. From 2002 to 2006, the crime rate overall dropped 0.8%, though not evenly by category. While violent crime decreased 12.4% during this period, property crime increased 1.1%. Total property crimes per 100,000 people were lower than the national average in 2008.
According to Uniform Crime Report statistics compiled by the Federal Bureau of Investigation (FBI) in 2010, there were 5,616 violent crimes and 30,753 property crimes. Of these, the violent crimes consisted of forcible rapes, 73 robberies and 170 aggravated assaults, while 6,387 burglaries, 17,977 larceny-thefts, 6,389 motor vehicle thefts and 155 acts of arson defined the property offenses. In 2013, San Diego had the lowest murder rate of the ten largest cities in the United States.
Education
Public schools in San Diego are operated by independent school districts. The majority of the public schools in the city are served by the San Diego Unified School District, the second-largest school district in California, which includes 11 K-8 schools, 107 elementary schools, 24 middle schools, 13 atypical and alternative schools, 28 high schools, and 45 charter schools.
thumb|right|San Diego State University's Hepner Hall
Several adjacent school districts which are headquartered outside the city limits serve some schools within the city; these include the Poway Unified School District, Del Mar Union School District, San Dieguito Union High School District and Sweetwater Union High School District. In addition, there are a number of private schools in the city.
Colleges and universities
According to education rankings released by the U.S. Census Bureau, 40.4 percent of San Diegans ages 25 and older hold bachelor's degrees. The census ranks the city as the ninth-most educated city in the United States based on these figures.
Public colleges and universities in the city include San Diego State University (SDSU), University of California, San Diego (UCSD), and the San Diego Community College District, which includes San Diego City College, San Diego Mesa College, and San Diego Miramar College.
Private colleges and universities in the city include University of San Diego (USD), Point Loma Nazarene University (PLNU), Alliant International University (AIU), National University, California International Business University (CIBU), San Diego Christian College, John Paul the Great Catholic University, California College San Diego, Coleman University, University of Redlands School of Business, Design Institute of San Diego (DISD), Fashion Institute of Design & Merchandising's San Diego campus, NewSchool of Architecture and Design, Pacific Oaks College San Diego Campus, Chapman University's San Diego Campus, The Art Institute of California – San Diego, Platt College, Southern States University (SSU), UEI College, and Woodbury University School of Architecture's satellite campus.
There is one medical school in the city, the UCSD School of Medicine. There are three ABA accredited law schools in the city, which include California Western School of Law, Thomas Jefferson School of Law, and University of San Diego School of Law. There is also one law school, Western Sierra Law School, not accredited by the ABA.
Libraries
thumb|University of California, San Diego's Geisel Library, named for Theodor Seuss Geisel ("Dr. Seuss")
The city-run San Diego Public Library system is headquartered downtown and has 36 branches throughout the city. The newest location is in Skyline Hills, which broke ground in 2015. The libraries have had reduced operating hours since 2003 due to the city's financial problems. In 2006 the city increased spending on libraries by $2.1 million. A new nine-story Central Library on Park Boulevard at J Street opened on September 30, 2013."New main library is a creation in concrete", San Diego Union-Tribune, November 16, 2011
In addition to the municipal public library system, there are nearly two dozen libraries open to the public run by other governmental agencies, and by schools, colleges, and universities. Noteworthy are the Malcolm A. Love Library at San Diego State University and the Geisel Library at the University of California, San Diego.
Media
Published within the city are the daily newspaper, U-T San Diego and its online portal of the same name, and the alternative newsweeklies, the San Diego CityBeat and San Diego Reader. Times of San Diego is a free online newspaper covering news in the metropolitan area. Voice of San Diego is a non-profit online news outlet covering government, politics, education, neighborhoods, and the arts. The San Diego Daily Transcript is a business-oriented daily newspaper.
thumb|left|upright|alt=Several buildings in front with signs for various stores, high skyscraper behind them on left with NBC logo|NBC San Diego (left) is outside Horton Plaza on Broadway downtown.
San Diego led U.S. local markets with 69.6 percent broadband penetration in 2004 according to Nielsen//NetRatings.
San Diego's first television station was KFMB, which began broadcasting on May 16, 1949. Since the Federal Communications Commission (FCC) licensed seven television stations in Los Angeles, two VHF channels were available for San Diego because of its relative proximity to the larger city. In 1952, however, the FCC began licensing UHF channels, making it possible for cities such as San Diego to acquire more stations. Stations based in Mexico (with ITU prefixes of XE and XH) also serve the San Diego market. Television stations today include XHTJB 3 (Once TV), XETV 6 (CW), KFMB 8 (CBS), KGTV 10 (ABC), XEWT 12 (Televisa Regional), KPBS 15 (PBS), KBNT-CD 17 (Univision), XHTIT-TDT 21 (Azteca 7), XHJK-TDT 27 (Azteca 13), XHAS 33 (Telemundo), K35DG-D 35 (UCSD-TV), KDTF-LD 51 (Telefutura), KNSD 39 (NBC), KZSD-LP 41 (Azteca America), KSEX-CD 42 (Infomercials), XHBJ-TDT 45 (Gala TV), XHDTV 49 (MNTV), KUSI 51 (Independent), XHUAA-TDT 57 (Canal de las Estrellas), and KSWB-TV 69 (Fox). San Diego has an 80.6 percent cable penetration rate.San Diego market in
Due to the ratio of U.S. and Mexican-licensed stations, San Diego is the largest media market in the United States that is legally unable to support a television station duopoly between two full-power stations under FCC regulations, which disallow duopolies in metropolitan areas with fewer than nine full-power television stations and require that eight unique station owners remain once a duopoly is formed (there are only seven full-power stations on the California side of the San Diego-Tijuana market). Though the E. W. Scripps Company owns KGTV and KZSD-LP, they are not considered a duopoly under the FCC's legal definition, as common ownership between full-power and low-power television stations in the same market is permitted regardless of the number of stations licensed to the area. As a whole, the Mexico side of the San Diego-Tijuana market has two duopolies and one triopoly (Entravision Communications owns both XHAS-TV and XHDTV-TV, Azteca owns XHJK-TV and XHTIT-TV, and Grupo Televisa owns XHUAA-TV and XHWT-TV along with being the license holder for XETV-TV, which is run by California-based subsidiary Bay City Television).
San Diego's television market is limited to San Diego County only. The Imperial Valley has its own market (which also extends into western Arizona), while neighboring Orange and Riverside counties are part of the Los Angeles market. (In the past, a missing network affiliate in the Imperial Valley was sometimes available on cable TV from San Diego.)
Radio stations in San Diego include the nationwide broadcaster Clear Channel Communications, as well as CBS Radio, Midwest Television, Lincoln Financial Media, Finest City Broadcasting, and many other smaller stations and networks. Stations include: KOGO AM 600, KFMB AM 760, KCEO AM 1000, KCBQ AM 1170, K-Praise, KLSD AM 1360 Air America, KFSD 1450 AM, KPBS-FM 89.5, Channel 933, Star 94.1, FM 94/9, FM News and Talk 95.7, Q96 96.1, KyXy 96.5, Free Radio San Diego (AKA Pirate Radio San Diego) 96.9FM FRSD, KSON 97.3/92.1, KXSN 98.1, Jack-FM 100.7, 101.5 KGB-FM, KLVJ 102.1, Rock 105.3, and another Pirate Radio station at 106.9FM, as well as a number of local Spanish-language radio stations.
Infrastructure
Utilities
Water is supplied to residents by the Water Department of the City of San Diego. The city receives most of its water from the Metropolitan Water District of Southern California.
Gas and electric utilities are provided by San Diego Gas & Electric, a division of Sempra Energy.
Street lights
In the mid-20th century the city had mercury vapor street lamps. In 1978 the city decided to replace them with more efficient sodium vapor lamps. This triggered an outcry from astronomers at Palomar Observatory, north of the city, who were concerned that the new lamps would increase light pollution and hinder astronomical observation. The city altered its lighting regulations to limit light pollution in the vicinity of Palomar.
In 2011, the city announced plans to upgrade 80% of its street lighting to new energy-efficient lights that use induction technology, a modified form of fluorescent lamp producing a broader spectrum than sodium-vapor lamps. The new system is predicted to save $2.2 million per year in energy and maintenance. The city stated the changes would "make our neighborhoods safer", although the new lights also increase light pollution.City of San Diego official website, "Street Division: Electrical Street Lights" Retrieved February 15, 2014
In 2014, San Diego announced plans to become the first U.S. city to install cyber-controlled street lighting, using an "intelligent" lighting system to control 3,000 LED street lights.
Transportation
thumb|left|I-5 looking south toward downtown San Diego
With the automobile being the primary means of transportation for over 80 percent of residents, San Diego is served by a network of freeways and highways. This includes Interstate 5, which runs south to Tijuana and north to Los Angeles; Interstate 8, which runs east to Imperial County and the Arizona Sun Corridor; Interstate 15, which runs northeast through the Inland Empire to Las Vegas and Salt Lake City; and Interstate 805, which splits from I-5 near the Mexican border and rejoins I-5 at Sorrento Valley.
Major state highways include SR 94, which connects downtown with I-805, I-15 and East County; SR 163, which connects downtown with the northeast part of the city, intersects I-805 and merges with I-15 at Miramar; SR 52, which connects La Jolla with East County through Santee and SR 125; SR 56, which connects I-5 with I-15 through Carmel Valley and Rancho Peñasquitos; SR 75, which spans San Diego Bay as the San Diego-Coronado Bridge, and also passes through South San Diego as Palm Avenue; and SR 905, which connects I-5 and I-805 to the Otay Mesa Port of Entry.
The stretch of SR 163 that passes through Balboa Park is San Diego's oldest freeway, and has been called one of America's most beautiful parkways.Marshall, David. San Diego's Balboa Park. Arcadia Publishing. 2007.
San Diego's roadway system provides an extensive network of cycle routes. Its dry and mild climate makes cycling a convenient year-round option; however, the city's hilly terrain and long average trip distances make cycling less practicable. Older and denser neighborhoods around the downtown tend to be oriented toward utility cycling, partly because of their grid street patterns, which are absent in newer developments farther from the urban core, where suburban-style arterial roads are much more common. As a result, a majority of cycling is recreational. In 2006, San Diego was rated the best city (with a population over 1 million) for cycling in the U.S.
thumb|right|260px|View of Coronado and San Diego from the air
San Diego is served by the San Diego Trolley light rail system, by the SDMTS bus system, and by Coaster and Amtrak Pacific Surfliner commuter rail; northern San Diego County is also served by the Sprinter light rail line. The Trolley primarily serves downtown and surrounding urban communities, Mission Valley, east county, and coastal south bay. A Mid-Coast extension of the Trolley is planned to run from Old Town to University City and the University of California, San Diego along the I-5 freeway, with operation expected by 2018. The Amtrak and Coaster trains currently run along the coastline and connect San Diego with Los Angeles, Orange County, Riverside, San Bernardino, and Ventura via Metrolink and the Pacific Surfliner. There are two Amtrak stations in San Diego, in Old Town and at the Santa Fe Depot downtown. San Diego transit information about public transportation and commuting is available on the Web and by dialing "511" from any phone in the area.
The city has two major commercial airports within or near its city limits. San Diego International Airport (SAN), also known as Lindbergh Field, is the busiest single-runway airport in the United States. It served over 17 million passengers in 2005, and is dealing with larger numbers every year. It is located on San Diego Bay, a short distance from downtown, and maintains scheduled flights to the rest of the United States (including Hawaii), as well as to Canada, Mexico, Japan, and the United Kingdom. It is operated by an independent agency, the San Diego Regional Airport Authority. Tijuana International Airport has a terminal within the city limits in the Otay Mesa district, connected to the rest of the airport in Tijuana, Mexico, via the Cross Border Xpress cross-border footbridge. It is the primary airport for flights to the rest of Mexico, and offers connections via Mexico City to the rest of Latin America. In addition, the city has two general-aviation airports, Montgomery Field (MYF) and Brown Field (SDM).
thumb|left|Cross Border Xpress bridge from the terminal in San Diego on the right to the main terminal of Tijuana Airport on the left
Recent regional transportation projects have sought to mitigate congestion, including improvements to local freeways, expansion of San Diego Airport, and doubling the capacity of the cruise ship terminal. Freeway projects included expansion of Interstates 5 and 805 around "The Merge" where these two freeways meet, as well as expansion of Interstate 15 through North County, which includes new high-occupancy-vehicle (HOV) "managed lanes". A tollway (The South Bay Expressway) connects SR 54 and Otay Mesa, near the Mexican border. According to an assessment in 2007, 37 percent of city streets were in acceptable condition. However, the proposed budget fell $84.6 million short of bringing streets up to an acceptable level. Expansion at the port has included a second cruise terminal on Broadway Pier, opened in 2010. Airport projects include expansion of Terminal Two.
Notable people
Sister cities
San Diego has 16 sister cities, as designated by Sister Cities International:
Alcalá de Henares, Spain
Campinas, Brazil
Cúcuta, Colombia
Cavite City, Philippines
Edinburgh, Scotland, United Kingdom
Jalalabad, Afghanistan
Jeonju, South Korea
León, Mexico
Perth, Australia
Quanzhou, China
Taichung City, Taiwan
Tema, Ghana
Tijuana, Mexico
Vladivostok, Russia
Warsaw, Poland
Yantai, China
Yokohama, Japan
See also
1858 San Diego hurricane
Notes
References
Bibliography
External links
Civic San Diego (replaced redevelopment corporations)
SANDAG, San Diego's Regional Planning Agency
Demographic Fact Sheet from Census Bureau
History of San Diego from San Diego Historical Society
San Diego Unified School District
San Diego Public Library
San Diego Tourism Authority (formerly the San Diego Convention and Visitors Bureau)
Category:1769 establishments in California
Category:1850 establishments in California
Category:Cities in San Diego County, California
Category:County seats in California
Category:Incorporated cities and towns in California
Category:Populated coastal places in California
Category:Populated places established in 1769
Category:San Antonio-San Diego Mail Line
Category:San Diego County, California
Category:San Diego metropolitan area
Category:Spanish mission settlements in North America
Category:Special economic zones of the United States
Category:Stagecoach stops in the United States | 28,504 | 2017-01 |
British Isles | The British Isles are a group of islands off the north-western coast of continental Europe that consist of the islands of Great Britain, Ireland and over six thousand smaller isles."British Isles", Encyclopædia Britannica Situated in the North Atlantic, the islands have a total area of approximately 315,159 km2, and a combined population of just under 70 million. Two sovereign states are located on the islands: Ireland (which covers roughly five-sixths of the island with the same name)The diplomatic and constitutional name of the Irish state is simply Ireland. For disambiguation purposes, Republic of Ireland is often used although technically not the name of the state but, according to the Republic of Ireland Act 1948, the state "may be described" as such. and the United Kingdom of Great Britain and Northern Ireland. The British Isles also include three Crown Dependencies: the Isle of Man and, by tradition, the Bailiwick of Jersey and the Bailiwick of Guernsey in the Channel Islands, although the latter are not physically a part of the archipelago.Oxford English Dictionary: "British Isles: a geographical term for the islands comprising Great Britain and Ireland with all their offshore islands including the Isle of Man and the Channel Islands."
The oldest rocks in the group are in the north west of Scotland, Ireland and North Wales and are 2,700 million years old. During the Silurian period the north-western regions collided with the south-east, which had been part of a separate continental landmass. The topography of the islands is modest in scale by global standards: Ben Nevis, the highest peak, rises to only a modest elevation, and Lough Neagh, though notably larger than other lakes on the isles, covers only a modest area. The climate is temperate marine, with mild winters and warm summers. The North Atlantic Drift brings significant moisture and raises temperatures 11 °C (20 °F) above the global average for the latitude. This has led to a landscape which was long dominated by temperate rainforest, although human activity has since cleared the vast majority of forest cover. The region was re-inhabited after the last glacial period of Quaternary glaciation, by 12,000 BC when Great Britain was still a peninsula of the European continent. Ireland, which became an island by 12,000 BC, was not inhabited until after 8000 BC.http://www.tara.tcd.ie/bitstream/2262/40560/1/Edwards%26Brooks_INJ08_TARA.pdf Great Britain became an island by 5600 BC.
Hiberni (Ireland), Pictish (northern Britain) and Britons (southern Britain) tribes, all speaking Insular Celtic, inhabited the islands at the beginning of the 1st millennium AD. Much of Brittonic-controlled Britain was conquered by the Roman Empire from AD 43. The first Anglo-Saxons arrived as Roman power waned in the 5th century and eventually dominated the bulk of what is now England.British Have Changed Little Since Ice Age, Gene Study SaysJames Owen for National Geographic News, 19 July 2005 Viking invasions began in the 9th century, followed by more permanent settlements and political change—particularly in England. The subsequent Norman conquest of England in 1066 and the later Angevin partial conquest of Ireland from 1169 led to the imposition of a new Norman ruling elite across much of Britain and parts of Ireland. By the Late Middle Ages, Great Britain was separated into the Kingdoms of England and Scotland, while control in Ireland fluxed between Gaelic kingdoms, Hiberno-Norman lords and the English-dominated Lordship of Ireland, soon restricted only to The Pale. The 1603 Union of the Crowns, Acts of Union 1707 and Acts of Union 1800 attempted to consolidate Britain and Ireland into a single political unit, the United Kingdom of Great Britain and Ireland, with the Isle of Man and the Channel Islands remaining as Crown Dependencies. The expansion of the British Empire and migrations following the Irish Famine and Highland Clearances resulted in the distribution of the islands' population and culture throughout the world and a rapid de-population of Ireland in the second half of the 19th century. Most of Ireland seceded from the United Kingdom after the Irish War of Independence and the subsequent Anglo-Irish Treaty (1919–1922), with six counties remaining in the UK as Northern Ireland.
The term British Isles is controversial in Ireland,Social work in the British Isles by Malcolm Payne, Steven Shardlow When we think about social work in the British Isles, a contentious term if ever there was one, what do we expect to see? where there are objections to its usage due to the association of the word British with Ireland. The Government of Ireland does not recognise or use the term"Written Answers – Official Terms", Dáil Éireann, Volume 606, 28 September 2005. In his response, the Irish Minister for Foreign Affairs stated that "The British Isles is not an officially recognised term in any legal or inter-governmental sense. It is without any official status. The Government, including the Department of Foreign Affairs, does not use this term. Our officials in the Embassy of Ireland, London, continue to monitor the media in Britain for any abuse of the official terms as set out in the Constitution of Ireland and in legislation. These include the name of the State, the President, Taoiseach and others." and its embassy in London discourages its use. As a result, Britain and Ireland is used as an alternative description, and Atlantic Archipelago has had limited use among a minority in academia, while British Isles is still commonly employed. Within them, they are also sometimes referred to as these islands.
Etymology
The earliest known references to the islands as a group appeared in the writings of sea-farers from the ancient Greek colony of Massalia.Foster, p. 1. The original records have been lost; however, later writings, e.g. Avienus's Ora maritima, that quoted from the Massaliote Periplus (6th century BC) and from Pytheas's On the Ocean (circa 325–320 BC)Harley, p. 150. have survived. In the 1st century BC, Diodorus Siculus has Prettanikē nēsos,Diodorus Siculus' Bibliotheca Historica Book V. Chapter XXI. Section 1 Greek text at the Perseus Project. "the British Island", and Prettanoi,Diodorus Siculus' Bibliotheca Historica Book V. Chapter XXI. Section 2 Greek text at the Perseus Project. "the Britons".Allen, p. 172–174. Strabo used Βρεττανική (Brettanike),Strabo's Geography Book I. Chapter IV. Section 2 Greek text and English translation at the Perseus Project.Strabo's Geography Book IV. Chapter II. Section 1 Greek text and English translation at the Perseus Project.Strabo's Geography Book IV. Chapter IV. Section 1 Greek text and English translation at the Perseus Project. and Marcian of Heraclea, in his Periplus maris exteri, used αἱ Πρεττανικαί νῆσοι (the Prettanic Isles) to refer to the islands. Greek text and Latin Translation thereof archived at the Open Library Project. Historians today, though not in absolute agreement, largely agree that these Greek and Latin names were probably drawn from native Celtic-language names for the archipelago.Davies, p. 47. Along these lines, the inhabitants of the islands were called the Πρεττανοί (Priteni or Pretani).Snyder, p. 68. The shift from the "P" of Pretannia to the "B" of Britannia by the Romans occurred during the time of Julius Caesar.Snyder, p. 12.
The Greco-Egyptian scientist Claudius Ptolemy referred to the larger island as great Britain (μεγάλης Βρεττανίας - megális Brettanias) and to Ireland as little Britain (μικρής Βρεττανίας - mikris Brettanias) in his work Almagest (147–148 AD). In his later work, Geography (c. 150 AD), he gave these islands the names Alwion, Iwernia, and Mona (the Isle of Man), suggesting these may have been names of the individual islands not known to him at the time of writing Almagest. The name Albion appears to have fallen out of use sometime after the Roman conquest of Great Britain, after which Britain became the more commonplace name for the island called Great Britain.
The earliest known use of the phrase Brytish Iles in the English language is dated 1577 in a work by John Dee.John Dee, 1577. 1577 J. Arte Navigation, p. 65 "The syncere Intent, and faythfull Aduise, of Georgius Gemistus Pletho, was, I could..frame and shape very much of Gemistus those his two Greek Orations..for our Brytish Iles, and in better and more allowable manner." From the OED, s.v. "British Isles" Today, this name is seen by some as carrying imperialist overtones although it is still commonly used. Other names used to describe the islands include the Anglo-Celtic Isles, Atlantic archipelago, British-Irish Isles,John Oakland, 2003, British Civilization: A Student's Dictionary, Routledge: London: "British-Irish Isles, the (geography) see BRITISH ISLES. British Isles, the (geography) A geographical (not political or CONSTITUTIONAL) term for ENGLAND, SCOTLAND, WALES, and IRELAND (including the REPUBLIC OF IRELAND), together with all offshore islands. A more accurate (and politically acceptable) term today is the British-Irish Isles." Britain and Ireland, UK and Ireland, and British Isles and Ireland. Owing to political and national associations with the word British, the Government of Ireland does not use the term British Isles and in documents drawn up jointly between the British and Irish governments, the archipelago is referred to simply as "these islands". Nonetheless, British Isles is still the most widely accepted term for the archipelago.
Geography
thumb|right|alt=An image showing the geological shelf of the British Isles.|The British Isles in relation to the north-west European continental shelf.
The British Isles lie at the juncture of several regions with past episodes of tectonic mountain building. These orogenic belts form a complex geology that records a huge and varied span of Earth's history. Of particular note was the Caledonian Orogeny during the Ordovician Period, c. 488–444 Ma and early Silurian period, when the craton Baltica collided with the terrane Avalonia to form the mountains and hills in northern Britain and Ireland. Baltica formed roughly the northwestern half of Ireland and Scotland. Further collisions caused the Variscan orogeny in the Devonian and Carboniferous periods, forming the hills of Munster, southwest England, and southern Wales. Over the last 500 million years the land that forms the islands has drifted northwest from around 30°S, crossing the equator around 370 million years ago to reach its present northern latitude.Ibid., p. 5.
The islands have been shaped by numerous glaciations during the Quaternary Period, the most recent being the Devensian. As this ended, the central Irish Sea was deglaciated and the English Channel flooded, with sea levels rising to current levels some 4,000 to 5,000 years ago, leaving the British Isles in their current form. Whether or not there was a land bridge between Great Britain and Ireland at this time is somewhat disputed, though there was certainly a single ice sheet covering the entire sea.
The west coasts of Ireland and Scotland that directly face the Atlantic Ocean are generally characterised by long peninsulas, and headlands and bays; the internal and eastern coasts are "smoother".
There are about 136 permanently inhabited islands in the group, the largest two being Great Britain, to the east, and Ireland, to the west. The largest of the other islands are to be found in the Hebrides, Orkney and Shetland to the north, Anglesey and the Isle of Man between Great Britain and Ireland, and the Channel Islands near the coast of France.
The islands are at relatively low altitudes, with central Ireland and southern Great Britain particularly low-lying: the lowest point in the islands is at Holme, Cambridgeshire. The Scottish Highlands in the northern part of Great Britain are mountainous, with Ben Nevis being the highest point on the islands. Other mountainous areas include Wales and parts of Ireland, though only seven peaks in these areas reach significant heights. Lakes on the islands are generally not large, although Lough Neagh in Northern Ireland is an exception. The largest freshwater body in Great Britain by area is Loch Lomond, and by volume Loch Ness, whilst Loch Morar is the deepest freshwater body in the British Isles.Gazetteer for Scotland Morar, Loch There are a number of major rivers within the British Isles. The longest is the Shannon in Ireland; the Severn is the longest in Great Britain.
The isles have a temperate marine climate. The North Atlantic Drift ("Gulf Stream"), which flows from the Gulf of Mexico, brings with it significant moisture and raises temperatures 11 °C (20 °F) above the global average for the islands' latitudes. Winters are cool and wet, with summers mild and also wet. Most Atlantic depressions pass to the north of the islands; combined with the general westerly circulation and interactions with the landmass, this imposes an east–west variation in climate.Ibid., pp. 13–14.
Flora and fauna
thumb|right|Some female red deer in Killarney National Park, Ireland.
The islands enjoy a mild climate and varied soils, giving rise to a diverse pattern of vegetation. Animal and plant life is similar to that of the northwestern European continent. There are, however, fewer species, with Ireland having fewer still. All native flora and fauna in Ireland consists of species that migrated from elsewhere in Europe, and from Great Britain in particular. The only window when this could have occurred was between the end of the last Ice Age (about 12,000 years ago) and when the land bridge connecting the two islands was flooded by sea (about 8,000 years ago).
As with most of Europe, prehistoric Britain and Ireland were covered with forest and swamp. Clearing began around 6000 BC and accelerated in medieval times. Despite this, Britain retained its primeval forests longer than most of Europe due to a small population and later development of trade and industry, and wood shortages were not a problem until the 17th century. By the 18th century, most of Britain's forests were consumed for shipbuilding or manufacturing charcoal and the nation was forced to import lumber from Scandinavia, North America, and the Baltic. Most forest land in Ireland is maintained by state forestation programmes. Almost all land outside urban areas is farmland. However, relatively large areas of forest remain in east and north Scotland and in southeast England. Oak, elm, ash and beech are amongst the most common trees in England. In Scotland, pine and birch are most common. Natural forests in Ireland are mainly oak, ash, wych elm, birch and pine. Beech and lime, though not native to Ireland, are also common there. Farmland hosts a variety of semi-natural vegetation of grasses and flowering plants. Woods, hedgerows, mountain slopes and marshes host heather, wild grasses, gorse and bracken.
Many larger animals, such as the wolf, bear and European elk, are today extinct. However, some species such as red deer are protected. Smaller mammals, such as rabbits, foxes, badgers, hares, hedgehogs, and stoats, are very common, and the European beaver has been reintroduced in parts of Scotland. Wild boar have also been reintroduced to parts of southern England, following escapes from boar farms and illegal releases. Many rivers contain otters, and seals are common on coasts. Over 200 species of bird reside permanently and another 200 migrate. Common types are the common chaffinch, common blackbird, house sparrow and common starling; all small birds. Large birds are declining in number, except for those kept for game such as pheasant, partridge, and red grouse. Fish are abundant in the rivers and lakes, in particular salmon, trout, perch and pike. Sea fish include dogfish, cod, sole, pollock and bass, as well as mussels, crab and oysters along the coast. There are more than 21,000 species of insects.
Few species of reptiles or amphibians are found in Great Britain or Ireland. Only three snakes are native to Great Britain: the common European adder, the grass snake and the smooth snake; none are native to Ireland. In general, Great Britain has slightly more variation and native wild life, with weasels, polecats, wildcats, most shrews, moles, water voles, roe deer and common toads also being absent from Ireland. This pattern is also true for birds and insects. Notable exceptions include the Kerry slug and certain species of wood lice native to Ireland but not Great Britain.
Domestic animals include the Connemara pony, Shetland pony, English Mastiff, Irish wolfhound and many varieties of cattle and sheep.
Demographics
thumb|alt=A map of the British Isles showing the relative population densities across the area.|Population density per km² of the British Isles' regions.
The demographics of the British Isles today are characterised by a generally high density of population in England, which accounts for almost 80% of the total population of the islands. Elsewhere on Great Britain and on Ireland, high density of population is limited to areas around, or close to, a few large cities. The largest urban area by far is the Greater London Urban Area with 9 million inhabitants. Other major population centres include the Greater Manchester Urban Area (2.4 million), the West Midlands conurbation (2.4 million) and the West Yorkshire Urban Area (1.6 million) in England,http://www.ons.gov.uk/ons/about-ons/what-we-do/publication-scheme/published-ad-hoc-data/population/august-2012/mid-2010-urban-area-syoa-ests-england-and-wales.xls Greater Glasgow (1.2 million) in ScotlandMid-2010 population estimates - Settlements in order of size General Register Office for Scotland and the Greater Dublin Area (1.1 million) in Ireland.
The population of England rose rapidly during the 19th and 20th centuries whereas the populations of Scotland and Wales have shown little increase during the 20th century, with the population of Scotland remaining unchanged since 1951. Ireland for most of its history comprised a population proportionate to its land area (about one third of the total population). However, since the Great Irish Famine, the population of Ireland has fallen to less than one tenth of the population of the British Isles. The famine, which caused a century-long population decline, drastically reduced the Irish population and permanently altered the demographic make-up of the British Isles. On a global scale, this disaster led to the creation of an Irish diaspora that numbers fifteen times the current population of the island.
The linguistic heritage of the British Isles is rich, with twelve languages from six groups across four branches of the Indo-European family. The Insular Celtic languages of the Goidelic sub-group (Irish, Manx and Scottish Gaelic) and the Brittonic sub-group (Cornish, Welsh and Breton, spoken in north-western France) are the only remaining Celtic languages—the last of their continental relations becoming extinct before the 7th century. The Norman languages of Guernésiais, Jèrriais and Sarkese spoken in the Channel Islands are similar to French. A cant, called Shelta, is spoken by Irish Travellers, often as a means to conceal meaning from those outside the group. However, English, sometimes in the form of Scots, is the dominant language, with few monoglots remaining in the other languages of the region. The Norn language of Orkney and Shetland became extinct around 1880.
History
At the end of the last ice age, what are now the British Isles were joined to the European mainland as a mass of land extending north west from the modern-day northern coastline of France, Belgium and the Netherlands. Ice covered almost all of what is now Scotland, most of Ireland and Wales, and the hills of northern England. From 14,000 to 10,000 years ago, as the ice melted, sea levels rose separating Ireland from Great Britain and also creating the Isle of Man. About two to four millennia later, Great Britain became separated from the mainland. Britain probably became repopulated with people before the ice age ended and certainly before it became separated from the mainland. It is likely that Ireland became settled by sea after it had already become an island.
At the time of the Roman Empire, about two thousand years ago, various tribes, which spoke Celtic dialects of the Insular Celtic group, were inhabiting the islands. The Romans expanded their civilisation to control southern Great Britain but were impeded in advancing any further, building Hadrian's Wall to mark the northern frontier of their empire in 122 AD. At that time, Ireland was populated by a people known as Hiberni, the northern third or so of Great Britain by a people known as Picts and the southern two thirds by Britons.
thumb|The Alfred Jewel (9th century)
Anglo-Saxons arrived as Roman power waned in the 5th century AD. Initially, their arrival seems to have been at the invitation of the Britons as mercenaries to repulse incursions by the Hiberni and Picts. In time, Anglo-Saxon demands on the British became so great that they came to culturally dominate the bulk of southern Great Britain, though recent genetic evidence suggests Britons still formed the bulk of the population. This dominance created what is now England and left culturally British enclaves only in the north of what is now England, in Cornwall and in what is now known as Wales. Ireland had been unaffected by the Romans except, significantly, in having been Christianised, traditionally by the Romano-Briton Saint Patrick. As Europe, including Britain, descended into turmoil following the collapse of Roman civilisation, an era known as the Dark Ages, Ireland entered a golden age and responded with missions (first to Great Britain and then to the continent), the founding of monasteries and universities. These were later joined by Anglo-Saxon missions of a similar nature.
Viking invasions began in the 9th century, followed by more permanent settlements, particularly along the east coast of Ireland, the west coast of modern-day Scotland and the Isle of Man. Though the Vikings were eventually neutralised in Ireland, their influence remained in the cities of Dublin, Cork, Limerick, Waterford and Wexford. England, however, was slowly conquered around the turn of the first millennium AD, and eventually became a feudal possession of Denmark. The relations between the descendants of Vikings in England and counterparts in Normandy, in northern France, lay at the heart of a series of events that led to the Norman conquest of England in 1066. The remnants of the Duchy of Normandy, which conquered England, remain associated with the English Crown as the Channel Islands to this day. A century later the marriage of the future Henry II of England to Eleanor of Aquitaine created the Angevin Empire, partially under the French Crown. At the invitation of a provincial king and under the authority of Pope Adrian IV (the only Englishman to be elected pope), the Angevins invaded Ireland in 1169. Though Ireland was initially intended to be kept as an independent kingdom, the failure of the Irish High King to ensure the terms of the Treaty of Windsor led Henry II, as King of England, to rule as effective monarch under the title of Lord of Ireland. This title was granted to his younger son but when Henry's heir unexpectedly died the title of King of England and Lord of Ireland became entwined in one person.
thumb|left|James VI of Scotland (James I of England)
By the Late Middle Ages, Great Britain was separated into the Kingdoms of England and Scotland. Power in Ireland shifted between Gaelic kingdoms, Hiberno-Norman lords and the English-dominated Lordship of Ireland. A similar situation existed in the Principality of Wales, which was slowly being annexed into the Kingdom of England by a series of laws. During the course of the 15th century, the Crown of England would assert a claim to the Crown of France, thereby also releasing the King of England from being a vassal of the King of France. In 1534, King Henry VIII, at first having been a strong defender of Roman Catholicism in the face of the Reformation, separated from the Roman Church after failing to secure from the Pope an annulment of his marriage. His response was to place the King of England as "the only Supreme Head in Earth of the Church of England", thereby removing the authority of the Pope from the affairs of the English Church. Ireland, which had been held by the King of England as Lord of Ireland, but which strictly speaking had been a feudal possession of the Pope since the Norman invasion, was declared a separate kingdom in personal union with England.
Scotland, meanwhile, had remained an independent kingdom. In 1603, that changed when the King of Scotland inherited the Crown of England, and consequently the Crown of Ireland also. The subsequent 17th century was one of political upheaval, religious division and war. English colonialism in Ireland of the 16th century was extended by large-scale Scottish and English colonies in Ulster. Religious division heightened and the king in England came into conflict with parliament over his tolerance towards Catholicism. The resulting English Civil War or War of the Three Kingdoms led to a revolutionary republic in England. Ireland, largely Catholic, was mainly loyal to the king. Following its defeat by the parliamentary army, large-scale land distributions from loyalist Irish nobility to English commoners in the service of that army created a new Ascendancy class which obliterated the remnants of Old English (Hiberno-Norman) and Gaelic Irish nobility in Ireland. The new ruling class was Protestant and English, whilst the populace was largely Catholic and Irish. This theme would influence Irish politics for centuries to come. When the monarchy was restored in England, the king found it politically impossible to restore the lands of former land-owners in Ireland. The "Glorious Revolution" of 1688 repeated similar themes: a Catholic king pushing for religious tolerance in opposition to a Protestant parliament in England. The king's army was defeated at the Battle of the Boyne and at the militarily crucial Battle of Aughrim in Ireland. Resistance held out, eventually forcing the guarantee of religious tolerance in the Treaty of Limerick. However, the terms were never honoured and a new monarchy was installed.
The Kingdoms of England and Scotland were unified in 1707 creating the Kingdom of Great Britain. Following an attempted republican revolution in Ireland in 1798, the Kingdoms of Ireland and Great Britain were unified in 1801, creating the United Kingdom. The Isle of Man and the Channel Islands remained outside the United Kingdom, with their ultimate good governance being the responsibility of the British Crown (effectively the British government). Although the colonies of North America that would become the United States of America were lost by the start of the 19th century, the British Empire expanded rapidly elsewhere. A century later it would cover one third of the globe. Poverty in the United Kingdom remained desperate, however, and industrialisation in England led to terrible conditions for the working class. Mass migrations following the Irish Famine and Highland Clearances resulted in the distribution of the islands' population and culture throughout the world and a rapid de-population of Ireland in the second half of the 19th century. Most of Ireland seceded from the United Kingdom after the Irish War of Independence and the subsequent Anglo-Irish Treaty (1919–1922), with the six counties that formed Northern Ireland remaining as an autonomous region of the UK.
Politics
thumb|300px|left|Subdivisions of the British Isles
There are two sovereign states in the isles: Ireland and the United Kingdom of Great Britain and Northern Ireland. Ireland, sometimes called the Republic of Ireland, governs five sixths of the island of Ireland, with the remainder of the island forming Northern Ireland. Northern Ireland is a part of the United Kingdom of Great Britain and Northern Ireland, usually shortened to simply the United Kingdom, which governs the remainder of the archipelago with the exception of the Isle of Man and the Channel Islands. The Isle of Man and the two states of the Channel Islands, Jersey and Guernsey, are known as the Crown Dependencies. They exercise constitutional rights of self-government and judicial independence; responsibility for international representation rests largely upon the UK (in consultation with the respective governments); and responsibility for defence is reserved by the UK. The United Kingdom is made up of four constituent parts: England, Scotland and Wales, forming Great Britain, and Northern Ireland in the north-east of the island of Ireland. Of these, Scotland, Wales and Northern Ireland have "devolved" governments, meaning that they have their own parliaments or assemblies and are self-governing with respect to certain areas set down by law. For judicial purposes, Scotland, Northern Ireland, and England and Wales (the latter being one entity) form separate legal jurisdictions, with no single body of law for the UK as a whole.
Ireland, the United Kingdom and the three Crown Dependencies are all parliamentary democracies, with their own separate parliaments. All parts of the United Kingdom return members to parliament in London. In addition to this, voters in Scotland, Wales and Northern Ireland return members to a parliament in Edinburgh and to assemblies in Cardiff and Belfast respectively. Governance in the norm is by majority rule; however, Northern Ireland uses a system of power sharing whereby unionists and nationalists share executive posts proportionately and where the assent of both groups is required for the Northern Ireland Assembly to make certain decisions. (In the context of Northern Ireland, unionists are those who want Northern Ireland to remain a part of the United Kingdom and nationalists are those who want Northern Ireland to join with the rest of Ireland.) The British monarch is the head of state for the United Kingdom, while in the Republic of Ireland the head of state is the President of Ireland.
Ireland and the United Kingdom are both part of the European Union (EU). The Crown Dependencies are not a part of the EU; however, they do participate in certain aspects that were negotiated as a part of the UK's accession to the EU. Neither the United Kingdom nor Ireland is part of the Schengen Area, which allows passport-free travel between EU member states. However, since the partition of Ireland, an informal free-travel area has existed across the region. In 1997, this area required formal recognition during the course of negotiations for the Amsterdam Treaty of the European Union and is now known as the Common Travel Area.
Reciprocal arrangements allow British and Irish citizens full voting rights in the two states. Exceptions to this are presidential elections and constitutional referendums in the Republic of Ireland, for which there is no comparable franchise in the other states. In the United Kingdom, these pre-date European Union law, and in both jurisdictions go further than that required by European Union law. Other EU nationals may only vote in local and European Parliament elections while resident in either the UK or Ireland. In 2008, a UK Ministry of Justice report investigating how to strengthen the British sense of citizenship proposed to end this arrangement arguing that, "the right to vote is one of the hallmarks of the political status of citizens; it is not a means of expressing closeness between countries."Goldsmith, 2008, Citizenship: Our Common Bond, Ministry of Justice: London
In addition, some civil bodies are organised throughout the islands as a whole. For example, the Samaritans is deliberately organised without regard to national boundaries, on the basis that a service which is not political or religious should not recognise sectarian or political divisions. The RNLI, the lifeboat service, is also organised throughout the islands as a whole, covering the waters of the United Kingdom, Ireland, the Isle of Man, and the Channel Islands.RNLI.org.uk, The RNLI is a charity that provides a 24-hour lifesaving service around the UK and Republic of Ireland.
The Northern Ireland Peace Process has led to a number of unusual arrangements between the Republic of Ireland, Northern Ireland and the United Kingdom. For example, citizens of Northern Ireland are entitled to the choice of Irish or British citizenship or both and the Governments of Ireland and the United Kingdom consult on matters not devolved to the Northern Ireland Executive. The Northern Ireland Executive and the Government of Ireland also meet as the North/South Ministerial Council to develop policies common across the island of Ireland. These arrangements were made following the 1998 Good Friday Agreement.
British–Irish Council
Another body established under the Good Friday Agreement, the British–Irish Council, is made up of all of the states and territories of the British Isles. The British–Irish Parliamentary Assembly predates the British–Irish Council and was established in 1990. Originally it comprised 25 members of the Oireachtas, the Irish parliament, and 25 members of the parliament of the United Kingdom, with the purpose of building mutual understanding between members of both legislatures. Since then the role and scope of the body has been expanded to include representatives from the Scottish Parliament, the National Assembly for Wales, the Northern Ireland Assembly, the States of Jersey, the States of Guernsey and the High Court of Tynwald (Isle of Man).
The Council does not have executive powers but meets biannually to discuss issues of mutual importance. Similarly, the Parliamentary Assembly has no legislative powers but investigates and collects witness evidence from the public on matters of mutual concern to its members. Reports on its findings are presented to the Governments of Ireland and the United Kingdom. During the February 2008 meeting of the British–Irish Council, it was agreed to set up a standing secretariat that would serve as a permanent 'civil service' for the Council.Communiqué of the British-Irish Council, February 2008 Leading on from developments in the British–Irish Council, the chair of the British–Irish Inter-Parliamentary Assembly, Niall Blaney, has suggested that the body should shadow the British–Irish Council's work.Martina Purdy, 28 February 2008, Unionists urged to drop boycott, BBC: London
Culture
thumb|One Day Cricket International at Lord's; England v Australia 10 July 2005
thumb|right|Pádraig Harrington teeing off at the Open Championship (golf) in 2007.
The United Kingdom and Ireland have separate media, although British television, newspapers and magazines are widely available in Ireland, giving people in Ireland a high level of familiarity with cultural matters in the United Kingdom. Irish newspapers are also available in the UK, and Irish state and private television is widely available in Northern Ireland. Certain reality TV shows have embraced the whole of the islands, for example The X Factor, seasons 3, 4 and 7 of which featured auditions in Dublin and were open to Irish voters, whilst the show previously known as Britain's Next Top Model became Britain and Ireland's Next Top Model in 2011. A few cultural events are organised for the island group as a whole. For example, the Costa Book Awards are awarded to authors resident in the UK or Ireland. The Mercury Music Prize is handed out every year to the best album from a British or Irish musician or group.
Many globally popular sports had their modern rules codified in the British Isles, including golf, association football, cricket, rugby, snooker and darts, as well as many minor sports such as croquet, bowls, pitch and putt, water polo and handball. A number of sports are popular throughout the British Isles, the most prominent of which is association football. While this is organised separately in different national associations, leagues and national teams, even within the UK, it is a common passion in all parts of the islands. Rugby union is also widely enjoyed across the islands, with four national teams from England, Ireland, Scotland and Wales; Ireland play as a united team, represented by players from both Northern Ireland and the Republic. These national rugby teams play each other each year for the Triple Crown as part of the Six Nations Championship. The British and Irish Lions is a team chosen from the four national teams that undertakes tours of the southern hemisphere rugby-playing nations every four years. Since 2001 the professional club teams of Ireland, Scotland, Wales and Italy have also competed against each other in the Guinness Pro12.
The Ryder Cup in golf was originally played between a United States team and a team representing Great Britain and Ireland. From 1979 onwards this was expanded to include the whole of Europe.
Transport
thumb|HSC Stena Explorer, a large fast ferry that formerly operated the Holyhead–Dún Laoghaire route between Great Britain and Ireland.
London Heathrow Airport is Europe's busiest airport in terms of passenger traffic and the Dublin-London route was once the busiest air route in Europe,Seán McCárthaigh, Dublin–London busiest air traffic route within EU Irish Examiner, 31 March 2003 and it remains the busiest route out of Heathrow. The English Channel and the southern North Sea are the busiest seaways in the world. The Channel Tunnel, opened in 1994, links Great Britain to France and is the second-longest rail tunnel in the world.
The idea of building a tunnel under the Irish Sea has been raised since 1895,"Tunnel under the Sea", The Washington Post, 2 May 1897 (Archive link) when it was first investigated. Several potential Irish Sea tunnel projects have been proposed, most recently the Tusker Tunnel between the ports of Rosslare and Fishguard, proposed by The Institute of Engineers of Ireland in 2004. A rail tunnel was proposed in 1997 on a different route, between Dublin and Holyhead, by British engineering firm Symonds. Either tunnel would be by far the longest in the world, and would cost an estimated £15 billion or €20 billion. A proposal in 2007BBC News, From Twinbrook to the Trevi Fountain, 21 August 2007 estimated the cost of building a bridge from County Antrim in Northern Ireland to Galloway in Scotland at £3.5bn (€5bn).
See also
British Islands
Extreme points of the British Isles
List of islands in the British Isles
Terminology of the British Isles
References
Further reading
A History of Britain: At the Edge of the World, 3500 B.C. – 1603 A.D. by Simon Schama, BBC/Miramax, 2000 ISBN 978-0-7868-6675-5
A History of Britain—The Complete Collection on DVD by Simon Schama, BBC 2002
Shortened History of England by G. M. Trevelyan Penguin Books ISBN 978-0-14-023323-0
External links
An interactive geological map of the British Isles.
Category:Geography of Western Europe
Category:Regions of Europe

Mosaic

thumb|Irano-Roman floor mosaic detail from the palace of Shapur I at Bishapur.
thumb|Cone mosaic courtyard from Uruk in Mesopotamia 3000 BC
A mosaic is a piece of art or image made from the assemblage of small pieces of colored glass, stone, or other materials. It is often used in decorative art or as interior decoration. Most mosaics are made of small, flat, roughly square pieces of stone or glass of different colors, known as tesserae. Some, especially floor mosaics, are made of small rounded pieces of stone, and are called "pebble mosaics". Others are made of other materials.
Mosaics have a long history, starting in Mesopotamia in the 3rd millennium BC. Pebble mosaics were made in Tiryns in Mycenean Greece; mosaics with patterns and pictures became widespread in classical times, both in Ancient Greece and Ancient Rome. Early Christian basilicas from the 4th century onwards were decorated with wall and ceiling mosaics. Mosaic art flourished in the Byzantine Empire from the 6th to the 15th centuries; that tradition was adopted by the Norman kingdom in Sicily in the 12th century, by eastern-influenced Venice, and among the Rus in Ukraine. Mosaic fell out of fashion in the Renaissance, though artists like Raphael continued to practise the old technique. Roman and Byzantine influence led Jews to decorate 5th and 6th century synagogues in the Middle East with floor mosaics.
Mosaic was widely used on religious buildings and palaces in early Islamic art, including Islam's first great religious building, the Dome of the Rock in Jerusalem, and the Umayyad Mosque in Damascus. Mosaic went out of fashion in the Islamic world after the 8th century.
Modern mosaics are made by professional artists, street artists, and as a popular craft. Many materials other than traditional stone and ceramic tesserae may be employed, including shells, glass and beads.
History
thumb|Stag Hunt Mosaic from the House of the Abduction of Helen at Pella, ancient Macedonia, late 4th century BC
300px|thumb|A mosaic of the Kasta Tomb in Amphipolis depicting the abduction of Persephone by Pluto, 4th century BC
The earliest known examples of mosaics made of different materials were found at a temple building in Abra, Mesopotamia, and are dated to the second half of the 3rd millennium BC. They consist of pieces of colored stones, shells and ivory. Excavations at Susa and Chogha Zanbil show evidence of the first glazed tiles, dating from around 1500 BC.Iran: Visual Arts: history of Iranian Tile, Iran Chamber Society However, mosaic patterns were not used until the times of the Sassanid Empire and Roman influence.
Greek and Roman
thumb|Epiphany of Dionysus mosaic, from the Villa of Dionysus (2nd century AD) in Dion, Greece. Now in the Archeological Museum of Dion.
Bronze Age pebble mosaics have been found at Tiryns; mosaics of the 4th century BC are found in the Macedonian palace-city of Aegae; and the 4th-century BC mosaic of The Beauty of Durrës, discovered in Durrës, Albania, in 1916, is an early figural example. The Greek figural style was mostly formed in the 3rd century BC. Mythological subjects, or scenes of hunting or other pursuits of the wealthy, were popular as the centrepieces of a larger geometric design, with strongly emphasized borders.
Pliny the Elder mentions the artist Sosus of Pergamon by name, describing his mosaics of the food left on a floor after a feast and of a group of doves drinking from a bowl. Both of these themes were widely copied.
Greek figural mosaics could be copies or adaptations of paintings, a far more prestigious artform, and the style was enthusiastically adopted by the Romans, so that large floor mosaics enriched the floors of Hellenistic villas and Roman dwellings from Britain to Dura-Europos. Most recorded names of Roman mosaic workers are Greek, suggesting they dominated high-quality work across the empire; no doubt most ordinary craftsmen were slaves. Splendid mosaic floors are found in Roman villas across North Africa, in places such as Carthage, and can still be seen in the extensive collection in the Bardo Museum in Tunis, Tunisia.
There were two main techniques in Greco-Roman mosaic: opus vermiculatum used tiny tesserae, typically cubes of 4 millimeters or less, and was produced in workshops in relatively small panels which were transported to the site glued to some temporary support. The tiny tesserae allowed very fine detail, and an approach to the illusionism of painting. Often small panels called emblemata were inserted into walls or as the highlights of larger floor-mosaics in coarser work. The normal technique was opus tessellatum, using larger tesserae, which was laid on site. There was a distinct native Italian style using black on a white background, which was no doubt cheaper than fully coloured work.
In Rome, Nero and his architects used mosaics to cover some surfaces of walls and ceilings in the Domus Aurea, built 64 AD, and wall mosaics are also found at Pompeii and neighbouring sites. However it seems that it was not until the Christian era that figural wall mosaics became a major form of artistic expression. The Roman church of Santa Costanza, which served as a mausoleum for one or more of the Imperial family, has both religious mosaic and decorative secular ceiling mosaics on a round vault, which probably represent the style of contemporary palace decoration.
The mosaics of the Villa Romana del Casale near Piazza Armerina in Sicily are the largest collection of late Roman mosaics in situ in the world, and are protected as a UNESCO World Heritage Site. The large villa rustica, which was probably owned by Emperor Maximian, was built largely in the early 4th century. The mosaics were covered and protected for 700 years by a landslide that occurred in the 12th century. The most important pieces are the Circus Scene, the 64m long Great Hunting Scene, the Little Hunt, the Labours of Hercules and the famous Bikini Girls, showing women undertaking a range of sporting activities in garments that resemble 20th-century bikinis. The peristyle, the imperial apartments and the thermae were also decorated with ornamental and mythological mosaics. Other important examples of Roman mosaic art in Sicily were unearthed on the Piazza Vittoria in Palermo where two houses were discovered. The most important scenes there depicted Orpheus, Alexander the Great's Hunt and the Four Seasons.
In 1913 the Zliten mosaic, a Roman mosaic famous for its many scenes from gladiatorial contests, hunting and everyday life, was discovered in the Libyan town of Zliten. In 2000 archaeologists working in Leptis Magna, Libya, uncovered a 30 ft length of five colorful mosaics created during the 1st or 2nd century AD. The mosaics show a warrior in combat with a deer, four young men wrestling a wild bull to the ground, and a gladiator resting in a state of fatigue, staring at his slain opponent. The mosaics decorated the walls of a cold plunge pool in a bath house within a Roman villa. The gladiator mosaic is noted by scholars as one of the finest examples of mosaic art ever seen — a "masterpiece comparable in quality with the Alexander Mosaic in Pompeii."
A specific genre of Roman mosaic was called asaroton (Greek for "unswept floor"). It depicted in trompe l'oeil style the feast leftovers on the floors of wealthy houses.
Christian mosaics
Early Christian art
With the building of Christian basilicas in the late 4th century, wall and ceiling mosaics were adopted for Christian uses. The earliest examples of Christian basilicas have not survived, but the mosaics of Santa Constanza and Santa Pudenziana, both from the 4th century, still exist. The winemaking putti in the ambulatory of Santa Constanza still follow the classical tradition in that they represent the feast of Bacchus, which symbolizes transformation or change, and are thus appropriate for a mausoleum, the original function of this building. In another great Constantinian basilica, the Church of the Nativity in Bethlehem the original mosaic floor with typical Roman geometric motifs is partially preserved. The so-called Tomb of the Julii, near the crypt beneath St Peter's Basilica, is a 4th-century vaulted tomb with wall and ceiling mosaics that are given Christian interpretations. The Rotunda of Galerius in Thessaloniki, converted into a Christian church during the course of the 4th century, was embellished with very high artistic quality mosaics. Only fragments survive of the original decoration, especially a band depicting saints with hands raised in prayer, in front of complex architectural fantasies.
In the following century Ravenna, the capital of the Western Roman Empire, became the center of late Roman mosaic art (see details in the Ravenna section). Milan also served as the capital of the western empire in the 4th century. In the St Aquilinus Chapel of the Basilica of San Lorenzo, mosaics executed in the late 4th and early 5th centuries depict Christ with the Apostles and the Abduction of Elijah; these mosaics are outstanding for their bright colors, naturalism and adherence to the classical canons of order and proportion. The surviving apse mosaic of the Basilica of Sant'Ambrogio, which shows Christ enthroned between Saint Gervasius and Saint Protasius and angels before a golden background, dates back to the 5th and 8th centuries, although it was restored many times later. The baptistery of the basilica, which was demolished in the 15th century, had a vault covered with gold-leaf tesserae, large quantities of which were found when the site was excavated. In the small shrine of San Vittore in ciel d'oro, now a chapel of Sant'Ambrogio, every surface is covered with mosaics from the second half of the 5th century. Saint Victor is depicted in the center of the golden dome, while figures of saints are shown on the walls before a blue background. The low spandrels give space for the symbols of the four Evangelists.
Albingaunum was the main Roman port of Liguria. The octagonal baptistery of the town was decorated in the 5th century with high quality blue and white mosaics representing the Apostles. The surviving remains are somewhat fragmented. Massilia remained a thriving port and a Christian spiritual center in Southern Gaul where favourable societal and economic conditions ensured the survival of mosaic art in the 5th and 6th centuries. The large baptistery, once the grandest building of its kind in Western Europe, had a geometric floor mosaic which is only known from 19th century descriptions. Other parts of the episcopal complex were also decorated with mosaics, as new finds unearthed in the 2000s attest. The funerary basilica of Saint Victor, built in a quarry outside the walls, was decorated with mosaics but only a small fragment with blue and green scrolls survived on the intrados of an arch (the basilica was later buried under a medieval abbey).
A mosaic pavement depicting humans, animals and plants from the original 4th-century cathedral of Aquileia has survived in the later medieval church. This mosaic adopts pagan motifs such as the Nilotic scene, but behind the traditional naturalistic content is Christian symbolism such as the ichthys. The 6th-century early Christian basilicas of Sant'Eufemia and Santa Maria delle Grazie in Grado also have mosaic floors.
Ravenna
thumb|The Good Shepherd mosaic in the Mausoleum of Galla Placidia, Ravenna
In the 5th century Ravenna, the capital of the Western Roman Empire, became the center of late Roman mosaic art. The Mausoleum of Galla Placidia was decorated with mosaics of high artistic quality in 425–430. The vaults of the small, cross-shaped structure are clad with mosaics on a blue background. The central motif above the crossing is a golden cross in the middle of the starry sky. Another great building established by Galla Placidia was the church of San Giovanni Evangelista. She erected it in fulfillment of a vow that she made having escaped from a deadly storm in 425 on the sea voyage from Constantinople to Ravenna. The mosaics depicted the storm, portraits of members of the western and eastern imperial family and the bishop of Ravenna, Peter Chrysologus. They are known only from Renaissance sources because almost all were destroyed in 1747.
The Ostrogoths kept the tradition alive in the 6th century, as the mosaics of the Arian Baptistry, the Baptistry of Neon, the Archbishop's Chapel, and the earlier phase mosaics in the Basilica of San Vitale and the Basilica of Sant'Apollinare Nuovo testify.
After 539 Ravenna was reconquered by the Romans in the form of the Eastern Roman Empire (Byzantine Empire) and became the seat of the Exarchate of Ravenna. The greatest development of Christian mosaics unfolded in the second half of the 6th century. Outstanding examples of Byzantine mosaic art are the later phase mosaics in the Basilica of San Vitale and the Basilica of Sant'Apollinare Nuovo. The mosaics depicting Emperor Saint Justinian I and Empress Theodora in the Basilica of San Vitale were executed shortly after the Byzantine conquest. The mosaics of the Basilica of Sant'Apollinare in Classe were made around 549. The anti-Arian theme is obvious in the apse mosaic of San Michele in Affricisco, executed in 545–547 (largely destroyed; the remains are in Berlin).
The last example of Byzantine mosaics in Ravenna was commissioned by bishop Reparatus between 673–79 in the Basilica of Sant'Apollinare in Classe. The mosaic panel in the apse showing the bishop with Emperor Constantine IV is obviously an imitation of the Justinian panel in San Vitale.
Butrint
The mosaic pavement of the Vrina Plain basilica of Butrint, Albania appears to pre-date that of the Baptistery by almost a generation, dating to the last quarter of the 5th or the first years of the 6th century. The mosaic displays a variety of motifs including sea-creatures, birds, terrestrial beasts, fruits, flowers, trees and abstracts – designed to depict a terrestrial paradise of God’s creation. Superimposed on this scheme are two large tablets, tabulae ansatae, carrying inscriptions. A variety of fish, a crab, a lobster, shrimps, mushrooms, flowers, a stag and two cruciform designs surround the smaller of the two inscriptions, which reads: In fulfilment of the vow (prayer) of those whose names God knows. This anonymous dedicatory inscription is a public demonstration of the benefactors’ humility and an acknowledgement of God’s omniscience.
The abundant variety of natural life depicted in the Butrint mosaics celebrates the richness of God’s creation; some elements also have specific connotations. The kantharos vase and vine refer to the eucharist, the symbol of the sacrifice of Christ leading to salvation. Peacocks are symbols of paradise and resurrection; shown eating or drinking from the vase they indicate the route to eternal life. Deer or stags were commonly used as images of the faithful aspiring to Christ: "As the hart desireth the water brooks, so my soul longs for thee, O God." Water-birds and fish and other sea-creatures can indicate baptism as well as the members of the Church who are christened.
Late Antique and Early Medieval Rome
thumb|5th century mosaic in the triumphal arch of Santa Maria Maggiore, Rome
Christian mosaic art also flourished in Rome, gradually declining as conditions became more difficult in the Early Middle Ages. 5th-century mosaics can be found over the triumphal arch and in the nave of the basilica of Santa Maria Maggiore. The 27 surviving panels of the nave are the most important mosaic cycle in Rome of this period. Two other important 5th-century mosaics are lost but we know them from 17th-century drawings. In the apse mosaic of Sant'Agata dei Goti (462–472, destroyed in 1589) Christ was seated on a globe with the twelve Apostles flanking him, six on either side. At Sant'Andrea in Catabarbara (468–483, destroyed in 1686) Christ appeared in the center, flanked on either side by three Apostles. Four streams flowed from the little mountain supporting Christ. The original 5th-century apse mosaic of the Santa Sabina was replaced by a very similar fresco by Taddeo Zuccari in 1559. The composition probably remained unchanged: Christ flanked by male and female saints, seated on a hill, with lambs drinking from a stream at its feet. All three mosaics had a similar iconography.
6th-century pieces are rare in Rome but the mosaics inside the triumphal arch of the basilica of San Lorenzo fuori le mura belong to this era. The Chapel of Ss. Primo e Feliciano in Santo Stefano Rotondo has very interesting and rare mosaics from the 7th century. This chapel was built by Pope Theodore I as a family burial place.
In the 7th–9th centuries Rome fell under the influence of Byzantine art, noticeable on the mosaics of Santa Prassede, Santa Maria in Domnica, Sant'Agnese fuori le Mura, Santa Cecilia in Trastevere, Santi Nereo e Achilleo and the San Venanzio chapel of San Giovanni in Laterano. The great dining hall of Pope Leo III in the Lateran Palace was also decorated with mosaics. They were all destroyed later except for one example, the so-called Triclinio Leoniano of which a copy was made in the 18th century. Another great work of Pope Leo, the apse mosaic of Santa Susanna, depicted Christ with the Pope and Charlemagne on one side, and SS. Susanna and Felicity on the other. It was plastered over during a renovation in 1585. Pope Paschal I (817–824) embellished the church of Santo Stefano del Cacco with an apsidal mosaic which depicted the pope with a model of the church (destroyed in 1607).
The fragment of an 8th-century mosaic, the Epiphany is one of the very rare remaining pieces of the medieval decoration of Old St. Peter's Basilica, demolished in the late 16th century. The precious fragment is kept in the sacristy of Santa Maria in Cosmedin. It proves the high artistic quality of the destroyed St. Peter's mosaics.
Byzantine mosaics
thumb|upright=1.1|The so-called Gothic chieftain, from the Mosaic Peristyle of the Great Palace of Constantinople
thumb|upright|Saint Peter mosaic from the Chora Church
thumb|Byzantine mosaic above the entrance portal of the Euphrasian Basilica in Poreč, Croatia (6th century)
Mosaics were more central to Byzantine culture than to that of Western Europe. Byzantine church interiors were generally covered with golden mosaics. Mosaic art flourished in the Byzantine Empire from the 6th to the 15th centuries. The majority of Byzantine mosaics were destroyed without trace during wars and conquests, but the surviving remains still form a fine collection.
The great buildings of Emperor Justinian like the Hagia Sophia in Constantinople, the Nea Church in Jerusalem and the rebuilt Church of the Nativity in Bethlehem were certainly embellished with mosaics but none of these survived.
Important fragments survived from the mosaic floor of the Great Palace of Constantinople which was commissioned during Justinian's reign. The figures, animals and plants are all entirely classical, but they are scattered against a plain background. The portrait of a moustached man, probably a Gothic chieftain, is considered the most important surviving mosaic of the Justinianian age. The so-called small sekreton of the palace was built during Justin II's reign around 565–577. Some fragments survive from the mosaics of this vaulted room. The vine scroll motifs are very similar to those in the Santa Constanza and they still closely follow the Classical tradition. There are remains of floral decoration in the Church of the Acheiropoietos in Thessaloniki (5th–6th centuries).
thumb|left|upright=0.7|A pre-Iconoclastic depiction of St. Demetrios at the Hagios Demetrios Basilica in Thessaloniki.
In the 6th century, Ravenna, the capital of Byzantine Italy, became the center of mosaic making. Istria also boasts some important examples from this era. The Euphrasian Basilica in Parentium was built in the middle of the 6th century and decorated with mosaics depicting the Theotokos flanked by angels and saints.
Fragments remain from the mosaics of the Church of Santa Maria Formosa in Pola. These pieces were made during the 6th century by artists from Constantinople. Their pure Byzantine style is different from the contemporary Ravennate mosaics.
Very few early Byzantine mosaics survived the Iconoclastic destruction of the 8th century. Among the rare examples is the 6th-century Christ in majesty (or Ezekiel's Vision) mosaic in the apse of the Church of Hosios David in Thessaloniki, which was hidden behind mortar during those dangerous times. Nine mosaic panels in the Hagios Demetrios Church, which were made between 634 and 730, also escaped destruction. Unusually, almost all represent Saint Demetrius of Thessaloniki, often with suppliants before him.
In the Iconoclastic era, figural mosaics were also condemned as idolatry. The Iconoclastic churches were embellished with plain gold mosaics with only one great cross in the apse like the Hagia Irene in Constantinople (after 740). There were similar crosses in the apses of the Hagia Sophia Church in Thessaloniki and in the Church of the Dormition in Nicaea. The crosses were substituted with the image of the Theotokos in both churches after the victory of the Iconodules (787–797 and in 8th–9th centuries respectively, the Dormition church was totally destroyed in 1922).
A similar Theotokos image flanked by two archangels was made for the Hagia Sophia in Constantinople in 867. The dedication inscription says: "The images which the impostors had cast down here pious emperors have again set up." In the 870s the so-called large sekreton of the Great Palace of Constantinople was decorated with the images of the four great iconodule patriarchs.
The post-Iconoclastic era was the heyday of Byzantine art, with the most beautiful mosaics executed. The mosaics of the Macedonian Renaissance (867–1056) carefully mingled traditionalism with innovation. Constantinopolitan mosaics of this age followed the decoration scheme first used in Emperor Basil I's Nea Ekklesia. Not only was this prototype later totally destroyed, but each surviving composition is battered, so it is necessary to move from church to church to reconstruct the system.
An interesting set of Macedonian-era mosaics makes up the decoration of the Hosios Loukas Monastery. In the narthex there is the Crucifixion, the Pantokrator and the Anastasis above the doors, while in the church there are the Theotokos (apse), Pentecost, scenes from Christ's life and the hermit St Loukas (all executed before 1048). The scenes are treated with a minimum of detail and the panels are dominated by the gold setting.
left|thumb|upright|Detail of mosaic from Nea Moni Monastery
The Nea Moni Monastery on Chios was established by Constantine Monomachos in 1043–1056. The exceptional mosaic decoration of the dome showing probably the nine orders of the angels was destroyed in 1822 but other panels survived (Theotokos with raised hands, four evangelists with seraphim, scenes from Christ's life and an interesting Anastasis where King Solomon bears resemblance to Constantine Monomachos). In comparison with Hosios Loukas, the Nea Moni mosaics contain more figures, detail, landscape and setting.
Another great undertaking by Constantine Monomachos was the restoration of the Church of the Holy Sepulchre in Jerusalem between 1042 and 1048. Nothing survived of the mosaics which covered the walls and the dome of the edifice but the Russian abbot Daniel, who visited Jerusalem in 1106–1107 left a description: "Lively mosaics of the holy prophets are under the ceiling, over the tribune. The altar is surmounted by a mosaic image of Christ. In the main altar one can see the mosaic of the Exaltation of Adam. In the apse the Ascension of Christ. The Annunciation occupies the two pillars next to the altar."The Holy Sepulchre – The great destruction of 1009
The Daphni Monastery houses the best preserved complex of mosaics from the early Comnenan period (ca. 1100) when the austere and hieratic manner typical for the Macedonian epoch and represented by the awesome Christ Pantocrator image inside the dome, was metamorphosing into a more intimate and delicate style, of which The Angel before St Joachim — with its pastoral backdrop, harmonious gestures and pensive lyricism — is considered a superb example.
The 9th- and 10th-century mosaics of the Hagia Sophia in Constantinople are truly classical Byzantine artworks. The north and south tympana beneath the dome were decorated with figures of prophets, saints and patriarchs. Above the principal door from the narthex we can see an Emperor kneeling before Christ (late 9th or early 10th century). Above the door from the southwest vestibule to the narthex another mosaic shows the Theotokos with Justinian and Constantine. Justinian I is offering the model of the church to Mary while Constantine is holding a model of the city in his hand. Both emperors are beardless – this is an example of conscious archaization, as contemporary Byzantine rulers were bearded. A mosaic panel on the gallery shows Christ with Constantine Monomachos and Empress Zoe (1042–1055). The emperor gives a bulging money sack to Christ as a donation for the church.
The dome of the Hagia Sophia Church in Thessaloniki is decorated with an Ascension mosaic (c. 885). The composition resembles the great baptistries in Ravenna, with apostles standing between palms and Christ in the middle. The scheme is somewhat unusual as the standard post-Iconoclastic formula for domes contained only the image of the Pantokrator.
thumb|upright|Mosaic of Christ Pantocrator from the Deesis mosaic in Hagia Sophia.
thumb|A mosaic from the Hagia Sophia of Constantinople (modern Istanbul), depicting Mary and Jesus, flanked by John II Komnenos (left) and his wife Irene of Hungary (right), c. 1118 AD
There are very few existing mosaics from the Komnenian period but this paucity must be due to accidents of survival and gives a misleading impression. The only surviving 12th-century mosaic work in Constantinople is a panel in Hagia Sophia depicting Emperor John II and Empress Eirene with the Theotokos (1122–34). The empress, with her long braided hair and rosy cheeks, is especially captivating. It must be a lifelike portrayal because Eirene was really a redhead, as her original Hungarian name, Piroska, shows. The adjacent portrait of Emperor Alexios I Komnenos on a pier (from 1122) is similarly personal. The imperial mausoleum of the Komnenos dynasty, the Pantokrator Monastery, was certainly decorated with great mosaics but these were later destroyed. The lack of Komnenian mosaics outside the capital is even more apparent. There is only a "Communion of the Apostles" in the apse of the cathedral of Serres.
A striking technical innovation of the Komnenian period was the production of very precious, miniature mosaic icons. In these icons the small tesserae (with sides of 1 mm or less) were set on wax or resin on a wooden panel. These products of extraordinary craftsmanship were intended for private devotion. The Louvre Transfiguration is a very fine example from the late 12th century. The miniature mosaic of Christ in the Museo Nazionale at Florence illustrates the more gentle, humanistic conception of Christ which appeared in the 12th century.
The sack of Constantinople in 1204 caused the decline of mosaic art for the next five decades. After the reconquest of the city by Michael VIII Palaiologos in 1261 the Hagia Sophia was restored and a beautiful new Deesis was made on the south gallery. This huge mosaic panel, with figures two and a half times life-size, is overwhelming in its grand scale and superlative craftsmanship. The Hagia Sophia Deesis is probably the most famous Byzantine mosaic in Constantinople.
The Pammakaristos Monastery was restored by Michael Glabas, an imperial official, in the late 13th century. Only the mosaic decoration of the small burial chapel (parekklesion) of Glabas survived. This domed chapel was built by his widow, Martha around 1304–08. In the miniature dome the traditional Pantokrator can be seen with twelve prophets beneath. Unusually the apse is decorated with a Deesis, probably due to the funerary function of the chapel.
The Church of the Holy Apostles in Thessaloniki was built in 1310–14. Although vandals systematically removed the gold tesserae of the background, it can be seen that the Pantokrator and the prophets in the dome follow the traditional Byzantine pattern. Many details are similar to the Pammakaristos mosaics so it is supposed that the same team of mosaicists worked in both buildings. Another building with a related mosaic decoration is the Theotokos Paregoritissa Church in Arta. The church was established by the Despot of Epirus in 1294–96. In the dome is the traditional stern Pantokrator, with prophets and cherubim below.
thumb|left|Mosaic of Theodore Metochites offering the Chora Church to Christ
The greatest mosaic work of the Palaeologan renaissance in art is the decoration of the Chora Church in Constantinople. Although the mosaics of the naos have not survived except for three panels, the decoration of the exonarthex and the esonarthex constitutes the most important full-scale mosaic cycle in Constantinople after the Hagia Sophia. They were executed around 1320 at the command of Theodore Metochites. The esonarthex has two fluted domes, specially created to provide the ideal setting for the mosaic images of the ancestors of Christ. The southern one is called the Dome of the Pantokrator while the northern one is the Dome of the Theotokos. The most important panel of the esonarthex depicts Theodore Metochites wearing a huge turban, offering the model of the church to Christ. The walls of both narthexes are decorated with mosaic cycles from the life of the Virgin and the life of Christ. These panels show the influence of the Italian trecento on Byzantine art, especially in the more natural settings, landscapes and figures.
The last Byzantine mosaic work was created for the Hagia Sophia, Constantinople in the middle of the 14th century. The great eastern arch of the cathedral collapsed in 1346, bringing down a third of the main dome. By 1355 not only had the great Pantokrator image been restored, but new mosaics had been set on the eastern arch depicting the Theotokos, the Baptist and Emperor John V Palaiologos (discovered only in 1989).
In addition to the large-scale monuments, several miniature mosaic icons of outstanding quality were produced for the Palaiologos court and nobles. The loveliest examples from the 14th century are the Annunciation in the Victoria and Albert Museum and a mosaic diptych in the Cathedral Treasury of Florence representing the Twelve Feasts of the Church.
In the troubled years of the 15th century the fatally weakened empire could not afford luxurious mosaics. Churches were decorated with wall-paintings in this era and after the Turkish conquest.
Rome in the High Middle Ages
thumb|Apse mosaic in the Santa Maria Maggiore
The last great period of Roman mosaic art was the 12th–13th century when Rome developed its own distinctive artistic style, free from the strict rules of eastern tradition and with a more realistic portrayal of figures in the space. Well-known works of this period are the floral mosaics of the Basilica di San Clemente, the façade of Santa Maria in Trastevere and San Paolo fuori le Mura. The beautiful apse mosaic of Santa Maria in Trastevere (1140) depicts Christ and Mary sitting next to each other on the heavenly throne, the first example of this iconographic scheme. A similar mosaic, the Coronation of the Virgin, decorates the apse of Santa Maria Maggiore. It is a work of Jacopo Torriti from 1295. The mosaics of Torriti and Jacopo da Camerino in the apse of San Giovanni in Laterano from 1288–94 were thoroughly restored in 1884. The apse mosaic of San Crisogono is attributed to Pietro Cavallini, the greatest Roman painter of the 13th century. Six scenes from the life of Mary in Santa Maria in Trastevere were also executed by Cavallini in 1290. These mosaics are praised for their realistic portrayal and attempts at perspective. There is an interesting mosaic medallion from 1210 above the gate of the church of San Tommaso in Formis showing Christ enthroned between a white and a black slave. The church belonged to the Order of the Trinitarians which was devoted to ransoming Christian slaves.
The great Navicella mosaic (1305–1313) in the atrium of the Old St. Peter's is attributed to Giotto di Bondone. The giant mosaic, commissioned by Cardinal Jacopo Stefaneschi, was originally situated on the eastern porch of the old basilica and occupied the whole wall above the entrance arcade facing the courtyard. It depicted St. Peter walking on the waters. This extraordinary work was mainly destroyed during the construction of the new St. Peter's in the 17th century. Navicella means "little ship" referring to the large boat which dominated the scene, and whose sail, filled by the storm, loomed over the horizon. Such a natural representation of a seascape was known only from ancient works of art.
Sicily
thumb|upright|Saracen arches and Byzantine mosaics in the Cappella Palatina of Roger II of Sicily
The heyday of mosaic making in Sicily was the age of the independent Norman kingdom in the 12th century. The Norman kings adopted the Byzantine tradition of mosaic decoration to enhance the somewhat dubious legality of their rule. Greek masters working in Sicily developed their own style, that shows the influence of Western European and Islamic artistic tendencies. Best examples of Sicilian mosaic art are the Cappella Palatina of Roger II,Some Palatine Aspects of the Cappella Palatina in Palermo, Slobodan Ćurčić, Dumbarton Oaks Papers, Vol. 41, 139. the Martorana church in Palermo and the cathedrals of Cefalù and Monreale.
The Cappella Palatina clearly shows evidence of blending the eastern and western styles. The dome (1142–42) and the eastern end of the church (1143–1154) were decorated with typical Byzantine mosaics, i.e. the Pantokrator, angels, scenes from the life of Christ. Even the inscriptions are written in Greek. The narrative scenes of the nave (Old Testament, life of Sts Peter and Paul) resemble the mosaics of the Old St. Peter's and St. Paul's Basilica in Rome (Latin inscriptions, 1154–66).
The Martorana church (decorated around 1143) originally looked even more Byzantine although important parts were later demolished. The dome mosaic is similar to that of the Cappella Palatina, with Christ enthroned in the middle and four bowed, elongated angels. The Greek inscriptions, decorative patterns, and evangelists in the squinches were obviously executed by the same Greek masters who worked on the Cappella Palatina. The mosaic depicting Roger II of Sicily, dressed in Byzantine imperial robes and receiving the crown from Christ, was originally in the demolished narthex together with another panel, the Theotokos with Georgios of Antiochia, the founder of the church.
In Cefalù (1148) only the high, French Gothic presbytery was covered with mosaics: the Pantokrator on the semidome of the apse and cherubim on the vault. On the walls are Latin and Greek saints, with Greek inscriptions.
thumb|upright|Monreale mosaics: William II offering the Monreale Cathedral to the Virgin Mary
The Monreale mosaics constitute the largest decoration of this kind in Italy, covering 0.75 hectares with at least 100 million glass and stone tesserae. This huge work was executed between 1176 and 1186 by the order of King William II of Sicily. The iconography of the mosaics in the presbytery is similar to Cefalù while the pictures in the nave are almost the same as the narrative scenes in the Cappella Palatina. The Martorana mosaic of Roger II blessed by Christ was repeated with the figure of King William II instead of his predecessor. Another panel shows the king offering the model of the cathedral to the Theotokos.
The Cathedral of Palermo, rebuilt by Archbishop Walter at the same time (1172–85), was also decorated with mosaics but none of these survived except the 12th-century image of the Madonna del Tocco above the western portal.
The cathedral of Messina, consecrated in 1197, was also decorated with a great mosaic cycle, originally on par with Cefalù and Monreale, but heavily damaged and restored many times later. In the left apse of the same cathedral 14th-century mosaics survived, representing the Madonna and Child between Saints Agata and Lucy, the Archangels Gabriel and Michael and Queens Eleonora and Elisabetta.
Southern Italy was also part of the Norman kingdom but great mosaics did not survive in this area except the fine mosaic pavement of the Otranto Cathedral from 1166, with mosaics tied into a tree of life, mostly still preserved. The scenes depict biblical characters, warrior kings, medieval beasts, allegories of the months and working activity. Only fragments survived from the original mosaic decoration of Amalfi's Norman Cathedral. The mosaic ambos in the churches of Ravello prove that mosaic art was widespread in Southern Italy during the 11th–13th centuries.
The palaces of the Norman kings were decorated with mosaics depicting animals and landscapes. The secular mosaics are seemingly more Eastern in character than the great religious cycles and show a strong Persian influence. The most notable examples are the Sala di Ruggero in the Palazzo dei Normanni, Palermo and the Sala della Fontana in the Zisa summer palace, both from the 12th century.
Venice
In parts of Italy, which were under eastern artistic influences, like Sicily and Venice, mosaic making never went out of fashion in the Middle Ages. The whole interior of the St Mark's Basilica in Venice is clad with elaborate, golden mosaics. The oldest scenes were executed by Greek masters in the late 11th century but the majority of the mosaics are works of local artists from the 12th–13th centuries. The decoration of the church was finished only in the 16th century. One hundred and ten scenes of mosaics in the atrium of St Mark's were based directly on the miniatures of the Cotton Genesis, a Byzantine manuscript that was brought to Venice after the sack of Constantinople (1204). The mosaics were executed in the 1220s.
Other important Venetian mosaics can be found in the Cathedral of Santa Maria Assunta in Torcello from the 12th century, and in the Basilica of Santi Maria e Donato in Murano with a restored apse mosaic from the 12th century and a beautiful mosaic pavement (1140). The apse of the San Cipriano Church in Murano was decorated with an impressive golden mosaic from the early 13th century showing Christ enthroned with Mary, St John and the two patron saints, Cipriano and Cipriana. When the church was demolished in the 19th century, the mosaic was bought by Frederick William IV of Prussia. It was reassembled in the Friedenskirche of Potsdam in the 1840s.
Trieste was also an important center of mosaic art. The mosaics in the apse of the Cathedral of San Giusto were laid by master craftsmen from Veneto in the 12th–13th centuries.
Medieval Italy
The monastery of Grottaferrata founded by Greek Basilian monks and consecrated by the Pope in 1024 was decorated with Italo-Byzantine mosaics, some of which survived in the narthex and the interior. The mosaics on the triumphal arch portray the Twelve Apostles sitting beside an empty throne, evoking Christ's ascent to Heaven. It is a Byzantine work of the 12th century. There is a beautiful 11th-century Deesis above the main portal.
The Abbot of Monte Cassino, Desiderius sent envoys to Constantinople some time after 1066 to hire expert Byzantine mosaicists for the decoration of the rebuilt abbey church. According to chronicler Leo of Ostia the Greek artists decorated the apse, the arch and the vestibule of the basilica. Their work was admired by contemporaries but was totally destroyed in later centuries except two fragments depicting greyhounds (now in the Monte Cassino Museum). "The abbot in his wisdom decided that great number of young monks in the monastery should be thoroughly initiated in these arts" – says the chronicler about the role of the Greeks in the revival of mosaic art in medieval Italy.
thumb|Florence Baptistry
In Florence a magnificent mosaic of the Last Judgement decorates the dome of the Baptistery. The earliest mosaics, works of art of many unknown Venetian craftsmen (including probably Cimabue), date from 1225. The covering of the ceiling was probably not completed until the 14th century.
The impressive mosaic of Christ in Majesty, flanked by the Blessed Virgin and St. John the Evangelist, in the apse of the cathedral of Pisa was designed by Cimabue in 1302. It evokes the Monreale mosaics in style. It survived the great fire of 1595 which destroyed most of the medieval interior decoration.
Sometimes not only church interiors but façades were also decorated with mosaics in Italy, as in the case of the St Mark's Basilica in Venice (mainly from the 17th–19th centuries, but the oldest one from 1270–75, "The burial of St Mark in the first basilica"), the Cathedral of Orvieto (golden Gothic mosaics from the 14th century, many times redone) and the Basilica di San Frediano in Lucca (huge, striking golden mosaic representing the Ascension of Christ with the apostles below, designed by Berlinghiero Berlinghieri in the 13th century). The Cathedral of Spoleto is also decorated on the upper façade with a huge mosaic portraying the Blessing Christ (signed by one Solsternus from 1207).
Western and Central Europe
thumb|left|A “painting” made from tesserae in St Peter's Basilica, Vatican State, Italy
Beyond the Alps the first important example of mosaic art was the decoration of the Palatine Chapel in Aachen, commissioned by Charlemagne. It was completely destroyed in a fire in 1650. A rare example of surviving Carolingian mosaics is the apse semi-dome decoration of the oratory of Germigny-des-Prés built in 805–806 by Theodulf, bishop of Orléans, a leading figure of the Carolingian renaissance. This unique work of art, rediscovered only in the 19th century, had no followers.
Only scant remains prove that mosaics were still used in the Early Middle Ages. The Abbey of Saint-Martial in Limoges, originally an important place of pilgrimage, was totally demolished during the French Revolution except its crypt which was rediscovered in the 1960s. A mosaic panel was unearthed which was dated to the 9th century. It somewhat incongruously uses cubes of gilded glass and deep green marble, probably taken from antique pavements. This could also be the case with the early 9th century mosaic found under the Basilica of Saint-Quentin in Picardy, where antique motifs are copied but using only simple colors. The mosaics in the Cathedral of Saint-Jean at Lyon have been dated to the 11th century because they employ the same non-antique simple colors. More fragments were found on the site of Saint-Croix at Poitiers which might be from the 6th or 9th century.
thumb|left|Close up of the bottom left corner of the picture above. Click the picture to see the individual tesserae
Later fresco replaced the more labor-intensive technique of mosaic in Western Europe, although mosaics were sometimes used as decoration on medieval cathedrals. The Royal Basilica of the Hungarian kings in Székesfehérvár (Alba Regia) had a mosaic decoration in the apse. It was probably a work of Venetian or Ravennese craftsmen, executed in the first decades of the 11th century. The mosaic was almost totally destroyed together with the basilica in the 17th century. The Golden Gate of the St. Vitus Cathedral in Prague got its name from the golden 14th-century mosaic of the Last Judgement above the portal. It was executed by Venetian craftsmen.
thumb|right|Carolingian mosaic in Germigny-des-Prés
The Crusaders in the Holy Land also adopted mosaic decoration under local Byzantine influence. During their 12th-century reconstruction of the Church of the Holy Sepulchre in Jerusalem they complemented the existing Byzantine mosaics with new ones. Almost nothing of them survived except the "Ascension of Christ" in the Latin Chapel (now confusingly surrounded by many 20th-century mosaics). More substantial fragments were preserved from the 12th-century mosaic decoration of the Church of the Nativity in Bethlehem. The mosaics in the nave are arranged in five horizontal bands with the figures of the ancestors of Christ, Councils of the Church and angels. In the apses the Annunciation, the Nativity, Adoration of the Magi and Dormition of the Blessed Virgin can be seen. The program of redecoration of the church was completed in 1169 as a unique collaboration of the Byzantine emperor, the king of Jerusalem and the Latin Church.
In 2003, the remains of a mosaic pavement were discovered under the ruins of the Bizere Monastery near the River Mureş in present-day Romania. The panels depict real or fantastic animal, floral, solar and geometric representations. Some archeologists supposed that it was the floor of an Orthodox church, built some time between the 10th and 11th centuries. Other experts claim that it was part of the later Catholic monastery on the site because it shows the signs of strong Italianate influence. The monastery was at that time situated in the territory of the Kingdom of Hungary.
Renaissance and Baroque
Although mosaics went out of fashion and were substituted by frescoes, some of the great Renaissance artists also worked with the old technique. Raphael's Creation of the World in the dome of the Chigi Chapel in Santa Maria del Popolo is a notable example that was executed by a Venetian craftsman, Luigi di Pace.
During the papacy of Clement VIII (1592–1605), the “Congregazione della Reverenda Fabbrica di San Pietro" was established, providing an independent organisation charged with completing the decorations in the newly built St. Peter's Basilica. Instead of frescoes the cavernous Basilica was mainly decorated with mosaics. Among the explanations are:
The old St. Peter's Basilica had been decorated with mosaic, as was common in churches built during the early Christian era; the 17th century followed the tradition to enhance continuity.
In a church like this with high walls and few windows, mosaics were brighter and reflected more light.
Mosaics had greater intrinsic longevity than either frescoes or canvases.
Mosaics had an association with bejeweled decoration, flaunting richness.
The mosaics of St. Peter's often show lively Baroque compositions based on designs or canvases by artists like Ciro Ferri, Guido Reni, Domenichino, Carlo Maratta, and many others. Raphael is represented by a mosaic replica of his last painting, the Transfiguration. Many different artists contributed to the 17th- and 18th-century mosaics in St. Peter's, including Giovanni Battista Calandra, Fabio Cristofari (died 1689), and Pietro Paolo Cristofari (died 1743).DiFederico, F. R. (1983), The mosaics of Saint Peter's Decorating the New Basilica, University Park: Pennsylvania State University Press, pp. 3–26. Works of the Fabbrica were often used as papal gifts.
The Christian East
thumb|Jerusalem on the Madaba Map
The eastern provinces of the Eastern Roman and later the Byzantine Empires inherited a strong artistic tradition from Late Antiquity. Similarly to Italy and Constantinople, churches and important secular buildings in the region of Syria and Egypt were decorated with elaborate mosaic panels between the 5th and 8th centuries. The great majority of these works of art were later destroyed but archeological excavations unearthed many surviving examples.
The single most important piece of Byzantine Christian mosaic art in the East is the Madaba Map, made between 542 and 570 as the floor of the church of Saint George at Madaba, Jordan. It was rediscovered in 1894. The Madaba Map is the oldest surviving cartographic depiction of the Holy Land. It depicts an area from Lebanon in the north to the Nile Delta in the south, and from the Mediterranean Sea in the west to the Eastern Desert. The largest and most detailed element of the topographic depiction is Jerusalem, at the center of the map. The map is enriched with many naturalistic features, like animals, fishing boats, bridges and palm trees.
One of the earliest examples of Byzantine mosaic art in the region can be found on Mount Nebo, an important place of pilgrimage in the Byzantine era where Moses died. Among the many 6th-century mosaics in the church complex (discovered after 1933) the most interesting one is located in the baptistery. The intact floor mosaic covers an area of 9 x 3 m and was laid down in 530. It depicts hunting and pastoral scenes with rich Middle Eastern flora and fauna.
thumb|left|Mosaic floor from the church on Mount Nebo (baptistery, 530)
The Church of Sts. Lot and Procopius was founded in 567 in Nebo village under Mount Nebo (now Khirbet Mukhayyat). Its floor mosaic depicts everyday activities like grape harvest. Another two spectacular mosaics were discovered in the ruined Church of Preacher John nearby. One of the mosaics was placed above the other one which was completely covered and unknown until the modern restoration. The figures on the older mosaic have thus escaped the iconoclasts.
The town of Madaba remained an important center of mosaic making during the 5th–8th centuries. In the Church of the Apostles, in the middle of the main panel, Thalassa, goddess of the sea, can be seen surrounded by fishes and other sea creatures. Native Middle Eastern birds, mammals, plants and fruits were also added.The mosaics of Jordan
thumb|The Transfiguration of Jesus in the Saint Catherine's Monastery
Important Justinian era mosaics decorated the Saint Catherine's Monastery on Mount Sinai in Egypt. Generally wall mosaics have not survived in the region because of the destruction of buildings but the St. Catherine's Monastery is exceptional. On the upper wall Moses is shown in two panels on a landscape background. In the apse we can see the Transfiguration of Jesus on a golden background. The apse is surrounded with bands containing medallions of apostles and prophets, and two contemporary figures, "Abbot Longinos" and "John the Deacon". The mosaic was probably created in 565/6.
Jerusalem with its many holy places probably had the highest concentration of mosaic-covered churches, but very few of them survived the subsequent waves of destruction. The present remains do not do justice to the original richness of the city. The most important is the so-called "Armenian Mosaic" which was discovered in 1894 on the Street of the Prophets near Damascus Gate. It depicts a vine with many branches and grape clusters, which springs from a vase. Populating the vine's branches are peacocks, ducks, storks, pigeons, an eagle, a partridge, and a parrot in a cage. The inscription reads: "For the memory and salvation of all those Armenians whose name the Lord knows." Beneath a corner of the mosaic is a small, natural cave which contained human bones dating to the 5th or 6th centuries. The symbolism of the mosaic and the presence of the burial cave indicate that the room was used as a mortuary chapel.
An exceptionally well preserved, carpet-like mosaic floor was uncovered in 1949 in Bethany, in the early Byzantine church of the Lazarium, which was built between 333 and 390. Because of its purely geometrical pattern, the church floor is to be grouped with other mosaics of the time in Palestine and neighboring areas, especially the Constantinian mosaics in the central nave at Bethlehem.Bethany in Byzantine times I A second church was built above the older one during the 6th century, with another, simpler geometric mosaic floor.
thumb|left|Detail from the mosaic floor of the Byzantine church in Masada. The monastic community lived here in the 5th–7th centuries.
The monastic communities of the Judean Desert also decorated their monasteries with mosaic floors. The Monastery of Martyrius was founded at the end of the 5th century and was rediscovered in 1982–85. The most important work of art here is the intact geometric mosaic floor of the refectory, although the severely damaged church floor was similarly rich.The Monastery of Martyrius The mosaics in the church of the nearby Monastery of Euthymius are of later date (discovered in 1930). They were laid down in the Umayyad era, after a devastating earthquake in 659. Two six-pointed stars and a red chalice are the most important surviving features.
thumb|upright|Detail from the mosaic floor of the Petra Church
Mosaic art also flourished in Christian Petra, where three Byzantine churches were discovered. The most important one was uncovered in 1990. It is known that the walls were also covered with golden glass mosaics, but, as usual, only the floor panels survived. The mosaic of the seasons in the southern aisle dates from the first building period, in the middle of the 5th century. In the first half of the 6th century the mosaics of the northern aisle and the eastern end of the southern aisle were installed. They depict native as well as exotic or mythological animals, and personifications of the Seasons, Ocean, Earth and Wisdom.Petra Church – Mosaic Floors – Petra, Jordan « Mosaic Art Source
The Arab conquest of the Middle East in the 7th century did not break off the art of mosaic making. Arabs learned and accepted the craft as their own and carried on the classical tradition. During the Umayyad era Christianity retained its importance, churches were built and repaired and some of the most important mosaics of the Christian East were made during the 8th century when the region was under Islamic rule.
The mosaics of the Church of St Stephen in ancient Kastron Mefaa (now Umm ar-Rasas) were made in 785 (discovered after 1986). The perfectly preserved mosaic floor is the largest one in Jordan. On the central panel hunting and fishing scenes are depicted while another panel illustrates the most important cities of the region. The frame of the mosaic is especially decorative. Six mosaic masters signed the work: Staurachios from Esbus, Euremios, Elias, Constantinus, Germanus and Abdela. It overlays another, damaged, mosaic floor of the earlier (587) "Church of Bishop Sergius." Another four churches were excavated nearby with traces of mosaic decoration.
The last great mosaics in Madaba were made in 767 in the Church of the Virgin Mary (discovered in 1887). It is a masterpiece of the geometric style with a Greek inscription in the central medallion.
With the fall of the Umayyad dynasty in 750 the Middle East went through deep cultural changes. No great mosaics were made after the end of the 8th century and the majority of churches gradually fell into disrepair and were eventually destroyed. The tradition of mosaic making died out among the Christians and also in the Islamic community.
Orthodox countries
thumb|upright=0.7|Early 12th-century Kievan mosaic depicting St. Demetrius.
The craft has also been popular in early medieval Rus, inherited as part of the Byzantine tradition. Yaroslav, the Grand Prince of Kievan Rus', built a large cathedral in his capital, Kiev. The model of the church was the Hagia Sophia in Constantinople, and it was also called Saint Sophia Cathedral. It was built mainly by Byzantine master craftsmen, sent by Constantine Monomachos, between 1037 and 1046. Naturally, the more important surfaces in the interior were decorated with golden mosaics. In the dome we can see the traditional stern Pantokrator supported by angels. Between the 12 windows of the drum were apostles, and the four evangelists on the pendentives. The apse is dominated by an orant Theotokos with a Deesis in three medallions above. Below is a Communion of the Apostles.
thumb|left|Apse mosaic "Glory of the Theotokos" in Gelati, Georgia. c. 1125–1130.
Prince Sviatopolk II built St. Michael's Golden-Domed Monastery in Kiev in 1108. The mosaics of the church are undoubtedly works of Byzantine artists. Although the church was destroyed by Soviet authorities, the majority of the panels were preserved. Small parts of the ornamental mosaic decoration from the 12th century survived in the Saint Sophia Cathedral in Novgorod, but this church was largely decorated with frescoes.
Using mosaics and frescoes in the same building was a unique practice in Ukraine. Harmony was achieved by using the same dominant colors in mosaic and fresco. Both Saint Sophia Cathedral and Saint Michael's Golden-Domed Monastery in Kiev use this technique. Mosaics stopped being used for church decoration as early as the 12th century in the eastern Slavic countries. Later Russian churches were decorated with frescoes, similarly to Orthodox churches in the Balkans.
The apse mosaic of the Gelati Monastery is a rare example of mosaic use in Georgia. Begun by King David IV and completed by his son Demetrius I of Georgia, the fragmentary panel depicts the Theotokos flanked by two archangels. The use of mosaic in Gelati attests to some Byzantine influence in the country and was a demonstration of the imperial ambition of the Bagrationids. The mosaic-covered church could compete in magnificence with the churches of Constantinople. Gelati is one of the few mosaic creations which survived in Georgia, but fragments prove that the early churches of Pitsunda and Tsromi were also decorated with mosaic, as were other, lesser known sites. The destroyed 6th-century mosaic floors in the Pitsunda Cathedral were inspired by Roman prototypes. In Tsromi the tesserae are still visible on the walls of the 7th-century church, but only faint lines hint at the original scheme. Its central figure was Christ standing and displaying a scroll with Georgian text.
Jewish mosaics
thumb|right|Zodiac wheel on the floor of the synagogue in Sepphoris
Under Roman and Byzantine influence Jews also decorated their synagogues with classical floor mosaics. Many interesting examples were discovered in Galilee and the Judean Desert.
The remains of a 6th-century synagogue have been uncovered in Sepphoris, which was an important centre of Jewish culture between the 3rd–7th centuries and a multicultural town inhabited by Jews, Christians and pagans. The mosaic reflects an interesting fusion of Jewish and pagan beliefs. In the center of the floor the zodiac wheel was depicted. Helios sits in the middle, in his sun chariot, and each zodiac is matched with a Jewish month. Along the sides of the mosaic are strips depicting Biblical scenes, such as the binding of Isaac, as well as traditional rituals, including a burnt sacrifice and the offering of fruits and grains.
Another zodiac mosaic decorated the floor of the Beit Alfa synagogue, which was built during the reign of Justin I (518–27). It is regarded as one of the most important mosaics discovered in Israel. Each of its three panels depicts a scene – the Holy Ark, the zodiac, and the story of the sacrifice of Isaac. In the center of the zodiac is Helios, the sun god, in his chariot. The four women in the corners of the mosaic represent the four seasons.
A third superbly preserved zodiac mosaic was discovered in the Severus synagogue in the ancient resort town of Hammat Tiberias. In the center of the 4th-century mosaic the Sun god, Helios, sits in his chariot holding the celestial sphere and a whip. Nine of the 12 signs of the zodiac survived intact. Another panel shows the Ark of the Covenant and Jewish cultic objects used in the Temple at Jerusalem.
In 1936, a synagogue was excavated in Jericho which was named Shalom Al Yisrael Synagogue after an inscription on its mosaic floor ("Peace on Israel"). It appears to have been in use from the 5th to 8th centuries and contained a big mosaic on the floor with drawings of the Ark of the Covenant, the Menorah, a Shofar and a Lulav. Nearby in Naaran, there is another synagogue (discovered in 1918) from the 6th century that also has a mosaic floor.
The synagogue in Eshtemoa (As-Samu) was built around the 4th century. The mosaic floor is decorated with only floral and geometric patterns. The synagogue in Khirbet Susiya (excavated in 1971–72, founded in the end of the 4th century) has three mosaic panels, the eastern one depicting a Torah shrine, two menorahs, a lulav and an etrog with columns, deer and rams. The central panel is geometric while the western one is seriously damaged but it has been suggested that it depicted Daniel in the lion’s den. The Roman synagogue in Ein Gedi was remodeled in the Byzantine era and a more elaborate mosaic floor was laid down above the older white panels. The usual geometric design was enriched with birds in the center. It includes the names of the signs of the zodiac and important figures from the Jewish past but not their images suggesting that it served a rather conservative community.
The ban on figurative depiction was not taken so seriously by the Jews living in Byzantine Gaza. In 1966 remains of a synagogue were found in the ancient harbour area. Its mosaic floor depicts King David as Orpheus, identified by his name in Hebrew letters. Near him were lion cubs, a giraffe and a snake listening to him playing a lyre. A further portion of the floor was divided by medallions formed by vine leaves, each of which contains an animal: a lioness suckling her cub, a giraffe, peacocks, panthers, bears, a zebra and so on. The floor was paved in 508/509. It is very similar to that of the synagogue at Maon (Menois) and the Christian church at Shellal, suggesting that the same artist most probably worked at all three places.
The House of Leontius in Bet She'an (excavated in 1964–72) is a rare example of a synagogue which was part of an inn. It was built in the Byzantine period. The colorful mosaic floor of the synagogue room had an outer stripe decorated with flowers and birds, around medallions with animals, created by vine trellises emerging from an amphora. The central medallion enclosed a menorah (candelabrum) beneath the word shalom (peace).
A 5th-century building in Huldah may be a Samaritan synagogue. Its mosaic floor contains typical Jewish symbols (menorah, lulav, etrog) but the inscriptions are Greek. Another Samaritan synagogue with a mosaic floor was located in Bet She'an (excavated in 1960). The floor had only decorative motifs and an aedicule (shrine) with cultic symbols. The ban on human or animal images was more strictly observed by the Samaritans than their Jewish neighbours in the same town (see above). The mosaic was laid by the same masters who made the floor of the Beit Alfa synagogue. One of the inscriptions was written in Samaritan script.
In 2003, a synagogue of the 5th or 6th century was uncovered in the coastal Ionian town of Saranda, Albania. It had exceptional mosaics depicting items associated with Jewish holidays, including a menorah, ram's horn, and lemon tree. Mosaics in the basilica of the synagogue show the facade of what resembles a Torah, animals, trees, and other biblical symbols. The structure measures 20 by 24 m. and was probably last used in the 6th century as a church.
Middle Eastern and Western Asian art
Pre-Islamic Arabia
In South Arabia, two mosaic works from the late 3rd century have been excavated in a Qatabanian context; the two panels feature geometric and grapevine designs reflecting the traditions of that culture. In the Ghassanid era, religious mosaic art flourished in their territory: five churches with mosaics have so far been recorded from that era, two built by Ghassanid rulers and the other three by the Christian Arab community, who wrote their names and dedications.
thumb|upright|Floor pavement representing female dancers, Shapur palace, Bishapur
Pre-Islamic Persia
Tilework had been known there for about two thousand years when cultural exchange between the Sassanid Empire and the Romans influenced Persian artists to create mosaic patterns. Shapur I decorated his palace with tile compositions depicting dancers, musicians, courtesans, etc. This was the only significant example of figurative Persian mosaic, which became prohibited after the Arab conquest and the arrival of Islam.
Islamic art
thumb|Complex Mosaic patterns also known as Girih are popular forms of architectural art in many Muslim cultures. Tomb of Hafez, Shiraz, Iran
Arab
thumb|upright|left|Islamic mosaics inside the Dome of the Rock in Palestine (c. 690)
Islamic architecture used the mosaic technique to decorate religious buildings and palaces after the Muslim conquests of the eastern provinces of the Byzantine Empire. In Syria and Egypt the Arabs were influenced by the great tradition of Roman and Early Christian mosaic art. During the Umayyad Dynasty, mosaic making remained a flourishing art form in Islamic culture, and it continues in the art of zellige and azulejo in various parts of the Arab world, although tile was to become the main Islamic form of wall decoration.
The first great religious building of Islam, the Dome of the Rock in Jerusalem, built between 688 and 692, was decorated with glass mosaics both inside and outside by craftsmen of the Byzantine tradition. Only parts of the original interior decoration survive. The rich floral motifs follow Byzantine traditions, and are "Islamic only in the sense that the vocabulary is syncretic and does not include representation of men or animals."Jerusalem, Israel. sacredsites.com. Retrieved on 12 April 2008.
thumb|The Umayyad mosaics of Hisham's Palace closely followed classical traditions
The most important early Islamic mosaic work is the decoration of the Umayyad Mosque in Damascus, then capital of the Arab Caliphate. The mosque was built between 706 and 715. The caliph obtained 200 skilled workers from the Byzantine Emperor to decorate the building. This is evidenced by the partly Byzantine style of the decoration. The mosaics of the inner courtyard depict Paradise with beautiful trees, flowers and small hill towns and villages in the background. The mosaics include no human figures, which makes them different from the otherwise similar contemporary Byzantine works. The biggest continuous section survives under the western arcade of the courtyard, called the "Barada Panel" after the river Barada. It is thought that the mosque used to have the largest gold mosaic in the world, at over 4 m2. In 1893 a fire damaged the mosque extensively, and many mosaics were lost, although some have been restored since.
The mosaics of the Umayyad Mosque gave inspiration to later Damascene mosaic works. The Dome of the Treasury, which stands in the mosque courtyard, is covered with fine mosaics, probably dating from 13th- or 14th-century restoration work. Their style is strikingly similar to that of the Barada Panel. The mausoleum of Sultan Baibars, Madrassa Zahiriyah, which was built after 1277, is also decorated with a band of golden floral and architectural mosaics, running around inside the main prayer hall.Zahiriyya Madrasa and Mausoleum of Sultan al-Zahir Baybars
Non-religious Umayyad mosaic works were mainly floor panels which decorated the palaces of the caliphs and other high-ranking officials. They were closely modeled after the mosaics of the Roman country villas, once common in the Eastern Mediterranean. The most superb example can be found in the bath house of Hisham's Palace, Palestine, which was made around 744. The main panel depicts a large tree and underneath it a lion attacking a deer (right side) and two deer grazing peacefully (left side). The panel probably represents good and bad governance. Mosaics with classical geometric motifs survived in the bath area of the 8th-century Umayyad palace complex in Anjar, Lebanon. The luxurious desert residence of Al-Walid II in Qasr al-Hallabat (in present-day Jordan) was also decorated with floor mosaics that show a high level of technical skill. The best preserved panel at Hallabat is divided by a Tree of Life flanked by "good" animals on one side and "bad" animals on the other. Among the Hallabat representations are vine scrolls, grapes, pomegranates, oryx, wolves, hares, a leopard, pairs of partridges, fish, bulls, ostriches, rabbits, rams, goats, lions and a snake. At Qastal, near Amman, excavations in 2000 uncovered the earliest known Umayyad mosaics in present-day Jordan, dating probably from the caliphate of Abd al-Malik ibn Marwan (685–705). They cover much of the floor of a finely decorated building that probably served as the palace of a local governor. The Qastal mosaics depict geometrical patterns, trees, animals, fruits and rosettes. Except for the open courtyard, entrance and staircases, the floors of the entire palace were covered in mosaics.Saudi Aramco World : Mosaic Country
thumb|right|Golden mosaics in the dome of the Great Mosque in Córdoba, Moorish Spain (965–970)
Some of the best examples of later Islamic mosaics were produced in Moorish Spain. The golden mosaics in the mihrab and the central dome of the Great Mosque in Córdoba have a decidedly Byzantine character. They were made between 965 and 970 by local craftsmen, supervised by a master mosaicist from Constantinople, who was sent by the Byzantine Emperor to the Umayyad Caliph of Spain. The decoration is composed of colorful floral arabesques and wide bands of Arab calligraphy. The mosaics were intended to evoke the glamour of the Great Mosque in Damascus, which had been lost to the Umayyad family.Marianne Barrucand – Achim Bednorz: Moorish Architecture in Andalusia, Taschen, 2002, p. 84
Mosaics generally went out of fashion in the Islamic world after the 8th century. Similar effects were achieved by the use of painted tilework, either geometric with small tiles, sometimes called mosaic, like the zillij of North Africa, or larger tiles painted with parts of a large decorative scheme (Qashani) in Persia, Turkey and further east.
Modern mosaics
thumb|left|upright=0.8|Mosaic embedded in stone wall, Italian area of Switzerland
thumb|Running Rug, 2001 – structural mosaic work by Marcelo de Melo
Noted 19th-century mosaics include those by Edward Burne-Jones at St Pauls within the Walls in Rome. Another modern mosaic of note is the world's largest mosaic installation, located at the Cathedral Basilica of St. Louis in St. Louis, Missouri. A modern example of mosaic is the Museum of Natural History station of the New York City Subway (there are many such works of art scattered throughout the New York City subway system, though many IND stations were given plainer mosaic designs). Another example of mosaics in ordinary surroundings is the use of locally themed mosaics in some restrooms in the rest areas along some Texas interstate highways.
Some modern mosaics are the work of modernisme style architects Antoni Gaudí and Josep Maria Jujol, for example the mosaics in the Park Güell in Barcelona. Today, among the leading figures of the mosaic world are Emma Biggs (UK), Marcelo de Melo (Brazil), Sonia King (USA) and Saimir Strati (Albania).
Mosaics as a popular craft
thumb|upright|A detail of mosaic mural made of modern bottle screw tops. A high school in Jerusalem, Israel
Mosaics have developed into a popular craft and art, and are not limited to professionals. Today's artisans and crafters work with stone, ceramics, shells, art glass, mirror, beads, and even odd items like doll parts, pearls, or photographs. While ancient mosaics tended to be architectural, modern mosaics are found covering everything from park benches and flowerpots to guitars and bicycles. Items can be as small as an earring or as large as a house.
Mosaics in street art
thumb|left|A work by Invader in Emaux de Briare.
In styles that owe as much to video game pixel art and pop culture as to traditional mosaic, street art has seen a novel reinvention and expansion of mosaic artwork. The most prominent artist working with mosaics in street art is the French artist Invader. He has done almost all of his work in two very distinct mosaic styles: the first consists of small "traditional" tile mosaics of 8-bit video game characters, installed in cities across the globe; the second is a style he refers to as "Rubikcubism", which uses a kind of dual-layer mosaic via grids of scrambled Rubik's Cubes. Although he is the most prominent, other street and urban artists also work in mosaic styles.
Calçada Portuguesa
thumb|upright|Copacabana (Rio de Janeiro)
Portuguese pavement (in Portuguese, Calçada Portuguesa) is a kind of two-tone stone mosaic paving created in Portugal, and common throughout the Lusosphere. Most commonly taking the form of geometric patterns from the simple to the complex, it also is used to create complex pictorial mosaics in styles ranging from iconography to classicism and even modern design. In Portuguese-speaking countries, many cities have a large amount of their sidewalks and even, though far more occasionally, streets done in this mosaic form. Lisbon in particular maintains almost all walkways in this style.
thumb|Mosaic art at Bonifacio High Street in Bonifacio Global City, Philippines
Despite its prevalence and popularity throughout Portugal and its former colonies, and its relation to older art and architectural styles like azulejo (Portuguese and Spanish painted tilework), it is a relatively young mosaic art form, its first definitive appearance in a recognizably modern form being in the mid-1800s. Among the most commonly used stones in this style are basalt and limestone.
Terminology
thumb|Fernand Léger – Grand parade with red background, mosaic 1958 (designed 1953). National Gallery of Victoria (NGV), Australia
Mosaic is an art form which uses small pieces of materials placed together to create a unified whole. The materials commonly used are marble or other stone, glass, pottery, mirror or foil-backed glass, or shells.
The word mosaic is from the Italian mosaico, deriving from the Latin mosaicus and ultimately from the Greek mouseios, meaning belonging to the Muses, hence artistic. Each piece of material is a tessera (plural: tesserae). The space in between, where the grout goes, is an interstice. Andamento is the word used to describe the movement and flow of tesserae. The opus, Latin for 'work', is the way in which the pieces are cut and placed.
Common techniques include:
Opus regulatum: A grid; all tesserae align both vertically and horizontally.
Opus tessellatum: Tesserae form vertical or horizontal rows, but not both.
Opus vermiculatum: One or more lines of tesserae follow the edge of a special shape (letters or a major central graphic).
Opus musivum: Vermiculatum extends throughout the entire background.
Opus palladianum: Instead of forming rows, tesserae are irregularly shaped. Also known as "crazy paving".
Opus sectile: A major shape (e.g. heart, letter, cat) is formed by a single tessera, as later in pietra dura.
Opus classicum: When vermiculatum is combined with tessellatum or regulatum.
Opus circumactum: Tesserae are laid in overlapping semicircles or fan shapes.
Micromosaic: using very small tesserae, in Byzantine icons and Italian panels for jewellery from the Renaissance on.
Three techniques
thumb|upright|Tool table for ancient Roman mosaics at Roman villa of La Olmeda in Pedrosa de la Vega, Province of Palencia (Castile and León, Spain).
thumb|These are the hammer and hardie, mosaic tools used for cutting stone by Italian mosaic artists
There are three main methods: the direct method, the indirect method and the double indirect method.
Direct method
right|thumb|A 'Direct Method' mosaic courtyard made from irregular pebbles and stone strips, Li Jiang, Yunnan, PRC (China)
The direct method of mosaic construction involves directly placing (gluing) the individual tesserae onto the supporting surface. This method is well suited to surfaces that have a three-dimensional quality, such as vases. This was used for the historic European wall and ceiling mosaics, following underdrawings of the main outlines on the wall below, which are often revealed again when the mosaic falls away.
The direct method suits small projects that are transportable. Another advantage of the direct method is that the resulting mosaic is progressively visible, allowing for any adjustments to tile color or placement.
The disadvantage of the direct method is that the artist must work directly at the chosen surface, which is often not practical for long periods of time, especially for large-scale projects. Also, it is difficult to control the evenness of the finished surface. This is of particular importance when creating a functional surface such as a floor or a table top.
A modern version of the direct method, sometimes called "double direct," is to work directly onto fiberglass mesh. The mosaic can then be constructed with the design visible on the surface and transported to its final location. Large work can be done in this way, with the mosaic being cut up for shipping and then reassembled for installation. It enables the artist to work in comfort in a studio rather than at the site of installation.
Indirect method
The indirect method of applying tesserae is often used for very large projects, projects with repetitive elements or for areas needing site specific shapes. Tiles are applied face-down to a backing paper using an adhesive, and later transferred onto walls, floors or craft projects. This method is most useful for extremely large projects as it gives the maker time to rework areas, allows the cementing of the tiles to the backing panel to be carried out quickly in one operation and helps ensure that the front surfaces of the mosaic tiles and mosaic pieces are flat and in the same plane on the front, even when using tiles and pieces of differing thicknesses. Mosaic murals, benches and tabletops are some of the items usually made using the indirect method, as it results in a smoother and more even surface.
Double indirect method
The double indirect method can be used when it is important to see the work during the creation process as it will appear when completed. The tesserae are placed face-up on a medium (often adhesive-backed paper, sticky plastic or soft lime or putty) as it will appear when installed. When the mosaic is complete, a similar medium is placed atop it. The piece is then turned over, the original underlying material is carefully removed, and the piece is installed as in the indirect method described above. In comparison to the indirect method, this is a complex system to use and requires great skill on the part of the operator, to avoid damaging the work. Its greatest advantage lies in the possibility of the operator directly controlling the final result of the work, which is important e.g. when the human figure is involved.
This method was created in 1989 by Maurizio Placuzzi and registered for industrial use (patent n. 0000222556) under the name of his company, Sicis International Srl, now Sicis The Art Mosaic Factory Srl.
Mathematics
The question of how best to arrange variously shaped tiles on a surface leads to the mathematical field of tessellation.
The artist M. C. Escher was influenced by Moorish mosaics to begin his investigations into tessellation.
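As a small illustration of the mathematics involved, the following Python sketch (an illustrative example, not drawn from the text above) checks the classical condition for a regular polygon to tile the plane by itself: its interior angle must divide 360 degrees evenly, so that a whole number of copies can meet around a vertex.

    # Which regular n-gons tile the plane on their own? The interior angle of a
    # regular n-gon is (n - 2) * 180 / n degrees; copies tile the plane only if
    # that angle divides 360 evenly.
    def tiles_plane(n):
        interior = (n - 2) * 180 / n
        return (360 / interior).is_integer()

    print([n for n in range(3, 13) if tiles_plane(n)])  # -> [3, 4, 6]: triangles, squares, hexagons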
Digital imaging
A mosaic in digital imaging is a plurality of non-overlapping images, arranged in some tessellation. A photomosaic is a picture made up of various other pictures (pioneered by Joseph Francis), in which each "pixel" is another picture, when examined closely. This form has been adopted in many modern media and digital image searches.
A tile mosaic is a digital image made up of individual tiles, arranged in a non-overlapping fashion, e.g. to make a static image on a shower room or bathing pool floor, by breaking the image down into square pixels formed from ceramic tiles, as for example on the floor of the University of Toronto pool, though sometimes larger tiles are used. These digital images are coarse in resolution and often simply express text, such as the depth of the pool in various places, but some such digital images are used to show a sunset or other beach theme.
Recent developments in digital image processing have led to the ability to design physical tile mosaics using computer aided design (CAD) software. The software typically takes as inputs a source bitmap and a palette of colored tiles. The software makes a best-fit match of the tiles to the source image.
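The best-fit step can be sketched in a few lines of Python (a minimal illustration; the 40 x 30 grid, the four-colour palette and the use of the Pillow imaging library are assumptions for the example, not features of any particular CAD product): the source bitmap is divided into a grid of cells, each cell's average colour is computed, and the closest palette tile is assigned to that cell.

    from PIL import Image

    # Hypothetical palette of available tile colours (RGB); real software would
    # load these from its tile library.
    PALETTE = {
        "white": (245, 245, 245),
        "terracotta": (180, 90, 60),
        "sea_blue": (40, 90, 160),
        "black": (20, 20, 20),
    }

    def closest_tile(rgb):
        # Nearest palette colour by squared Euclidean distance in RGB space.
        return min(PALETTE, key=lambda name: sum((c - p) ** 2 for c, p in zip(rgb, PALETTE[name])))

    def plan_mosaic(path, cols=40, rows=30):
        # Divide the source bitmap into cols x rows cells and pick a best-fit tile per cell.
        img = Image.open(path).convert("RGB")
        cell_w, cell_h = img.width / cols, img.height / rows
        plan = []
        for r in range(rows):
            row = []
            for c in range(cols):
                box = (int(c * cell_w), int(r * cell_h), int((c + 1) * cell_w), int((r + 1) * cell_h))
                cell = img.crop(box).resize((1, 1))  # 1x1 resize approximates the cell's average colour
                row.append(closest_tile(cell.getpixel((0, 0))))
            plan.append(row)
        return plan  # grid of tile names, one entry per cell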
Robotic manufacturing
With the high cost of labor in developed countries, production automation has become increasingly popular. Rather than being assembled by hand, mosaics designed using computer aided design (CAD) software can be assembled by a robot. Production can be greater than 10 times faster, with higher accuracy. But these "computer" mosaics have a different look than hand-made "artisanal" mosaics. With robotic production, colored tiles are loaded into buffers, and then the robot picks and places tiles individually according to a command file from the design software.
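The hand-off from design software to robot can be pictured with a short, purely illustrative sketch (the CSV command format, its field names and the 10 mm tile pitch are invented for the example and do not describe any vendor's actual interface): the tile grid produced by the design step is flattened into one pick-and-place instruction per tile, naming the buffer to pick from and the target coordinates.

    import csv

    TILE_PITCH_MM = 10.0  # assumed spacing between tile centres

    def write_commands(plan, out_path="mosaic_commands.csv"):
        # Flatten a grid of tile names into one pick-and-place row per tile:
        # which buffer to pick from and where to place the tile, in millimetres.
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["buffer", "x_mm", "y_mm"])
            for r, row in enumerate(plan):
                for c, tile in enumerate(row):
                    writer.writerow([tile, round(c * TILE_PITCH_MM, 1), round(r * TILE_PITCH_MM, 1)])

    # Example: a small 2 x 3 plan of the kind produced by the plan_mosaic() sketch above.
    write_commands([["white", "sea_blue", "white"],
                    ["black", "terracotta", "black"]])

A plan like the one returned by the earlier plan_mosaic() sketch could be fed directly into write_commands().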
See also
Pixel art
Terrazzo
International Association of Marble, Slate and Stone Polishers, Rubbers and Sawyers, Tile and Marble Setters' Helpers and Marble Mosaic and Terrazzo Workers' Helpers
Notes
References
External links
Category:Handicrafts
Category:Decorative arts
Category:Architectural elements
Category:Persian art
Category:Ancient Roman architecture
Category:Byzantine art
Category:Art materials
Category:Pavements
Pesticide
thumb|right|A crop-duster spraying pesticide on a field
thumb|A Lite-Trac four-wheeled self-propelled crop sprayer spraying pesticide on a field
Pesticides are substances meant for attracting, luring, and then destroying any pest.US Environmental (July 24, 2007), What is a pesticide? epa.gov. Retrieved on September 15, 2007.
They are a class of biocide. The most common use of pesticides is as plant protection products (also known as crop protection products), which in general protect plants from damaging influences such as weeds, fungi, or insects. This use of pesticides is so common that the term pesticide is often treated as synonymous with plant protection product, although it is in fact a broader term, as pesticides are also used for non-agricultural purposes. The term pesticide includes all of the following: herbicide, insecticide, insect growth regulator, nematicide, termiticide, molluscicide, piscicide, avicide, rodenticide, predacide, bactericide, insect repellent, animal repellent, antimicrobial, fungicide, disinfectant (antimicrobial), and sanitizer.Carolyn Randall (ed.), et al., National Pesticide Applicator Certification Core Manual (2013) National Association of State Departments of Agriculture Research Foundation, Washington, DC, Ch.1
In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, antimicrobial, or disinfectant) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, or spread disease, or are disease vectors. Although pesticides have benefits, some also have drawbacks, such as potential toxicity to humans and other species. According to the Stockholm Convention on Persistent Organic Pollutants, 9 of the 12 most dangerous and persistent organic chemicals are organochlorine pesticides.Beginner's guide
Definition
Type of pesticide and target pest group:
Herbicides: plants
Algicides or algaecides: algae
Avicides: birds
Bactericides: bacteria
Fungicides: fungi and oomycetes
Insecticides: insects
Miticides or acaricides: mites
Molluscicides: snails
Nematicides: nematodes
Rodenticides: rodents
Virucides: viruses
The Food and Agriculture Organization (FAO) has defined pesticide as:
any substance or mixture of substances intended for preventing, destroying, or controlling any pest, including vectors of human or animal disease, unwanted species of plants or animals, causing harm during or otherwise interfering with the production, processing, storage, transport, or marketing of food, agricultural commodities, wood and wood products or animal feedstuffs, or substances that may be administered to animals for the control of insects, arachnids, or other pests in or on their bodies. The term includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit. Also used as substances applied to crops either before or after harvest to protect the commodity from deterioration during storage and transport.
Pesticides can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides - see table), chemical structure (e.g., organic, inorganic, synthetic, or biological (biopesticide),Council on Scientific Affairs, American Medical Association. (1997). Educational and Informational Strategies to Reduce Pesticide Risks. Preventive Medicine, Volume 26, Number 2 although the distinction can sometimes blur), and physical state (e.g. gaseous (fumigant)). Biopesticides include microbial pesticides and biochemical pesticides.EPA. Types of Pesticides. Last updated on Thursday, January 29th, 2009. Plant-derived pesticides, or "botanicals", have been developing quickly. These include the pyrethroids, rotenoids, nicotinoids, and a fourth group that includes strychnine and scilliroside.Kamrin MA. (1997). Pesticide Profiles: toxicity, environmental impact, and fate. CRC Press.
Many pesticides can be grouped into chemical families. Prominent insecticide families include organochlorines, organophosphates, and carbamates. Organochlorine hydrocarbons (e.g., DDT) can be separated into dichlorodiphenylethanes, cyclodiene compounds, and other related compounds. They operate by disrupting the sodium/potassium balance of the nerve fiber, forcing the nerve to transmit continuously. Their toxicities vary greatly, but they have been phased out because of their persistence and potential to bioaccumulate. Organophosphates and carbamates largely replaced organochlorines. Both operate by inhibiting the enzyme acetylcholinesterase, allowing acetylcholine to transfer nerve impulses indefinitely and causing a variety of symptoms such as weakness or paralysis. Organophosphates are quite toxic to vertebrates, and have in some cases been replaced by less toxic carbamates. Thiocarbamates and dithiocarbamates are subclasses of carbamates. Prominent families of herbicides include phenoxy and benzoic acid herbicides (e.g. 2,4-D), triazines (e.g., atrazine), ureas (e.g., diuron), and chloroacetanilides (e.g., alachlor). Phenoxy compounds tend to selectively kill broad-leaf weeds rather than grasses. The phenoxy and benzoic acid herbicides function similarly to plant growth hormones, inducing cell growth without normal cell division and crushing the plant's nutrient transport system. Triazines interfere with photosynthesis. Many commonly used pesticides are not included in these families, including glyphosate.
Pesticides can be classified based upon their biological mechanism function or application method. Most pesticides work by poisoning pests.Cornell University. Toxicity of pesticides. Pesticide fact sheets and tutorial, module 4. Pesticide Safety Education Program. Retrieved on 2007-10-10. A systemic pesticide moves inside a plant following absorption by the plant. With insecticides and most fungicides, this movement is usually upward (through the xylem) and outward. Increased efficiency may be a result. Systemic insecticides, which poison pollen and nectar in the flowers, may kill bees and other needed pollinators.
In 2009, the development of a new class of fungicides called paldoxins was announced. These work by taking advantage of natural defense chemicals released by plants called phytoalexins, which fungi then detoxify using enzymes. The paldoxins inhibit the fungi's detoxification enzymes. They are believed to be safer and greener.EurekAlert. (2009). New 'green' pesticides are first to exploit plant defenses in battle of the fungi.
Uses
Pesticides are used to control organisms that are considered to be harmful.The benefits of pesticides: A story worth telling. Purdue.edu. Retrieved on September 15, 2007. For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West Nile virus, yellow fever, and malaria. They can also kill bees, wasps or ants that can cause allergic reactions. Insecticides can protect animals from illnesses that can be caused by parasites such as fleas. Pesticides can prevent sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside weeds, trees and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like swimming and fishing and cause the water to look or smell unpleasant.Helfrich, LA, Weigmann, DL, Hipkins, P, and Stinson, ER (June 1996), Pesticides and aquatic animals: A guide to reducing impacts on aquatic systems. Virginia Cooperative Extension. Retrieved on 2007-10-14. Uncontrolled pests such as termites and mold can damage structures such as houses. Pesticides are used in grocery stores and food storage facilities to manage rodents and insects that infest food such as grain. Each use of a pesticide carries some associated risk. Proper pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada.
DDT, sprayed on the walls of houses, is an organochlorine that has been used to fight malaria since the 1950s. Recent policy statements by the World Health Organization have given stronger support to this approach.World Health Organization (September 15, 2006), WHO gives indoor use of DDT a clean bill of health for controlling malaria. Retrieved on September 13, 2007. However, DDT and other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the environment and human toxicity. DDT use is not always effective, as resistance to DDT was identified in Africa as early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT.PANNA: PAN Magazine: In Depth: DDT & MalariaA STORY TO BE SHARED: THE SUCCESSFUL FIGHT AGAINST MALARIA IN VIETNAM
Amount used
In 2006 and 2007, herbicides constituted the biggest part of world pesticide use at 40%, followed by insecticides (17%) and fungicides (10%). In the same period, the U.S. accounted for roughly 22% of the world total, with conventional pesticides used mainly in the agricultural sector (80% of conventional pesticide use) as well as in the industrial, commercial, governmental and home & garden sectors. Pesticides are also found in the majority of U.S. households, with 78 million out of 105.5 million households indicating that they use some form of pesticide.EPA Pesticide Industry Sales and Usage Report As of 2007, there were more than 1,055 active ingredients registered as pesticides, yielding over 20,000 pesticide products marketed in the United States.
The US used some 1 kg (2.2 pounds) per hectare of arable land, compared with 4.7 kg in China, 1.3 kg in the UK, 0.1 kg in Cameroon, 5.9 kg in Japan and 2.5 kg in Italy. Insecticide use in the US has declined by more than half since 1980 (0.6%/yr), mostly due to the near phase-out of organophosphates. In corn fields, the decline was even steeper, due to the switchover to transgenic Bt corn.
For the global market of crop protection products, market analysts forecast revenues of over 52 billion US$ in 2019.
Benefits
Pesticides can save farmers' money by preventing crop losses to insects and other pests; in the U.S., farmers get an estimated fourfold return on money they spend on pesticides.Kellogg RL, Nehring R, Grube A, Goss DW, and Plotkin S (February 2000), Environmental indicators of pesticide leaching and runoff from farm fields. United States Department of Agriculture Natural Resources Conservation Service. Retrieved on 2007-10-03. One study found that not using pesticides reduced crop yields by about 10%. Another study, conducted in 1999, found that a ban on pesticides in the United States may result in a rise of food prices, loss of jobs, and an increase in world hunger.Knutson, R. (1999). Economic Impact of Reduced Pesticide Use in the United States.Agricultural and Food Policy Center. Texas A&M University.
There are two levels of benefits for pesticide use, primary and secondary. Primary benefits are direct gains from the use of pesticides and secondary benefits are effects that are more long-term.
Primary benefits
Controlling pests and plant disease vectors
Improved crop/livestock yields
Improved crop/livestock quality
Invasive species controlled
Controlling human/livestock disease vectors and nuisance organisms
Human lives saved and suffering reduced
Animal lives saved and suffering reduced
Diseases contained geographically
Controlling organisms that harm other human activities and structures
Drivers view unobstructed
Tree/brush/leaf hazards prevented
Wooden structures protected
Monetary
Every dollar ($1) that is spent on pesticides for crops yields four dollars ($4) in crops saved.Pimentel, David, H. Acquay, M. Biltonen, P. Rice, and M. Silva. "Environmental and Economic Costs of Pesticide Use." BioScience 42.10 (1992): 750-60., . Retrieved on February 25, 2011. This means that, based on the roughly $10 billion spent per year on pesticides, an additional $40 billion in crops is saved that would otherwise be lost to damage by insects and weeds. In general, farmers benefit from having an increase in crop yield and from being able to grow a variety of crops throughout the year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available year-round.Cooper, Jerry and Hans Dobson. "The benefits of pesticides to mankind and the environment" Crop Protection 26 (2007): 1337-1348., Retrieved on February 25, 2011. The general public also benefits from the use of pesticides for the control of insect-borne diseases and illnesses, such as malaria. The use of pesticides creates a large job market within the agrichemical sector.
Costs
On the cost side of pesticide use there can be costs to the environment, costs to human health, as well as costs of the development and research of new pesticides.
Health effects
thumb|A sign warning about potential pesticide exposure.
Pesticides may cause acute and delayed health effects in people who are exposed.U.S. Environmental Protection Agency (August 30, 2007), Pesticides: Health and Safety. National Assessment of the Worker Protection Workshop #3. Pesticide exposure can cause a variety of adverse health effects, ranging from simple irritation of the skin and eyes to more severe effects such as affecting the nervous system, mimicking hormones causing reproductive problems, and also causing cancer. A 2007 systematic review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide exposure" and thus concluded that cosmetic use of pesticides should be decreased. There is substantial evidence of associations between organophosphate insecticide exposures and neurobehavioral alterations. Limited evidence also exists for other negative outcomes from pesticide exposure including neurological, birth defects, and fetal death.
The American Academy of Pediatrics recommends limiting children's exposure to pesticides and using safer alternatives.
The World Health Organization and the UN Environment Programme estimate that each year, 3 million workers in agriculture in the developing world experience severe poisoning from pesticides, about 18,000 of whom die. Owing to inadequate regulation and safety precautions, 99% of pesticide related deaths occur in developing countries that account for only 25% of pesticide usage. According to one study, as many as 25 million workers in developing countries may suffer mild pesticide poisoning yearly. There are several careers aside from agriculture that may also put individuals at risk of health effects from pesticide exposure including pet groomers, groundskeepers, and fumigators.
One study found pesticide self-poisoning the method of choice in one third of suicides worldwide, and recommended, among other things, more restrictions on the types of pesticides that are most harmful to humans.
A 2014 epidemiological review found associations between autism and exposure to certain pesticides, but noted that the available evidence was insufficient to conclude that the relationship was causal.
Environmental effect
Pesticide use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including non-target species, air, water and soil. Pesticide drift occurs when pesticides suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides are one of the causes of water pollution, and some pesticides are persistent organic pollutants and contribute to soil contamination.
In addition, pesticide use reduces biodiversity, contributes to pollinator decline, destroys habitat (especially for birds),Palmer, WE, Bromley, PT, and Brandenburg, RL. Wildlife & pesticides - Peanuts. North Carolina Cooperative Extension Service. Retrieved on 2007-10-11. and threatens endangered species.Miller GT (2004), Sustaining the Earth, 6th edition. Thompson Learning, Inc. Pacific Grove, California. Chapter 9, Pages 211-216.
Pests can develop a resistance to the pesticide (pesticide resistance), necessitating a new pesticide. Alternatively a greater dose of the pesticide can be used to counteract the resistance, although this will cause a worsening of the ambient pollution problem.
Since chlorinated hydrocarbon pesticides dissolve in fats and are not excreted, organisms tend to retain them almost indefinitely. Biological magnification is the process whereby these chlorinated hydrocarbons (pesticides) are more concentrated at each level of the food chain. Among marine animals, pesticide concentrations are higher in carnivorous fishes, and even more so in the fish-eating birds and mammals at the top of the ecological pyramid.Castro, Peter, and Michael E.Huber. Marine Biology. 8th. New York: McGraw-Hill Companies Inc., 2010. Print. Global distillation is the process whereby pesticides are transported from warmer to colder regions of the Earth, in particular the Poles and mountain tops. Pesticides that evaporate into the atmosphere at relatively high temperature can be carried considerable distances (thousands of kilometers) by the wind to an area of lower temperature, where they condense and are carried back to the ground in rain or snow.L. Quinn, Amie. "The impacts of agriculture and temperature on the physiological stress response in fish." Uleth. University of Lethbridge, n.d. Web. 20 Nov 2012.
In order to reduce negative impacts, it is desirable that pesticides be degradable or at least quickly deactivated in the environment. Such loss of activity or toxicity of pesticides is due to both innate chemical properties of the compounds and environmental processes or conditions.Sims, G. K. and A.M. Cupples. 1999. Factors controlling degradation of pesticides in soil. Pesticide Science 55:598-601. For example, the presence of halogens within a chemical structure often slows down degradation in an aerobic environment.Sims, G. K. and L.E. Sommers. 1986. Biodegradation of pyridine derivatives in soil suspensions. Environmental Toxicology and Chemistry. 5:503-509. Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to microbial degraders.
Economics
Human health and environmental costs from pesticides in the United States are estimated at $9.6 billion per year, offset by about $40 billion in increased agricultural production:Pimentel, David. "Environmental and Economic Costs of the Application of Pesticides Primarily in the United States" Environment, Development and Sustainability 7 (2005): 229-252. Retrieved on February 25, 2011.
Harm and annual US cost:
Public health: $1.1 billion
Pesticide resistance in pests: $1.5 billion
Crop losses caused by pesticides: $1.4 billion
Bird losses due to pesticides: $2.2 billion
Groundwater contamination: $2.0 billion
Other costs: $1.4 billion
Total costs: $9.6 billion
Additional costs include the registration process and the cost of purchasing pesticides. The registration process can take several years to complete (there are 70 different types of field test) and can cost $50–70 million for a single pesticide. Annually the United States spends $10 billion on pesticides.
Alternatives
Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering, and methods of interfering with insect breeding. Application of composted yard waste has also been used as a way of controlling pests.R. McSorley and R. N. Gallaher, "Effect of Yard Waste Compost on Nematode Densities and Maize Yield", J Nematology, Vol. 2, No. 4S, pp. 655–660, Dec. 1996. These methods are becoming increasingly popular and often are safer than traditional chemical pesticides. In addition, EPA is registering reduced-risk conventional pesticides in increasing numbers.
Cultivation practices include polyculture (growing multiple types of plants), crop rotation, planting crops in areas where the pests that damage them do not live, timing planting according to when pests will be least problematic, and use of trap crops that attract pests away from the real crop. Trap crops have successfully controlled pests in some commercial agricultural systems while reducing pesticide usage; however, in many other systems, trap crops can fail to reduce pest densities at a commercial scale, even when the trap crop works in controlled experiments. In the U.S., farmers have had success controlling insects by spraying with hot water at a cost that is about the same as pesticide spraying.
Release of other organisms that fight the pest is another example of an alternative to pesticide use. These organisms can include natural predators or parasites of the pests. Biological pesticides based on entomopathogenic fungi, bacteria and viruses cause disease in the pest species can also be used.
Interfering with insects' reproduction can be accomplished by sterilizing males of the target species and releasing them, so that they mate with females but do not produce offspring. This technique was first used on the screwworm fly in 1958 and has since been used with the medfly, the tsetse fly,(July 2007), The biological control of pests. Retrieved on September 17, 2007. and the gypsy moth.SP-401 Skylab, Classroom in Space: Part III - Science Demonstrations, Chapter 17: Life Sciences. History.nasa.gov. Retrieved on September 17, 2007. However, this can be a costly, time consuming approach that only works on some types of insects.
Agroecology emphasizes nutrient recycling, use of locally available and renewable resources, adaptation to local conditions, utilization of microenvironments, reliance on indigenous knowledge, and yield maximization while maintaining soil productivity. Agroecology also emphasizes empowering people and local communities to contribute to development, and encouraging “multi-directional” communications rather than the conventional “top-down” method.
Push pull strategy
The term "push-pull" was established in 1987 as an approach for integrated pest management (IPM). This strategy uses a mixture of behavior-modifying stimuli to manipulate the distribution and abundance of insects. "Push" means the insects are repelled or deterred away from whatever resource is being protected. "Pull" means that certain stimuli (semiochemical stimuli, pheromones, food additives, visual stimuli, genetically altered plants, etc.) are used to attract pests to trap crops, where they will be killed. Numerous components are involved in implementing a push-pull strategy in IPM.
Many case studies testing the effectiveness of the push-pull approach have been done across the world. The most successful push-pull strategy was developed in Africa for subsistence farming. Another successful case study was performed on the control of Helicoverpa in cotton crops in Australia. In Europe, the Middle East, and the United States, push-pull strategies were successfully used in the controlling of Sitona lineatus in bean fields.
Some advantages of using the push-pull method are less use of chemical or biological materials and better protection against insect habituation to this control method. One disadvantage of the push-pull strategy is that without appropriate knowledge of the behavioral and chemical ecology of the host-pest interactions, the method becomes unreliable. Furthermore, because the push-pull method is not a very popular method of IPM, operational and registration costs are higher.
Effectiveness
Some evidence shows that alternatives to pesticides can be equally effective as the use of chemicals. For example, Sweden has halved its use of pesticides with hardly any reduction in crops. In Indonesia, farmers have reduced pesticide use on rice fields by 65% and experienced a 15% crop increase. A study of Maize fields in northern Florida found that the application of composted yard waste with high carbon to nitrogen ratio to agricultural fields was highly effective at reducing the population of plant-parasitic nematodes and increasing crop yield, with yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third season of the study.
However, pesticide resistance is increasing. In the 1940s, U.S. farmers lost only 7% of their crops to pests. Since the 1980s, loss has increased to 13%, even though more pesticides are being used. Between 500 and 1,000 insect and weed species have developed pesticide resistance since 1945.
Types
Pesticides are often referred to according to the type of pest they control. Pesticides can also be considered as either biodegradable pesticides, which will be broken down by microbes and other living beings into harmless compounds, or persistent pesticides, which may take months or years before they are broken down: it was the persistence of DDT, for example, which led to its accumulation in the food chain and its killing of birds of prey at the top of the food chain. Another way to think about pesticides is to consider those that are chemical pesticides or are derived from a common source or production method.
Some examples of chemically-related pesticides are:
Organophosphate pesticides
Organophosphates affect the nervous system by disrupting the activity of acetylcholinesterase, the enzyme that regulates acetylcholine, a neurotransmitter. Most organophosphates are insecticides. They were developed during the early 19th century, but their effects on insects, which are similar to their effects on humans, were discovered in 1932. Some are very poisonous. However, they usually are not persistent in the environment.
Carbamate pesticides
Carbamate pesticides affect the nervous system by disrupting an enzyme that regulates acetylcholine, a neurotransmitter. The enzyme effects are usually reversible. There are several subgroups within the carbamates.
Organochlorine insecticides
They were commonly used in the past, but many have been removed from the market due to their health and environmental effects and their persistence (e.g., DDT, chlordane, and toxaphene).
Pyrethroid pesticides
They were developed as a synthetic version of the naturally occurring pesticide pyrethrin, which is found in chrysanthemums. They have been modified to increase their stability in the environment. Some synthetic pyrethroids are toxic to the nervous system.
Sulfonylurea herbicides
The following sulfonylureas have been commercialized for weed control: amidosulfuron, azimsulfuron, bensulfuron-methyl, chlorimuron-ethyl, ethoxysulfuron, flazasulfuron, flupyrsulfuron-methyl-sodium, halosulfuron-methyl, imazosulfuron, nicosulfuron, oxasulfuron, primisulfuron-methyl, pyrazosulfuron-ethyl, rimsulfuron, sulfometuron-methyl, sulfosulfuron, terbacil, bispyribac-sodium, cyclosulfamuron, and pyrithiobac-sodium.Arnold P. Appleby, Franz Müller, Serge Carpy "Weed Control" in Ullmann's Encyclopedia of Industrial Chemistry 2002, Wiley-VCH, Weinheim. Nicosulfuron, triflusulfuron methyl,EFSA September 30, 2008 EFSA Scientific Report (2008) 195, 1-115: Conclusion on the peer review of triflusulfuron and chlorsulfuron are broad-spectrum herbicides that kill plants by inhibiting the enzyme acetolactate synthase. In the 1960s, far larger quantities of crop protection chemical were typically applied per unit area; sulfonylureas achieve the same effect with as little as 1% as much material.
Biopesticides
Biopesticides are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals. For example, canola oil and baking soda have pesticidal applications and are considered biopesticides. Biopesticides fall into three major classes:
Microbial pesticides which consist of bacteria, entomopathogenic fungi or viruses (and sometimes includes the metabolites that bacteria or fungi produce). Entomopathogenic nematodes are also often classed as microbial pesticides, even though they are multi-cellular.Francis Borgio J, Sahayaraj K and Alper Susurluk I (eds) . Microbial Insecticides: Principles and Applications, Nova Publishers, USA. 492pp. ISBN 978-1-61209-223-2
Biochemical pesticides or herbal pesticides are naturally occurring substances that control (or monitor in the case of pheromones) pests and microbial diseases.
Plant-incorporated protectants (PIPs) have genetic material from other species incorporated into their genetic material (i.e. GM crops). Their use is controversial, especially in many European countries.National Pesticide Information Center Last updated November 21, 2013 Plant Incorporated Protectants (PIPs) / Genetically Modified Plants
Classified by type of pest
Pesticides that are related to the type of pests are:
Algicides : Control algae in lakes, canals, swimming pools, water tanks, and other sites
Antifouling agents : Kill or repel organisms that attach to underwater surfaces, such as boat bottoms
Antimicrobials : Kill microorganisms (such as bacteria and viruses)
Attractants : Attract pests (for example, to lure an insect or rodent to a trap); however, food is not considered a pesticide when used as an attractant
Biopesticides : Certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals
Biocides : Kill microorganisms
Disinfectants and sanitizers : Kill or inactivate disease-producing microorganisms on inanimate objects
Fungicides : Kill fungi (including blights, mildews, molds, and rusts)
Fumigants : Produce gas or vapor intended to destroy pests in buildings or soil
Herbicides : Kill weeds and other plants that grow where they are not wanted
Insecticides : Kill insects and other arthropods
Miticides : Kill mites that feed on plants and animals
Microbial pesticides : Microorganisms that kill, inhibit, or out-compete pests, including insects or other microorganisms
Molluscicides : Kill snails and slugs
Nematicides : Kill nematodes (microscopic, worm-like organisms that feed on plant roots)
Ovicides : Kill eggs of insects and mites
Pheromones : Biochemicals used to disrupt the mating behavior of insects
Repellents : Repel pests, including insects (such as mosquitoes) and birds
Rodenticides : Control mice and other rodents
Further types of pesticides
The term pesticide also includes these substances:
Defoliants : Cause leaves or other foliage to drop from a plant, usually to facilitate harvest.
Desiccants : Promote drying of living tissues, such as unwanted plant tops.
Insect growth regulators : Disrupt the molting, maturity from pupal stage to adult, or other life processes of insects.
Plant growth regulators : Substances (excluding fertilizers or other plant nutrients) that alter the expected growth, flowering, or reproduction rate of plants.
Regulation
International
In most countries, pesticides must be approved for sale and use by a government agency.Willson, Harold R (February 23, 1996), Pesticide Regulations. University of Minnesota. Retrieved on 2007-10-15.
In Europe, recent EU legislation has been approved banning the use of highly toxic pesticides including those that are carcinogenic, mutagenic or toxic to reproduction, those that are endocrine-disrupting, and those that are persistent, bioaccumulative and toxic (PBT) or very persistent and very bioaccumulative (vPvB). Measures were approved to improve the general safety of pesticides across all EU member states.Pesticide Legislation Approved last retrieved 13 January 2009
Though pesticide regulations differ from country to country, pesticides, and products on which they were used are traded across international borders. To deal with inconsistencies in regulations among countries, delegates to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for different countries. The Code was updated in 1998 and 2002.Food and Agriculture Organization of the United Nations, Programmes: International Code of Conduct on the Distribution and Use of Pesticides. Retrieved on 2007-10-25. The FAO claims that the code has raised awareness about pesticide hazards and decreased the number of countries without restrictions on pesticide use.Food and Agriculture Organization of the United Nations (2002), International Code of Conduct on the Distribution and Use of Pesticides. Retrieved on 2007-10-25.
Two other efforts to improve regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information on Chemicals in International Trade and the United Nations Codex Alimentarius Commission. The former seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating countries.Reynolds, JD (1997), International pesticide trade: Is there any hope for the effective regulation of controlled substances? Florida State University Journal of Land Use & Environmental Law, Volume 131. Retrieved on 2007-10-16. Both initiatives operate on a voluntary basis.
Pesticide safety education and pesticide applicator regulation are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides and choosing less toxic pesticides may reduce risks placed on society and the environment from pesticide use. Integrated pest management, the use of multiple approaches to control pests, is becoming widespread and has been used with success in countries such as Indonesia, China, Bangladesh, the U.S., Australia, and Mexico. IPM attempts to recognize the more widespread impacts of an action on an ecosystem, so that natural balances are not upset.Daly H, Doyen JT, and Purcell AH III (1998), Introduction to insect biology and diversity, 2nd edition. Oxford University Press. New York, New York. Chapter 14, Pages 279-300. New pesticides are being developed, including biological and botanical derivatives and alternatives that are thought to reduce health and environmental risks. In addition, applicators are being encouraged to consider alternative controls and adopt methods that reduce the use of chemical pesticides.
Pesticides can be created that are targeted to a specific pest's lifecycle, which can be environmentally more friendly.Science Daily, (October 11, 2001), Environmentally-friendly pesticide to combat potato cyst nematodes. Sciencedaily.com. Retrieved on September 19, 2007. For example, potato cyst nematodes emerge from their protective cysts in response to a chemical excreted by potatoes; they feed on the potatoes and damage the crop. A similar chemical can be applied to fields early, before the potatoes are planted, causing the nematodes to emerge early and starve in the absence of potatoes.
United States
thumb|Preparation for an application of hazardous herbicide in the United States.
In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA). Studies must be conducted to establish the conditions in which the material is safe to use and the effectiveness against the intended pest(s). The EPA regulates pesticides to ensure that these products do not pose adverse effects to humans or the environment. Pesticides produced before November 1984 continue to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are reviewed every 15 years to ensure they meet the proper standards. During the registration process, a label is created. The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity, pesticides are assigned to a Toxicity Class.
Some pesticides are considered too hazardous for sale to the general public and are designated restricted use pesticides. Only certified applicators, who have passed an exam, may purchase or supervise the application of restricted use pesticides. Records of sales and use are required to be maintained and may be audited by government agencies charged with the enforcement of pesticide regulations. These records must be made available to employees and state or territorial environmental regulatory agencies.
The EPA regulates pesticides under two main acts, both of which were amended by the Food Quality Protection Act of 1996. In addition to the EPA, the United States Department of Agriculture (USDA) and the United States Food and Drug Administration (FDA) set standards for the level of pesticide residue that is allowed on or in crops.Stephen J. Toth, Jr., "Pesticide Impact Assessment Specialist, North Carolina Cooperative Extension Service, "Federal Pesticide Laws and Regulations" March, 1996. Retrieved on February 25, 2011. The EPA looks at what the potential human health and environmental effects might be associated with the use of the pesticide.US Environmental Protection Agency (February 16, 2011), Pesticide Registration Program epa.gov. Retrieved on February 25, 2011.
In addition, the U.S. EPA uses the National Research Council's four-step process for human health risk assessment: (1) Hazard Identification, (2) Dose-Response Assessment, (3) Exposure Assessment, and (4) Risk Characterization."Assessing Health Risks from Pesticides". U.S. Environmental Protection Agency.
Recently Kaua'i County (Hawai'i) passed Bill No. 2491 to add an article to Chapter 22 of the county's code relating to pesticides and GMOs. The bill strengthens protections of local communities in Kaua'i where many large pesticide companies test their products.Bill No. 2491, Draft 2, Council of the County of Kaua‘i
History
Since before 2000 BC, humans have utilized pesticides to protect their crops. The first known pesticide was elemental sulfur dusting, used in ancient Sumer, in Mesopotamia, about 4,500 years ago. The Rig Veda, which is about 4,000 years old, mentions the use of poisonous plants for pest control. By the 15th century, toxic chemicals such as arsenic, mercury, and lead were being applied to crops to kill pests. In the 17th century, nicotine sulfate was extracted from tobacco leaves for use as an insecticide. The 19th century saw the introduction of two more natural pesticides: pyrethrum, which is derived from chrysanthemums, and rotenone, which is derived from the roots of tropical vegetables.Miller, GT (2002). Living in the Environment (12th Ed.). Belmont: Wadsworth/Thomson Learning. ISBN 0-534-37697-5 Until the 1950s, arsenic-based pesticides were dominant.Ritter SR. (2009). Pinpointing Trends In Pesticide Use In 1939. C&E News. In 1939, Paul Müller discovered that DDT was a very effective insecticide. Organochlorines such as DDT were dominant, but they were replaced in the U.S. by organophosphates and carbamates by 1975. Since then, pyrethrin compounds have become the dominant insecticide. Herbicides became common in the 1960s, led by "triazine and other nitrogen-based compounds, carboxylic acids such as 2,4-dichlorophenoxyacetic acid, and glyphosate".
The first legislation providing federal authority for regulating pesticides was enacted in 1910; however, it was not until the 1940s that manufacturers began to produce large amounts of synthetic pesticides and their use became widespread. Some sources consider the 1940s and 1950s to have been the start of the "pesticide era."Graeme Murphy (December 1, 2005), Resistance Management - Pesticide Rotation. Ontario Ministry of Agriculture, Food and Rural Affairs. Retrieved on September 15, 2007. Although the U.S. Environmental Protection Agency was established in 1970 and the pesticide law was amended in 1972, pesticide use has increased 50-fold since 1950, and 2.3 million tonnes (2.5 million short tons) of industrial pesticides are now used each year. Seventy-five percent of all pesticides in the world are used in developed countries, but use in developing countries is increasing. A study of U.S. pesticide use trends through 1997 was published in 2003 by the National Science Foundation's Center for Integrated Pest Management.Arnold L. Aspelin (February, 2003), PESTICIDE USAGE IN THE UNITED STATES: Trends During the 20th Century. NSF CIPM Technical Bulletin 105. Retrieved on October 28, 2010.
In the 1960s, it was discovered that DDT was preventing many fish-eating birds from reproducing, which was a serious threat to biodiversity. Rachel Carson wrote the best-selling book Silent Spring about biological magnification. The agricultural use of DDT is now banned under the Stockholm Convention on Persistent Organic Pollutants, but it is still used in some developing nations to prevent malaria and other tropical diseases by spraying on interior walls to kill or repel mosquitoes.Lobe, J (Sept 16, 2006), "WHO urges DDT for malaria control Strategies," Inter Press Service, cited from Commondreams.org. Retrieved on September 15, 2007.
See also
Index of pesticide articles
Pesticide residue
Pest control
WHO Pesticide Evaluation Scheme
References
Further reading
Books
Larramendy, Marcelo L.; Soloneski, Sonia [Editors](2014): Pesticides: Toxic Aspects. InTech. ISBN 978-953-51-1217-4 [Open Access Download available]
Journal articles
World Health Organization Persistent Organic Pollutants: Impact on Child Health
External links
National Pesticide Information Center (NPIC) Information about pesticide-related topics.
Pesticide Modes of action (International Pesticide Application Research Centre)
Beyond Pesticides, founded in 1981 as the National Coalition Against the Misuse of Pesticides - Source of information on pesticide hazards, least-toxic practices and products, and on pesticide issues. Website has Daily News Blog relating to pesticides.
Compendium of Pesticide Common Names: Classified Lists of Pesticides Lists of pesticide names by type.
Pesticide Action Network. PAN Pesticides Database. Compilation of multiple regulatory databases into a web-accessible form.
PPDB Pesticide Properties Database A to Z index of pesticides
Pesticide regulatory authorities
UK Pesticides Safety Directorate
Pesticide laws guidance for Scotland and Northern Ireland on NetRegs.gov.uk
European Commission pesticide information
United States Environmental Protection Agency Office of Pesticides Program
US EPA Pesticide Chemical Search
USDA Pesticide Data Program, tracking residue levels in food
Human health
NIH encyclopedia pages with emergency treatment of Insecticide exposure
Hazard Communications for Agricultural Workers (October 2007)
National Agricultural Workers Survey
David Suzuki Foundation: Protecting Your Health from Pesticides
Field evaluation of protective clothing against non-agricultural pesticides by A Soutar and others. Institute of Occupational Medicine Research Report TM/00/04
A comparison of different methods for assessment of dermal exposure to nonagricultural pesticides in three sectors by SN Tannahill and others. Institute of Occupational Medicine Research Report TM/96/07
Category:Chemical substances
Category:Environmental health
Category:Soil contamination
Category:Biocides | 48,340 | 2017-01 |
University of Notre Dame | The University of Notre Dame du Lac (or simply Notre Dame ) is a Catholic research university located adjacent to South Bend, Indiana, in the United States. In French, Notre Dame du Lac means "Our Lady of the Lake" and refers to the university's patron saint, the Virgin Mary. The main campus covers in a suburban setting and it contains a number of recognizable landmarks, such as the Golden Dome, the "Word of Life" mural (commonly known as Touchdown Jesus), and the Basilica. The school was founded on November 26, 1842, by Father Edward Sorin, CSC, who was also its first president, as an all-male institution on land donated by the Bishop of Vincennes (Indiana). Today, many Holy Cross priests continue to work for the university, including the president of the university.
Notre Dame is a large, four-year, highly residential research university. It is consistently ranked among the top twenty universities in the United States and as a major global university and is highly regarded for its undergraduate education. Notre Dame is also ranked as one of the top research universities and it has one of the largest endowments in the nation with over $10 billion. Undergraduate students are organized into four colleges (Arts and Letters, Science, Engineering, Business), and the Architecture School. The latter is known for teaching New Classical Architecture and for awarding the globally renowned annual Driehaus Architecture Prize.
The university offers over 50 yearlong study abroad programs and over 15 summer programs. Notre Dame's graduate program comprises more than 50 master's, doctoral, and professional degree programs offered by the five schools, with the addition of the Notre Dame Law School and an MD-PhD program offered in combination with the Indiana University School of Medicine. It maintains a system of libraries, cultural venues, artistic and scientific museums, including the Hesburgh Library and the Snite Museum of Art. The university boasts one of the largest Navy ROTC programs in the nation. Over 80% of the university's 8,000 undergraduates live on campus in one of 29 single-sex residence halls, each with its own traditions, legacies, events, and intramural sports teams. The university counts approximately 120,000 alumni, considered one of the strongest alumni networks among U.S. colleges.ND Alumni Association – Notre Dame Alumni Association
Notre Dame rose to national prominence in the early 1900s for its Fighting Irish football team under the guidance of legendary coach Knute Rockne. The university's athletic teams are members of the NCAA Division I and are known collectively as the Fighting Irish. The football team, an Independent with no conference affiliation, has accumulated eleven consensus national championships, seven Heisman Trophy winners, 62 members in the College Football Hall of Fame, and 13 members in the Pro Football Hall of Fame; it is one of the most famed and successful college football teams in history. Other ND sports teams, chiefly in the Atlantic Coast Conference, have accumulated 16 national championships. The Notre Dame Victory March is often regarded as the most famous and recognizable collegiate fight song.
Started as a small all-male institution in 1842 and chartered in 1844, Notre Dame reached international fame at the beginning of the 20th century. Major improvements to the university occurred during the administration of the Rev. Theodore Hesburgh between 1952 and 1987, as Hesburgh's administration greatly increased the university's resources, academic programs, and reputation and first enrolled women undergraduates in 1972. Ever since, the university has seen steady growth, and under the leadership of the next two presidents, Rev. Malloy and Rev. Jenkins, many infrastructure and research expansions have been completed.
History
Foundations
thumb|The Very Rev. Edward Sorin, founder of the university, arrived at Notre Dame in 1842. The picture was taken around 1890.
In 1842, the Bishop of Vincennes, Célestine Guynemer de la Hailandière, offered land to Father Edward Sorin of the Congregation of Holy Cross, on the condition that he build a college in two years."Founding Information". University of Notre Dame. Archived from the original on 2007-10-31. Retrieved 2007-12-31. Fr. Sorin arrived on the site with eight Holy Cross brothers from France and Ireland on November 26, 1842, and began the school using Father Stephen Badin's old log chapel. He soon erected additional buildings, including the Old College, the first church, and the first main building. They immediately acquired two students and set about building additions to the campus.
Notre Dame began as a primary and secondary school, but soon received its official college charter from the Indiana General Assembly on January 15, 1844.Hope, C.S.C., Arthur J. (1979) [1948]. "IV". Notre Dame: One Hundred Years (2 ed.). Notre Dame, IN: University Press. ISBN 0-89651-501-X. Under the charter the school is officially named the University of Notre Dame du Lac (University of Our Lady of the Lake).The university's campus actually contains two lakes, but according to legend, when Fr. Sorin arrived at the site everything was frozen, so he thought there was only one lake and named the university accordingly. Cohen, Ed (Autumn 2004). "One lake or two?". The Notre Dame Magazine. Archived from the original on 2007-07-01. Retrieved 2007-12-07. Because the university was originally only for male students, the female-only Saint Mary's College was founded by the Sisters of the Holy Cross near Notre Dame in 1844."Saint Mary's at a Glance". Saint Mary's College. Retrieved 2007-12-31.
Early history
The first degrees from the college were awarded in 1849.Hope, C.S.C., Arthur J. (1979) [1948]. "V". Notre Dame: One Hundred Years (2 ed.). Notre Dame, IN: University Press. ISBN 0-89651-501-X. The university was expanded with new buildings to accommodate more students and faculty. With each new president, new academic programs were offered and new buildings built to accommodate them. The original Main Building built by Fr. Sorin just after he arrived was replaced by a larger "Main Building" in 1865, which housed the university's administration, classrooms, and dormitories. Beginning in 1873, a library collection was started by Father Lemonnier, housed in the Main Building, and by 1879 it had grown to ten thousand volumes. thumb|300px|left|The current Main Building, built after the great fire of 1879
This Main Building, and the library collection, was entirely destroyed by a fire in April 1879; school closed immediately and students were sent home."The Story of Notre Dame: Main Building". University of Notre Dame Archives. Retrieved 2007-12-31. The university founder, Fr. Sorin, and the president at the time, the Rev. William Corby, immediately planned for the rebuilding of the structure that had housed virtually the entire University. Construction was started on May 17, and by the incredible zeal of administrators and workers the building was completed before the fall semester of 1879. The library collection was also rebuilt and stayed housed in the new Main Building for years afterwards."The Story of Notre Dame: Lemmonier Library". University of Notre Dame Archives. Retrieved 2007-12-31. Around the time of the fire, a music hall was opened. Known as Washington Hall, it hosted plays and musical acts put on by the school."The Story of Notre Dame: Washington Hall". University of Notre Dame Archives. Retrieved 2007-12-31. By 1880, a science program was established at the university, and a Science Hall (today LaFortune Student Center) was built in 1883. The hall housed multiple classrooms and science labs needed for early research at the university."The Story of Notre Dame: Science Hall". University of Notre Dame Archives. Retrieved 2007-12-31.
Growth
By 1890, individual residence halls were built to house the increasing number of students."The Story of Notre Dame: Sorin Hall". University of Notre Dame Archives. Retrieved 2007-12-31.
William J. Hoynes was dean of the law school 1883–1919, and when its new building was opened shortly after his death it was renamed in his honor.Marvin R. O'Connell, Edward Sorin (2001) The Rev. John Zahm C.S.C. became the Holy Cross Provincial for the United States (1896–1906), with overall supervision of the university. He tried to modernize and expand Notre Dame, erecting buildings and adding to the campus art gallery and library, and amassing what became a famous Dante collection. His term was not renewed by the Congregation because of fears he had expanded Notre Dame too quickly and had run the Holy Cross order into serious debt.
left|thumb|279x279px|The University of Notre Dame in 1903
In 1919 Father James Burns became president of Notre Dame, and in three years he produced an academic revolution that brought the school up to national standards by adopting the elective system and moving away from the university's traditional scholastic and classical emphasis.Thomas T. McAvoy, "Notre Dame 1919–1922: The Burns Revolution," Review of Politics (1963) 25#4 pp: 431–450 in JSTOR. By contrast, the Jesuit colleges, bastions of academic conservatism, were reluctant to move to a system of electives; for this reason, their graduates were shut out of Harvard Law School.Kathleen A. Mahoney, Catholic higher education in Protestant America: The Jesuits and Harvard in the age of the university (2003). Notre Dame continued to grow over the years, adding more colleges, programs, and sports teams. By 1921, with the addition of the College of Commerce, Notre Dame had grown from a small college to a university with five colleges and a professional law school. The university continued to expand and add new residence halls and buildings with each subsequent president.
thumb|290x290px|The Basilica of the Sacred Heart, completed in 1888
One of the main driving forces in the growth of the University was its football team, the Notre Dame Fighting Irish. Knute Rockne became head coach in 1918. Under Rockne, the Irish would post a record of 105 wins, 12 losses, and five ties. During his 13 years the Irish won three national championships, had five undefeated seasons, won the Rose Bowl in 1925, and produced players such as George Gipp and the "Four Horsemen". Knute Rockne has the highest winning percentage (.881) in NCAA Division I/FBS football history. Rockne's offenses employed the Notre Dame Box and his defenses ran a 7–2–2 scheme. The last game Rockne coached was on December 14, 1930, when he led a group of Notre Dame all-stars against the New York Giants in New York City.
The success of its football team made Notre Dame a household name. The success of Notre Dame reflected the rising status of Irish Americans and Catholics in the 1920s. Catholics rallied around the team and listened to the games on the radio, especially when it knocked off the schools that symbolized the Protestant establishment in America—Harvard, Yale, Princeton, and Army. Yet this role as high-profile flagship institution of Catholicism made it an easy target of anti-Catholicism. The most remarkable episode of violence was the clash between Notre Dame students and the Ku Klux Klan, a white supremacist and anti-Catholic movement, in 1924. Nativism and anti-Catholicism, especially when directed towards immigrants, were cornerstones of the KKK's rhetoric, and Notre Dame was seen as a symbol of the threat posed by the Catholic Church. The Klan decided to have a week-long Klavern in South Bend. Clashes with the student body started on March 17, when students, aware of the anti-Catholic animosity, blocked the Klansmen from descending from their trains in the South Bend station and ripped the KKK clothes and regalia. On May 19 thousands of students massed downtown protesting the Klavern, and only the arrival of college president Fr. Matthew Walsh prevented any further clashes. The next day, football coach Knute Rockne spoke at a campus rally and implored the students to obey the college president and refrain from further violence. A few days later the Klavern broke up, but the hostility shown by the students was an omen and a contribution to the downfall of the KKK in Indiana.
thumb|332x332px|South Quad, built in the 1920s–1940s, houses many residential halls
Expansion in the 1930s and 1940s
Holy Cross Father John Francis O'Hara was elected vice president in 1933 and president of Notre Dame in 1934. During his tenure at Notre Dame, he brought numerous refugee intellectuals to campus; he selected Frank H. Spearman, Jeremiah D. M. Ford, Irvin Abell, and Josephine Brownson for the Laetare Medal, instituted in 1883. O'Hara strongly believed that the Fighting Irish football team could be an effective means to "acquaint the public with the ideals that dominate" Notre Dame. He wrote, "Notre Dame football is a spiritual service because it is played for the honor and glory of God and of his Blessed Mother. When St. Paul said: 'Whether you eat or drink, or whatsoever else you do, do all for the glory of God,' he included football."
The Rev. John J. Cavanaugh, C.S.C. served as president from 1946 to 1952. In the post-war years Cavanaugh devoted his presidency to raising academic standards, reshaping the university administration to suit an enlarged educational mission and an expanded student body, and stressing advanced studies and research at a time when Notre Dame quadrupled in student census, undergraduate enrollment increased by more than half, and graduate student enrollment grew fivefold. Cavanaugh also established the Lobund Institute for Animal Studies and Notre Dame's Medieval Institute.Wolfgang Saxon, Rev. John Cavanaugh, 80, Former President of Notre Dame (Dec. 30, 1979). Cavanaugh also presided over the construction of the Nieuwland Science Hall, Fisher Hall, and the Morris Inn, as well as the Hall of Liberal Arts (now O'Shaughnessy Hall), made possible by a donation from I.A. O'Shaughnessy, at the time the largest ever made to an American Catholic university. He also established a system of advisory councils at the university, which continue today and are vital to the university's governance and development.
Hesburgh era: 1952–1987
thumb|left|The Memorial Library, renamed The Theodore Hesburgh Library in 1987, is one of the greatest accomplishments of the Hesburgh presidency.
The Rev. Theodore Hesburgh, C.S.C., (1917–2015) served as president for 35 years (1952–87) of dramatic transformations. In that time the annual operating budget rose by a factor of 18 from $9.7 million to $176.6 million, the endowment by a factor of 40 from $9 million to $350 million, and research funding by a factor of 20 from $735,000 to $15 million. Enrollment nearly doubled from 4,979 to 9,600, faculty more than doubled from 389 to 950, and degrees awarded annually doubled from 1,212 to 2,500.Michael O'Brien, Hesburgh: A Biography (1998); Theodore M. Hesburgh, God, Country, Notre Dame: The Autobiography of Theodore M. Hesburgh (2000)
Hesburgh is also credited with transforming the face of Notre Dame by making it a coeducational institution. In the mid-1960s Notre Dame and Saint Mary's College developed a co-exchange program whereby several hundred students took classes not offered at their home institution, an arrangement that added undergraduate women to a campus that already had a few women in the graduate schools. After extensive debate, merging with St. Mary's was rejected, primarily because of the differential in faculty qualifications and pay scales. "In American college education," explained the Rev. Charles E. Sheedy, C.S.C., Notre Dame's Dean of Arts and Letters, "certain features formerly considered advantageous and enviable are now seen as anachronistic and out of place.... In this environment of diversity, the integration of the sexes is a normal and expected aspect, replacing separatism." Thomas Blantz, C.S.C., Notre Dame's Vice President of Student Affairs, added that coeducation "opened up a whole other pool of very bright students."Susan L. Poulson and Loretta P. Higgins, "Gender, Coeducation, and the Transformation of Catholic Identity in American Catholic Higher Education," Catholic Historical Review 2003 89(3): 489–510, for quotes. Two of the male residence halls were converted for the newly admitted female students that first year,"Badin Hall". University of Notre Dame. Archived from the original on 2007-12-11. Retrieved 2008-01-01. "Walsh Hall". University of Notre Dame. Archived from the original on 2007-11-17. Retrieved 2008-01-01. while two others were converted for the next school year."Breen-Phillips Hall". University of Notre Dame. Archived from the original on 2007-11-17. Retrieved 2008-01-01."Farley Hall". University of Notre Dame. Archived from the original on 2007-12-11. Retrieved 2008-01-01. In 1971 Mary Ann Proctor became the first female undergraduate; she transferred from St. Mary's College. In 1972, Angela Sienko, who earned a bachelor's degree in marketing, became the first woman graduate from the university."A hardcover thank-you card". Notre Dame Magazine. Retrieved 2008-01-01.
Recent history
thumbnail|The new wing of the Law School|280px
In the 18 years under the presidency of Edward Malloy, C.S.C., (1987–2005), there was a rapid growth in the school's reputation, faculty, and resources. He increased the faculty by more than 500 professors; the academic quality of the student body has improved dramatically, with the average SAT score rising from 1240 to 1360; the number of minority students more than doubled; the endowment grew from $350 million to more than $3 billion; the annual operating budget rose from $177 million to more than $650 million; and annual research funding improved from $15 million to more than $70 million. Notre Dame's most recent (2014) capital campaign raised $2.014 billion, far exceeding its goal of $767 million, and is the largest in the history of Catholic higher education and the largest of any University without a medical school.
Since 2005, Notre Dame has been led by John I. Jenkins, C.S.C., the 17th president of the university."About Notre Dame: Officer Group Bios: Rev. John I. Jenkins, C.S.C.". University of Notre Dame. Archived from the original on 2007-11-11. Retrieved 2008-01-01. Jenkins took over the position from Malloy on July 1, 2005.Heninger, Claire (May 1, 2004). "Monk moves on: Jenkins will succeed Malloy after June 2005". The Observer. Retrieved 2008-01-01. In his inaugural address, Jenkins described his goals of making the university a leader in research that recognizes ethics and building the connection between faith and studies. During his tenure, Notre Dame has increased its endowment, enlarged its student body, and undergone many construction projects on campus, including Compton Family Ice Arena, a new architecture hall, additional residence halls, and the Campus Crossroads, a $400m enhancement and expansion of Notre Dame Stadium.Campus Crossroads Project, http://crossroads.nd.edu/. Retrieved 23 March 2016.
Campus
thumb|A chair overlooks Saint Joseph Lake in the fall
Notre Dame's campus is located in Notre Dame, Indiana, an unincorporated community in the Michiana area of Northern Indiana, north of South Bend and four miles (6 km) from the Michigan state line.
In September 2011, Travel+Leisure listed Notre Dame as having one of the most beautiful college campuses in the United States. Today it lies just south of the Indiana Toll Road and includes 143 buildings located on quads throughout the campus.
Buildings and architecture
thumb|270px|left|upright|Historic Washington Hall on the Main Quadrangle, popularly termed the "God Quad"
Development of the campus began in the spring of 1843, when Fr. Sorin and some of his congregation built the "Old College," a building used for dormitories, a bakery, and a classroom. A year later, after an architect arrived, a small "Main Building" was built allowing for the launch of the college. The Main Building burned down in 1879, and it was immediately replaced with the current one. It was topped with the Golden Dome, which today has become Notre Dame's most distinguishable feature. Close to the Main Building stands Washington Hall, a theater that was built in 1881 and has since been used for theatrical and musical performances.
thumb|The Golden Dome, built by Fr. Sorin, has become the symbol of the University.
Because of its Catholic identity, a number of religious buildings stand on campus. The Old College building has become one of two seminaries on campus run by the Congregation of Holy Cross. The current Basilica of the Sacred Heart is located on the spot of Fr. Sorin's original church, which became too small for the growing college. It is built in French Revival style and it is decorated by stained glass windows imported directly from France. The interior was painted by Luigi Gregori, an Italian painter invited by Fr. Sorin to be artist in residence. The Basilica also features a bell tower with a carillon. Inside the church there are also sculptures by Ivan Mestrovic. The Grotto of Our Lady of Lourdes, which was built in 1896, is a replica of the original in Lourdes, France. It is very popular among students and alumni as a place of prayer and meditation, and it is considered one of the most beloved spots on campus.
A Science Hall was built in 1883 under the direction of Fr. Zahm, but in 1950 it was converted to a student union building and named LaFortune Center, after Joseph LaFortune, an oil executive from Tulsa, Oklahoma. Commonly known as "LaFortune" or "LaFun," it is a four-story building that provides the Notre Dame community with a meeting place for social, recreational, cultural, and educational activities. LaFortune employs 35 part-time student staff and 29 full-time non-student staff and has an annual budget of $1.2 million.
Many businesses, services, and divisions of The Office of Student Affairs are found within. The building also houses restaurants from national restaurant chains.
thumb|left|upright|Autumn on the God Quad, formally known as the Main Quadrangle
Since the construction of its oldest buildings, the university's physical plant has grown substantially. Over the years 29 residence halls have been built to accommodate students, each constructed with its own chapel. Many academic buildings were added, together with a system of libraries, the most prominent of which is the Theodore Hesburgh Library, built in 1963 and today containing almost 4 million books. Since 2004, several buildings have been added, including the DeBartolo Performing Arts Center, the Guglielmino Complex, and the Jordan Hall of Science. Additionally, a new residence for men, Duncan Hall, was begun on March 8, 2007, and began accepting residents for the Fall 2008 semester. Ryan Hall was completed and began housing undergraduate women in the fall of 2009. A new engineering building, Stinson-Remick Hall, a new combination Center for Social Concerns/Institute for Church Life building, Geddes Hall, and a law school addition have recently been completed as well. Additionally the new hockey arena opened in the fall of 2011. The Stayer Center for Executive Education, which houses the Mendoza College of Business Executive Education Department, opened in March 2013 just south of the Mendoza College of Business building. Because of its long athletic tradition, the university also features many buildings dedicated to sports. The most famous is Notre Dame Stadium, home of the Fighting Irish football team; it has been renovated several times and today it can hold more than 80,000 people. Prominent venues also include the Edmund P. Joyce Center, with indoor basketball and volleyball courts, and the Compton Family Ice Arena, a two-rink facility dedicated to hockey. There are also many outdoor fields, such as the Frank Eck Stadium for baseball.
300px|thumb|Notre Dame Stadium, home of the Fighting Irish
Legends of Notre Dame (commonly referred to as Legends) is a music venue, public house, and restaurant located on the campus of the University of Notre Dame, just south of Notre Dame Stadium. The former Alumni Senior Club opened its doors the first weekend in September 2003 after a $3.5 million renovation and transformed into the all-ages student hang-out that currently exists. Legends is made up of two parts: The Restaurant and Alehouse and the nightclub.
Environmental sustainability
The University of Notre Dame has made sustainability leadership an integral part of its mission, creating the Office of Sustainability in 2008 to achieve a number of goals in the areas of power generation, design and construction, waste reduction, procurement, food services, transportation, and water. Four building construction projects were pursuing LEED-Certified status and three were pursuing LEED Silver. Notre Dame's dining services sources 40% of its food locally and offers sustainably caught seafood as well as many organic, fair-trade, and vegan options. On the Sustainable Endowments Institute's College Sustainability Report Card 2010, the University of Notre Dame received a "B" grade. The university also houses the Kellogg Institute for International Studies. Father Gustavo Gutierrez, the founder of liberation theology, is a current faculty member.
Global Gateways
The university owns several centers around the world used for international studies and research, conferences abroad, and alumni support.
London. The university has had a presence in London, England, since 1968. Since 1998, its London center has been based in Fischer Hall, the former United University Club at 1 Suffolk Street in Trafalgar Square. The center enables the Colleges of Arts and Letters, Business Administration, Science, Engineering and the Law School to develop their own programs in London, as well as hosting conferences and symposia. The university also owns a residence facility, Conway Hall, which was previously a hospital. It houses students studying abroad in London.
Beijing. The university owns space in the Liangmaqiao Station area, Beijing. The center is the hub of Notre Dame Asia and it hosts a number of programs including study abroad.
thumb|Kylemore Abbey, in Ireland, which entered a study abroad partnership with the university
Dublin. The university owns the O'Connell House, a building in Merrion Square at the heart of Georgian Dublin. It hosts academic programs and summer internships for both undergraduate and graduate students in addition to seminars and is home to the Keough Naughton Centre. Since 2015, the university has entered a partnership with Kylemore Abbey. The university renovated spaces in the abbey, and the abbey will host academic programs for Notre Dame students.
Jerusalem. The Jerusalem Global Gateway shares space in common with the Tantur Ecumenical Institute, also directed by the University of Notre Dame. The space is located in a 100,000-square-foot facility on the seam between Jerusalem and Bethlehem. It hosts a number of religious and ecumenical programs.
Rome. The Rome Global Gateway is located in Via Ostilia, very close to the Colosseum. It was recently acquired and renovated, and it now has 32,000 square feet of space and hosts a variety of academic and educational activities of the university. The university purchased a second Roman villa on the Caelian Hill.
In addition to the five Global Gateways, the university also holds a presence in Chicago, where it owns the Santa Fe Building.
Organization and administration
thumb|The Rev. Theodore Hesburgh was the 15th and longest-serving president.
The University of Notre Dame is under the leadership of the president, who is a priest of the Congregation of Holy Cross. The first president was Fr. Edward Sorin and the current president is Fr. John I. Jenkins. As of 2016, the provost of the university, who oversees academic functions, is Thomas Burish.
Until 1967 Notre Dame had been governed directly by the Congregation, but under the presidency of the Rev. Theodore Hesburgh two groups, the Board of Fellows and the Board of Trustees were established to govern the University. The Fellows are a group of six Holy Cross religious and six lay members who have final say over the operation of the university. The Fellows vote on potential trustees and sign off on all major decisions by that body. The Trustees elect the president and provide general guidance and governance to the university.
Endowment
Notre Dame's financial endowment was started in the early 1920s by university president James Burns, and increased to US$7 million by 1952 when Hesburgh became president. By the 1980s it reached $150 million, and in 2000, it returned a record 57.9% on its investments. For the 2007 fiscal year, the endowment had grown to approximately $6.5 billion, placing the university among the 15 largest endowments in the country."U.S. and Canadian Institutions Listed by Fiscal Year 2012 Endowment Market Value and Percentage Change in Endowment Market Value from FY 2011 to FY 2012" (PDF). 2012 NACUBO-Commonfund Study of Endowments. National Association of College and University Business Officers. As of September 2015, Notre Dame's endowment was valued at over $10 billion.
Academics
As of fall 2014, Notre Dame had 12,292 students and employed 1,126 full-time faculty members and another 190 part-time members to give a student/faculty ratio of 8:1.
Colleges
The College of Arts and Letters was established as the university's first college in 1842 with the first degrees given in 1849. The university's first academic curriculum was modeled after the Jesuit Ratio Studiorum from Saint Louis University. Today the college, housed in O'Shaughnessy Hall, includes 20 departments in the areas of fine arts, humanities, and social sciences, and awards Bachelor of Arts (B.A.) degrees in nearly 70 majors and minors, making it the largest of the university's colleges. There are more than 3000 undergraduates and 1,100 graduates enrolled in the college, taught by 500 faculty members.
The College of Science was established at the university in 1865 by president Father Patrick Dillon. Dillon's scientific courses were six years of work, including higher-level mathematics courses. Today the college, housed in the newly built Jordan Hall of Science, includes over 1,200 undergraduates in six departments of study and awards Bachelor of Science (B.S.) degrees in majors including Biology, Neuroscience & Behavior, Chemistry and Biochemistry, Mathematics, Physics, pre-professional studies, applied and computational mathematics and statistics (ACMS), Science-Business, Science-Computing, Science-Education, and Statistics. According to university statistics, its science pre-professional program has one of the highest acceptance rates to medical school of any university in the United States. thumb|Bond Hall, home of the School of Architecture
The School of Architecture was established in 1899, although degrees in architecture were first awarded by the university in 1898. Today the school, housed in Bond Hall, offers a five-year undergraduate program leading to the Bachelor of Architecture degree. All undergraduate students study the third year of the program in Rome. The university is globally recognized for its Notre Dame School of Architecture, a faculty that teaches (pre-modernist) traditional and classical architecture and urban planning (e.g., following the principles of New Urbanism and New Classical Architecture).School of Architecture at the University of Notre Dame "Twenty years ago the curriculum was reformed to focus on traditional and classical architecture and urbanism." It also awards the renowned annual Driehaus Architecture Prize.
The College of Engineering was established in 1920; however, early courses in civil and mechanical engineering had been a part of the College of Science since the 1870s. Today the college, housed in the Fitzpatrick, Cushing, and Stinson-Remick Halls of Engineering, includes five departments of study – aerospace and mechanical engineering, chemical and biomolecular engineering, civil engineering and geological sciences, computer science and engineering, and electrical engineering – with eight B.S. degrees offered. Additionally, the college offers five-year dual degree programs with the Colleges of Arts and Letters and of Business awarding additional B.A. and Master of Business Administration (MBA) degrees, respectively.
The Mendoza College of Business was established by Father John Francis O'Hara in 1921, although a foreign commerce program was launched in 1917. Today the college offers degrees in accountancy, finance, management, and marketing and enrolls over 1,600 students. In 2014, the college's undergraduate program was ranked No. 1 in the nation for the fifth consecutive year by Bloomberg Businessweek.
Special programs
All of Notre Dame's undergraduate students are a part of one of the five undergraduate colleges at the school or are in the First Year of Studies program.
thumb|left|The Hesburgh Library, which is the center of the campus' intellectual life
The First Year of Studies program was established in 1962 to guide incoming freshmen in their first year at the school before they have declared a major. Each student is given an academic advisor from the program who helps them to choose classes that give them exposure to any major in which they are interested. The program also includes a Learning Resource Center which provides time management, collaborative learning, and subject tutoring. This program has previously been recognized by U.S. News & World Report as outstanding. The program is designed to encourage intellectual and academic achievement and innovation among first-year students. It includes programs such as FY advising, the Dean's A List, the Renaissance Circle, NDignite, the First Year Urban Challenge, and more.
Each admissions cycle, the Office of Undergraduate Admissions selects a small number of students for the Glynn Family Honors Program, which grants top students within the College of Arts and Letters and the College of Science access to smaller class sizes taught by distinguished faculty, endowed funding for independent research, and dedicated advising faculty and staff.
Graduate and professional schools
The university first offered graduate degrees, in the form of a Master of Arts (MA), in the 1854–1855 academic year. The program expanded to include Master of Laws (LL.M.) and Master of Civil Engineering degrees in its early stages of growth, before a formal graduate school education was developed; a thesis was not required to receive the degrees. This changed in 1924 with formal requirements developed for graduate degrees, including offering Doctorate (PhD) degrees.
Each of the five colleges offers graduate education in the form of Masters and Doctoral programs. Most of the departments from the College of Arts and Letters offer PhD programs, while a professional Master of Divinity (M.Div.) program also exists. All of the departments in the College of Science offer PhD programs, except for the Department of Pre-Professional Studies. The School of Architecture offers a Master of Architecture, while each of the departments of the College of Engineering offer PhD programs. The College of Business offers multiple professional programs including MBA and Master of Science in Accountancy programs. It also operates facilities in Chicago and Cincinnati for its executive MBA program. Additionally, the Alliance for Catholic Education program offers a Master of Education program where students study at the university during the summer and teach in Catholic elementary schools, middle schools, and high schools across the Southern United States for two school years. The Joan B. Kroc Institute for International Peace Studies at the University of Notre Dame is dedicated to research, education and outreach on the causes of violent conflict and the conditions for sustainable peace. It offers PhD, Master's, and undergraduate degrees in peace studies. It was founded in 1986 through the donations of Joan B. Kroc, the widow of McDonald's owner Ray Kroc. The institute was inspired by the vision of the Rev. Theodore M. Hesburgh CSC, President Emeritus of the University of Notre Dame. The institute has contributed to international policy discussions about peace building practices.History & Mission, Joan B. Kroc Institute for International Peace Studies
thumb|The Law School in winter
The Notre Dame Law School offers a professional program leading to law degrees. Established in 1869, Notre Dame was the first Catholic university in the United States to have a law program. Today the program has consistently ranked among the top law schools in the nation according to U.S. News & World Report. The Law School grants the professional Juris Doctor degree as well as the graduate LL.M. and Doctor of Juridical Science degrees.
Though Notre Dame does not have a medical school of its own, it offers a combined MD–PhD program through the regional campus of the Indiana University School of Medicine, where Indiana University medical students may spend the first two years of their medical education before transferring to the main medical campus at IUPUI.
In 2014, Notre Dame announced plans to establish the Donald R. Keough School of Global Affairs, a professional school focused on the study of global government, human rights, and other areas of global social and political policy. The creation of the school is funded by a $50 million gift from Donald Keough and Marilyn Keough and will be housed in Jenkins Hall on Debartolo Quad. The school is scheduled to open in August 2017.
Libraries
thumb|right|The interior of the Kresge Law Library at the Notre Dame Law School
The library system of the university is divided between the main library and each of the colleges and schools. The main building is the 14-story Theodore M. Hesburgh Library, completed in 1963, which is the third building to house the main collection of books. The front of the library is adorned with the Word of Life mural designed by artist Millard Sheets. This mural is popularly known as "Touchdown Jesus" because of its proximity to Notre Dame Stadium and Jesus' arms appearing to make the signal for a touchdown.
thumbnail|left|Clarke Memorial Fountain
The library system also includes branch libraries for Architecture, Chemistry and Physics, Engineering, Law, and Mathematics as well as information centers in the Mendoza College of Business, the Kellogg Institute for International Studies, the Joan B. Kroc Institute for International Peace Studies, and a slide library in O'Shaughnessy Hall. A theology library was also opened in fall of 2015. Located on the first floor of Stanford Hall, it is the first branch of the library system to be housed in a dorm room. The library system holds over three million volumes, was the single largest university library in the world upon its completion, and remains one of the 100 largest libraries in the country.
Admissions
Notre Dame is known for its competitive admissions: for the class entering in fall 2016, 3,655 students were admitted from a pool of 19,505 applicants (18.7%). The academic profile of the enrolled class continues to rate among the top 10 to 15 in the nation for national research universities. Of the most recent class, the Class of 2020, 48% were in the top 1% of their high school class, and 94% were in the top 10%. The median SAT score was 1510 and the median ACT score was 34. The university practices a non-restrictive early action policy that allows admitted students to consider admission to Notre Dame as well as any other colleges to which they were accepted. 1,400 of the 3,577 (39.1%) were admitted under the early action plan. Admitted students came from 1,311 high schools and the average student traveled more than 750 miles to Notre Dame, making it arguably the most representative university in the United States. While all entering students begin in the First Year of Studies program, 25% have indicated they plan to study in the liberal arts or social sciences, 24% in engineering, 24% in business, 24% in science, and 3% in architecture.
Rankings
In 2016–2017, Notre Dame ranked 7th for undergraduate teaching and 15th overall among "national universities" in the United States in U.S. News & World Report's Best Colleges 2016.http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities/undergraduate-teaching In 2014, USA Today ranked Notre Dame 10th overall for American universities. Forbes's "America's Top Colleges" ranked Notre Dame 13th among colleges in the United States in 2016, 8th among research universities, and 1st in the Midwest. U.S. News & World Report also lists Notre Dame Law School as 22nd overall. BusinessWeek ranks the Mendoza College of Business undergraduate school as 1st overall and its MBA program as 20th overall. The Philosophical Gourmet Report ranks Notre Dame's graduate philosophy program as 15th nationally, while ARCHITECT Magazine ranked the undergraduate architecture program as 12th nationally.
Additionally, the study abroad program ranks sixth in highest participation percentage in the nation, with 57.6% of students choosing to study abroad in 17 countries. According to PayScale, undergraduate alumni of the University of Notre Dame have a mid-career median salary of $110,000, the 24th highest among colleges and universities in the United States. The median starting salary of $55,300 ranked 58th in the same peer group.
Named by Newsweek as one of the "25 New Ivies," it is also an Oak Ridge Associated University.
Research
Science
thumbnail|200px|Jordan Hall of Science
Father Joseph Carrier, C.S.C. was Director of the Science Museum and the Library and Professor of Chemistry and Physics until 1874. Carrier taught that scientific research and its promise for progress were not antagonistic to the ideals of intellectual and moral culture endorsed by the Church. One of Carrier's students was Father John Augustine Zahm who was made Professor and Co-Director of the Science Department at age 23 and by 1900 was a nationally prominent scientist and naturalist. Zahm was active in the Catholic Summer School movement, which introduced Catholic laity to contemporary intellectual issues. His book Evolution and Dogma (1896) defended certain aspects of evolutionary theory as true, and argued, moreover, that even the great Church teachers Thomas Aquinas and Augustine taught something like it. The intervention of Irish American Catholics in Rome prevented Zahm's censure by the Vatican. In 1913, Zahm and former President Theodore Roosevelt embarked on a major expedition through the Amazon.Ralph Edward Weber, Notre Dame's John Zahm: American Catholic Apologist and Educator (1961)
In 1882, Albert Zahm (John Zahm's brother) built an early wind tunnel used to compare lift to drag of aeronautical models. Around 1899, Professor Jerome Green became the first American to send a wireless message. In 1931, Father Julius Nieuwland performed early work on basic reactions that was used to create neoprene. Study of nuclear physics at the university began with the building of a nuclear accelerator in 1936, and continues now partly through a partnership in the Joint Institute for Nuclear Astrophysics.
Lobund Institute
The Lobund Institute (Laboratory of Biology, University of Notre Dame) grew out of pioneering research in germ-free life which began in 1928. This area of research originated in a question posed by Pasteur as to whether animal life was possible without bacteria. Though others had taken up this idea, their research was short-lived and inconclusive. Lobund was the first research organization to answer definitively that such life is possible and that it can be prolonged through generations. But the objective was not merely to answer Pasteur's question but also to produce the germ-free animal as a new tool for biological and medical research. This objective was reached, and for years Lobund was a unique center for the study and production of germ-free animals and for their use in biological and medical investigations. Today the work has spread to other universities. In the beginning it was under the Department of Biology, and a program leading to the master's degree accompanied the research program. In the 1940s Lobund achieved independent status as a purely research organization and in 1950 was raised to the status of an Institute. In 1958 it was brought back into the Department of Biology as an integral part of that department, but with its own program leading to the degree of PhD in Gnotobiotics.See Philip S. Moore, The Story of Notre Dame: Academic Development: University of Notre Dame online
thumb|upright|Hallway within Hurley Hall
Humanities
Richard T. Sullivan taught English from 1936 to 1974 and published six novels, dozens of short stories, and various other efforts. He was known as a regional writer and a Catholic spokesman.Una M. Cadegan, "How Realistic Can a Catholic Writer Be? Richard Sullivan and American Catholic Literature," Religion & American Culture 1996 6(1): 35–61
Frank O'Malley was an English professor during the 1930s–1960s. Influenced by philosophers Jacques Maritain, John U. Nef, and others, O'Malley developed a concept of Christian philosophy that was a fundamental element in his thought. Through his course "Modern Catholic Writers" O'Malley introduced generations of undergraduates to Gabriel Marcel, Graham Greene, Evelyn Waugh, Sigrid Undset, Paul Claudel, and Gerard Manley Hopkins.Arnold Sparr, "The Catholic Laity, the Intellectual Apostolate and the Pre-Vatican II Church: Frank O'Malley of Notre Dame." U.S. Catholic Historian 1990 9(3): 305–320. 0735–8318
The Review of Politics was founded in 1939 by Waldemar Gurian, modeled after German Catholic journals. It quickly emerged as part of an international Catholic intellectual revival, offering an alternative vision to positivist philosophy. For 44 years, the Review was edited by Gurian, Matthew Fitzsimons, Frederick Crosson, and Thomas Stritch. Intellectual leaders included Gurian, Jacques Maritain, Frank O'Malley, Leo Richard Ward, F. A. Hermens, and John U. Nef. It became a major forum for political ideas and modern political concerns, especially from a Catholic and scholastic tradition.Thomas Stritch, "After Forty Years: Notre Dame and the Review of Politics" Review Of Politics 1978 40: 437–446. in JSTOR
Kenneth Sayre has explored the history of the Philosophy department. He stresses the abandonment of official Thomism in favor of the philosophical pluralism of the 1970s, with attention to the issue of being Catholic. He pays special attention to the charismatic personalities of Ernan McMullin and Ralph McInerny, key leaders of the department in the 1960s and 1970s.Kenneth M. Sayre, Adventures in Philosophy at Notre Dame (University of Notre Dame Press, 2014) 382 pp.
European émigrés
The rise of Hitler and other dictators in the 1930s forced numerous Catholic intellectuals to flee Europe; president John O'Hara brought many to Notre Dame. From Germany came Anton-Hermann Chroust (1907–1982) in classics and law,See bibliography and Waldemar Gurian, a German Catholic intellectual of Jewish descent. Positivism dominated American intellectual life from the 1920s onward; in marked contrast, Gurian had received a German Catholic education and wrote his doctoral dissertation under Max Scheler.Frank O'Malley, "Waldemar Gurian at Notre Dame," Review of Politics, Vol. 17, No. 1, The Gurian Memorial Issue (Jan., 1955), pp. 19–23 in JSTOR Ivan Meštrović (1883–1962), a renowned sculptor, brought Croatian culture to campus, 1955–62.See Ivan Meštrovic (1883–1962) Yves Simon (1903–61) brought to ND in the 1940s the insights of French studies in the Aristotelian-Thomistic tradition of philosophy; his own teacher Jacques Maritain (1882–1973) was a frequent visitor to campus.See Yves R. Simon (1903–61)
thumbnail|The Pieta by Ivan Meštrović, a European émigré
The exiles developed a distinctive emphasis on the evils of totalitarianism. For example, the political science courses of Gerhart Niemeyer (1907–97) discussed communist ideology and were particularly accessible to his students. He came to ND in 1955, and was a frequent contributor to the National Review and other conservative magazines.William S. Miller, "Gerhart Niemeyer: His Principles of Conservatism," Modern Age 2007 49(3): 273–284 online at EBSCO
Current research
Research continues in many fields. The university president, John Jenkins, described his hope that Notre Dame would become "one of the pre-eminent research institutions in the world" in his inaugural address. The university has many multi-disciplinary institutes devoted to research in varying fields, including the Medieval Institute, the Kellogg Institute for International Studies, the Kroc Institute for International Peace Studies, and the Center for Social Concerns. Recent research includes work on family conflict and child development, genome mapping, the increasing trade deficit of the United States with China, studies in fluid mechanics, computational science and engineering, and marketing trends on the Internet. As of 2013, the university is home to the Notre Dame Global Adaptation Index, which ranks countries annually based on how vulnerable they are to climate change and how prepared they are to adapt.
Student life
In 2014 the Notre Dame student body consisted of 12,179 students, with 8,448 undergraduates, 2,138 graduate students, and 1,593 professional (Law, M.Div., Business, M.Ed.) students. Around 21–24% of students are children of alumni, and although 37% of students come from the Midwestern United States, the student body represents all 50 states and 100 countries. 32% of students are U.S. students of color or international citizens. The Princeton Review ranked the school as the fifth-highest 'dream school' for parents to send their children and ninth-highest among students themselves. It has also been commended by some diversity-oriented publications; Hispanic Magazine in 2004 ranked the university ninth on its list of the top 25 colleges for Latinos, and The Journal of Blacks in Higher Education recognized the university in 2006 for raising enrollment of African-American students. With 6,000 participants, the university's intramural sports program was named in 2004 by Sports Illustrated as the best program in the country, while in 2007 The Princeton Review named it the top school where "Everyone Plays Intramural Sports." The annual Bookstore Basketball tournament is the largest outdoor five-on-five tournament in the world, with over 700 teams participating each year, while the Notre Dame Men's Boxing Club hosts the annual Bengal Bouts tournament, which raises money for the Holy Cross Missions in Bangladesh.
thumbnail|Howard Hall, one of the fourteen female dormitories on campus
The strictly measured federal graduation rate for athletes was 90% for freshmen who entered between 2005 and 2008. This is the second highest in the country.
Residence halls
About 80% of undergraduates and 20% of graduate students live on campus. The majority of graduate students on campus live in one of four graduate housing complexes, while all on-campus undergraduates live in one of the 31 residence halls. Because of the religious affiliation of the university, all residence halls are single-sex, with 16 male dorms and 15 female dorms. The university maintains a visiting policy (known as parietal hours) for those students who live in dormitories, specifying times when members of the opposite sex are allowed to visit other students' dorm rooms; however, all residence halls have 24-hour social spaces for students regardless of gender. Many residence halls have at least one nun and/or priest as a resident. There are no traditional social fraternities or sororities at the university, but a majority of students live in the same residence hall for all four years. Some intramural sports are based on residence hall teams; the university offers the only full-contact intramural American football program outside the military academies. At the end of the intramural season, the championship game is played on the field in Notre Dame Stadium.
Religious life
thumb|upright|The interior of the Basilica of the Sacred Heart
The university is affiliated with the Congregation of Holy Cross (Latin: Congregatio a Sancta Cruce, abbreviated postnominals: "CSC"). While religious affiliation is not a criterion for admission, more than 93% of students identify as Christian, with over 80% of the total being Catholic. Collectively, Catholic Mass is celebrated over 100 times per week on campus, and a large campus ministry program provides for the faith needs of the community. There is a multitude of religious statues and artwork around campus, the most prominent of which are the statue of Mary on the Main Building, the Notre Dame Grotto, and the Word of Life mural on Hesburgh Library depicting Christ as a teacher. Additionally, every classroom displays a crucifix. There are many religious clubs (Catholic and non-Catholic) at the school, including Council #1477 of the Knights of Columbus (KofC), Baptist Collegiate Ministry (BCM), Jewish Club, Muslim Student Association, Orthodox Christian Fellowship, The Mormon Club, and many more. The Notre Dame KofC council is known for being the first collegiate council of the Knights of Columbus, for operating a charitable concession stand during every home football game, and for owning its own building on campus, which can be used as a cigar lounge. Fifty-seven chapels are located throughout the campus.
300px|thumb|left|The Grotto to Our Lady of Lourdes, one of the many spiritual places on campus
Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes" (Come to me, all ye). Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France, where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through three statues and the Gold Dome) is a simple, modern stone statue of Mary.
The university is the major seat of the Congregation of Holy Cross (albeit not its official headquarters, which are in Rome). Its main seminary, Moreau Seminary, is located on the campus across St. Joseph's Lake from the Main Building. Old College, the oldest building on campus, located near the shore of St. Mary's Lake, houses undergraduate seminarians. Retired priests and brothers reside in Fatima House (a former retreat center), Holy Cross House, and Columba Hall near the Grotto.
The university has a highly regarded theology program, both undergraduate and graduate, with many scholars, including Lawrence Cunningham, John Cavadini, and Gary Anderson. The chair of the department, John Cavadini, was appointed to the International Theological Commission by Pope Benedict XVI in 2010; Prof. Brian E. Daley, SJ, received the Ratzinger Prize in Theology in 2012.
thumb|Basilica of the Sacred Heart at night
Student-run media
As at most other universities, Notre Dame's students run a number of news media outlets. The nine student-run outlets include three newspapers, both a radio and television station, and several magazines and journals. Begun as a one-page journal in September 1876, the Scholastic magazine is issued twice monthly and claims to be the oldest continuous collegiate publication in the United States. The other magazine, The Juggler, is released twice a year and focuses on student literature and artwork. The Dome yearbook is published annually. The newspapers have varying publication interests, with The Observer published daily and mainly reporting university and other news, and staffed by students from both Notre Dame and Saint Mary's College. Unlike Scholastic and The Dome, The Observer is an independent publication and does not have a faculty advisor or any editorial oversight from the University. In 1987, when some students believed that The Observer began to show a conservative bias, a liberal newspaper, Common Sense, was published. Likewise, in 2003, when other students believed that the paper showed a liberal bias, the conservative paper Irish Rover went into production. Neither paper is published as often as The Observer; however, all three are distributed to all students. Finally, in spring 2008 an undergraduate journal for political science research, Beyond Politics, made its debut.
The television station, NDtv, grew from one show in 2002 to a full 24-hour channel with original programming by September 2006. WSND-FM serves the student body and larger South Bend community at 88.9 FM, offering students a chance to become involved in bringing classical music, fine arts and educational programming, and alternative rock to the airwaves. Another radio station, WVFI, began as a partner of WSND-FM. More recently, however, WVFI has been airing independently and is streamed on the Internet.
Community development
The first phase of Eddy Street Commons, a $215 million development located adjacent to the University of Notre Dame campus and funded by the university, broke ground on June 3, 2008. The Eddy Street Commons drew union protests when workers hired by the City of South Bend to construct the public parking garage picketed the private work site after a contractor hired non-union workers. The developer, Kite Realty of Indianapolis, has made agreements with major national chains rather than local businesses, a move that has led to criticism from alumni and students.The Observer
Athletics
thumb|right|Notre Dame Stadium
Notre Dame teams are known as the Fighting Irish. They compete as a member of the National Collegiate Athletic Association (NCAA) Division I, primarily competing in the Atlantic Coast Conference (ACC) for all sports since the 2013–14 school year. The Fighting Irish previously competed in the Horizon League from 1982–83 to 1985–86, and again from 1987–88 to 1994–95, and then in the Big East Conference through 2012–13. Men's sports include baseball, basketball, crew, cross country, fencing, football, golf, ice hockey, lacrosse, soccer, swimming and diving, tennis and track & field; while women's sports include basketball, cross country, fencing, golf, lacrosse, rowing, soccer, softball, swimming and diving, tennis, track and field, and volleyball. The football team has competed as a Football Bowl Subdivision (FBS) independent since its inception in 1887. Both fencing teams compete in the Midwest Fencing Conference, and the men's ice hockey team competes in Hockey East.
left|thumbnail|Football Stadium during a game
Notre Dame's conference affiliations for all of its sports except football and fencing changed in July 2013 as a result of major conference realignment, and its fencing affiliation changed in July 2014. The Irish left the Big East for the ACC during a prolonged period of instability in the Big East; while they maintain their football independence, they have committed to play five games per season against ACC opponents. In ice hockey, the Irish were forced to find a new conference home after the Big Ten Conference's decision to add the sport in 2013–14 led to a cascade of conference moves that culminated in the dissolution of the school's former hockey home, the Central Collegiate Hockey Association, after the 2012–13 season. Notre Dame moved its hockey team to Hockey East. After Notre Dame joined the ACC, the conference announced it would add fencing as a sponsored sport beginning in the 2014–15 school year.
There are many theories behind the adoption of the athletics moniker, but it is known that the Fighting Irish name was used in the early 1920s with respect to the football team and was popularized by alumnus Francis Wallace in his New York Daily News columns. The official colors of Notre Dame are navy blue and gold, which are worn in competition by its athletic teams. In addition, the color green is often worn because of the Fighting Irish nickname. The Notre Dame Leprechaun is the mascot of the athletic teams. Created by Theodore W. Drake in 1964, the leprechaun was first used on the football pocket schedule and later on the football program covers. The leprechaun was featured on the cover of Time in November 1964 and gained national exposure.
On July 1, 2014, the University of Notre Dame and Under Armour reached an agreement under which Under Armour would provide uniforms, apparel, equipment, and monetary compensation to Notre Dame for 10 years. This contract, worth almost $100 million, is the most lucrative in the history of the NCAA.
The university marching band plays at home games for most of the sports. The band, which began in 1846 and has a claim as the oldest university band in continuous existence in the United States, was honored by the National Music Council as a "Landmark of American Music" during the United States Bicentennial. The band regularly plays the school's fight song, the "Notre Dame Victory March", which was named the most played and most famous fight song by Northern Illinois University Professor William Studwell. According to College Fight Songs: An Annotated Anthology, published in 1998, the "Notre Dame Victory March" ranks as the greatest fight song of all time.
200px|thumbnail|Coach Knute Rockne
According to some analysts without direct connection to the university or its athletic department, Notre Dame promotes Muscular Christianity through its athletic programs.
Football
The Notre Dame football team has a long history, first beginning when the Michigan Wolverines football team brought football to Notre Dame in 1887 and played against a group of students. In the long history since then, 13 Fighting Irish teams have won consensus national championships (although the university only claims 11), along with another nine teams being named national champion by at least one source. Additionally, the program has the most members in the College Football Hall of Fame, is tied with Ohio State University for the most Heisman Trophies won, and has the highest winning percentage in NCAA history. With this long history, Notre Dame has accumulated many rivals, and its annual game against USC for the Jeweled Shillelagh has been named by some as one of the most important in college football and is often called the greatest intersectional rivalry in college football in the country.Dave Revsine, Michigan, Ohio State set bar high for other rivalries, ESPN.com, November 24, 2006, Accessed March 24, 2009.The Greatest Intersectional Rivalry: Top 10 Moments from Notre Dame-USC, SI.com, October 12, 2005, Accessed March 24, 2009.Adam Rose, The Color of Misery, LATimes.com, October 20, 2007, Accessed March 24, 2009.This Week in Pac-10 Football, Pacific-10 Conference, November 20, 2006, Accessed March 24, 2009.
250px|thumbnail|left|Notre Dame playing against Navy
George Gipp was the school's legendary football player during 1916–20. He played semiprofessional baseball and smoked, drank, and gambled when not playing sports. He was also humble, generous to the needy, and a man of integrity.John U. Bacon, "The Gipper," Michigan History 2001 85(6): 48–55, It was in 1928 that famed coach Knute Rockne used his final conversation with the dying Gipp to inspire the Notre Dame team to beat the Army team and "win one for the Gipper." The 1940 film, Knute Rockne, All American, starred Pat O'Brien as Knute Rockne and Ronald Reagan as Gipp.
The team competes in Notre Dame Stadium, an 80,795-seat stadium on campus. The current head coach is Brian Kelly, hired from the University of Cincinnati on December 11, 2009. Kelly's record midway through his sixth season at Notre Dame is 52–21. In 2012, Kelly's Fighting Irish squad went undefeated and played in the BCS National Championship Game. Kelly succeeded Charlie Weis, who was fired in November 2009 after five seasons. Although Weis led his team to two Bowl Championship Series bowl games, his overall record was 35–27, mediocre by Notre Dame standards, and the 2007 team had the most losses in school history. The football team generates enough revenue to operate independently, while $22.1 million is retained from the team's profits for academic use. Forbes named the team as the most valuable in college football, worth a total of $101 million in 2007.
Football gameday traditions
During home games, activities occur all around campus and different dorms decorate their halls with a traditional item (e.g., Zahm Hall's two-story banner). Traditional activities begin at the stroke of midnight with the Drummers' Circle. This tradition involves the drum line of the Band of the Fighting Irish and ushers in the festivities that continue throughout the gameday Saturday. Later that day, the trumpet section plays the Notre Dame Victory March and the Notre Dame Alma Mater under the dome. The entire band then plays a concert at the steps of Bond Hall, from where it marches into Notre Dame Stadium, leading fans and students alike across the campus to the game.
Men's basketball
thumb|350px|The Joyce Center, where basketball is played
As of the 2014–2015 season, the men's basketball team has over 1,898 wins, one of only 8 schools with more wins, and has appeared in 28 NCAA tournaments. Former player Austin Carr holds the record for most points scored in a single game of the tournament with 61. Although the team has never won the NCAA Tournament, it was named national champion twice by the Helms Athletic Foundation. The team has orchestrated a number of upsets of number-one ranked teams, the most notable of which was ending UCLA's record 88-game winning streak in 1974. The team has beaten an additional eight number-one teams, and those nine wins rank second, behind UCLA's 10, all-time in wins against the top team. The team plays in the newly renovated Purcell Pavilion (within the Edmund P. Joyce Center), which reopened for the beginning of the 2009–2010 season. The team is coached by Mike Brey, who, as of the 2015–16 season, his fifteenth at Notre Dame, has achieved a 356–177 record. In 2009 the team was invited to the NIT, where it advanced to the semifinals but was beaten by Penn State, which went on to beat Baylor in the championship. The 2010–11 team concluded its regular season ranked number seven in the country, with a record of 25–5, Brey's fifth straight 20-win season, and a second-place finish in the Big East. During the 2014–15 season, the team went 32–6 and won the ACC conference tournament, later advancing to the Elite Eight, where the Fighting Irish lost on a missed buzzer-beater against then-undefeated Kentucky. Led by NBA draft picks Jerian Grant and Pat Connaughton, the Fighting Irish beat the eventual national champion Duke Blue Devils twice during the season. The 32 wins were the most by a Fighting Irish team since 1908–09.
Other sports
Notre Dame has been successful in other sports besides football, with an additional 14 national championships in various sports. Three teams have won multiple national championships with the fencing team leading them with seven, followed by the men's tennis and women's soccer teams each with two. The men's cross country, men's golf, and women's basketball teams have each won one in their histories.
In the first ten years that Notre Dame competed in the Big East Conference its teams won a total of 64 championships. As of 2010, the women's swimming and diving team holds the Big East record for consecutive conference championships in any sport with 14 straight conference titles (1997–2010).
Music
The Band of the Fighting Irish is the oldest university band in continuous existence in the United States. It was formed in 1846. The all-male Glee Club was formed in 1915. The internationally recognized "Notre Dame Folk Choir" was founded by Steven "Cookie" Warner in 1980.
thumbnail|The Notre Dame Band of the Fighting Irish
The "Notre Dame Victory March" is the fight song for the University of Notre Dame. It was written by two brothers who were Notre Dame graduates. The Rev. Michael J. Shea, a 1904 graduate, wrote the music, and his brother, John F. Shea, who earned degrees in 1906 and 1908, wrote the original lyrics. The lyrics were revised in the 1920s; it first appeared under the copyright of the University of Notre Dame in 1928. The chorus is, "Cheer cheer for old Notre Dame, wake up the echos cheering her name. Send a volley cheer on high, shake down the thunder from the sky! What though the odds be great or small, old Notre Dame will win over all. While her loyal sons are marching, onward to victory!"
The chorus of the song is one of the most recognizable collegiate fight songs in the United States, and was ranked first among fight songs by Northern Illinois University Professor William Studwell, who remarked it was "more borrowed, more famous and, frankly, you just hear it more".
In the film Knute Rockne, All American, Knute Rockne (played by Pat O'Brien) delivers the famous "Win one for the Gipper" speech, at which point the background music swells with the "Notre Dame Victory March". George Gipp was played by Ronald Reagan, whose nickname "The Gipper" was derived from this role. This scene was parodied in the movie Airplane! with the same background music, only this time honoring George Zipp, one of Ted Striker's former comrades. The song also was prominent in the movie Rudy, with Sean Astin as Daniel "Rudy" Ruettiger, who harbored dreams of playing football at the University of Notre Dame despite significant obstacles.
Alumni
Notre Dame alumni number nearly 120,000 and are members of 275 alumni clubs around the world. Many alumni give yearly monetary support to the university, with a school-record 53.2% giving some donation in 2006. Many buildings on campus are named for those whose donations funded their construction, including residence halls, classroom buildings, and the performing arts center.
Notre Dame alumni work in various fields. Alumni working in political fields include state governors, members of the United States Congress, and former United States Secretary of State Condoleezza Rice. Notable alumni from the College of Science are Eric F. Wieschaus, winner of the 1995 Nobel Prize in medicine, and Philip Majerus, discoverer of the cardioprotective effects of aspirin. A number of university heads are alumni, including Notre Dame's current president, the Rev. John Jenkins. Additionally, many alumni are in the media, including talk show hosts Regis Philbin and Phil Donahue, and television and radio personalities such as Mike Golic and Hannah Storm. With the university having high-profile sports teams itself, a number of alumni went on to become involved in athletics outside the university, including professional baseball, basketball, football, and ice hockey players, such as Joe Theismann, Joe Montana, Tim Brown, Ross Browner, Rocket Ismail, Ruth Riley, Jeff Samardzija, Jerome Bettis, Brett Lebda, Olympic gold medalist Mariel Zagunis, professional boxer Mike Lee, former football coaches such as Charlie Weis, Frank Leahy and Knute Rockne, and Basketball Hall of Famers Austin Carr and Adrian Dantley. Other notable alumni include prominent businessman Edward J. DeBartolo, Jr. and astronaut Jim Wetherbee.
Literature and popular culture
The University of Notre Dame is the setting for numerous works of fiction, as well as the alma mater of many fictional characters.
alt=Knute Rockne, All American (1940) is one of the most popular films featuring Notre Dame.|thumb|Knute Rockne, All American (1940) is one of the most popular films featuring Notre Dame.
Film
Knute Rockne, All American is a 1940 biographical film which tells the story of Knute Rockne, the Notre Dame football coach.
The "Win one for the Gipper" speech was parodied in the 1980 movie Airplane! when, with the Victory March rising to a crescendo in the background, Dr. Rumak, played by Leslie Nielsen, urged reluctant pilot Ted Striker, played by Robert Hays, to "win just one for the Zipper", Striker's war buddy, George Zipp. The Victory March also plays during the film's credits.
Rudy (1993) is an account of the life of Daniel "Rudy" Ruettiger, who harbored dreams of playing football at the University of Notre Dame despite significant obstacles.
In Mr. & Mrs. Smith (2005), Brad Pitt's character Mr. Smith majored in art history at Notre Dame.
In the film Something Borrowed, Ginnifer Goodwin's character is not accepted into Notre Dame law school, which is depicted as a crushing event because her competitive best friend (Kate Hudson) manages to get in.
Lt. Walter J. "Touchdown" Schinoski, claims to have played football at Notre Dame in Stanley Kubrick's Full Metal Jacket.Television President Josiah Bartlet, from the show The West Wing is a Notre Dame graduate, and the First Lady Abigail Bartlet attended Saint Mary's College (Indiana). Danny Concannon, member of the White House press corps, is also a graduate of Notre Dame.
Regis Philbin, who holds the Guinness world record for most hours on camera, often spoke about his alma mater on screen.
Notre Dame was featured several times on The Simpsons. In the episode "Sunday, Cruddy Sunday", the character Rudy makes an appearance wearing his ND jacket. In the episode "The Father, the Son, and the Holy Guest Star", Homer and Bart go to Catholic Heaven, where there is a group of Irish people, among them a man wearing an ND sweatshirt.
In the drama Friday Night Lights, Jason Street is ranked as one of the top high school quarterbacks in the nation with a scholarship offer to the University of Notre Dame, but during the first game of the season he suffers a severe spinal cord injury.
Paul Lassiter, Press secretary on Spin City, is also a fictional graduate."The Great Pretender". Spin City. Season 1. Episode 2. September 24, 1996.
Edward Montgomery, Greg's father on Dharma and Greg, is an alumnus of Notre Dame.
William Walden, Vice President on Homeland, is an alumnus.
Li'l Sebastian, a miniature horse on Parks and Recreation, holds an honorary degree from Notre Dame.
Other media
The Notre Dame Leprechaun and coach Ara Parseghian were featured on the cover of Time Magazine in November 1964.
See also
Catholic university
US-China University Presidents Roundtable
History of Science Society
Summer Shakespeare
Notre Dame Philosophical Reviews
References
Further reading
Burns, Robert E. Being Catholic, Being American: The Notre Dame Story, 1934–1952, Vol. 2. (2000). 632pp. excerpt and text search
Corson, Dorothy V. A Cave of Candles: The Spirit, History, Legends and Lore of Notre Dame and Saint Mary's (2006), 222pp.
Hesburgh, Theodore M. God, Country, Notre Dame: The Autobiography of Theodore M. Hesburgh (2000)
McAvoy, Thomas T. "Notre Dame, 1919–1922: The Burns Revolution." Review of Politics 1963 25(4): 431–450. in JSTOR
McAvoy, Thomas T. Father O'Hara of Notre Dame (1967)
Massa, Mark S. Catholics and American Culture: Fulton Sheen, Dorothy Day, and the Notre Dame Football Team. (1999). 278 pp.
O'Brien, Michael. Hesburgh: A Biography. (1998). 354 pp.
O'Connell, Marvin R. Edward Sorin. (2001). 792 pp.
Pilkinton, Mark C. Washington Hall at Notre Dame: Crossroads of the University, 1864–2004 (University of Notre Dame Press, 2011) 419 pp.
Rice, Charles E., Ralph McInerny, and Alfred J. Freddoso. What Happened to Notre Dame? (2009) laments the weakening of Catholicism at ND
Robinson, Ray. Rockne of Notre Dame: The Making of a Football Legend. (1999). 290 pp.
Sperber, Murray. Shake Down the Thunder: The Creation of Notre Dame Football. (1993) 634 pp.
Yaeger, Don and Looney, Douglas S. Under the Tarnished Dome: How Notre Dame Betrayed Its Ideals for Football Glory. (1993). 299 pp.
External links
Notre Dame Athletics website
Category:1842 establishments in Indiana
Category:Association of Catholic Colleges and Universities
Category:Buildings and structures in St. Joseph County, Indiana
Category:Education in St. Joseph County, Indiana
Category:Educational institutions established in 1842
Category:History of Catholicism in Indiana
Category:Holy Cross universities and colleges
Category:Notre Dame, Indiana
Category:Roman Catholic Diocese of Fort Wayne–South Bend
Category:Roman Catholic universities and colleges in Indiana
Category:Roman Catholic universities and colleges in the United States
Category:Universities and colleges in Indiana
Category:Tourist attractions in St. Joseph County, Indiana
Category:V-12 Navy College Training Program
Category:University and college buildings on the National Register of Historic Places in Indiana
Category:National Register of Historic Places in St. Joseph County, Indiana
Hunter-gatherer
A hunter-gatherer is a human living in a society in which most or all food is obtained by foraging (collecting wild plants and pursuing wild animals), in contrast to agricultural societies, which rely mainly on domesticated species.
Hunting and gathering was humanity's first and most successful adaptation, occupying at least 90 percent of human history. Following the invention of agriculture, hunter-gatherers who did not adopt it have been displaced or conquered by farming or pastoralist groups in most parts of the world.
Only a few contemporary societies are classified as hunter-gatherers, and many supplement their foraging activity with horticulture and/or keeping animals.
Archaeological evidence
In the 1970s, Lewis Binford suggested that early humans were obtaining food via scavenging, not hunting. Early humans in the Lower Paleolithic lived in forests and woodlands, which allowed them to collect seafood, eggs, nuts, and fruits besides scavenging. Rather than killing large animals for meat, according to this view, they used carcasses of such animals that had either been killed by predators or that had died of natural causes.The Last Rain Forests: A World Conservation Atlas by David Attenborough, Mark Collins Archaeological and genetic data suggest that the source populations of Paleolithic hunter-gatherers survived in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover.
According to the endurance running hypothesis, long-distance running as in persistence hunting, a method still practiced by some hunter-gatherer groups in modern times, was likely the driving evolutionary force leading to the evolution of certain human characteristics. This hypothesis does not necessarily contradict the scavenging hypothesis: both subsistence strategies could have been in use – sequentially, alternating or even simultaneously.
Hunting and gathering was presumably the subsistence strategy employed by human societies beginning some 1.8 million years ago, by Homo erectus, and from its appearance some 0.2 million years ago by Homo sapiens. It remained the only mode of subsistence until the end of the Mesolithic period some 10,000 years ago, and after this was replaced only gradually with the spread of the Neolithic Revolution.
Starting at the transition between the Middle and Upper Paleolithic periods, some 80,000 to 70,000 years ago, some hunter-gatherer bands began to specialize, concentrating on hunting a smaller selection of (often larger) game and gathering a smaller selection of food. This specialization of work also involved creating specialized tools such as fishing nets, hooks, and bone harpoons.Fagan, B: People of the Earth, pages 169-81. Scott, Foresman, 1989. The transition into the subsequent Neolithic period is chiefly defined by the unprecedented development of nascent agricultural practices. Agriculture originated and spread in several different areas including the Middle East, Asia, Mesoamerica, and the Andes beginning as early as 12,000 years ago.
Forest gardening was also being used as a food production system in various parts of the world over this period. Forest gardens originated in prehistoric times along jungle-clad river banks and in the wet foothills of monsoon regions. In the gradual process of families improving their immediate environment, useful tree and vine species were identified, protected and improved, whilst undesirable species were eliminated. Eventually superior introduced species were selected and incorporated into the gardens.
Many groups continued their hunter-gatherer ways of life, although their numbers have continually declined, partly as a result of pressure from growing agricultural and pastoral communities. Many of them reside in the developing world, either in arid regions or tropical forests. Areas that were formerly available to hunter-gatherers were—and continue to be—encroached upon by the settlements of agriculturalists. In the resulting competition for land use, hunter-gatherer societies either adopted these practices or moved to other areas. In addition, Jared Diamond has blamed a decline in the availability of wild foods, particularly animal resources. In North and South America, for example, most large mammal species had gone extinct by the end of the Pleistocene—according to Diamond, because of overexploitation by humans, one of several explanations offered for the Quaternary extinction event there.
As the number and size of agricultural societies increased, they expanded into lands traditionally used by hunter-gatherers. This process of agriculture-driven expansion led to the development of the first forms of government in agricultural centers, such as the Fertile Crescent, Ancient India, Ancient China, Olmec, Sub-Saharan Africa and Norte Chico.
As a result of the now near-universal human reliance upon agriculture, the few contemporary hunter-gatherer cultures usually live in areas unsuitable for agricultural use.
Archaeologists can use evidence such as stone tool use to track hunter-gatherer activities, including mobility.
Common characteristics
thumb|150px|right|A San man from Namibia. Many San still live as hunter-gatherers.
Habitat and population
Most hunter-gatherers are nomadic or semi-nomadic and live in temporary settlements. Mobile communities typically construct shelters using impermanent building materials, or they may use natural rock shelters, where they are available.
Some hunter-gatherer cultures, such as the indigenous peoples of the Pacific Northwest Coast, lived in particularly rich environments that allowed them to be sedentary or semi-sedentary.
Social and economic structure
Hunter-gatherers tend to have an egalitarian social ethos, although settled hunter-gatherers (for example, those inhabiting the Northwest Coast of North America) are an exception to this rule. Nearly all African hunter-gatherers are egalitarian, with women roughly as influential and powerful as men.Karen Endicott 1999. "Gender relations in hunter-gatherer societies". In R.B. Lee and R. Daly (eds), The Cambridge Encyclopedia of Hunters and Gatherers. Cambridge: Cambridge University Press, pp. 411-8. Karl Marx defined this socio-economic system as primitive communism.Scott, John; Marshall, Gordon (2007). A Dictionary of Sociology. USA: Oxford University Press. ISBN 978-0-19-860987-2.
thumb|left|Mbendjele meat sharing
The egalitarianism typical of human hunters and gatherers is never total, but is striking when viewed in an evolutionary context. Chimpanzees, one of humanity's two closest primate relatives, are anything but egalitarian, forming themselves into hierarchies that are often dominated by an alpha male. So great is the contrast with human hunter-gatherers that it is widely argued by palaeoanthropologists that resistance to being dominated was a key factor driving the evolutionary emergence of human consciousness, language, kinship and social organization.Erdal, D. and A. Whiten 1996. Egalitarianism and Machiavellian intelligence in human evolution. In P. Mellars and K. Gibson (eds), Modelling the early human mind. Cambridge: McDonald Institute Monographs.Christopher Boehm (2001), Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Cambridge, MA: Harvard University Press.
Anthropologists maintain that hunter-gatherers do not have permanent leaders; instead, the person taking the initiative at any one time depends on the task being performed.Erdal, D. & Whiten, A. (1996) "Egalitarianism and Machiavellian Intelligence in Human Evolution" in Mellars, P. & Gibson, K. (eds) Modelling the Early Human Mind. Cambridge MacDonald Monograph Series In addition to social and economic equality in hunter-gatherer societies, there is often, though not always, sexual parity as well. Hunter-gatherers are often grouped together based on kinship and band (or tribe) membership. Postmarital residence among hunter-gatherers tends to be matrilocal, at least initially. Young mothers can enjoy childcare support from their own mothers, who continue living nearby in the same camp. The systems of kinship and descent among human hunter-gatherers were relatively flexible, although there is evidence that early human kinship in general tended to be matrilineal.Knight, C. 2008. "Early human kinship was matrilineal". In N. J. Allen, H. Callan, R. Dunbar and W. James (eds.), Early Human Kinship. Oxford: Blackwell, pp. 61-82.
One common arrangement is the sexual division of labour, with women doing most of the gathering, while men concentrate on big game hunting. It might be imagined that this arrangement oppresses women, keeping them in the domestic sphere. However, according to some observers, hunter-gatherer women would not understand this interpretation. Since childcare is collective, with every baby having multiple mothers and male carers, the domestic sphere is not atomised or privatised but an empowering place to be. In all hunter-gatherer societies, women appreciate the meat brought back to camp by men. An illustrative account is Megan Biesele's study of the southern African Ju/'hoan, 'Women Like Meat'.Biesele, M. 1993. Women Like Meat. The folklore and foraging ideology of the Kalahari Ju/'hoan. Witwatersrand: University Press. Recent archaeological research suggests that the sexual division of labor was the fundamental organisational innovation that gave Homo sapiens the edge over the Neanderthals, allowing our ancestors to migrate from Africa and spread across the globe.
To this day, most hunter-gatherers have a symbolically structured sexual division of labour.Testart, A. 1986. Essai sur les fondements de la division sexuelle du travail chez les chasseurs-cueilleurs. Paris: Éditions de l'École des Hautes Études en Sciences Sociales. However, it is true that in a small minority of cases, women hunt the same kind of quarry as men, sometimes doing so alongside men. The best-known example is the Aeta people of the Philippines. According to one study, "About 85% of Philippine Aeta women hunt, and they hunt the same quarry as men. Aeta women hunt in groups and with dogs, and have a 31% success rate as opposed to 17% for men. Their rates are even better when they combine forces with men: mixed hunting groups have a full 41% success rate among the Aeta." Among the Ju'/hoansi people of Namibia, women help men track down quarry. Women among the Australian Martu also primarily hunt small animals like lizards to feed their children and maintain relations with other women.
thumb|250px|A 19th century engraving of an Indigenous Australian encampment.
At the 1966 "Man the Hunter" conference, anthropologists Richard Borshay Lee and Irven DeVore suggested that egalitarianism was one of several central characteristics of nomadic hunting and gathering societies because mobility requires minimization of material possessions throughout a population. Therefore, no surplus of resources can be accumulated by any single member. Other characteristics Lee and DeVore proposed were flux in territorial boundaries as well as in demographic composition.
At the same conference, Marshall Sahlins presented a paper entitled "Notes on the Original Affluent Society", in which he challenged the popular view of hunter-gatherers' lives as "solitary, poor, nasty, brutish and short," as Thomas Hobbes had put it in 1651.
According to Sahlins, ethnographic data indicated that hunter-gatherers worked far fewer hours and enjoyed more leisure than typical members of industrial society, and they still ate well. Their "affluence" came from the idea that they were satisfied with very little in the material sense.Sahlins, M. (1968). "Notes on the Original Affluent Society", Man the Hunter. R.B. Lee and I. DeVore (New York: Aldine Publishing Company) pp. 85-89. ISBN 0-202-33032-X. See also: Jerome Lewis, "Managing abundance, not chasing scarcity", Radical Anthropology, No.2, 2008, and John Gowdy, "Hunter-Gatherers and the Mythology of the Market", in Lee, Richard B (2005). Cambridge Encyclopedia of Hunters and Gatherers. Later, in 1996, Ross Sackett performed two distinct meta-analyses to empirically test Sahlins's view. The first of these studies looked at 102 time-allocation studies, and the second one analyzed 207 energy-expenditure studies. Sackett found that adults in foraging and horticultural societies work, on average, about 6.5 hours a day, whereas people in agricultural and industrial societies work on average 8.8 hours a day.Sackett, R. 1996. "Time, energy, and the indolent savage. A quantitative cross-cultural test of the primitive affluence hypothesis". Ph.D. diss., University of California, Los Angeles.
Recent research also indicates that the life-expectancy of hunter-gatherers is surprisingly high.
Mutual exchange and sharing of resources (i.e., meat gained from hunting) are important in the economic systems of hunter-gatherer societies. Therefore, these societies can be described as based on a "gift economy."
Variability
Hunter-gatherer societies manifest significant variability, depending on climate zone/life zone, available technology and societal structure. Archaeologists examine hunter-gatherer tool kits to measure variability across different groups. Collard et al. (2005) found temperature to be the only statistically significant factor to impact hunter-gatherer tool kits. Using temperature as a proxy for risk, Collard et al.'s results suggest that environments with extreme temperatures pose a threat to hunter-gatherer systems significant enough to warrant increased variability of tools. These results support Torrence's (1989) theory that risk of failure is indeed the most important factor in determining the structure of hunter-gatherer toolkits.
One way to divide hunter-gatherer groups is by their return systems. James Woodburn uses the categories "immediate return" for egalitarian hunter-gatherers and "delayed return" for nonegalitarian ones. Immediate-return foragers consume their food within a day or two after they procure it, while delayed-return foragers store the surplus food (Kelly, 31).
Hunting-gathering was the common human mode of subsistence throughout the Paleolithic, but the observation of current-day hunters and gatherers does not necessarily reflect Paleolithic societies; the hunter-gatherer cultures examined today have had much contact with modern civilization and do not represent "pristine" conditions found in uncontacted peoples.
The transition from hunting and gathering to agriculture is not necessarily a one-way process. It has been argued that hunting and gathering represents an adaptive strategy, which may still be exploited, if necessary, when environmental change causes extreme food stress for agriculturalists. In fact, it is sometimes difficult to draw a clear line between agricultural and hunter-gatherer societies, especially since the widespread adoption of agriculture and resulting cultural diffusion that has occurred in the last 10,000 years. This anthropological view has remained unchanged since the 1960s.
Some scholars now speak of the existence within cultural evolution of so-called mixed economies or dual economies, which involve a combination of food procurement (gathering and hunting) and food production, or in which foragers have trade relations with farmers.Svizzero, S.; Tisdell, C. The Persistence of Hunting and Gathering Economies // Social Evolution & History. Volume 14, Number 2 / September 2015
Modern and revisionist perspectives
thumb|280px|right|A Shoshone encampment in the Wind River Mountains of Wyoming, photographed by Percy Jackson, 1870
In the early 1980s, a small but vocal segment of anthropologists and archaeologists attempted to demonstrate that contemporary groups usually identified as hunter-gatherers do not, in most cases, have a continuous history of hunting and gathering, and that in many cases their ancestors were agriculturalists and/or pastoralists who were pushed into marginal areas as a result of migrations, economic exploitation, and/or violent conflict (see, for example, the Kalahari Debate). The result of their effort has been the general acknowledgement that there has been complex interaction between hunter-gatherers and non-hunter-gatherers for millennia.
Some of the theorists who advocate this "revisionist" critique imply that, because the "pure hunter-gatherer" disappeared not long after colonial (or even agricultural) contact began, nothing meaningful can be learned about prehistoric hunter-gatherers from studies of modern ones (Kelly, 24–29; see Wilmsen).
Lee and Guenther have rejected most of the arguments put forward by Wilmsen. Doron Shultziner and others have argued that we can learn a lot about the life-styles of prehistoric hunter-gatherers from studies of contemporary hunter-gatherers—especially their impressive levels of egalitarianism.
Many hunter-gatherers consciously manipulate the landscape through cutting or burning undesirable plants while encouraging desirable ones, some even going to the extent of slash-and-burn to create habitat for game animals. These activities are on an entirely different scale to those associated with agriculture, but they are nevertheless domestication on some level. Today, almost all hunter-gatherers depend to some extent upon domesticated food sources either produced part-time or traded for products acquired in the wild.
Some agriculturalists also regularly hunt and gather (e.g., farming during the frost-free season and hunting during the winter). Still others in developed countries go hunting, primarily for leisure. In the Brazilian rainforest, those groups that recently did, or even continue to, rely on hunting and gathering techniques seem to have adopted this lifestyle, abandoning most agriculture, as a way to escape colonial control and as a result of the introduction of European diseases reducing their populations to levels where agriculture became difficult.
250px|right|thumb|Three Indigenous Australians on Bathurst Island in 1939. According to Peterson (1998), the island's population was isolated for 6,000 years until the eighteenth century. In 1929, three-quarters of the population supported themselves off the bush.
There are nevertheless a number of contemporary hunter-gatherer peoples who, after contact with other societies, continue their ways of life with very little external influence or with modifications that perpetuate the viability of hunting and gathering in the 21st century. One such group is the Pila Nguru (Spinifex people) of Western Australia, whose habitat in the Great Victoria Desert has proved unsuitable for European agriculture (and even pastoralism). Another are the Sentinelese of the Andaman Islands in the Indian Ocean, who live on North Sentinel Island and to date have maintained their independent existence, repelling attempts to engage with and contact them. The Savanna Pumé of Venezuela also live in an area that is inhospitable to large scale economic exploitation and maintain their subsistence based on hunting and gathering, as well as incorporating a small amount of manioc horticulture that supplements, but is not replacing, reliance on foraged foods.
Americas
See also: Paleo-Indians period (Canada) and History of Mesoamerica (Paleo-Indian)
Evidence suggests big-game hunter-gatherers crossed the Bering Strait from Asia (Eurasia) into North America over a land bridge (Beringia) that existed between 47,000 and 14,000 years ago. Around 18,500–15,500 years ago, these hunter-gatherers are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched between the Laurentide and Cordilleran ice sheets. Another proposed route is that, either on foot or using primitive boats, they migrated down the Pacific coast to South America.
Hunter-gatherers would eventually flourish all over the Americas, primarily based in the Great Plains of the United States and Canada, with offshoots as far east as the Gaspé Peninsula on the Atlantic coast, and as far south as Monte Verde in Chile. American hunter-gatherers were spread over a wide geographical area, so there were regional variations in lifestyles. However, all the individual groups shared a common style of stone tool production, making knapping styles and progress identifiable. Lithic reduction tool adaptations of this early Paleo-Indian period have been found across the Americas, utilized by highly mobile bands consisting of approximately 25 to 50 members of an extended family.
The Archaic period in the Americas saw a changing environment featuring a warmer, more arid climate and the disappearance of the last megafauna. The majority of population groups at this time were still highly mobile hunter-gatherers, but now individual groups started to focus on resources available to them locally; thus, with the passage of time, there is a pattern of increasing regional generalization, as in the Southwest, Arctic, Poverty Point, Dalton and Plano traditions. These regional adaptations would become the norm, with less reliance on hunting and gathering and a more mixed economy of small game, fish, seasonally gathered wild vegetables and harvested plant foods.
See also
Modern hunter-gatherers and other Nomads
Cro-Magnon
Homo floresiensis
Human migration
History of the world
Indigenous peoples
Neanderthals
Neolithic Revolution
Origins of society
Paleolithic
Prehistoric music
Primitive skills
Stateless society
Tribe
Clan
Modern hunter-gatherer groups
thumb|upright|Negritos in the Philippines, 1595
Aeta people
Aka people
Andamanese people
Araweté people
Awá-Guajá people
Batek people
Efé people
Fuegians
Hadza people
Indigenous Australians
Indigenous peoples of the Pacific Northwest Coast
Inuit culture
Iñupiat
Jarawa people (Andaman Islands)
Kawahiva people
Maniq people
Mbuti people
Mlabri people
Moriori people
Nukak people
Onge people
Penan people
Pirahã people
San people
Semang people
Sentinelese people
Spinifex People
Tjimba people
Uncontacted peoples
Yaruro people
Ye'kuana people
Yupik peoples
Social movements
Anarcho-primitivism, which strives for the abolishment of civilization and the return to a life in the wild.
Freeganism involves gathering of food (and sometimes other materials) in the context of an urban or suburban environment.
Gleaning involves the gathering of food that traditional farmers have left behind in their fields.
Paleolithic diet, which strives to achieve a diet similar to that of ancient hunter-gatherer groups.
Paleolithic lifestyle, which extends the paleolithic diet to other elements of the hunter-gatherer way of life, such as movement and contact with nature
References
Further reading
Books
(Reviewed in The Montreal Review)
Articles
External links
Nature's Secret Larder - Wild Foods & Hunting Tools.
A wiki dedicated to the scientific study of the diversity of foraging societies without recreating myths
Category:Anthropological categories of peoples
Category:Nomads
Category:Stone Age
Category:Human evolution
Category:Economic systems
Category:Foraging
Hokkien | Hokkien is a group of Southern Min (Min Nan) Chinese dialects spoken throughout Southeastern China, Taiwan, Southeast Asia, and by other overseas Chinese. Hokkien originated in southern Fujian, the Min-speaking province. It is closely related to Teochew, though there is limited mutual intelligibility, and is somewhat more distantly related to Hainanese. Besides Hokkien, there are also other Min and Hakka dialects in Fujian province, most of which are not mutually intelligible with Hokkien.
Hokkien historically served as the lingua franca amongst overseas Chinese communities of all dialects and subgroups in Southeast Asia, and remains today as the most spoken variety of Chinese in the region, including in Singapore, Malaysia, Indonesia, the Philippines and some parts of Indochina.
Names
The term Hokkien is etymologically derived from the Southern Min pronunciation for Fujian (福建), the province from which the language hails. In Southeast Asia and the English press, "Hokkien" is used in common parlance to refer to the Southern Min dialects of southern Fujian, and does not include reference to dialects of other Sinitic branches also present in Fujian such as Eastern Min or Hakka. In Chinese linguistics, these dialects are known by their classification under the Quanzhang Division of Min Nan, which comes from the first characters of the two main Hokkien urban centers of Quanzhou and Zhangzhou. The variety is also known by other terms such as the more general Min Nan or "Southern Min", "Holo" and "Hoklo". "Fujianese" and "Fukienese" are also used, although they are somewhat imprecise.
The term "Hokkien" is not usually used in Mainland China or Taiwan. Conversely "Hokkien" is the referred name in Southeast Asia in both English, Chinese or other languages.
Speakers of Hokkien, particularly those in Southeast Asia, typically refer to Hokkien as a dialect, rather than a language. People in Taiwan most often refer to Hokkien as the "Taiwanese language", with Minnan and Holo also being used and "福建話" (fújiàn huà) is not as common.
Geographic distribution
Hokkien originated in the southern area of Fujian province, an important center for trade and migration, and has since become one of the most common Chinese varieties overseas. The major pole of Hokkien varieties outside of Fujian is Taiwan, where, during the 200 years of Qing dynasty rule, thousands of immigrants from Fujian arrived yearly. The Taiwanese varieties mostly originate from the Quanzhou and Zhangzhou variants, but the Amoy dialect has since emerged as the modern standard for the language.
There are many Hokkien speakers among overseas Chinese in Southeast Asia as well as in the United States (Hoklo Americans). Many ethnic Han Chinese emigrants to the region were Hoklo from southern Fujian, and brought the language to what is now Burma (Myanmar), Indonesia (the former Dutch East Indies) and present-day Malaysia and Singapore (formerly Malaya and the British Straits Settlements). Many of the Hokkien dialects of this region are highly similar to Taiwanese and Amoynese. Hokkien is reportedly the native language of up to 80% of the Chinese people in the Philippines, where it is known locally as Lan-nang or Lán-lâng-oē ("Our people's language"). Hokkien speakers form the largest group of overseas Chinese in Singapore, Malaysia and Indonesia.
Classification
Southern and part of western Fujian is home to four principal Hokkien dialects: Chinchew, Amoy, Chiangchew and Longyan, originating from the cities of Quanzhou, Xiamen, Zhangzhou and Longyan (respectively).
As Xiamen (Amoy) is the principal city of southern Fujian, Amoy is considered the most important, or even the prestige dialect, of Hokkien. It is a hybrid of the Quanzhou and Zhangzhou dialects. It has played an influential role in history, especially in the relations of Western nations with China, and was one of the most frequently learned of all Chinese varieties by Westerners during the second half of the 19th century and the early 20th century.
Like the Amoy dialect, the varieties of Hokkien spoken in Taiwan are hybrids of the Quanzhou and Zhangzhou dialects, and are collectively known as Taiwanese Hokkien or just Taiwanese. Used by a majority of the population, Taiwanese bears much importance from a socio-political perspective, forming the second (and perhaps today most significant) major pole of the language due to the popularity of Taiwanese-language media.
Southeast Asia
The varieties of Hokkien in Southeast Asia originate from these dialects.
The variant spoken by Singaporeans, southern Malaysians and people in Indonesia's Riau and the surrounding islands is from the Quanzhou area. They speak a distinct form of Quanzhou Hokkien called Southern Peninsular Malaysian Hokkien (SPMH).
Among ethnic Chinese inhabitants of Penang, and other states in Northern Malaysia and Medan, with other areas in North Sumatra, Indonesia, a distinct form of Zhangzhou Hokkien has developed. In Penang, it is called Penang Hokkien while across the Malacca Strait in Medan, an almost identical variant is known as Medan Hokkien.
The Philippine variant is mostly from Quanzhou or Amoy (Xiamen), as most of the ancestors of Hokkien speakers in the Philippines came from those areas.
History
Variants of Hokkien dialects can be traced to two sources of origin: Quanzhou and Zhangzhou. Both Amoy and most Taiwanese are based on a mixture of Quanzhou and Zhangzhou dialects, while the rest of the Hokkien dialects spoken in South East Asia are either derived from Quanzhou and Zhangzhou, or based on a mixture of both dialects.
Quanzhou
During the Three Kingdoms period of ancient China, there was constant warfare occurring in the Central Plain of China. Northerners began to enter the Fujian region, causing the region to incorporate parts of northern Chinese dialects. However, the massive migration of northern Han Chinese into the Fujian region mainly occurred after the Disaster of Yongjia. The Jìn court fled from the north to the south, causing large numbers of northern Han Chinese to move into the Fujian region. They brought into Fujian the Old Chinese spoken in the Central Plain of China from the prehistoric era to the 3rd century. This then gradually evolved into the Quanzhou dialect.
Zhangzhou
In 677 (during the reign of Emperor Gaozong), Chen Zheng, together with his son Chen Yuanguang, led a military expedition to suppress a rebellion of the She people. In 885, (during the reign of Emperor Xizong of Tang), the two brothers Wang Chao and Wang Shenzhi, led a military expedition force to suppress the Huang Chao rebellion.
These two waves of migration from the north brought the language of northern Middle Chinese into the Fujian region. This then gradually evolved into the Zhangzhou dialect.
Xiamen
Amoy dialect is the main dialect spoken in the Chinese city of Xiamen and its surrounding regions of Tong'an and Xiang'an, both of which are now included in the greater Xiamen area. This dialect developed in the late Ming dynasty, when Xiamen was increasingly taking over Quanzhou's position as the main port of trade in southeastern China. Quanzhou traders began travelling southwards to Xiamen to carry on their businesses, while Zhangzhou peasants began traveling northwards to Xiamen in search of job opportunities. A need for a common language arose. The Quanzhou and Zhangzhou varieties are similar in many ways (as can be seen from their common origin in Luoyang, Henan), but due to differences in accents, communication could be a problem. Quanzhou businessmen considered their speech to be the prestige accent and considered Zhangzhou's to be a village dialect. Over the centuries, dialect leveling occurred and the two speeches mixed to produce the Amoy dialect.
Early sources
Several playscripts survive from the late 16th century, written in a mixture of Quanzhou and Chaozhou dialects. The most important is the Romance of the Litchi Mirror, with extant manuscripts dating from 1566 and 1581.
In the early 17th century, Spanish missionaries in the Philippines produced materials documenting the Hokkien varieties spoken by the Chinese trading community who had settled there in the late 16th century:
Diccionarium Sino-Hispanicum (1604), a Spanish-Hokkien dictionary, giving equivalent words, but not definitions.
Doctrina Christiana en letra y lengua china (1607), a Hokkien translation of the Doctrina Christiana.
Bocabulario de la lengua sangleya (c. 1617), a Spanish-Hokkien dictionary, with definitions.
Arte de la Lengua Chiõ Chiu (1620), a grammar written by a Spanish missionary in the Philippines.
These texts appear to record a Zhangzhou dialect, from the area of Haicheng (an old port that is now part of Longhai).
Chinese scholars produced rhyme dictionaries describing Hokkien varieties at the beginning of the 19th century:
Huìyīn Miàowù (彙音妙悟 "Understanding of the collected sounds") was written around 1800 by Huang Qian (黃謙), and describes the Quanzhou dialect. The oldest extant edition dates from 1831.
Huìjí yǎsútōng shíwǔyīn (彙集雅俗通十五音 "Compilation of the fifteen elegant and vulgar sounds") by Xie Xiulan (謝秀嵐) describes the Zhangzhou dialect. The oldest extant edition dates from 1818.
Walter Henry Medhurst based his 1832 dictionary on the latter work.
Phonology
Hokkien has one of the most diverse phoneme inventories among Chinese varieties, with more consonants than Standard Mandarin or Cantonese. Vowels are more or less similar to those of Standard Mandarin. Hokkien varieties retain many pronunciations that are no longer found in other Chinese varieties. These include the retention of the /t/ initial, which is now /tʂ/ (Pinyin 'zh') in Mandarin (e.g. 'bamboo' 竹 is tik, but zhú in Mandarin), having disappeared before the 6th century in other Chinese varieties.
Initials
Southern Min has aspirated, unaspirated as well as voiced consonant initials. For example, the words khui ("open") and kuiⁿ ("close") have the same vowel but differ only in aspiration of the initial and nasality of the vowel. In addition, Southern Min has labial initial consonants such as m in m̄-sī ("is not").
Another example is cha-po͘-kiáⁿ ("boy") and cha-bó͘-kiáⁿ ("girl"), which differ in the second syllable in consonant voicing and in tone.
Finals
Unlike Mandarin, Southern Min retains all the final consonants corresponding to those of Middle Chinese. While Mandarin only preserves the n and ŋ finals, Southern Min also preserves the m, p, t and k finals and developed the ʔ (glottal stop).
Vowels
The following table illustrates some of the more commonly seen vowel shifts. Characters with the same vowel are shown in parentheses.
English gloss and Chinese character (characters with the same vowel in parentheses), with the Pe̍h-ōe-jī reading in each accent and the Teochew Peng'im form for comparison:
two 二 – Quanzhou, Taipei: lī; Xiamen, Zhangzhou, Tainan: jī; Teochew: jĭ (for Teochew Peng'im on the word 'two', ri6 can also be written as dzi6)
sick 病 (生) – Quanzhou, Xiamen, Taipei: pīⁿ; Zhangzhou, Tainan: pēⁿ; Teochew: pēⁿ
egg 卵 (遠) – Quanzhou, Xiamen, Taiwan: nn̄g; Zhangzhou: nūi; Teochew: nn̆g
chopsticks 箸 (豬) – Quanzhou: tīr; Xiamen: tū; Zhangzhou, Taiwan: tī; Teochew: tēu
shoes 鞋 (街) – Quanzhou, Xiamen, Taipei: oê; Zhangzhou, Tainan: ê; Teochew: ôi
leather 皮 (未) – Quanzhou: phêr; Xiamen, Taipei: phê; Zhangzhou, Tainan: phôe; Teochew: phuê
chicken 雞 (細) – Quanzhou, Xiamen, Taipei: koe; Zhangzhou, Tainan: ke; Teochew: koi
hair 毛 (兩) – Quanzhou, Taiwan, Xiamen: mng; Zhangzhou: mo; Teochew: mo
return 還 – Quanzhou: hoan; Xiamen: hai; Zhangzhou, Taiwan: hing; Teochew: huêng
speech 話 (花) – Quanzhou, Taiwan: oe; Zhangzhou: oa
Tones
In general, Hokkien dialects have 5 to 7 phonemic tones. According to the traditional Chinese system, however, there are 7 to 9 tones if the two additional entering tones (see the discussion on Chinese tone). Tone sandhi is extensive. There are minor variations between the Quanzhou and Zhangzhou tone systems. Taiwanese tones follow the patterns of Amoy or Quanzhou, depending on the area of Taiwan. Many dialects have an additional phonemic tone ("tone 9" according to the traditional reckoning), used only in special or foreign loan words.
Tone categories (traditional): 陰平 (1), 陽平 (5), 陰上 (2), 陽上 (6), 陰去 (3), 陽去 (7), 陰入 (4), 陽入 (8). Tone values (調值) by dialect, listed in that order:
Xiamen, Fujian: 44, 24, 53, –, 21, 22, 32, 4 (examples: 東 taŋ1, 銅 taŋ5, 董 taŋ2, 凍 taŋ3, 動 taŋ7, 觸 tak4, 逐 tak8)
Taipei, Taiwan: 44, 24, 53, –, 11, 33, 32, 4
Tainan, Taiwan: 44, 23, 41, –, 21, 33, 32, 44
Zhangzhou, Fujian: 34, 13, 53, –, 21, 22, 32, 121
Quanzhou, Fujian: 33, 24, 55, 22, 41, –, 5, 24
Penang, Malaysia: 33, 23, 445, –, 21, –, 3, 4 (https://www.academia.edu/5132554/Complete_and_not-so-complete_tonal_neutralization_in_Penang_Hokkien)
Comparison
The Amoy dialect (Xiamen) is a hybrid of the Quanzhou and Zhangzhou dialects. Taiwanese is also a hybrid of these two dialects. Taiwanese in northern and coastal Taiwan tends to be based on the Quanzhou variety, whereas the Taiwanese spoken in central Taiwan tends to be based on Zhangzhou speech. There are minor variations in pronunciation and vocabulary between Quanzhou and Zhangzhou dialects. The grammar is generally the same. Additionally, extensive contact with the Japanese language has left a legacy of Japanese loanwords in Taiwanese Hokkien. On the other hand, the variants spoken in Singapore and Malaysia have a substantial number of loanwords from Malay and to a lesser extent, from English and other Chinese varieties, such as the closely related Teochew and some Cantonese.
Penang Hokkien and Medan Hokkien are based on the Zhangzhou dialect, whereas Southern Peninsular Malaysian Hokkien is based on the Quanzhou dialect.
Mutual intelligibility
The Quanzhou dialect, Xiamen dialect, Zhangzhou dialect, Taiwanese, Penang Hokkien and Singaporean Hokkien are mutually intelligible.
The Min Nan varieties of Teochew and Amoy are 84% phonetically similar, and 34% lexically similar, whereas Mandarin and Amoy Min Nan are 62% phonetically similar and 15% lexically similar. In comparison, German and English are 60% lexically similar.
Hainanese, which is sometimes considered Southern Min, has almost no mutual intelligibility with any form of Hokkien.
Grammar
Hokkien is an analytic language; in a sentence, the arrangement of words is important to its meaning. A basic sentence follows the subject–verb–object pattern (i.e. a subject is followed by a verb then by an object), though this order is often violated because Hokkien dialects are topic-prominent. Unlike synthetic languages, seldom do words indicate time, gender and plural by inflection. Instead, these concepts are expressed through adverbs, aspect markers, and grammatical particles, or are deduced from the context. Different particles are added to a sentence to further specify its status or intonation.
A verb itself indicates no grammatical tense. The time can be explicitly shown with time-indicating adverbs. Certain exceptions exist, however, according to the pragmatic interpretation of a verb's meaning. Additionally, an optional aspect particle can be appended to a verb to indicate the state of an action. Appending interrogative or exclamative particles to a sentence turns a statement into a question or shows the attitudes of the speaker.
Hokkien dialects preserve certain grammatical reflexes and patterns reminiscent of the broad stage of Archaic Chinese. This includes the serialization of verb phrases (direct linkage of verbs and verb phrases) and the infrequency of nominalization, both similar to Archaic Chinese grammar.
?
You-go-buy-have watch-no (Gloss)
"Did you go to buy a watch?"
Choice of grammatical function words also varies significantly among the Hokkien dialects. For instance, 乞 khit (denoting the causative, passive or dative) is retained in Jinjiang (also unique to the Jinjiang dialect is 度 thoo) and in Jieyang, but not in Longxi and Xiamen, whose dialects use 互 (hoo) instead.
Pronouns
Hokkien dialects differ in their preferred choice of pronouns. For instance, while the second person pronoun lí (你) is standard in Taiwanese Hokkien, the Teochew loanword lú (汝) is more common among Hokkien-speaking communities in Southeast Asia. The plural personal pronouns tend to be nasalized forms of the singular ones. Personal pronouns found in the Hokkien dialects are listed below:
First person – singular: 我 góa; plural exclusive: 阮 gún, góan; plural inclusive: 咱 lán or 俺 án; also 我儂 góa-lâng
Second person – singular: 你 lí or 汝 lú; plural: 恁 lín, 恁儂 lín-lâng
Third person – singular: 伊 i; plural: 𪜶 in, 伊儂 i-lâng
The forms with 儂 (-lâng) are typically suffixed in Southeast Asian Hokkien dialects.
Possessive pronouns are marked by the particle ê (的), or its literary version chi (之). Plural pronouns are typically unmarked (the nasalized final serves as the possessive indicator):
。
"My husband's surname is Tan."
Reflexive pronouns are made by appending the pronouns ka-kī, ka-tī (家己) or chū-kí (自己).
Hokkien dialects use a variety of differing demonstrative pronouns, which are as follows:
this - che (這, 即), chit-ê (這個, 即個)
that - he (許, 彼), hit-ê (彼個)
here - chia (者), hia/hiâ (遮, 遐), chit-tau (這兜)
there - hia (許, 遐), hit-tau (彼兜)
The interrogative pronouns are:
what - siáⁿ-mih (啥物), sīm-mi̍h (甚麼)
when - tī-sî (底時), kī-sî (幾時), tang-sî (當時), sīm-mi̍h-sî-chūn (甚麼時陣)
where - to-lo̍h (倒落), tó-uī (佗位, 叨位)
who - siáⁿ-lâng (啥人) or siáⁿ (啥)
why - án-chóaⁿ (按怎), khah (盍)
how - án-chóaⁿ (按怎) lû-hô (如何) chóaⁿ-iūⁿ (怎樣)
Copula ("to be")
States and qualities are generally expressed using stative verbs that do not require the verb "to be":
。
"I am hungry." (lit. I-stomach-hungry)
With noun complements, the verb sī (是) serves as the verb "to be".
。
"Yesterday was the Mid-Autumn festival."
To indicate location, the words tī (佇) tiàm (踮), teh/leh (咧), which are collectively known as the locatives or sometimes coverbs in Chinese linguistics, are used to express "(to be) at":
。
"I am here waiting for you."
。
"He's sleeping at home now."
Negation
Hokkien dialects have a variety of negation particles that are prefixed or affixed to the verbs they modify. There are five primary negation particles in Hokkien dialects:
m̄ (毋, 呣, 唔)
bē, bōe (袂, 未)
mài (莫, 勿)
bô (無)
put (不) - literary
Other negative particles include:
biàu (嫑) - a contraction of bô iàu (無要), as in biàu-kín (嫑緊)
bàng (甭)
bián (免)
thài (汰)
The particle m̄ (毋, 呣, 唔) is general and can negate almost any verb:
。
"He cannot read." (lit. he-not-know-word)
The particle mài (莫, 勿), a concatenation of m-ài (毋愛) is used to negate imperative commands:
!
"Don't speak!"
The particle bô (無) indicates the past tense:
。
"He did not eat."
The verb 'to have', ū (有) is replaced by bô (無) when negated (not 無有):
。
"He does not have any money."
The particle put (不) is used infrequently, mostly found in literary compounds and phrases:
。
"He is truly unfilial."
Vocabulary
The majority of Hokkien vocabulary is monosyllabic. Many Hokkien words have cognates in other Chinese varieties. That said, there are also many indigenous words that are unique to Hokkien and are potentially not of Sino-Tibetan origin, while others are shared by all the Min dialects (e.g. 'congee' is 糜 mê, bôe, bê, not 粥 zhōu, as in other dialects).
As compared to Standard Chinese (Mandarin), Hokkien dialects prefer to use the monosyllabic form of words, without suffixes. For instance, the Mandarin noun suffix 子 (zi) is not found in Hokkien words, while another noun suffix, 仔 (á) is used in many nouns. Examples are below:
'duck' - 鸭 ah or 鴨仔 ah-á (SC: 鸭子 yāzi)
'color' - 色 sek (SC: 顏色 yán sè)
In other bisyllabic morphemes, the syllables are inverted, as compared to Standard Chinese. Examples include the following:
'guest' - 人客 lâng-kheh (SC: 客人 kèrén)
In other cases, the same word can have different meanings in Hokkien and standard written Chinese. Similarly, depending on the region Hokkien is spoken in, loanwords from local languages (Malay, Tagalog, Burmese, among others), as well as other Chinese dialects (such as Southern Chinese dialects like Cantonese and Teochew), are commonly integrated into the vocabulary of Hokkien dialects.
Literary and colloquial readings
The existence of literary and colloquial readings, called tha̍k-im (讀音), is a prominent feature of some Hokkien dialects and indeed of many Sinitic varieties in the south. The bulk of literary readings (bûn-tha̍k), based on pronunciations of the vernacular during the Tang Dynasty, are mainly used in formal phrases and written language (e.g. philosophical concepts, surnames, and some place names), while the colloquial (or vernacular) ones (pe̍h-tha̍k) are basically used in spoken language and vulgar phrases. Literary readings are more similar to the pronunciations of the Tang standard of Middle Chinese than their colloquial equivalents.
However, some dialects of Hokkien, such as Penang Hokkien as well as Philippine Hokkien overwhelmingly favor colloquial readings. For example, in both Penang Hokkien and Philippine Hokkien, the characters for 'university,' 大學, are pronounced tōa-o̍h (colloquial readings for both characters), instead of the literary reading tāi-ha̍k, which is common in Taiwanese and Mainland Chinese dialects.
The pronounced divergence between literary and colloquial pronunciations found in Hokkien dialects is attributed to the presence of several strata in the Min lexicon. The earliest, colloquial stratum is traced to the Han dynasty (206 BCE - 220 CE); the second colloquial one comes from the period of the Southern and Northern Dynasties (420 - 589 CE); the third stratum of pronunciations (typically literary ones) comes from the Tang Dynasty (618–907 CE) and is based on the prestige dialect of Chang'an (modern day Xi'an), its capital.
Some commonly seen sound correspondences (colloquial → literary) are as follows:
p- ([p-], [pʰ-]) → h ([h-])
ch-, chh- ([ts-], [tsʰ-], [tɕ-], [tɕʰ-]) → s ([s-], [ɕ-])
k-, kh- ([k-], [kʰ-]) → ch ([tɕ-], [tɕʰ-])
-ⁿ ([-ã], [-uã]) → n ([-an])
-h ([-ʔ]) → t ([-t])
i ([-i]) → e ([-e])
e ([-e]) → a ([-a])
ia ([-ia]) → i ([-i])
This table displays some widely used characters in Hokkien that have both literary and colloquial readings:
This feature extends to Chinese numerals, which have both literary and colloquial readings. Literary readings are typically used when the numerals are read out loud (e.g. phone numbers), while colloquial readings are used for counting items.
Numerals, with literary and colloquial readings:
一: literary it, colloquial chi̍t
二: jī, lī
三: literary sam, colloquial saⁿ
四: literary sù, sìr, colloquial sì
五: literary ngó, colloquial gō
六: literary lio̍k, colloquial la̍k
七: chhit
八: literary pat, colloquial peh, poeh
九: literary kiú, colloquial káu
十: literary si̍p, colloquial cha̍p
Semantic differences between Hokkien and Mandarin
Quite a few words from the variety of Old Chinese spoken in the state of Wu (where the ancestral language of Min and Wu dialect families originated and which was likely influenced by the Chinese spoken in the state of Chu which itself was not founded by Chinese speakers), and later words from Middle Chinese as well, have retained the original meanings in Hokkien, while many of their counterparts in Mandarin Chinese have either fallen out of daily use, have been substituted with other words (some of which are borrowed from other languages while others are new developments), or have developed newer meanings. The same may be said of Hokkien as well, since some lexical meaning evolved in step with Mandarin while others are wholly innovative developments.
This table shows some Hokkien dialect words from Classical Chinese, as contrasted to the written Chinese standard, Mandarin:
Meaning – Hokkien (Pe̍h-ōe-jī) – Mandarin (Pinyin):
eye – ba̍k-chiu – yǎnjīng
chopstick – tī, tū – kuàizi
to chase – jiok, lip – zhuī
wet – jūn, lūn – shī
black – o͘ – hēi
book – chheh – shū
For other words, the classical Chinese meanings of certain words, which are retained in Hokkien dialects, have evolved or deviated significantly in other Chinese dialects. The following table shows some words that are both used in both Hokkien dialects and Mandarin Chinese, while the meanings in Mandarin Chinese have been modified:
Each pair gives the Hokkien reading (Pe̍h-ōe-jī) with its meaning, which matches the Classical Chinese sense, followed by the Mandarin reading (Pinyin) with its modern meaning:
cháu "to flee" – zǒu "to walk"
sè, sòe "tiny, small, young" – xì "thin, slender"
tiáⁿ "pot" – dǐng "tripod"
chia̍h "to eat" – shí "food"
kôan "tall, high" – xuán "to hang, to suspend"
chhuì "mouth" – huì "beak"
Words from Minyue
Some commonly used words, shared by all Min Chinese dialects, came from the ancient Minyue languages. Jerry Norman suggested that these languages were Austroasiatic. Some terms are thought to be cognates with words in Tai-Kadai and Austronesian languages. They include the following examples, compared to the Fuzhou dialect, a Min Dong language:
Hokkien (POJ) – Foochow Romanized – meaning:
kha – kă – foot and leg
kiáⁿ – giāng – son, child, whelp, a small amount
khùn – káung – to sleep
phiaⁿ – piăng – back, dorsum
chhù – chuó, chió – home, house
thâi – tài – to kill, to slaughter
bah, mah (no Foochow form given) – meat
suí (no Foochow form given) – beautiful
Loanwords
Loanwords are not unusual among Hokkien dialects, as speakers readily adopted indigenous terms of the languages they came in contact with. As a result, there is a plethora of loanwords that are not mutually comprehensible among Hokkien dialects.
Taiwanese Hokkien, as a result of linguistic contact with Japanese and Formosan languages, contains many loanwords from these languages. Many words have also been formed as calques from Mandarin, and speakers will often directly use Mandarin vocabulary through codeswitching. Among these are the following examples:
'toilet' - piān-só͘ () from Japanese
Other Hokkien variants: 屎礐 (sái-ha̍k), 廁所 (chhek-só͘)
'car' - chū-tōng-chhia () from Japanese
Other Hokkien variants: 風車 (hong-chhia), 汽車 (khì-chhia)
'to admire' - kám-sim () from Japanese
Other Hokkien variants: 感動 (kám-tōng)
'fruit' - chúi-ké / chúi-kóe / chúi-kér (水果) from Mandarin ()
Other Hokkien variants: 果子 (ké-chí / kóe-chí / kér-chí)
Singaporean Hokkien, Penang Hokkien and other Malaysian Hokkien dialects tend to draw loanwords from Malay, English as well as other Chinese dialects, primarily Teochew. Examples include:
'but' - tapi, from Malay
Other Hokkien variants: 但是 (tān-sī)
'doctor' - 老君 lu-gun, from Malay dukun
Other Hokkien variants: 醫生(i-sing)
'stone/rock' - batu, from Malay batu
Other Hokkien variants: 石头(tsio-tau)
'market' - 巴剎 pa-sat, from Malay pasar from Persian bazaar (بازار)
Other Hokkien variants: 市場 (chhī-tiûⁿ)
'they' - 伊儂 i lâng from Teochew (i1 nang5)
Other Hokkien variants: 𪜶 (in)
'together' - 做瓠 chò-bú from Teochew 做瓠 (jo3 bu5)
Other Hokkien variants: 做夥 (chò-hóe), 同齊 (tâng-chê) or 鬥陣 (tàu-tīn)
'soap' - sap-bûn, from Malay sabun, from Arabic ṣābūn (صابون).http://banlam.tawa.asia/2012/10/soap-feizhao-hokkien-sabun.html
Other Hokkien variants: 茶箍
Philippine Hokkien dialects, as a result of centuries-old contact with both Philippine language and Spanish also incorporate words from these languages. Examples include:
'cup' - ba-su, from Spanish vaso and Tagalog baso
Other Hokkien variants: 杯子 (poe-á)
'office' - o-pi-sin, from Spanish oficina and Tagalog opisina
Other Hokkien variants: 辦公室 (pān-kong-sek)
'soap' - sa-bun, from Spanish jabon and Tagalog sabon
Other Hokkien variants:
'but' - ka-so, from Tagalog kaso
Other Hokkien variants: 但是 (tan-si), (em-ko)
Standard Hokkien
Hokkien originated from Quanzhou.http://www.taiwan.cn/twzlk/twgk/yywz/200512/t20051226_222977.htm After the Opium War ended in 1842, Xiamen (Amoy) became one of the major treaty ports to be opened for trade with the outside world. From the mid-19th century onwards, Xiamen slowly developed to become the political and economic center of the Hokkien-speaking region in China. This caused the Amoy dialect to gradually supplant the Quanzhou and Zhangzhou variants in prestige. From the mid-19th century until the end of World War II, Western diplomats usually learned Amoy as the preferred dialect if they were to communicate with the Hokkien-speaking populace in China or South-East Asia. In the 1940s and 1950s, Taiwan also held Amoy Hokkien as its standard and tended to incline towards the Amoy dialect.
However, from the 1980s onwards, the development of Hokkien pop music and media industry in Taiwan caused the Hokkien cultural hub to shift from Xiamen to Taiwan. The flourishing Hokkien entertainment and media industry from Taiwan in the 1990s and early 21st century led Taiwan to emerge as the new significant cultural hub for Hokkien.
In the 1990s, marked by the liberalization of language development and the mother tongue movement in Taiwan, Taiwanese Hokkien underwent a fast pace of development. In 1993, Taiwan became the first region in the world to implement the teaching of Taiwanese Hokkien in Taiwanese schools. In 2001, the local Taiwanese language program was further extended to all schools in Taiwan, and Taiwanese Hokkien became one of the compulsory local Taiwanese languages to be learned in schools. The mother tongue movement in Taiwan even influenced Xiamen (Amoy) to the point that in 2010, Xiamen also began to implement the teaching of the Hokkien dialect in its schools.有感于厦门学校“闽南语教学进课堂”_博客臧_新浪博客 In 2007, the Ministry of Education in Taiwan also completed the standardization of Chinese characters used for writing Hokkien and developed Tai-lo as the standard Hokkien pronunciation and romanization guide. A number of universities in Taiwan also offer Taiwanese degree courses for training Hokkien-fluent talent to work for the Hokkien media industry and education. Taiwan also has its own Hokkien literary and cultural circles whereby Hokkien poets and writers compose poetry or literature in Hokkien.
Thus by the 21st century, Taiwan has truly emerged as one of the most significant Hokkien cultural hubs of the world. The historical changes and development in Taiwan led Taiwanese Hokkien to become the more influential pole of the Hokkien dialect after the mid-20th century. Today, the Taiwanese prestige dialect (Taiyu Youshiqiang/Tongxinqiang, 台語優勢腔/通行腔) is based on the Tainan variant and is heard on Taiwanese Hokkien media.
Writing systems
Chinese script
Hokkien dialects are typically written using Chinese characters (漢字, Hàn-jī). However, the written script was and remains adapted to the literary form, which is based on classical Chinese, not the vernacular and spoken form. Furthermore, the character inventory used for Mandarin (standard written Chinese) does not correspond to Hokkien words, and there are a large number of informal characters (替字, thè-jī or thòe-jī; 'substitute characters') which are unique to Hokkien (as is the case with Cantonese). For instance, about 20 to 25% of Taiwanese morphemes lack an appropriate or standard Chinese character.
While most Hokkien morphemes have standard designated characters, they are not always etymological or phono-semantic. Similar-sounding, similar-meaning or rare characters are commonly borrowed or substituted to represent a particular morpheme. Examples include "beautiful" (美 bí is the literary form), whose vernacular morpheme suí is represented by characters like 媠 (an obsolete character), 婎 (a vernacular reading of this character) and even 水 (transliteration of the sound suí), or "tall" (高 ko is the literary form), whose morpheme kôan is 懸. Common grammatical particles are not exempt; the negation particle m̄ (not) is variously represented by 毋, 呣 or 唔, among others. In other cases, characters are invented to represent a particular morpheme (a common example is the character 𪜶 in, which represents the personal pronoun "they"). In addition, some characters have multiple and unrelated pronunciations, adapted to represent Hokkien words. For example, the Hokkien word bah ("meat") has been reduced to the character 肉, which has etymologically unrelated colloquial and literary readings (he̍k and jio̍k, respectively). Another case is the word 'to eat,' chia̍h, which is often transcribed in Taiwanese newspapers and media as 呷 (a Mandarin transliteration, xiā, to approximate the Hokkien term), even though its recommended character in dictionaries is 食.
Moreover, unlike Cantonese, Hokkien does not have a universally accepted standardized character set. Thus, there is some variation in the characters used to express certain words and characters can be ambiguous in meaning. In 2007, the Ministry of Education of the Republic of China formulated and released a standard character set to overcome these difficulties. These standard Chinese characters for writing Taiwanese Hokkien are now taught in schools in Taiwan.
Latin script
Hokkien, especially Taiwanese Hokkien, is sometimes written in the Latin script using one of several alphabets. Of these the most popular is POJ, developed first by Presbyterian missionaries in China and later by the indigenous Presbyterian Church in Taiwan. Use of this script and orthography has been actively promoted since the late 19th century. The use of a mixed script of Han characters and Latin letters is also seen, though remains uncommon. Other Latin-based alphabets also exist.
Min Nan texts, all Hokkien, can be dated back to the 16th century. One example is the Doctrina Christiana en letra y lengua china, presumably written after 1587 by the Spanish Dominicans in the Philippines. Another is a Ming Dynasty script of a play called Tale of the Lychee Mirror (1566), supposedly the earliest Southern Min colloquial text, although it is written in Teochew dialect.
Taiwan has developed a Latin alphabet for Taiwanese Hokkien, derived from POJ, known as Tai-lo. Since 2006, it has been officially promoted by Taiwan's Ministry of Education and taught in Taiwanese schools. Xiamen University has also developed an alphabet based on Pinyin called Bbánlám pìngyīm.
Computing
thumb|The character for the third person pronoun (they) in some Hokkien dialects, 𪜶 (in), is now supported by the Unicode Standard at U+2A736.
Hokkien is registered as "Southern Min" per RFC 3066 as zh-min-nan.
When writing Hokkien in Chinese characters, some writers create 'new' characters when they consider it impossible to use directly or borrow existing ones; this corresponds to similar practices in character usage in Cantonese, Vietnamese chữ nôm, Korean hanja and Japanese kanji. Some of these are not encoded in Unicode (or the corresponding ISO/IEC 10646: Universal Character Set), thus creating problems in computer processing.
All Latin characters required by Pe̍h-ōe-jī can be represented using Unicode (or the corresponding ISO/IEC 10646: Universal Character Set), using precomposed or combining (diacritics) characters. Prior to June 2004, the vowel akin to but more open than o, written with a dot above right, was not encoded. The usual workaround was to use the (stand-alone; spacing) character Interpunct (U+00B7, ·) or less commonly the combining character dot above (U+0307). As these are far from ideal, since 1997 proposals have been submitted to the ISO/IEC working group in charge of ISO/IEC 10646—namely, ISO/IEC JTC1/SC2/WG2—to encode a new combining character dot above right. This is now officially assigned to U+0358 (see documents N1593, N2507, N2628,
N2699, and N2713). Font support is expected to follow.
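The encoding details above can be made concrete with a short script. The following is a minimal sketch in Python (the choice of language is mine, not the article's): it composes the POJ vowel using the combining dot above right (U+0358) alongside the older interpunct and combining-dot-above workarounds, and shows that the pronoun character U+2A736 lies outside the Basic Multilingual Plane.

import unicodedata

# POJ vowel "o with dot above right": the recommended encoding appends the
# combining character U+0358; older texts fell back on the stand-alone
# interpunct U+00B7 or the combining dot above U+0307.
recommended = "o\u0358"
workarounds = ["o\u00B7", "o\u0307"]

for s in [recommended] + workarounds:
    names = " + ".join(unicodedata.name(ch) for ch in s)
    print(repr(s), "->", names)

# The pronoun character 𪜶 (U+2A736) sits outside the Basic Multilingual
# Plane, so it occupies one code point but two UTF-16 code units.
in_char = "\U0002A736"
print(len(in_char), "code point;", len(in_char.encode("utf-16-le")) // 2, "UTF-16 code units")

Whether the composed sequences actually display correctly still depends on font support, as noted above.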
Cultural and political role
Hokkien (or Min Nan) can trace its roots through the Tang Dynasty and even further back to the people of the Baiyue, the indigenous non-Han people of modern-day southern China. Min Nan (Hokkien) people call themselves "Tang people", which is synonymous with "Chinese people". Because of the widespread influence of Tang culture during the great Tang dynasty, there are today still many Min Nan pronunciations of words shared by the Vietnamese, Korean and Japanese languages.
In 2002, the Taiwan Solidarity Union, a party with about 10% of the Legislative Yuan seats at the time, suggested making Taiwanese a second official language.http://www.taipeitimes.com/News/front/archives/2002/03/10/0000127068 This proposal encountered strong opposition not only from Mainlander groups but also from Hakka and Taiwanese aboriginal groups who felt that it would slight their home languages, as well as others including Hoklo who objected to the proposal on logistical grounds and on the grounds that it would increase ethnic tensions. Because of these objections, support for this measure was lukewarm among moderate Taiwan independence supporters, and the proposal did not pass.
English – Chinese characters – Mandarin Chinese – Taiwanese Hokkien – Korean – Vietnamese – Japanese:
Book – 冊 – Cè – Chheh – Chaek – Tập/Sách – Saku/Satsu/Shaku
Bridge – 橋 – Qiáo – Kiô – Kyo – Cầu/Kiều – Kyō
Dangerous – 危險 – Wēixiǎn – Guî-hiám – Wiheom – Nguy hiểm – Kiken
Flag – 旗 – Qí – Kî – Ki – Cờ/Kỳ – Ki
Insurance – 保險 – Bǎoxiǎn – Pó-hiám – Boheom – Bảo hiểm – Hoken
News – 新聞 – Xīnwén – Sin-bûn – Shinmun – Tân Văn – Shinbun
Student – 學生 – Xuéshēng – Ha̍k-seng – Haksaeng – Học sinh – Gakusei
University – 大學 – Dàxué – Tāi-ha̍k (Tōa-o̍h) – Daehak – Đại học – Daigaku
See also
Penang Hokkien
Taiwanese Hokkien
Medan Hokkien
Singaporean Hokkien
Amoy dialect
Lan-nang (Philippine dialect of Hokkien)
Teochew dialect
Languages of China
Languages of Taiwan
Amoy Min Nan Swadesh list
Notes
References
Further reading
An analysis and facsimile of the Arte de la Lengua Chio-chiu (1620), the oldest extant grammar of Hokkien.
External links
A playscript from the late 16th century.
Hokkien translation of the Doctrina Christiana.
A manual for learning Hokkien written by a Spanish missionary in the Philippines.
The oldest known rhyme dictionary of a Zhangzhou dialect.
當代泉州音字彙, a dictionary of Quanzhou speech
Voyager - Spacecraft - Golden Record - Greetings From Earth - Amoy, includes translation and sound clip
(The voyager clip says: 太空朋友,恁好。恁食飽未?有閒著來阮遮坐哦!)
Category:Southern Min-language dialects
Category:Languages of China
Category:Languages of Taiwan
Category:Languages of Malaysia
Category:Languages of the Philippines
Economy of Greece | The economy of Greece is the 46th largest in the world with a nominal gross domestic product (GDP) of $194.851 billion per annum. It is also the 54th largest in the world by purchasing power parity, at $288.245 billion per annum. As of 2015, Greece is the fifteenth-largest economy in the 28-member European Union. Greece is ranked 38th and 45th in the world at $17,988 and $26,391 for nominal GDP per capita and purchasing power parity per capita respectively.
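As a quick arithmetic check of the headline figures (a reader's illustration in Python, not part of the source data), dividing total GDP by GDP per capita gives the implied population, which comes out near Greece's roughly 10.8 million inhabitants for both the nominal and the PPP figures:

# Figures quoted above, in US dollars.
nominal_gdp = 194.851e9
nominal_per_capita = 17988
ppp_gdp = 288.245e9
ppp_per_capita = 26391

print(f"Implied population (nominal): {nominal_gdp / nominal_per_capita:,.0f}")
print(f"Implied population (PPP):     {ppp_gdp / ppp_per_capita:,.0f}")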
Greece is a developed country with an economy based on the service (82.8%) and industrial sectors (13.3%). The agricultural sector contributed 3.9% of national economic output in 2015. Important Greek industries include tourism and shipping. With 18 million international tourists in 2013, Greece was the 7th most visited country in the European Union and 16th in the world. The Greek Merchant Navy is the largest in the world, with Greek-owned vessels accounting for 15% of global deadweight tonnage as of 2013. The increased demand for international maritime transportation between Greece and Asia has resulted in unprecedented investment in the shipping industry.
The country is a significant agricultural producer within the EU. Greece has the largest economy in the Balkans and is an important regional investor. Greece was the largest foreign investor in Albania in 2013, the third-largest in Bulgaria, among the top three in Romania and Serbia, and the most important trading partner and largest foreign investor in the former Yugoslav Republic of Macedonia. The Greek telecommunications company OTE has become a strong investor in former Yugoslavia and in other Balkan countries.
Greece is classified as an advanced, high-income economy, and was a founding member of the Organisation for Economic Co-operation and Development (OECD) and of the Organization of the Black Sea Economic Cooperation (BSEC). The country joined what is now the European Union in 1981. In 2001 Greece adopted the euro as its currency, replacing the Greek drachma at an exchange rate of 340.75 drachmae per euro. Greece is a member of the International Monetary Fund and of the World Trade Organization, and ranked 34th on Ernst & Young's Globalization Index 2011.
World War II (1939-1945) devastated the country's economy, but the high levels of economic growth that followed from 1950 to 1980 have been called the Greek economic miracle. From 2000 Greece saw high levels of GDP growth above the Eurozone average, peaking at 5.8% in 2003 and 5.7% in 2006. The subsequent Great Recession and Greek government-debt crisis, a central focus of the wider European debt crisis, plunged the economy into a sharp downturn, with real GDP growth rates of −0.3% in 2008, −4.3% in 2009, −5.5% in 2010, −9.1% in 2011, −7.3% in 2012 and −3.2% in 2013. In 2011, the country's public debt reached €356 billion (172% of nominal GDP). After negotiating the biggest debt restructuring in history with the private sector, Greece reduced its sovereign debt burden to €280 billion (137% of GDP) in the first quarter of 2012. Greece achieved a real GDP growth rate of 0.7% in 2014 after 6 years of economic decline, but fell back into recession in 2015.
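The annual growth rates quoted above compound into a large cumulative contraction. The short Python sketch below (an illustration using only the 2008-2013 rates listed in this paragraph) shows the arithmetic; it puts the cumulative fall in real GDP over those six years at roughly a quarter of the 2007 level:

# Real GDP growth rates for 2008-2013 as quoted above, in percent.
growth = {2008: -0.3, 2009: -4.3, 2010: -5.5, 2011: -9.1, 2012: -7.3, 2013: -3.2}

level = 1.0  # index real GDP at the end of 2007 as 1.0
for year in sorted(growth):
    level *= 1 + growth[year] / 100
    print(f"{year}: {level:.3f} of the 2007 level")

print(f"Cumulative contraction 2008-2013: {(1 - level) * 100:.1f}%")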
History
The evolution of the Greek economy during the 19th century (a period that transformed a large part of the world because of the Industrial Revolution) has been little researched. Recent research from 2006 examines the gradual development of industry and further development of shipping in a predominantly agricultural economy, calculating an average rate of per capita GDP growth between 1833 and 1911 that was only slightly lower than that of the other Western European nations. Industrial activity, (including heavy industry like shipbuilding) was evident, mainly in Ermoupolis and Piraeus. Nonetheless, Greece faced economic hardships and defaulted on its external loans in 1826, 1843, 1860 and 1893.
Other studies support the above view on the general trends in the economy, providing comparative measures of standard of living. The per capita income (in purchasing power terms) of Greece was 65% that of France in 1850, 56% in 1890, 62% in 1938,Paul Bairoch, Europe's GNP 1800–1975, Journal of European Economic History, 5, pgs. 273–340 (1976)Angus Maddison, Monitoring the World Economy 1820–1992, OECD (1995) 75% in 1980, 90% in 2007, 96.4% in 2008 and 97.9% in 2009.Eurostat, including updated data since 1980 and data released in April 2008
The country's post-World War II development has largely been connected with the Greek economic miracle. During that period, Greece saw growth rates second only to those of Japan, while ranking first in Europe in terms of GDP growth. It is indicative that between 1960 and 1973 the Greek economy grew by an average of 7.7%, in contrast to 4.7% for the EU15 and 4.9% for the OECD. Also during that period, exports grew by an average annual rate of 12.6%.
Strengths and weaknesses
Greece enjoys a high standard of living and very high Human Development Index, ranking 29th in the world in 2014. However, the severe recession of recent years has seen GDP per capita fall from 94% of the EU average in 2009 to 68% in 2015. Actual Individual Consumption (AIC) per capita fell from 104% of the EU average to 77% during the same period.
Greece's main industries are tourism, shipping, industrial products, food and tobacco processing, textiles, chemicals, metal products, mining and petroleum. Greece's GDP growth has also, on average, been higher than the EU average since the early 1990s. However, the Greek economy continues to face significant problems, including high unemployment levels, an inefficient public sector bureaucracy, tax evasion, corruption and low global competitiveness.Greek taxpayers sense evasion crackdown Financial Times
Greece is ranked 58th in the world on the Corruption Perceptions Index, alongside Romania. Greece has the EU's lowest Index of Economic Freedom and Global Competitiveness Index, ranking 138th and 86th in the world respectively.
thumb|350px|GDP growth rates of the Greek economy between 1961 and 2010
After fourteen consecutive years of economic growth, Greece went into recession in 2008. By the end of 2009, the Greek economy faced the highest budget deficit and government debt-to-GDP ratios in the EU. After several upward revisions, the 2009 budget deficit is now estimated at 15.7% of GDP. This, combined with rapidly rising debt levels (127.9% of GDP in 2009) led to a precipitous increase in borrowing costs, effectively shutting Greece out of the global financial markets and resulting in a severe economic crisis.Charter, David. Storm over bailout of Greece, EU's most ailing economy. Time Online: Brussels, 2010
Greece was accused of trying to cover up the extent of its massive budget deficit in the wake of the global financial crisis. The allegation was prompted by the massive revision of the 2009 budget deficit forecast by the new PASOK government elected in October 2009, from "6–8%" (estimated by the previous New Democracy government) to 12.7% (later revised to 15.7%). However, the accuracy of the revised figures has also been questioned, and in February 2012 the Hellenic Parliament voted in favor of an official investigation following accusations by a former member of the Hellenic Statistical Authority that the deficit had been artificially inflated in order to justify harsher austerity measures.
Average GDP growth by era:
1961–1970: 8.44%
1971–1980: 4.70%
1981–1990: 0.70%
1991–2000: 2.36%
2001–2007: 4.11%
2008–2011: −4.8%
2012–2015: −2.52%
The Greek labor force averaged 2,032 hours of work per worker in 2011, ranking fourth among OECD countries, after Mexico, South Korea and Chile. The Groningen Growth & Development Centre has published a poll revealing that between 1995 and 2005, Greece was the country whose workers worked the most hours per year among European nations; Greeks worked an average of 1,900 hours per year, followed by Spaniards (average of 1,800 hours per year).
As a result of the ongoing economic crisis, industrial production in the country went down by 8% between March 2010 and March 2011. The volume of building activity saw a reduction of 73% in 2010. Additionally, the turnover in retail sales saw a decline of 9% between February 2010 and February 2011.
Between 2008 and 2013 unemployment skyrocketed, from a generational low of 7.2% in the second and third quarters of 2008 to a high of 27.9% in June 2013, leaving over a million people jobless. Youth unemployment peaked at 64.9% in May 2013. In 2015, unemployment stood at around 24% and youth unemployment at around 47%.
Eurozone entry
thumb|Greece entered the Eurozone in 2001
Greece was accepted into the Economic and Monetary Union of the European Union by the European Council on 19 June 2000, based on a number of criteria (inflation rate, budget deficit, public debt, long-term interest rates, exchange rate) using 1999 as the reference year. After an audit commissioned by the incoming New Democracy government in 2004, Eurostat revealed that the statistics for the budget deficit had been under-reported.
Most of the differences in the revised budget deficit numbers were due to a temporary change of accounting practices by the new government, i.e., recording expenses when military material was ordered rather than received. However, it was the retroactive application of ESA95 methodology (applied since 2000) by Eurostat, that finally raised the reference year (1999) budget deficit to 3.38% of GDP, thus exceeding the 3% limit. This led to claims that Greece (similar claims have been made about other European countries like Italy)
had not actually met all five accession criteria, and the common perception that Greece entered the Eurozone through "falsified" deficit numbers.
In the 2005 OECD report for Greece, it was clearly stated that "the impact of new accounting rules on the fiscal figures for the years 1997 to 1999 ranged from 0.7 to 1 percentage point of GDP; this retroactive change of methodology was responsible for the revised deficit exceeding 3% in 1999, the year of [Greece's] EMU membership qualification". The above led the Greek minister of finance to clarify that the 1999 budget deficit was below the prescribed 3% limit when calculated with the ESA79 methodology in force at the time of Greece's application, and thus the criteria had been met.
The original accounting practice for military expenses was later restored in line with Eurostat recommendations, theoretically lowering even the ESA95-calculated 1999 Greek budget deficit to below 3% (an official Eurostat calculation is still pending for 1999).
Discussion of Greece's Eurozone entry is sometimes confused with the separate controversy regarding the use of derivatives deals with U.S. banks by Greece and other Eurozone countries to artificially reduce their reported budget deficits. A currency swap arranged with Goldman Sachs allowed Greece to "hide" 2.8 billion euros of debt; however, this affected deficit values after 2001 (when Greece had already been admitted into the Eurozone) and is not related to Greece's Eurozone entry.
A study of the period 1999–2009 by forensic accountants has found that data submitted to Eurostat by Greece, among other countries, had a statistical distribution indicative of manipulation; "Greece with a mean value of 17.74, shows the largest deviation from Benford's law among the members of the eurozone, followed by Belgium with a value of 17.21 and Austria with a value of 15.25".
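The study cited above treats deviation from Benford's law (the expected logarithmic distribution of leading digits in naturally occurring figures) as a warning sign of manipulated statistics. The Python snippet below is only a generic sketch of that idea, not the paper's actual metric or data: the sample list is hypothetical, and the deviation measure used here is a simple mean absolute difference, in percentage points, between observed and expected leading-digit frequencies.

import math

def leading_digit(x):
    """Return the first significant digit of a nonzero number."""
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

def benford_deviation(values):
    """Mean absolute deviation (percentage points) from Benford's law."""
    counts = {d: 0 for d in range(1, 10)}
    for v in values:
        if v:
            counts[leading_digit(v)] += 1
    n = sum(counts.values())
    total = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)  # Benford probability of leading digit d
        observed = counts[d] / n
        total += abs(observed - expected) * 100
    return total / 9

# Hypothetical reported figures, purely for illustration.
sample = [3.7, 6.0, 12.7, 15.7, 127.9, 356.0, 280.0, 45.0, 110.0, 13.6]
print(f"Mean deviation from Benford's law: {benford_deviation(sample):.2f} pp")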
2010–2015 government debt crisis
thumb|300px|Greek government debt levels from 1999 to present.
By the end of 2009, as a result of a combination of international and local factors the Greek economy faced its most-severe crisis since the restoration of democracy in 1974 as the Greek government revised its deficit from a prediction of 3.7% in early 2009 and 6% in September 2009, to 12.7% of gross domestic product (GDP).Lynn, Matthew (2011). Bust: Greece, the Euro and the Sovereign Debt Crisis. Hobeken, New Jersey: Bloomberg Press. ISBN 978-0-470-97611-1.
In early 2010, it was revealed that through the assistance of Goldman Sachs, JPMorgan Chase and numerous other banks, financial products were developed which enabled the governments of Greece, Italy and many other European countries to hide their borrowing. Dozens of similar agreements were concluded across Europe whereby banks supplied cash in advance in exchange for future payments by the governments involved; in turn, the liabilities of the involved countries were "kept off the books".
According to Der Spiegel, credits given to European governments were disguised as "swaps" and consequently did not get registered as debt because Eurostat at the time ignored statistics involving financial derivatives. A German derivatives dealer had commented to Der Spiegel that "The Maastricht rules can be circumvented quite legally through swaps," and "In previous years, Italy used a similar trick to mask its true debt with the help of a different US bank." These conditions had enabled Greek as well as many other European governments to spend beyond their means, while meeting the deficit targets of the European Union and the monetary union guidelines. In May 2010, the Greek government deficit was again revised and estimated to be 13.6% which was the second highest in the world relative to GDP with Iceland in first place at 15.7% and Great Britain third with 12.6%. Public debt was forecast, according to some estimates, to hit 120% of GDP during 2010.
As a consequence, there was a crisis in international confidence in Greece's ability to repay its sovereign debt, as reflected by the rise of the country's borrowing rates (although their slow rise, with the 10-year government bond yield only exceeding 7% in April 2010, coincided with a large number of negative articles, leading to arguments about the role of international news media in the evolution of the crisis). In order to avert a default (as high borrowing rates effectively prohibited access to the markets), in May 2010 the other Eurozone countries and the IMF agreed to a "rescue package" which involved giving Greece an immediate €45 billion in bail-out loans, with more funds to follow, totaling €110 billion. In order to secure the funding, Greece was required to adopt harsh austerity measures to bring its deficit under control. Their implementation was to be monitored and evaluated by the European Commission, the European Central Bank and the IMF."Greece's Austerity Measures". BBC News. Retrieved 9 May 2010."Greek Parliament Passes Austerity Measures". The New York Times. Retrieved 9 May 2010.
The financial crisis – particularly the austerity package put forth by the EU and the IMF – has been met with anger by the Greek public, leading to riots and social unrest, and there have been theories about the effect of international media. Despite (or, as others argue, because of) the wide range of austerity measures, the government deficit has not been reduced accordingly, mainly, according to many economists, because of the subsequent recession.
Public sector workers have come out on strike in order to resist job cuts and reductions to salaries as the government promises that a large scale privatisation programme will be accelerated. Immigrants are sometimes treated as scapegoats for economic problems by far-right extremists.
In 2013, Greece became the first developed market to be reclassified as an emerging market by financial services companies MSCI and S&P Dow Jones Indices.
By July 2014 there was still anger and protests about the austerity measures, with a 24-hour strike among government workers timed to coincide with an audit by inspectors from the International Monetary Fund, the European Union and European Central Bank in advance of a decision on a second bailout of one billion euros ($1.36 billion), due in late July.
Greece exited its six-year recession in the second quarter of 2014, but the challenges of securing political stability and debt sustainability remain.
Primary sector
Agriculture and fishery
thumb|Vineyard in Naoussa, central Macedonia.
In 2010, Greece was the European Union's largest producer of cotton (183,800 tons) and pistachios (8,000 tons) and ranked second in the production of rice (229,500 tons) and olives (147,500 tons), third in the production of figs (11,000 tons) and almonds (44,000 tons), tomatoes (1,400,000 tons) and watermelons (578,400 tons) and fourth in the production of tobacco (22,000 tons). Agriculture contributes 3.8% of the country's GDP and employs 12.4% of the country's labor force.
Greece is a major beneficiary of the Common Agricultural Policy of the European Union. As a result of the country's entry to the European Community, much of its agricultural infrastructure has been upgraded and agricultural output increased. Between 2000 and 2007 organic farming in Greece increased by 885%, the highest change percentage in the EU.
In 2007, Greece accounted for 19% of the EU's fishing haul in the Mediterranean Sea, ranked third with 85,493 tons, and ranked first in the number of fishing vessels in the Mediterranean between European Union members. Additionally, the country ranked 11th in the EU in total quantity of fish caught, with 87,461 tons.
Secondary sector
Industry
Image: The fuselage for the Dassault nEUROn stealth jet is produced in Greece by the Hellenic Aerospace Industry.
Between 2005 and 2011, Greece had the highest percentage increase in industrial output compared to 2005 levels out of all European Union members, with an increase of 6%. Eurostat statistics show that the industrial sector was hit by the Greek financial crisis throughout 2009 and 2010, with domestic output decreasing by 5.8% and industrial production in general by 13.4%. Currently, Greece is ranked third in the European Union in the production of marble (over 920,000 tons), after Italy and Spain.
Between 1999 and 2008, the volume of retail trade in Greece increased by an average of 4.4% per year (a total increase of 44%), while it decreased by 11.3% in 2009. The only sector that did not see negative growth in 2009 was administration and services, with a marginal growth of 2.0%.
In 2009, Greece's labor productivity was 98% of the EU average, but its productivity per hour worked was 74% of the Eurozone average. The largest industrial employer in the country (in 2007) was the manufacturing industry (407,000 people), followed by the construction industry (305,000) and mining (14,000).
Industrial production (manufacturing) in Greece, 2009, by value of the ten largest product categories (total production value: €20,310,940,279):
1. Portland cement: €897,378,450
2. Pharmaceuticals: €621,788,464
3. Ready-mix concrete: €523,821,763
4. Beverages (non-alcoholic): €519,888,468
5. Rebars: €499,789,102
6. Cigarettes: €480,399,323
7. Beer: €432,559,943
8. Dairy: €418,527,007
9. Aluminium slabs: €391,393,930
10. Coca-Cola products: €388,752,443
Industrial production (manufacturing) in Greece, 2010 (provisional data; total production value: €17,489,538,838; p.r.s.: packed for retail sale; n.e.c.: not elsewhere classifiable):
1. Portland cement: €699,174,850
2. Pharmaceuticals (medicaments of mixed or unmixed products (other), p.r.s., n.e.c.): €670,923,632
3. Waters with added sugar, other sweetening matter or flavouring, i.e. soft drinks (including mineral and aerated): €561,611,081
4. Hot rolled concrete reinforcing bars: €540,919,270
5. Grated, powdered, blue-veined and other non-processed cheese (excluding fresh cheese, whey cheese and curd): €511,528,250
6. Ready-mixed concrete: €438,489,443
7. Beer made from malt (excluding non-alcoholic beer, beer containing <= 0.5% alcohol by volume, alcohol duty): €405,990,419
8. Milk and cream of a fat content by weight of > 1% but <= 6%, not concentrated nor containing added sugar or other sweetening matter, in immediate packings of a net content <= 2 l: €373,780,989
9. Cigarettes containing tobacco or mixtures of tobacco and tobacco substitutes (excluding tobacco duty): €350,420,600
10. Cheese fondues and other food preparations, n.e.c.: €300,883,207
Mining
Tertiary sector
Maritime industry
Image: The Port of Thessaloniki.
Image: Neorion shipyard, located in Ermoupolis.
Image: 23% of the world's total merchant fleet is owned by Greek companies, making it the largest in the world. Greece is ranked in the top for all kinds of ships, including first for tankers and bulk carriers.
Shipping has traditionally been a key sector in the Greek economy since ancient times. In 1813, the Greek merchant navy was made up of 615 ships, with a total tonnage of 153,580 tons, manned by 37,526 crew members and armed with 5,878 cannons. In 1914 the figures stood at 449,430 tons and 1,322 ships (of which 287 were steamboats).
During the 1960s, the size of the Greek fleet nearly doubled, primarily through the investment undertaken by the shipping magnates Onassis, Vardinoyannis, Livanos and Niarchos. The basis of the modern Greek maritime industry was formed after World War II when Greek shipping businessmen were able to amass surplus ships sold to them by the United States Government through the Ship Sales Act of the 1940s.
Greece has the largest merchant navy in the world, accounting for more than 15% of the world's total deadweight tonnage (dwt) according to the United Nations Conference on Trade and Development. The Greek merchant navy's total dwt of nearly 245 million is comparable only to Japan's, which is ranked second with almost 224 million. Additionally, Greece represents 39.52% of all of the European Union's dwt. However, today's fleet roster is smaller than an all-time high of 5,000 ships in the late 1970s.
Greece is ranked fourth in the world by number of ships (3,695), behind China (5,313), Japan (3,991), and Germany (3,833). A European Community Shipowners' Associations report for 2011–2012 reveals that the Greek flag is the seventh-most-used internationally for shipping, while it ranks second in the EU.
In terms of ship categories, Greek companies have 22.6% of the world's tankers and 16.1% of the world's bulk carriers (in dwt). An additional equivalent of 27.45% of the world's tanker dwt is on order, with another 12.7% of bulk carriers also on order. Shipping accounts for an estimated 6% of Greek GDP, employs about 160,000 people (4% of the workforce), and represents 1/3 of the country's trade deficit. Earnings from shipping amounted to €14.1 billion in 2011, while between 2000 and 2010 Greek shipping contributed a total of €140 billion (half of the country's public debt in 2009 and 3.5 times the receipts from the European Union in the period 2000–2013). The 2011 ECSA report showed that there are approximately 750 Greek shipping companies in operation.
The latest available data from the Union of Greek Shipowners show that "the Greek-owned ocean-going fleet consists of 3,428 ships, totaling 245 million deadweight tonnes in capacity. This equals 15.6 percent of the carrying capacity of the entire global fleet, including 23.6 percent of the world tanker fleet and 17.2 percent of dry bulk".
Counting shipping as quasi-exports, Greece ranked fourth globally in 2011 by monetary value, having exported shipping services worth US$17,704.132 million; only Denmark, Germany and South Korea ranked higher that year. Counting shipping services provided to Greece by other countries as quasi-imports, and the difference between exports and imports as a trade balance, Greece imported shipping services worth US$7,076.605 million in 2011 and ran a trade surplus of US$10,712.342 million, ranking second on that balance behind Germany.
Greece, shipping services, 2000–2011 (values in millions; rankings are global; b: break in time series reported by the source; p: provisional figure; e: possibly erroneous because of the break in the imports series):
2000: exports US$7,558.995 / €8,172.559 (5th; 5.93% of GDP); imports US$3,314.718 / €3,583.774 (14th; 2.60%); balance US$4,244.277 / €4,588.785 (1st; 3.33%); GDP €137,930.1
2001: exports US$7,560.559 / €8,432.670 (5th; 5.76%); imports US$3,873.791 / €4,320.633 (13th; 2.95%); balance US$3,686.768 / €4,112.037 (2nd; 2.81%); GDP €146,427.6
2002: exports US$7,527.175 / €7,957.654 (5th; 5.08%); imports US$3,757.000 / €3,971.863 (14th; 2.54%); balance US$3,770.175 / €3,985.791 (1st; 2.54%); GDP €156,614.3
2003: exports US$10,114.736 / €8,934.660 (4th; 5.18%); imports not available (b); balance US$10,114.736 / €8,934.660 (1st; 5.18%) (e); GDP €172,431.8
2004: exports US$15,402.209 / €12,382.636 (3rd; 6.68%); imports US$5,570.145 / €4,478.129 (14th; 2.42%); balance US$9,832.064 / €7,904.508 (1st; 4.27%); GDP €185,265.7
2005: exports US$16,127.623 / €12,949.869 (5th; 6.71%); imports US$5,787.234 / €4,646.929 (16th; 2.41%); balance US$10,340.389 / €8,302.940 (1st; 4.30%); GDP €193,049.7 (b)
2006–2008: break in time series reported by the source; no comparable figures
2009: exports US$17,033.714 / €12,213.786 (5th; 5.29%); imports US$6,653.395 / €4,770.724 (12th; 2.06%); balance US$10,340.389 / €7,443.063 (2nd; 3.22%); GDP €231,081.2 (p)
2010: exports US$18,559.292 / €13,976.558 (6th; 6.29%); imports US$7,846.950 / €5,909.350 (13th; 2.66%); balance US$10,380.319 / €8,067.208 (1st; 3.63%); GDP €222,151.5 (p)
2011: exports US$17,704.132 / €12,710.859 (4th; 6.10%); imports US$7,076.605 / €5,080.720 (9th; 2.44%); balance US$10,712.342 / €7,630.140 (2nd; 3.66%); GDP €208,531.7 (p)
Telecommunications
Image: OTE headquarters in Athens.
Between 1949 and the 1980s, telephone communications in Greece were a state monopoly by the Hellenic Telecommunications Organization, better known by its acronym, OTE. Despite the liberalization of telephone communications in the country in the 1980s, OTE still dominates the Greek market in its field and has emerged as one of the largest telecommunications companies in Southeast Europe. Since 2011, the company's major shareholder has been Deutsche Telekom, with a 40% stake, while the Greek state continues to own 10% of the company's shares. OTE owns several subsidiaries across the Balkans, including Cosmote, Greece's top mobile telecommunications provider, Cosmote Romania and Albanian Mobile Communications.
Other mobile telecommunications companies active in Greece are Wind Hellas and Vodafone Greece. The total number of active cellular phone accounts in the country in 2009 based on statistics from the country's mobile phone providers was over 20 million, a penetration of 180%. Additionally, there are 5.745 million active landlines in the country.
Greece has tended to lag behind its European Union partners in terms of Internet use, with the gap closing rapidly in recent years. The percentage of households with access to the Internet more than doubled between 2006 and 2013, from 23% to 56% respectively (compared with an EU average of 49% and 79%). At the same time, there has been a massive increase in the proportion of households with a broadband connection, from 4% in 2006 to 55% in 2013 (compared with an EU average of 30% and 76%). However, Greece also has the EU's third highest percentage of people who have never used the Internet: 36% in 2013, down from 65% in 2006 (compared with an EU average of 21% and 42%).
Tourism
Image: The island of Santorini.
Image: The Temple of Poseidon in Sounion, a popular tourist destination.
Tourism in the modern sense has only started to flourish in Greece in the years post-1950, although tourism in ancient times is also documented in relation to religious or sports festivals such as the Olympic Games. Since the 1950s, the tourism sector saw an unprecedented boost as arrivals went from 33,000 in 1950 to 11.4 million in 1994.
Greece attracts more than 16 million tourists each year, contributing 18.2% to the nation's GDP in 2008 according to an OECD report. The same survey showed that the average tourist expenditure while in Greece was $1,073, ranking Greece 10th in the world. The number of jobs directly or indirectly related to the tourism sector was 840,000 in 2008, representing 19% of the country's total labor force. In 2009, Greece welcomed over 19.3 million tourists, a major increase from the 17.7 million tourists the country welcomed in 2008.
Among the member states of the European Union, Greece was the most popular destination for residents of Cyprus and Sweden in 2011.
The ministry responsible for tourism is the Ministry of Culture and Tourism, while the state-owned Greek National Tourism Organization is responsible for promoting tourism in Greece.
In recent years a number of well-known tourism-related organizations have placed Greek destinations at the top of their lists. In 2009 Lonely Planet ranked Thessaloniki, the country's second-largest city, the world's fifth-best "Ultimate Party Town", alongside cities such as Montreal and Dubai, while in 2011 the island of Santorini was voted the best island in the world by Travel + Leisure. The neighbouring island of Mykonos was ranked the 5th best island in Europe. Thessaloniki was the European Youth Capital in 2014.
Trade and investment
Foreign investment
Since the fall of communism, Greece has invested heavily in neighbouring Balkan countries. Between 1997 and 2009, 12.11% of foreign direct investment capital in the Republic of Macedonia was Greek, ranking fourth. In 2009 alone, Greeks invested €380 million in the country, with companies such as Hellenic Petroleum having made important strategic investments.
Greece invested €1.38 billion in Bulgaria between 2005 and 2007, and many important companies (including Bulgarian Postbank, United Bulgarian Bank and Coca-Cola Bulgaria) are owned by Greek financial groups. In Serbia, 250 Greek companies are active with a total investment of over €2 billion. Romanian statistics from 2005 show that Greek investment in the country exceeded €3 billion. Greece has been the largest investor in Albania since the fall of communism, with 25% of foreign investments in 2016 coming from Greece; in addition, business relations between the two countries are very strong and continuously growing.http://albanians.gr/greqia-e-para-investitore-ne-shqiperi-25-te-totalit-te-investimeve.html
Trade
Image: Graphical depiction of Greece's product exports in 2012 in 28 color-coded categories.
Since the start of the debt crisis, Greece’s negative balance of trade has decreased significantly from €44.3 billion in 2008 to €17.7 billion in 2015. Imports decreased by 9.7% in 2015, while exports fell by 4.6%.
Imports and exports in 2008 (values in € millions): the five largest import sources accounted for €7,238.2, €6,918.5, €4,454.0, €3,347.1 and €3,098.0 respectively, with total imports of €60,669.9; the five largest export destinations accounted for €2,001.9, €1,821.3, €1,237.0, €1,103.0 and €885.4, with total exports of €17,334.1. In 2011, total imports had fallen to €42,045.4 million while total exports had risen to €22,451.1 million.
Greece is also the largest trade partner of Cyprus (exports 23.0%, imports 21.6%) and the largest import partner of the Republic of Macedonia (19.0%).
In 2012 (values in € millions, with shares of the total in parentheses), Greece's total imports of goods were €47,537.6 and total exports €27,211.1. By continent, imports came from Europe (€28,708.4, 60.4%), Asia (€14,378.0, 30.2%), Africa (€2,787.4, 5.9%), America (€1,451.2, 3.1%) and Oceania (€71.7, 0.2%), while exports went to Europe (€14,797.2, 54.4%), Asia (€6,933.5, 25.5%), Africa (€1,999.5, 7.3%), America (€1,384.0, 5.1%) and Oceania (€169.2, 0.6%). Among selected country groupings, OECD members accounted for €23,849.9 (50.2%) of imports and €13,276.5 (48.8%) of exports, the G7 for €11,933.8 (25.1%) and €6,380.9 (23.4%), the BRICS for €8,682.1 (18.3%) and €1,014.2 (3.7%), the BRIC countries for €8,636.0 (18.2%) and €977.8 (3.6%), OPEC for €8,090.8 (17.0%) and €2,158.6 (7.9%), and NAFTA for €751.8 (1.6%) and €1,215.7 (4.5%). These groupings overlap and are only an incomplete selection of major and well-known organisations, so they are not indicative of the whole picture of Greece's trade; rounding errors may be present.
Transport
Image: Corinth Canal.
Image: The Egnatia Odos, part of European route E90.
As of 2012, Greece had a total of 82 airports, of which 67 were paved and six had runways longer than 3,047 meters. Of these airports, two are classified as "international" by the Hellenic Civil Aviation Authority, but 15 offer international services. Additionally Greece has 9 heliports. Greece does not have a flag carrier, but the country’s airline industry is dominated by Aegean Airlines and its subsidiary Olympic Air.
Between 1975 and 2009, Olympic Airways (known after 2003 as Olympic Airlines) was the country’s state-owned flag carrier, but financial problems led to its privatization and relaunch as Olympic Air in 2009. Both Aegean Airlines and Olympic Air have won awards for their services; in 2009 and 2011, Aegean Airlines was awarded the "Best regional airline in Europe" award by Skytrax, and also has two gold and one silver awards by the ERA, while Olympic Air holds one silver ERA award for "Airline of the Year" as well as a "Condé Nast Traveller 2011 Readers Choice Awards: Top Domestic Airline" award.
The Greek road network is made up of 116,986 km of roads, of which 1863 km are highways, ranking 24th worldwide, as of 2016. Since the entry of Greece to the European Community (now the European Union), a number of important projects (such as the Egnatia Odos and the Attiki Odos) have been co-funded by the organization, helping to upgrade the country's road network. In 2007, Greece ranked 8th in the European Union in goods transported by road at almost 500 million tons.
Greece's rail network is estimated at 2,548 km. Rail transport in Greece is operated by TrainOSE, a subsidiary of the Hellenic Railways Organization (OSE). Most of the country's network is standard gauge (1,565 km), while the country also has 983 km of narrow gauge. A total of 764 km of rail are electrified. Greece has rail connections with Bulgaria, the Republic of Macedonia and Turkey. A total of three suburban railway systems (Proastiakos) are in operation (in Athens, Thessaloniki and Patras), while one metro system, the Athens Metro, is operational in Athens with another, the Thessaloniki Metro, under construction.
According to Eurostat, Greece's largest port by tons of goods transported in 2010 is the port of Aghioi Theodoroi, with 17.38 million tons. The Port of Thessaloniki comes second with 15.8 million tons, followed by the Port of Piraeus, with 13.2 million tons, and the port of Eleusis, with 12.37 million tons. The total number of goods transported through Greece in 2010 amounted to 124.38 million tons, a considerable drop from the 164.3 million tons transported through the country in 2007. Since then, Piraeus has grown to become the Mediterranean's third-largest port thanks to heavy investment by Chinese logistics giant COSCO. In 2013, Piraeus was declared the fastest-growing port in the world.
In 2010 Piraeus handled 513,319 TEUs, followed by Thessaloniki, which handled 273,282 TEUs. In the same year, 83.9 million people passed through Greece's ports, 12.7 million through the port of Paloukia in Salamis, another 12.7 through the port of Perama, 9.5 million through Piraeus and 2.7 million through Igoumenitsa. In 2013, Piraeus handled a record 3.16 million TEUs, the third-largest figure in the Mediterranean, of which 2.52 million were transported through Pier II, owned by COSCO and 644,000 were transported through Pier I, owned by the Greek state.
Energy
Image: View of a wind farm, Panachaiko mountain.
Image: The oil rig in Kavala.
Image: A distillation facility owned by Hellenic Petroleum.
Energy production in Greece is dominated by the Public Power Corporation (known mostly by its acronym ΔΕΗ, or in English DEI). In 2009 DEI supplied 85.6% of all energy demand in Greece, a share that fell to 77.3% in 2010. Almost half (48%) of DEI's power output is generated using lignite, a drop from 51.6% in 2009. Another 12% comes from hydroelectric power plants and another 20% from natural gas. Between 2009 and 2010, independent companies' energy production increased by 56%, from 2,709 gigawatt-hours in 2009 to 4,232 GWh in 2010.
In 2008 renewable energy accounted for 8% of the country's total energy consumption, a rise from the 7.2% it accounted for in 2006, but still below the EU average of 10% in 2008. 10% of the country's renewable energy comes from solar power, while most comes from biomass and waste recycling. In line with the European Commission's Directive on Renewable Energy, Greece aims to get 18% of its energy from renewable sources by 2020. In 2013, for several months, Greece produced more than 20% of its electricity from renewable energy sources and hydroelectric power plants. Greece currently does not have any nuclear power plants in operation; however, in 2009 the Academy of Athens suggested that research into the possibility of Greek nuclear power plants should begin.
Greece had 10 million barrels of proven oil reserves as of 1 January 2012. Hellenic Petroleum is the country's largest oil company, followed by Motor Oil Hellas. Greece's oil production stands at 1,751 barrels per day (bbl/d), ranked 95th worldwide, while it exports 19,960 bbl/d, ranked 53rd, and imports 355,600 bbl/d, ranked 25th.
In 2011 the Greek government approved the start of oil exploration and drilling in three locations within Greece, with an estimated output of 250 to 300 million barrels over the next 15 to 20 years. The estimated output in euros of the three deposits is €25 billion over a 15-year period, of which €13–€14 billion will enter state coffers. Greece's dispute with Turkey over the Aegean poses substantial obstacles to oil exploration in the Aegean Sea.
In addition, Greece is to start oil and gas exploration in other locations in the Ionian Sea, as well as the Libyan Sea, within the Greek exclusive economic zone, south of Crete. The Ministry of the Environment, Energy and Climate Change announced that there was interest from various countries (including Norway and the United States) in exploration, and the first results regarding the amount of oil and gas in these locations were expected in the summer of 2012. In November 2012, a report published by Deutsche Bank estimated the value of natural gas reserves south of Crete at €427 billion.
A number of oil and gas pipelines are currently under construction or under planning in the country. Such projects include the Interconnector Turkey-Greece-Italy (ITGI) and South Stream gas pipelines.
Taxation and tax evasion
Image: Revenues of Greece between 1999 and 2010 as a percentage of GDP, compared to the EU average.
Greece has a tiered tax system based on progressive taxation. Greek law recognizes six categories of taxable income: immovable property, movable property (investment), income from agriculture, business, employment, and income from professional activities. Greece's personal income tax rate, until recently, ranged from 0% for annual incomes below €12,000 to 45% for annual incomes over €100,000. Under the new 2010 tax reform, tax exemptions have been abolished.
Also, under the new austerity measures and among other changes, the personal income tax-free ceiling has been reduced to €5,000 per annum, while further changes, for example the abolition of this ceiling, are already being planned.
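To illustrate how a tiered, progressive schedule of this kind is applied, the sketch below computes tax on an annual income bracket by bracket. Only the €12,000 tax-free threshold and the 45% top rate above €100,000 are taken from the figures above; the intermediate brackets and rates are hypothetical placeholders, not the actual Greek schedule, which has changed repeatedly during the crisis years.

```typescript
// Sketch of a tiered (progressive) income tax computation. Only the €12,000
// tax-free threshold and the 45% top rate above €100,000 come from the text;
// the intermediate brackets below are hypothetical placeholders.
interface Bracket {
    upTo: number; // upper bound of the bracket (Infinity for the top bracket)
    rate: number; // marginal rate applied to income falling inside the bracket
}

const hypotheticalBrackets: Bracket[] = [
    { upTo: 12_000, rate: 0.00 },   // tax-free threshold (from the text)
    { upTo: 30_000, rate: 0.25 },   // hypothetical intermediate bracket
    { upTo: 100_000, rate: 0.35 },  // hypothetical intermediate bracket
    { upTo: Infinity, rate: 0.45 }, // top rate above €100,000 (from the text)
];

function incomeTax(income: number, brackets: Bracket[]): number {
    let tax = 0;
    let lower = 0;
    for (const { upTo, rate } of brackets) {
        if (income <= lower) break;
        tax += (Math.min(income, upTo) - lower) * rate; // slice of income taxed at this rate
        lower = upTo;
    }
    return tax;
}

// €40,000 of annual income under the hypothetical schedule:
// 12,000×0 + 18,000×0.25 + 10,000×0.35 = €8,000
console.log(incomeTax(40_000, hypotheticalBrackets));
```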
Greece's corporate tax dropped from 40% in 2000 to 20% in 2010. For 2011 only, corporate tax will be at 24%. Value added tax (VAT) has gone up in 2010 compared to 2009: 23% as opposed to 19%.
The lowest VAT possible is 6.5% (previously 4.5%) for newspapers, periodicals and cultural event tickets, while a tax rate of 13% (from 9%) applies to certain service sector professions. Additionally, both employers and employees have to pay social contribution taxes, which apply at a rate of 16% for white collar jobs and 19.5% for blue collar jobs, and are used for social insurance.
The Ministry of Finance expected tax revenues for 2012 to be €52.7 billion (€23.6 billion in direct taxes and €29.1 billion in indirect taxes), an increase of 5.8% from 2011. In 2012, the government was expected to have considerably higher tax revenues than in 2011 on a number of sectors, primarily housing (an increase of 217.5% from 2011).
Tax evasion
Greece suffers from very high levels of tax evasion. In the last quarter of 2005, tax evasion reached 49%, while in January 2006 it fell to 41.6%. A study by researchers from the University of Chicago concluded that tax evasion in 2009 by self-employed professionals alone in Greece (accountants, dentists, lawyers, doctors, personal tutors and independent financial advisers) was €28 billion, or 31% of the budget deficit that year. (Inman, Phillip (9 September 2012). "Primary Greek tax evaders are the professional classes". The Guardian. Retrieved 6 October 2012.)
The Tax Justice Network has said that there are over €20 billion in Swiss bank accounts held by Greeks. The former Finance Minister of Greece, Evangelos Venizelos, was quoted as saying, "Around 15,000 individuals and companies owe the taxman 37 billion euros". Additionally, the TJN puts the number of Greek-owned off-shore companies to over 10,000.
Following similar actions by the United Kingdom and Germany, the Greek government is in talks with Switzerland in order to tax bank accounts in Switzerland owned by Greek citizens. The Ministry of Finance has revealed that Greek Swiss bank account holders will either have to pay a tax or reveal information such as the identity of the bank account holder to the Greek internal revenue services. The Greek and Swiss governments were to reach a deal on the matter by the end of 2011.
Wealth and standards of living
National and regional GDP
Image: GDP per capita of the regions of Greece in 2008.
Image: The country's two largest metropolitan areas account for almost 62% of the national economy.
Greece's most economically important regions are Attica, which contributed €85.579 billion to the economy in 2014, and Central Macedonia, which contributed €23.859 billion. The smallest regional economies were those of the North Aegean (€2.545 billion) and Ionian Islands (€3.137 billion).
In terms of GDP per capita, Attica (€22,200) far outranks any other Greek region. The poorest regions in 2014 were Eastern Macedonia and Thrace (€11,200) and Epirus (€11,400). At the national level, GDP per capita in 2014 was €16,200.
Regional GDP, 2014 (GDP in € billions; figures in parentheses give share of the national total, annual GDP growth, GDP per capita in €, and GDP per capita in PPS as a percentage of the EU average):
1. Attica: 85.579 (48.20%; −1.72%; 22,200; 99)
2. Central Macedonia: 23.859 (13.44%; −2.90%; 12,500; 56)
3. Thessaly: 9.085 (5.12%; −4.25%; 12,300; 55)
4. Crete: 8.934 (5.03%; −0.18%; 14,100; 63)
5. Western Greece: 8.181 (4.61%; −5.87%; 12,100; 54)
6. Central Greece: 7.734 (4.36%; −3.00%; 13,800; 61)
7. Peloponnese: 7.611 (4.29%; −5.23%; 13,000; 58)
8. Eastern Macedonia and Thrace: 6.820 (3.84%; −8.04%; 11,200; 50)
9. South Aegean: 6.045 (3.40%; +0.92%; 18,000; 80)
10. Western Macedonia: 4.125 (2.32%; +3.83%; 14,800; 66)
11. Epirus: 3.904 (2.20%; −9.52%; 11,400; 51)
12. Ionian Islands: 3.137 (1.77%; −3.39%; 15,100; 67)
13. North Aegean: 2.545 (1.43%; −6.19%; 12,800; 57)
Greece: 177.559 (100%; −2.67%; 16,200; 72)
European Union: 13,959.739 (per capita 27,500; PPS 100)
Welfare state
Greece is a welfare state which provides a number of social services such as quasi-universal health care and pensions. In the 2012 budget, expenses for the welfare state (excluding education) stand at an estimated €22.487 billion (€6.577 billion for pensions and €15.910 billion for social security and health care expenses), or 31.9% of all state expenses.
Largest companies by revenue 2016
According to the 2016 Forbes Global 2000 index, Greece's largest publicly traded companies are:
1. Bank of Greece
2. National Bank of Greece
3. Piraeus Bank
4. Eurobank Ergasias
5. Alpha Bank
6. Public Power Corporation
Labour force
Working hours
In 2012, the average Greek worker worked for 2034 hours annually; this figure was the third highest among the OECD countries.
Currency
Between 1832 and 2002 the currency of Greece was the drachma. After signing the Maastricht Treaty, Greece applied to join the eurozone. The two main convergence criteria were a maximum budget deficit of 3% of GDP and a declining public debt if it stood above 60% of GDP. Greece met the criteria as shown in its 1999 annual public account. On 1 January 2001, Greece joined the eurozone, with the adoption of the euro at the fixed exchange rate ₯340.75 to €1. However, in 2001 the euro only existed electronically, so the physical exchange from drachma to euro only took place on 1 January 2002. This was followed by a ten-year period for eligible exchange of drachma to euro, which ended on 1 March 2012.
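As a worked illustration of the fixed conversion rate (the amounts are arbitrary examples, not figures from the source):

\[
\text{euros} = \frac{\text{drachmas}}{340.75},
\qquad
\frac{34{,}075\ \text{drachmas}}{340.75} = 100\ \text{euros}.
\]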
Prior to the adoption of the euro, 64% of Greek citizens viewed the new currency positively, but in February 2005 this figure fell to 26% and by June 2005 it fell further to 20%. Since 2010 the figure has risen again, and a survey in September 2011 showed that 63% of Greek citizens viewed the euro positively.
Charts gallery
Unemployment rate
The IMF's forecast said that Greece's unemployment rate would peak at 14.8 percent in 2012 and decrease to 14.1 percent in 2014. (Greece: Staff Report on Request for Stand-By Arrangement. IMF, Country Report No. 10/110, 2010.)
In fact, the Greek economy suffered prolonged high unemployment. (Greece Comes to a Standstill as Unions Turn Against Tsipras. N. Chrysoloras, Bloomberg News, 12 November 2015.) The unemployment rate was between 9 and 11 per cent in 2009, soared to 28 per cent in 2013, and stood at around 24 per cent in 2015. It is thought that Greece's potential output has been eroded by this prolonged mass unemployment, through the associated hysteresis effects.
Poverty
Greece has been hit hard by the recession and the austerity measures put in place, and poverty has increased. The share of the population living in extreme poverty rose to 15% in 2015, up from 8.9% in 2011 and a huge increase from 2009, when it was no more than 2.2%. In 2015, 35.7% of the population, more than one in three, was at risk of poverty or social exclusion; the rate among children aged 0–17 is 17.6% and among young people aged 18–29 it is 24.4%. With unemployment on the rise, those without jobs are at the highest risk, at 70–75%, up from less than 50% in 2011. With jobs increasingly hard to come by, a quarter of the population is out of work, and for people under 25 the rate is 50%; in some of the harder-hit areas of western Greece, youth unemployment exceeds 60%.

People who are out of work for more than two years lose their health insurance, further compounding the problems of those in poverty. When younger people are out of work, they rely on the older generations of their families to support them through hard times. However, long-term unemployment across the country reduces the money flowing into pension funds from the working population, so those older generations receive less with which to provide for the younger generations and their families, pushing more of them into poverty. Other aspects of the economic crisis add to the problem: Greeks have faced continued job losses and wage cuts, as well as deep cuts in workers' compensation and social welfare benefits, and for those who are working, wages have dropped. From 2008 to 2013, Greeks became 40% poorer on average, and in 2014 their disposable household income dropped below 2003 levels. The Economic Survey of Greece 2016 expresses optimism about a stronger recovery in 2017, pointing to the reforms already in place and to outside investment in jobs as means of reversing the high levels of poverty.
References
Further reading
Pasiouras, Fotios. Greek Banking: From the Pre-Euro Reforms to the Financial Crisis and Beyond (Palgrave Macmillan; 2012) 217 pages; covers the mid-1990s to 2011.
External links
Nick Malkoutzis Greece – A Year in Crisis – Friedrich-Ebert-Stiftung, June 2011
The Greek Economy: Which Way Forward?, from the Center for Economic and Policy Research, January 2015
The Greek Economy – a bi-monthly publication by the Hellenic Statistical Authority on the state of the economy
The Greek Exports – Database of Greek Exporters
Greek Banks Digest – (in English)
World Bank Summary Trade Statistics Greece
Tariffs applied by Greece as provided by ITC's Market Access Map, an online database of customs tariffs and market requirements
New study on the "Economic, Social and Territorial Situation of Greece" – European Parliament, Committee on Regional Development's delegation to Greece, 13 – 15 July 2011
OECD Data > Greece
Windows 8 | Windows 8 is a personal computer operating system developed by Microsoft as part of the Windows NT family of operating systems. Development of Windows 8 started before the release of its predecessor, Windows 7, in 2009. It was announced at CES 2011, and followed by the release of three pre-release versions from September 2011 to May 2012. The operating system was released to manufacturing on August 1, 2012, and was released for general availability on October 26, 2012.
Windows 8 introduced major changes to the operating system's platform and user interface to improve its user experience on tablets, where Windows was now competing with mobile operating systems, including Android and iOS. In particular, these changes included a touch-optimized Windows shell based on Microsoft's "Metro" design language, the Start screen (which displays programs and dynamically updated content on a grid of tiles), a new platform for developing "apps" with an emphasis on touchscreen input, integration with online services (including the ability to synchronize apps and settings between devices), and Windows Store, an online store for downloading and purchasing new software. Windows 8 added support for USB 3.0, Advanced Format hard drives, near field communications, and cloud computing. Additional security features were introduced, such as built-in antivirus software, integration with Microsoft SmartScreen phishing filtering service and support for UEFI Secure Boot on supported devices with UEFI firmware, to prevent malware from infecting the boot process.
Windows 8 was released to a mixed critical reception. Although reaction towards its performance improvements, security enhancements, and improved support for touchscreen devices was positive, the new user interface of the operating system was widely criticized for being potentially confusing and difficult to learn, especially when used with a keyboard and mouse instead of a touchscreen. Despite these shortcomings, 60 million Windows 8 licenses had been sold through January 2013, a number that included both upgrades and sales to OEMs for new PCs.
On October 17, 2013, Microsoft released Windows 8.1. It addresses some aspects of Windows 8 that were criticized by reviewers and early adopters and incorporates additional improvements to various aspects of the operating system. Windows 8 was ultimately succeeded by Windows 10 in July 2015. Support for Windows 8 RTM ended on January 12, 2016; per Microsoft lifecycle policies regarding service packs, Windows 8.1 must be installed to maintain support and receive further updates.
Development history
Early development
Windows 8 development started before Windows 7 had shipped in 2009. At the Consumer Electronics Show in January 2011, it was announced that the next version of Windows would add support for ARM system-on-chips alongside the existing x86 processors produced by vendors, especially AMD and Intel. Windows division president Steven Sinofsky demonstrated an early build of the port on prototype devices, while Microsoft CEO Steve Ballmer announced the company's goal for Windows to be "everywhere on every kind of device without compromise." Details also began to surface about a new application framework for Windows 8 codenamed "Jupiter", which would be used to make "immersive" applications using XAML (similarly to Windows Phone and Silverlight) that could be distributed via a new packaging system and a rumored application store.
Three milestone releases of Windows 8 leaked to the general public. Milestone 1, Build 7850, was leaked on April 12, 2011. It was the first build where the text of a window was written centered instead of aligned to the left. It was also probably the first appearance of the Metro-style font, and its wallpaper had the text "shhh... let's not leak our hard work". However, its detailed build number reveals that the build was created on September 22, 2010. The leaked copy was the Enterprise edition, and the OS still identified itself as "Windows 7". Milestone 2, Build 7955, was leaked on April 25, 2011. The traditional Blue Screen of Death (BSoD) was replaced by a new black screen, although this was later scrapped. This build introduced a new ribbon in Windows Explorer. Build 7959, with minor changes but the first 64-bit version, was leaked on May 1, 2011. The "Windows 7" logo was temporarily replaced with text displaying "Microsoft Confidential". On June 17, 2011, a 64-bit edition of build 7989 was leaked. It introduced a new boot screen featuring the same fish as the default Windows 7 Beta wallpaper, which was later scrapped, and the circling dots featured in the final version (although the final version uses a smaller circling-dots throbber). It also had the text "Welcome" below them, although this was also scrapped.
On June 1, 2011, Microsoft unveiled Windows 8's new user interface, as well as additional features at both Computex Taipei and the D9: All Things Digital conference in California.
The "Building Windows 8" blog launched on August 15, 2011, featuring details surrounding Windows 8's features and its development process.
Previews
Image: A screenshot of Windows 8 Developer Preview running on a multi-monitor system, showcasing some features.
Microsoft unveiled more Windows 8 features and improvements on the first day of the Build conference on September 13, 2011. Microsoft released the first public beta build of Windows 8, Windows Developer Preview (build 8102) at the event. A Samsung tablet running the build was also distributed to conference attendees.
The build was released for download later in the day in standard 32-bit and 64-bit versions, plus a special 64-bit version which included SDKs and developer tools (Visual Studio Express and Expression Blend) for developing Metro-style apps. The Windows Store was announced during the presentation, but was not available in this build. According to Microsoft, there were about 535,000 downloads of the developer preview within the first 12 hours of its release. The Developer Preview was originally set to expire on March 11, 2012, but in February 2012 its expiry date was changed to January 15, 2013.
Image: The new File Explorer interface with "Ribbon" in Windows 8.
On February 19, 2012, Microsoft unveiled a new logo to be adopted for Windows 8. Designed by Pentagram partner Paula Scher, the Windows logo was changed to resemble a set of four window panes. Additionally, the entire logo is now rendered in a single solid color.
On February 29, 2012, Microsoft released Windows 8 Consumer Preview, the beta version of Windows 8, build 8250. Alongside other changes, the build removed the Start button from the taskbar for the first time since its debut on Windows 95; according to Windows manager Chaitanya Sareen, the Start button was removed to reflect their view that on Windows 8, the desktop was an "app" itself, and not the primary interface of the operating system. Windows president Steven Sinofsky said more than 100,000 changes had been made since the developer version went public. The day after its release, Windows 8 Consumer Preview had been downloaded over one million times. Like the Developer Preview, the Consumer Preview expired on January 15, 2013.
Many other builds were released until Japan's Developers Day conference, when Steven Sinofsky announced that Windows 8 Release Preview (build 8400) would be released during the first week of June. On May 28, 2012, Windows 8 Release Preview (Standard Simplified Chinese x64 edition, not China-specific version, build 8400) was leaked online on various Chinese and BitTorrent websites. On May 31, 2012, Windows 8 Release Preview was released to the public by Microsoft. Major items in the Release Preview included the addition of Sports, Travel, and News apps, along with an integrated version of Adobe Flash Player in Internet Explorer. Like the Developer Preview and the Consumer Preview, the release preview expired on January 15, 2013.
Release
Image: Windows 8 launch event at Pier 57 in New York City.
On August 1, 2012, Windows 8 (build 9200) was released to manufacturing with the build number 6.2.9200.16384. Microsoft planned to hold a launch event on October 25, 2012 and release Windows 8 for general availability on the next day. However, only a day after its release to manufacturing, a copy of the final version of Windows 8 Enterprise N (a version for European markets which lacks bundled media players to comply with an antitrust ruling) leaked online, followed by leaks of the final versions of Windows 8 Pro and Enterprise a few days later. On August 15, 2012, Windows 8 was made available to download for MSDN and TechNet subscribers. Windows 8 was made available to Software Assurance customers on August 16, 2012. Windows 8 was made available for students with a DreamSpark Premium subscription on August 22, 2012, earlier than advertised.
Relatively few changes were made from the Release Preview to the final version; these included updated versions of its pre-loaded apps, the renaming of Windows Explorer to File Explorer, the replacement of the Aero Glass theme from Windows Vista and 7 with a new flat and solid-colored theme, and the addition of new background options for the Start screen, lock screen, and desktop. Prior to its general availability on October 26, 2012, updates were released for some of Windows 8's bundled apps, and a "General Availability Cumulative Update" (which included fixes to improve performance, compatibility, and battery life) was released on Tuesday, October 9, 2012. Microsoft indicated that due to improvements to its testing infrastructure, general improvements of this nature are to be released more frequently through Windows Update instead of being relegated to OEMs and service packs only.
Microsoft began an advertising campaign centered around Windows 8 and its Surface tablet in October 2012, starting with its first television advertisement premiering on October 14, 2012. Microsoft's advertising budget of US$1.5–1.8 billion was significantly larger than the US$200 million campaign used to promote Windows 95. As part of its campaign, Microsoft set up 34 pop-up stores inside malls to showcase the Surface product line, provided training for retail employees in partnership with Intel, and collaborated with the electronics store chain Best Buy to design expanded spaces to showcase devices. In an effort to make retail displays of Windows 8 devices more "personal", Microsoft also developed a character known in English-speaking markets as "Allison Brown", whose fictional profile (including personal photos, contacts, and emails) is also featured on demonstration units of Windows 8 devices.
Image: Windows 8 Pro DVD case, containing a 32-bit and a 64-bit installation disc.
In May 2013, Microsoft launched a new television campaign for Windows 8 illustrating the capabilities and pricing of Windows 8 tablets in comparison to the iPad, which featured the voice of Siri remarking on the iPad's limitations in a parody of Apple's "Get a Mac" advertisements. On June 12, 2013 during game 1 of the 2013 Stanley Cup Finals, Microsoft premiered the first ad in its "Windows Everywhere" campaign, which promoted Windows 8, Windows Phone 8, and the company's suite of online services as an interconnected platform.
New and changed features
New features and functionality in Windows 8 include a faster startup through UEFI integration and the new "Hybrid Boot" mode (which hibernates the Windows kernel on shutdown to speed up the subsequent boot), a new lock screen with a clock and notifications, and the ability for enterprise users to create live USB versions of Windows (known as Windows To Go). Windows 8 also adds native support for USB 3.0 devices, which allow for faster data transfers and improved power management with compatible devices, and hard disk 4KB Advanced Format support, as well as support for near field communication to facilitate sharing and communication between devices.
Windows Explorer, which has been renamed File Explorer, now includes a ribbon in place of the command bar. File operation dialog boxes have been updated to provide more detailed statistics, the ability to pause file transfers, and improvements in the ability to manage conflicts when copying files. A new "File History" function allows incremental revisions of files to be backed up to and restored from a secondary storage device, while Storage Spaces allows users to combine different sized hard disks into virtual drives and specify mirroring, parity, or no redundancy on a folder-by-folder basis. For easier management of files and folders, Windows 8 introduces the ability to move selected files or folders via drag and drop from a parent folder into a subfolder listed within the breadcrumb hierarchy of the address bar in File Explorer.
Task Manager has been redesigned, including a new processes tab with the option to display fewer or more details of running applications and background processes, a heat map using different colors indicating the level of resource usage, network and disk counters, grouping by process type (e.g. applications, background processes and Windows processes), friendly names for processes and a new option which allows users to search the web to find information about obscure processes. Additionally, the Blue Screen of Death has been updated with a simpler and modern design with less technical information displayed.
Safety and security
New security features in Windows 8 include two new authentication methods tailored towards touchscreens (PINs and picture passwords), the addition of antivirus capabilities to Windows Defender (bringing it to parity with Microsoft Security Essentials), and SmartScreen filtering integrated into the operating system. Family Safety offers parental controls, which allow parents to monitor and manage their children's activities on a device with activity reports and safety controls. Windows 8 also provides integrated system recovery through the new "Refresh" and "Reset" functions, including system recovery from a USB drive. Windows 8's first security patches were released on November 13, 2012; they contained three fixes deemed "critical" by the company.
Windows 8 supports a feature of the UEFI specification known as "Secure boot", which uses a public-key infrastructure to verify the integrity of the operating system and prevent unauthorized programs such as bootkits from infecting the device's boot process. Some pre-built devices may be described as "certified" by Microsoft; these must have secure boot enabled by default, and provide ways for users to disable or re-configure the feature. ARM-based Windows RT devices must have secure boot permanently enabled.
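The signature check at the heart of this mechanism can be sketched conceptually as follows. This is not UEFI firmware code and does not reflect Microsoft's implementation; it is a plain illustration of verifying a boot component against a database of trusted public keys, written with Node.js's built-in crypto module and hypothetical file names.

```typescript
// Conceptual illustration only: the public-key check behind Secure Boot, i.e.
// "run a boot component only if its signature verifies against a trusted key".
// Not UEFI firmware code; file names and key handling are hypothetical.
import { createVerify } from "crypto";
import { readFileSync } from "fs";

function isTrustedComponent(imagePath: string, signaturePath: string,
                            trustedKeysPem: string[]): boolean {
    const image = readFileSync(imagePath);         // e.g. a boot loader binary
    const signature = readFileSync(signaturePath); // detached signature shipped with it
    // Accept the image if any key in the trusted-key database verifies the signature.
    return trustedKeysPem.some((publicKeyPem) => {
        const verifier = createVerify("RSA-SHA256");
        verifier.update(image);
        return verifier.verify(publicKeyPem, signature);
    });
}

// A firmware implementing this idea would refuse to chain-load an unverified image:
// if (!isTrustedComponent("bootmgfw.efi", "bootmgfw.sig", trustedKeyDatabase)) haltBoot();
```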
Online services and functionality
Windows 8 provides heavier integration with online services from Microsoft and others. A user can now log in to Windows with a Microsoft account, which can be used to access services and synchronize applications and settings between devices. Windows 8 also ships with a client app for Microsoft's SkyDrive cloud storage service, which also allows apps to save files directly to SkyDrive. A SkyDrive client for the desktop and File Explorer is not included in Windows 8, and must be downloaded separately. Bundled multimedia apps are provided under the Xbox brand, including Xbox Music, Xbox Video, and the Xbox SmartGlass companion for use with an Xbox 360 console. Games can integrate into an Xbox Live hub app, which also allows users to view their profile and gamerscore. Other bundled apps provide the ability to link Flickr and Facebook. Due to Facebook Connect service changes, Facebook support is disabled in all bundled apps effective June 8, 2015.
Internet Explorer 10 is included as both a desktop program and a touch-optimized app, and includes increased support for HTML5, CSS3, and hardware acceleration. The Internet Explorer app does not support plugins or ActiveX components, but includes a version of Adobe Flash Player that is optimized for touch and low power usage. Initially, Adobe Flash would only work on sites included on a "Compatibility View" whitelist; however, after feedback from users and additional compatibility tests, an update in March 2013 changed this behavior to use a smaller blacklist of sites with known compatibility issues instead, allowing Flash to be used on most sites by default. The desktop version does not contain these limitations.
Windows 8 also incorporates improved support for mobile broadband; the operating system can now detect the insertion of a SIM card and automatically configure connection settings (including APNs and carrier branding), and reduce its Internet usage in order to conserve bandwidth on metered networks. Windows 8 also adds an integrated airplane mode setting to globally disable all wireless connectivity as well. Carriers can also offer account management systems through Windows Store apps, which can be automatically installed as a part of the connection process and offer usage statistics on their respective tile.
Windows Store apps
Image: Snap feature: Xbox Music, alongside Photos snapped into a sidebar to the right side of the screen.
Image: Snap feature: Desktop, along with the Wikipedia app snapped into a sidebar to the right side of the screen. In Windows 8, the desktop and everything on it are treated as one Metro-style app.
Windows 8 introduces a new style of application, Windows Store apps. According to Microsoft developer Jensen Harris, these apps are to be optimized for touchscreen environments and are more specialized than current desktop applications. Apps can run either in a full-screen mode or be snapped to the side of a screen. Apps can provide toast notifications on screen or animate their tiles on the Start screen with dynamic content. Apps can use "contracts": a collection of hooks to provide common functionality that can integrate with other apps, including search and sharing. Apps can also provide integration with other services; for example, the People app can connect to a variety of different social networks and services (such as Facebook, Skype, and People service), while the Photos app can aggregate photos from services such as Facebook and Flickr.
Windows Store apps run within a new set of APIs known as Windows Runtime, which supports programming languages such as C, C++, Visual Basic .NET, and C#, along with HTML5 and JavaScript. Apps written in high-level languages can be compatible with both the Intel and ARM versions of Windows; otherwise, they are not binary-code compatible across the two architectures. Components may be compiled as Windows Runtime Components, permitting consumption by all compatible languages. To ensure stability and security, apps run within a sandboxed environment, and require permissions to access certain functionality, such as accessing the Internet or a camera.
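As a rough illustration of the kind of Windows Runtime call available to these apps, the sketch below raises a simple text toast notification from a JavaScript/TypeScript Windows Store app. It assumes the code runs inside a packaged app whose manifest declares it toast-capable; the template choice and wording are illustrative.

```typescript
// Minimal sketch: raising a toast notification from a JavaScript/TypeScript
// Windows Store app through the Windows Runtime. Assumes a packaged app whose
// manifest declares it toast-capable; wording and template are illustrative.
declare const Windows: any; // WinRT projection provided inside packaged apps

const notifications = Windows.UI.Notifications;

// Start from the predefined single-line text template...
const toastXml = notifications.ToastNotificationManager.getTemplateContent(
    notifications.ToastTemplateType.toastText01);

// ...fill in its text node...
const textNode = toastXml.getElementsByTagName("text")[0];
textNode.appendChild(toastXml.createTextNode("Hello from a Windows Store app"));

// ...and hand the notification to the system for display.
const toast = new notifications.ToastNotification(toastXml);
notifications.ToastNotificationManager.createToastNotifier().show(toast);
```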
Retail versions of Windows 8 are only able to install these apps through Windows Store, a namesake distribution platform that offers both apps and listings for desktop programs certified for compatibility with Windows 8. A method to sideload apps from outside Windows Store is available to devices running Windows 8 Enterprise and joined to a domain; Windows 8 Pro and Windows RT devices that are not part of a domain can also sideload apps, but only after special product keys are obtained through volume licensing.
The term "Immersive app" had been used internally by Microsoft developers to refer to the apps prior to the first official presentation of Windows 8, after which they were referred to as "Metro-style apps" in reference to the Metro design language. The term was phased out in August 2012; a Microsoft spokesperson denied rumors that the change was related to a potential trademark issue, and stated that "Metro" was only a codename that would be replaced prior to Windows 8's release. Following these reports, the terms "Modern UI-style apps", "Windows 8-style apps" and "Windows Store apps" began to be used by various Microsoft documents and material to refer to the new apps. In an interview on September 12, 2012, Soma Somasegar (vice president of Microsoft's development software division) confirmed that "Windows Store apps" would be the official term for the apps. An MSDN page explaining the Metro design language uses the term "Modern design" to refer to the language as a whole.
Web browsers
Exceptions to the restrictions faced by Windows Store apps are given to web browsers. The browser set as the user's default can distribute a Metro-style web browser in the same package as the desktop version, which has access to functionality unavailable to other apps, such as being able to permanently run in the background, use multiple background processes, and use Windows API code instead of WinRT (allowing for code to be re-used with the desktop version, while still taking advantage of features available to Windows Store apps, such as charms). Microsoft advertises this exception privilege as "New experience enabled" (formerly "Metro-style enabled").
The developers of both Chrome and Firefox committed to developing Metro-style versions of their browsers; while Chrome's "Windows 8 mode" (discontinued on Chrome version 49) uses a full-screen version of the existing desktop interface, Firefox's version (which was first made available on the "Aurora" release channel in September 2013) uses a touch-optimized interface inspired by the Android version of Firefox. In October 2013, Chrome's app was changed to mimic the desktop environment used by Chrome OS. Development of the Firefox app for Windows 8 has since been cancelled, citing a lack of user adoption for the beta versions.
Interface and desktop
Windows 8 introduces significant changes to the operating system's user interface, many of which are aimed at improving its experience on tablet computers and other touchscreen devices. The new user interface is based on Microsoft's Metro design language, and uses a Start screen similar to that of Windows Phone 7 as the primary means of launching applications. The Start screen displays a customizable array of tiles linking to various apps and desktop programs, some of which can display constantly updated information and content through "live tiles". As a form of multi-tasking, apps can be snapped to the side of a screen. Alongside the traditional Control Panel, a new simplified and touch-optimized settings app known as "PC Settings" is used for basic configuration and user settings. It does not include many of the advanced options still accessible from the normal Control Panel.
A vertical toolbar known as the charms (accessed by swiping from the right edge of a touchscreen, or pointing the cursor at hotspots in the right corners of a screen) provides access to system and app-related functions, such as search, sharing, device management, settings, and a Start button. The traditional desktop environment for running desktop applications is accessed via a tile on the Start screen. The Start button on the taskbar from previous versions of Windows has been converted into a hotspot in the lower-left corner of the screen, which displays a large tooltip displaying a thumbnail of the Start screen. Swiping from the left edge of a touchscreen or clicking in the top-left corner of the screen allows one to switch between apps and Desktop. Pointing the cursor in the top-left corner of the screen and moving down reveals a thumbnail list of active apps. Aside from the removal of the Start button and the replacement of the Aero Glass theme with a flatter and solid-colored design, the desktop interface on Windows 8 is similar to that of Windows 7.
Removed features
Several notable features have been removed in Windows 8; support for playing DVD-Video was removed from Windows Media Player due to the cost of licensing the necessary decoders (especially for devices which do not include optical disc drives at all) and the prevalence of online streaming services. For the same reasons, Windows Media Center is not included by default on Windows 8, but Windows Media Center and DVD playback support can be purchased in the "Pro Pack" (which upgrades the system to Windows 8 Pro) or "Media Center Pack" add-on for Windows 8 Pro. As with prior versions, third-party DVD player software can still be used to enable DVD playback.
Backup and Restore, the backup component of Windows, is deprecated. It still ships with Windows 8 and continues to work on preset schedules, but is pushed to the background and can only be accessed through a Control Panel applet called "Windows 7 File Recovery". Shadow Copy, a component of Windows Explorer that once saved previous versions of changed files, no longer protects local files and folders. It can only access previous versions of shared files stored on a Windows Server computer. The subsystem on which these components worked, however, is still available for other software to use.
Hardware requirements
PCs
The minimum system requirements for Windows 8 are slightly higher than those of Windows 7. The CPU must support the Physical Address Extension (PAE), the NX bit, and SSE2. Windows Store apps require a screen resolution of 1024×768 or higher to run; a resolution of 1366×768 or higher is required to use the snap functionality. To receive certification, candidate x86 systems must be able to resume from standby in 2 seconds or less.
Minimum hardware requirements for Windows 8 (recommended specifications in parentheses):
Processor: 1 GHz clock rate, IA-32 or x64 architecture, with support for PAE, NX and SSE2 (recommended: x64 architecture with Second Level Address Translation (SLAT) support for Hyper-V)
Memory (RAM): 1 GB for the IA-32 edition, 2 GB for the x64 edition (recommended: 4 GB)
Graphics card: DirectX 9 graphics device with a WDDM 1.0 or higher driver (recommended: DirectX 10 graphics device)
Display screen: 1024×768 pixels
Input device: keyboard and mouse (recommended: multi-touch display screen)
Hard disk space: 16 GB for the IA-32 edition, 20 GB for the x64 edition
Other: UEFI v2.3.1 Errata B with the Microsoft Windows Certification Authority in its database, Trusted Platform Module (TPM), Internet connectivity
Microsoft's Connected Standby specification, which hardware vendors may optionally comply with, sets new power consumption requirements that go beyond the minimum specifications above. Included in this standard are a number of security-specific requirements designed to improve physical security, notably against cold boot attacks.
32-bit SKUs of Windows 8 only support a maximum of 4 GB of RAM. 64-bit SKUs, however, support more: Windows 8 x64 supports 128 GB, while Windows 8 Pro and Enterprise x64 support 512 GB.
Microsoft will no longer support Windows 8.1 on computers using CPUs that utilize Intel's Skylake microarchitecture effective July 17, 2018. All future CPU microarchitectures, as well as Skylake systems after this date, will only be supported on Windows 10. After the deadline, only critical security updates will be released for users on these platforms. This will not affect the support status of older CPUs on Windows 8.1.
Tablets and convertibles
Microsoft released minimum hardware requirements for tablet and laplet devices to be "certified" for Windows 8, and defined a convertible form factor as a standalone device that combines the PC, display and rechargeable power source with a mechanically attached keyboard and pointing device in a single chassis. A convertible can be transformed into a tablet where the attached input devices are hidden or removed leaving the display as the only input mechanism. On March 12, 2013, Microsoft amended its certification requirements to only require that screens on tablets have a minimum resolution of 1024×768 (down from the previous 1366×768). The amended requirement is intended to allow "greater design flexibility" for future products.
Hardware certification requirements for Windows tablets

Component | Requirement
Graphics card | DirectX 10 graphics device with WDDM 1.2 or higher driver
Storage | 10 GB free space, after the out-of-box experience completes
Standard buttons |
Screen | Touch screen supporting a minimum of 5-point digitizers and a resolution of at least 1024×768. The physical dimensions of the display panel must match the aspect ratio of the native resolution. The native resolution of the panel can be greater than 1024 (horizontally) and 768 (vertically). Minimum native color depth is 32 bits. If the display is under 1366×768, disclaimers must be included in documentation to notify users that the Snap function is not available.
Camera | Minimum 720p
Accelerometer | 3 axes with data rates at or above 50 Hz
USB 2.0 | At least one controller and exposed port
Connect | Wi-Fi and Bluetooth 4.0 + LE (low energy)
Other | Speaker, microphone, magnetometer and gyroscope
If a mobile broadband device is integrated into a tablet or convertible system, then an assisted GPS radio is required.
Devices supporting near field communication need to have visual marks to help users locate and use the proximity technology.
The new button combination for Ctrl + Alt + Del is Windows Key + Power.
Updated certification requirements were implemented to coincide with Windows 8.1. As of 2014, all certified devices with integrated displays must contain a 720p webcam and higher quality speakers and microphones, while all certified devices that support Wi-Fi must support Bluetooth as well. As of 2015, all certified devices must contain Trusted Platform Module 2.0 chips.
Editions
Windows 8 is available in three different editions; the lowest, branded simply as Windows 8, and Windows 8 Pro were sold at retail in most countries and as pre-loaded software on new computers. Each edition of Windows 8 includes all of the capabilities and features of the edition below it and adds further features oriented towards its market segment. For example, Pro added BitLocker, Hyper-V, the ability to join a domain, and the ability to install Windows Media Center as a paid add-on. Users of Windows 8 can purchase a "Pro Pack" license that upgrades their system to Windows 8 Pro through Add features to Windows. This license also includes Windows Media Center. Windows 8 Enterprise contains additional features aimed towards business environments, and is only available through volume licensing. A port of Windows 8 for the ARM architecture, Windows RT, is marketed as an edition of Windows 8, but was only included as pre-loaded software on devices specifically developed for it.
Windows 8 was distributed as a retail box product on DVD, and through a digital download that could be converted into DVD or USB install media. As part of a launch promotion, Microsoft offered Windows 8 Pro upgrades at a discounted price of US$39.99 online, or $69.99 for a retail box, from its launch until January 31, 2013; afterward, Windows 8 was priced at $119.99 and Windows 8 Pro at $199.99. Those who purchased new PCs pre-loaded with Windows 7 Home Basic, Home Premium, Professional, or Ultimate between June 2, 2012 and January 31, 2013 could digitally purchase a Windows 8 Pro upgrade for US$14.99. Several PC manufacturers offered rebates and refunds on Windows 8 upgrades obtained through the promotion on select models, such as Hewlett-Packard (in the U.S. and Canada on select models), and Acer (in Europe on selected Ultrabook models). During these promotions, the Windows Media Center add-on for Windows 8 Pro was also offered for free.
Unlike previous versions of Windows, Windows 8 was distributed at retail in "Upgrade" licenses only, which require an existing version of Windows to install. The "full version software" SKU, which was more expensive but could be installed on computers with a non-eligible operating system or no operating system at all, was discontinued. In lieu of the full version, a specialized "System Builder" SKU was introduced. The "System Builder" SKU replaced the original equipment manufacturer (OEM) SKU, which was only allowed to be used on PCs meant for resale, but added a "Personal Use License" exemption that officially allowed its purchase and personal use by users on homebuilt computers.
Retail distribution of Windows 8 has since been discontinued in favor of Windows 8.1. Unlike Windows 8, Windows 8.1 is available as "full version software" both at retail and as an online download, and does not require a previous version of Windows in order to be installed. Pricing for these new copies remains identical. With the retail release returning to full version software for Windows 8.1, the "Personal Use License" exemption was removed from the OEM SKU, meaning that end users building their own PCs for personal use must use the full retail version in order to satisfy the Windows 8.1 licensing requirements. Windows 8.1 with Bing is a special OEM-specific SKU of Windows 8.1 subsidized by Microsoft's Bing search engine.
Software compatibility
The three desktop editions of Windows 8 support both the 32-bit and 64-bit architectures; retail copies of Windows 8 include install DVDs for both architectures, while the online installer automatically installs the version corresponding to the architecture of the system's existing Windows installation. The 32-bit version runs on CPUs compatible with the 3rd generation of the x86 architecture (known as IA-32) or newer, and can run 32-bit and 16-bit applications, although 16-bit support must be enabled first. (16-bit applications are developed for CPUs compatible with the 2nd generation of the x86 architecture, first conceived in 1978. Microsoft started moving away from this architecture after Windows 95.)
The 64-bit version runs on CPUs compatible with the 8th generation of the x86 architecture (known as x86-64, or x64) or newer, and can run 32-bit and 64-bit programs. 32-bit programs and operating systems are restricted to addressing only 4 GB of memory, while a 64-bit address space can theoretically reach 2^64 bytes (16 exabytes). 64-bit operating systems require a different set of device drivers than those of 32-bit operating systems.
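The gap between the two figures is simple pointer arithmetic; the short sketch below only evaluates the two powers of two for illustration (the SKU-specific RAM caps quoted earlier are separate, licensing-defined limits).

```python
# Address space implied by pointer width: 2**32 bytes versus 2**64 bytes.

def addressable_bytes(pointer_bits):
    """Size of the address space implied by a given pointer width."""
    return 2 ** pointer_bits

GIB = 2 ** 30   # gibibyte
EIB = 2 ** 60   # exbibyte

print(addressable_bytes(32) / GIB, "GiB")   # 4.0 GiB for a 32-bit address space
print(addressable_bytes(64) / EIB, "EiB")   # 16.0 EiB theoretical ceiling for 64-bit
```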
Windows RT, the only edition of Windows 8 for systems with ARM processors, only supports applications included with the system (such as a special version of Office 2013), supplied through Windows Update, or Windows Store apps, to ensure that the system only runs applications that are optimized for the architecture. Windows RT does not support running IA-32 or x64 applications. Windows Store apps can either support both the x86 and ARM architectures, or be compiled to support a specific architecture.
Reception
Image: Windows 8 Ultrabooks in a Microsoft Store
Pre-release
Following the unveiling of Windows 8, Microsoft faced criticism (particularly from free software supporters) for mandating that devices receiving its optional certification for Windows 8 have secure boot enabled by default using a key provided by Microsoft. Concerns were raised that secure boot could prevent or hinder the use of alternate operating systems such as Linux. In a post discussing secure boot on the Building Windows 8 blog, Microsoft developer Tony Mangefeste indicated that vendors would provide means to customize secure boot, stating that "At the end of the day, the customer is in control of their PC. Microsoft's philosophy is to provide customers with the best experience first, and allow them to make decisions themselves." Microsoft's certification guidelines for Windows 8 ultimately revealed that vendors would be required to provide means for users to re-configure or disable secure boot in their device's UEFI firmware. It also revealed that ARM devices (Windows RT) would be required to have secure boot permanently enabled, with no way for users to disable it. However, Tom Warren of The Verge noted that other vendors have implemented similar hardware restrictions on their own ARM-based tablet and smartphone products (including those running Microsoft's own Windows Phone platform), but still argued that Microsoft should "keep a consistent approach across ARM and x86, though, not least because of the number of users who'd love to run Android alongside Windows 8 on their future tablets." No mandate is made regarding the installation of third-party certificates that would enable running alternative programs.
Several notable video game developers criticized Microsoft for making its Windows Store a closed platform subject to its own regulations, as it conflicted with their view of the PC as an open platform. Markus "Notch" Persson (creator of the indie game Minecraft), Gabe Newell (co-founder of Valve Corporation and developer of software distribution platform Steam), and Rob Pardo from Activision Blizzard voiced concern about the closed nature of the Windows Store. However, Tom Warren of The Verge stated that Microsoft's addition of the Store was simply responding to the success of both Apple and Google in pursuing the "curated application store approach."
Critical reception
Reviews of the various editions of Windows 8 have been mixed. Tom Warren of The Verge said that although Windows 8's emphasis on touch computing was significant and risked alienating desktop users, he felt that Windows 8 tablets "[make] an iPad feel immediately out of date" due to the capabilities of the operating system's hybrid model and increased focus on cloud services. David Pierce of The Verge described Windows 8 as "the first desktop operating system that understands what a computer is supposed to do in 2012" and praised Microsoft's "no compromise" approach and the operating system's emphasis on Internet connectivity and cloud services. Pierce also considered the Start Screen to be a "brilliant innovation for desktop computers" when compared with "folder-littered desktops on every other OS" because it allows users to interact with dynamic information. In contrast, an ExtremeTech article said it was Microsoft "flailing" and a review in PC Magazine condemned the Metro-style user interface. Some of the included apps in Windows 8 were considered to be basic and lacking in functionality, but the Xbox apps were praised for their promotion of a multi-platform entertainment experience. Other improvements and features (such as File History, Storage Spaces, and the updated Task Manager) were also regarded as positive changes. Peter Bright of Ars Technica wrote that while its user interface changes may overshadow them, Windows 8's improved performance, updated file manager, new storage functionality, expanded security features, and updated Task Manager were still positive improvements for the operating system. Bright also said that Windows 8's duality towards tablets and traditional PCs was an "extremely ambitious" aspect of the platform as well, but criticized Microsoft for emulating Apple's model of a closed distribution platform when implementing the Windows Store.
The interface of Windows 8 has been the subject of mixed reaction. Bright wrote that its system of hot corners and edge swiping "wasn't very obvious" due to the lack of instructions provided by the operating system on the functions accessed through the user interface, even by the video tutorial added on the RTM release (which only instructed users to point at corners of the screen or swipe from its sides). Despite this "stumbling block", Bright said that Windows 8's interface worked well in some places, but began to feel incoherent when switching between the "Metro" and desktop environments, sometimes through inconsistent means. Tom Warren of The Verge wrote that the new interface was "as stunning as it is surprising", contributing to an "incredibly personal" experience once it is customized by the user, but had a steep learning curve, and was awkward to use with a keyboard and mouse. He noted that while forcing all users to use the new touch-oriented interface was a risky move for Microsoft as a whole, it was necessary in order to push development of apps for the Windows Store. Others, such as Adrian Kingsley-Hughes from ZDNet, considered the interface to be "clumsy and impractical" due to its inconsistent design (going as far as considering it "two operating systems unceremoniously bolted together"), and concluded that "Windows 8 wasn't born out of a need or demand; it was born out of a desire on Microsoft's part to exert its will on the PC industry and decide to shape it in a direction—touch and tablets – that allows it to compete against, and remain relevant in the face of Apple's iPad."
In 2013, Frank X. Shaw, a Microsoft corporate vice president, said that while many of the negative reviews were extreme, it was a "good thing" that Microsoft was "listening to feedback and improving a product".
The American Customer Satisfaction Index (ACSI) reported a decline in Microsoft's customer satisfaction, the lowest it has been since Windows Vista.
Market share and sales
Microsoft said that 4 million users upgraded to Windows 8 over the weekend after its release, which CNET said was well below Microsoft's internal projections and was described inside the company as disappointing.
On November 27, 2012, Microsoft announced that it had sold 40 million licenses of Windows 8 in the first month, surpassing the pace of Windows 7.
However, according to research firm NPD, sales of devices running Windows in the United States had declined 21 percent compared to the same period in 2011. As the holiday shopping season wrapped up, Windows 8 sales continued to lag, even as Apple reported brisk sales. The market research firm IDC reported an overall drop in PC sales for the quarter, and said the drop may have been partly due to consumer reluctance to embrace the new features of the OS and poor support from OEMs for these features. This capped the first year of declining PC sales in the Asia Pacific region, as consumers bought more mobile devices than Windows PCs.
Windows 8 surpassed Windows Vista in market share with a 5.1% usage rate according to numbers posted in July 2013 by Net Applications, with usage on a steady upward trajectory. However, uptake of Windows 8 still lagged behind that of Windows Vista and Windows 7 at the same point in their release cycles. Windows 8's tablet market share had also been growing steadily, with 7.4% of tablets running Windows in Q1 2013 according to Strategy Analytics, up from nothing just a year before. However, this was still well below Android and iOS, which posted 43.4% and 48.2% market share respectively, although both operating systems had been on the market much longer than Windows 8. Strategy Analytics also noted "a shortage of top tier apps" for Windows tablets despite Microsoft's strategy of paying developers to create apps for the operating system (in addition to Windows Phone).
In March 2013, Microsoft also amended its certification requirements to allow tablets to use the 1024×768 resolution as a minimum; this change is expected to allow the production of certified Windows 8 tablets in smaller form factors—a market which is currently dominated by Android-based tablets. Despite the reaction of industry experts, Microsoft reported that it had sold 100 million licenses in the first six months. This matched sales of Windows 7 over a similar period.Windows 8 hits 100 million sales, tweaks for mini-tablets in works, Reuters, May 7, 2013 This statistic includes shipments to channel warehouses which still needed to be sold in order to make way for new shipments.
In January 2014, Hewlett-Packard began a promotion for desktops running Windows 7, saying that it was "back by popular demand". Outside sources have suggested that this might be because HP or its customers thought the Windows 8 platform would be more appropriate for mobile computing than desktop computing, or that they were looking to attract customers forced to switch from XP who wanted a more familiar interface.
In February 2014, Bloomberg reported that Microsoft would be lowering the price of Windows 8 licenses by 70% for devices that retail under US$250; alongside the announcement that an update to the operating system would allow OEMs to produce devices with as little as 1 GB of RAM and 16 GB of storage, critics felt that these changes would help Windows compete against Linux-based devices in the low-end market, particularly those running Chrome OS. Microsoft had similarly cut the price of Windows XP licenses to compete against the early waves of Linux-based netbooks. Reports also indicated that Microsoft was planning to offer cheaper Windows 8 licenses to OEMs in exchange for setting Internet Explorer's default search engine to Bing. Some media outlets falsely reported that the SKU associated with this plan, "Windows 8.1 with Bing", was a variant which would be a free or low-cost version of Windows 8 for consumers using older versions of Windows. On April 2, 2014, Microsoft ultimately announced that it would be removing license fees entirely for devices with screens smaller than 9 inches, and officially confirmed the rumored "Windows 8.1 with Bing" OEM SKU on May 23, 2014.
According to data gathered by Net Applications, the adoption rate in March 2015 for Windows 8.1 was 10.55%, while the original Windows 8 was at 3.52%.
Chinese government ban
In May 2014, the Government of China banned the internal purchase of Windows 8-based products under government contracts requiring "energy-efficient" devices. The Xinhua News Agency claimed that Windows 8 was being banned in protest of Microsoft's support lifecycle policy and the end of support for Windows XP (which, as of January 2014, had a market share of 49% in China), as the government "obviously cannot ignore the risks of running an OS without guaranteed technical support." However, Ni Guangnan of the Chinese Academy of Sciences had also previously warned that Windows 8 could allegedly expose users to surveillance by the United States government due to its heavy use of Internet-based services.
In June 2014, state broadcaster China Central Television (CCTV) broadcast a news story further characterizing Windows 8 as a threat to national security. The story featured an interview with Ni Guangnan, who stated that operating systems could aggregate "sensitive user information" that could be used to "understand the conditions and activities of our national economy and society", and alleged that per documents leaked by Edward Snowden, the U.S. government had worked with Microsoft to retrieve encrypted information. Yang Min, a computer scientist at Fudan University, also stated that "the security features of Windows 8 are basically to the benefit of Microsoft, allowing them control of the users' data, and that poses a big challenge to the national strategy for information security." Microsoft denied the claims in a number of posts on the Chinese social network Sina Weibo, which stated that the company had never "assisted any government in an attack of another government or clients" or provided client data to the U.S. government, never "provided any government the authority to directly visit" or placed any backdoors in its products and services, and that it had never concealed government requests for client data.
Upgraded versions
An upgrade to Windows 8 known as Windows 8.1 was officially announced by Microsoft on May 14, 2013. Following a presentation devoted to the upgrade at Build 2013, a public beta version of the upgrade was released on June 26, 2013. Windows 8.1 was released to OEM hardware partners on August 27, 2013, and released publicly as a free download through Windows Store on October 17, 2013. Volume license customers and subscribers to MSDN Plus and TechNet Plus were initially unable to obtain the RTM version upon its release; a spokesperson said the policy was changed to allow Microsoft to work with OEMs "to ensure a quality experience at general availability." However, after criticism, Microsoft reversed its decision and released the RTM build on MSDN and TechNet on September 9, 2013.
The upgrade addressed a number of criticisms faced by Windows 8 upon its release, with additional customization options for the Start screen, the restoration of a visible Start button on the desktop, the ability to snap up to four apps on a single display, and the ability to boot to the desktop instead of the Start screen. Windows 8's stock apps were also updated, a new Bing-based unified search system was added, SkyDrive was given deeper integration with the operating system, and a number of new stock apps, along with a tutorial, were added. Windows 8.1 also added support for 3D printing, Miracast media streaming, NFC printing, and Wi-Fi Direct.
Microsoft markets Windows 8.1 as an "update" rather than as a "service pack" or "upgrade". However, Microsoft's support lifecycle policy treats Windows 8.1 similarly to previous Windows service packs: it is part of Windows 8's support lifecycle, and upgrading to 8.1 is required to maintain access to mainstream support and Windows updates after January 12, 2016. This also means that support for several versions of Internet Explorer Web browser (IE10 or below) will be discontinued.
Retail and OEM copies of Windows 8, Windows 8 Pro, and Windows RT can be upgraded through Windows Store free of charge. However, volume license customers, TechNet or MSDN subscribers and users of Windows 8 Enterprise must acquire standalone installation media for 8.1 and install it through the traditional Windows setup process, either as an in-place upgrade or a clean install. This requires an 8.1-specific product key.
See also
List of operating systems
References
Nintendo Entertainment System
The Nintendo Entertainment System (commonly abbreviated as NES) is an 8-bit home video game console that was developed and manufactured by Nintendo.
The best-selling gaming console of its time, the NES helped revitalize the US video game industry following the video game crash of 1983. With the NES, Nintendo introduced a now-standard business model of licensing third-party developers, authorizing them to produce and distribute titles for Nintendo's platform.
It was initially released in Japan as the Family Computer (also known by the portmanteau abbreviation Famicom, and abbreviated as FC) on July 15, 1983, and was later released in North America during 1985, in Europe during 1986, and in Australia in 1987. In South Korea, it was known as the Hyundai Comboy (현대 컴보이 Hyeondae Keomboi) and was distributed by SK Hynix, which was then known as Hyundai Electronics. It was succeeded by the Super Nintendo Entertainment System.
In 2009, the Nintendo Entertainment System was named the single greatest video game console in history by IGN, in a list of 25. It was judged the second greatest console behind the Sega Dreamcast in PC Magazine's "Top 10 Video Game Consoles of All Time".The 10 Greatest Video Game Consoles of All Time – Slide 9 – Slideshow from PCMag.com (March 20, 2014). Retrieved on May 12, 2014.
History
Development
Following a series of arcade game successes in the early 1980s, Nintendo made plans to create a cartridge-based console called the Famicom, which is short for Family Computer. Masayuki Uemura designed the system. Original plans called for an advanced 16-bit system which would function as a full-fledged computer with a keyboard and floppy disk drive, but Nintendo president Hiroshi Yamauchi rejected this and instead decided to go for a cheaper, more conventional cartridge-based game console as he felt that features such as keyboards and disks were intimidating to non-technophiles. A test model was constructed in October 1982 to verify the functionality of the hardware, after which work began on programming tools. Because 65xx CPUs had not been manufactured or sold in Japan up to that time, no cross-development software was available and it had to be produced from scratch. Early Famicom games were written on a system that ran on an NEC PC-8001 computer and LEDs on a grid were used with a digitizer to design graphics as no software design tools for this purpose existed at that time.
The code name for the project was "GameCom", but Masayuki Uemura's wife proposed the name "Famicom", arguing that "In Japan, 'pasokon' is used to mean a personal computer, but it is neither a home nor a personal computer. Perhaps we could say it is a family computer." Meanwhile, Hiroshi Yamauchi decided that the console should use a red and white theme after seeing a billboard for DX Antenna which used those colors.
Original plans called for the Famicom's cartridges to be the size of a cassette tape, but ultimately they ended up being twice as big. Careful design attention was paid to the cartridge connectors since loose and faulty connections often plagued arcade machines. As it necessitated taking 60 connection lines for the memory and expansion, Nintendo decided to produce their own connectors in-house rather than use ones from an outside supplier.
The controllers were hard-wired to the console with no connectors for cost reasons. The game pad controllers were more-or-less copied directly from the Game & Watch machines, although the Famicom design team originally wanted to use arcade-style joysticks, even taking apart ones from American game consoles to see how they worked. There were concerns regarding the durability of the joystick design and that children might step on joysticks left on the floor. Katsuyah Nakawaka attached a Game & Watch D-pad to the Famicom prototype and found that it was easy to use and caused no discomfort. Ultimately though, they installed a 15-pin expansion port on the front of the console so that an optional arcade-style joystick could be used.
Uemura added an eject lever to the cartridge slot which was not really necessary, but he felt that children could be entertained by pressing it. He also added a microphone to the second controller with the idea that it could be used to make players' voices sound through the TV speaker.GlitterBerri's Game Translations » Synonymous With the Domestic Game Console. Glitterberri.com (April 21, 2012). Retrieved on August 23, 2013.
Release
The console was released on July 15, 1983 as the Family Computer (or Famicom for short) for ¥14,800 alongside three ports of Nintendo's successful arcade games Donkey Kong, Donkey Kong Jr. and Popeye. The Famicom was slow to gather momentum; a bad chip set caused the initial release of the system to crash. Following a product recall and a reissue with a new motherboard, the Famicom’s popularity soared, becoming the best-selling game console in Japan by the end of 1984.
Encouraged by this success, Nintendo turned its attention to the North American market, entering into negotiations with Atari to release the Famicom under Atari’s name as the Nintendo Advanced Video Gaming System. The deal was set to be finalized and signed at the Summer Consumer Electronics Show in June 1983. However, Atari discovered at that show that its competitor Coleco was illegally demonstrating its Coleco Adam computer with Nintendo's Donkey Kong game. This violation of Atari's exclusive license with Nintendo to publish the game for its own computer systems delayed the implementation of Nintendo's game console marketing contract with Atari. Atari's CEO Ray Kassar was fired the next month, so the deal went nowhere, and Nintendo decided to market its system on its own.
Image: The proposed Advanced Video System bundle, including cassette drive and wireless accessories.
Subsequent plans to market a Famicom console in North America featuring a keyboard, cassette data recorder, wireless joystick controller and a special BASIC cartridge under the name "Nintendo Advanced Video System" likewise never materialized. By the beginning of 1985, the Famicom had sold more than 2.5 million units in Japan and Nintendo soon announced plans to release it in North America as the Advanced Video Entertainment System (AVS) that same year. The American video game press was skeptical that the console could have any success in the region, with the March 1985 issue of Electronic Games magazine stating that "the videogame market in America has virtually disappeared" and that "this could be a miscalculation on Nintendo's part."
At June 1985's Consumer Electronics Show (CES), Nintendo unveiled the American version of its Famicom, with a new case redesigned by Lance Barr and featuring a "zero insertion force" cartridge slot. This is the system which would eventually be officially deployed as the Nintendo Entertainment System, or the colloquial "NES". Nintendo seeded these first systems to limited American test markets starting in New York City on October 18, 1985, and following up with a full-fledged North American release in February of the following year. The nationwide release was in September 1986. Nintendo released 17 launch titles: 10-Yard Fight, Baseball, Clu Clu Land, Duck Hunt, Excitebike, Golf, Gyromite, Hogan’s Alley, Ice Climber, Kung Fu, Pinball, Soccer, Stack-Up, Tennis, Wild Gunman, Wrecking Crew, and Super Mario Bros. Some varieties of these launch games contained Famicom chips with an adapter inside the cartridge so they would play on North American consoles, which is why the title screen of Gyromite has the Famicom title "Robot Gyro" and the title screen of Stack-Up has the Famicom title "Robot Block".
Image: R.O.B. (Robotic Operating Buddy), an accessory for the NES's 1985 launch. Although it ended up having a short product lifespan, R.O.B. was initially used to market the NES as novel and sophisticated compared to previous game consoles.
The system's launch represented not only a new product, but also a reframing of the severely damaged home video game market. The video game market crash of 1983 had occurred in large part due to a lack of consumer and retailer confidence in video games, which had been partially due to confusion and misrepresentation in video game marketing. Prior to the NES, the packaging of many video games presented bombastic artwork which exaggerated the graphics of the actual game. In terms of product identity, a single game such as Pac-Man would appear in many versions on many different game consoles and computers, with large variations in graphics, sound, and general quality between the versions. In stark contrast, Nintendo's marketing strategy aimed to regain consumer and retailer confidence by delivering a singular platform whose technology was not in need of exaggeration and whose qualities were clearly defined.
To differentiate Nintendo's new home platform from the perception of a troubled and shallow video game market, the company freshened its product nomenclature and established a strict product approval and licensing policy. The overall system was referred to as an "Entertainment System" instead of a "video game system", which was centered upon a machine called a "Control Deck" instead of a "console", and which featured software cartridges called "Game Paks" instead of "video games". To deter production of games which had not been licensed by Nintendo, and to prevent copying,
the 10NES lockout chip system acted as a lock-and-key coupling of each Game Pak and Control Deck. The packaging of the launch lineup of NES games bore pictures of close representations of actual onscreen graphics. To reduce consumer confusion, symbols on the games' packaging clearly indicated the genre of the game. A 'seal of quality' was printed on all licensed game and accessory packaging. The initial seal stated, "This seal is your assurance that Nintendo has approved and guaranteed the quality of this product". This text was later changed to "Official Nintendo Seal of Quality".
Unlike with the Famicom, Nintendo of America marketed the console primarily to children, instituting a strict policy of censoring profanity, sexual, religious, or political content. The most famous example was Lucasfilm's attempts to port the comedy-horror game Maniac Mansion to the NES, which Nintendo insisted be considerably watered down. Nintendo of America continued their censorship policy until 1994 with the advent of the Entertainment Software Rating Board system.
The optional Robotic Operating Buddy, or R.O.B., was part of a marketing plan to portray the NES's technology as being novel and sophisticated when compared to previous game consoles, and to portray its position as being within reach of the better established toy market. While at first, the American public exhibited limited excitement for the console itself, peripherals such as the light gun and R.O.B. attracted extensive attention.Boyer, Steven. "A Virtual Failure: Evaluating the Success of Nintendos Virtual Boy." Velvet Light Trap.64 (2009): 23–33. ProQuest Research Library. Web. May 24, 2012.
In Europe, Australia and Canada, the system was released to two separate marketing regions. The first consisted of mainland Europe (excluding Italy) where distribution was handled by a number of different companies, with Nintendo responsible for most cartridge releases. Most of this region saw a 1986 release. The following year Mattel handled distribution for the second region, consisting of the United Kingdom, Ireland, Canada, Italy, Australia and New Zealand. Not until the 1990s did Nintendo's newly created European branch direct distribution throughout Europe.
Image: The Nintendo Entertainment System's Control Deck
For its complete North American release, the Nintendo Entertainment System was progressively released over the ensuing years in four different bundles: the Deluxe Set, the Control Deck, the Action Set and the Power Set. The Deluxe Set included R.O.B., a light gun called the NES Zapper, two controllers, and two Game Paks: Gyromite and Duck Hunt. The Basic Set was sold either with no game or bundled with Super Mario Bros. The Action Set, released in 1988, came with the Control Deck, two game controllers, an NES Zapper, and a dual Game Pak containing both Super Mario Bros. and Duck Hunt. In 1989, the Power Set included the console, two game controllers, an NES Zapper, a Power Pad, and a triple Game Pak containing Super Mario Bros., Duck Hunt, and World Class Track Meet.
In 1990, a Sports Set bundle was released, including the console, an NES Satellite infrared wireless multitap adapter, four game controllers, and a dual Game Pak containing Super Spike V'Ball and Nintendo World Cup.
Two more bundle packages were later released using the original model NES console. The Challenge Set of 1992 included the console, two controllers, and a Super Mario Bros. 3 Game Pak. The Basic Set, first released in 1987, was also repackaged for retail. It included only the console and two controllers and was no longer bundled with a cartridge. Instead, it contained a book called the Official Nintendo Player's Guide, which contained detailed information for every NES game made up to that point.
Finally, the console was redesigned for both the North American and Japanese markets as part of the final Nintendo-released bundle package. The package included the new-style NES-101 console and one redesigned "dogbone" game controller. Released in October 1993 in North America, this final bundle remained in production until the discontinuation of the NES in 1995.
Reception
By 1988, industry observers stated that the NES's popularity had grown so quickly that the market for Nintendo cartridges was larger than that for all home computer software. Compute! reported in 1989 that Nintendo had sold seven million NES systems in 1988, almost as many as the number of Commodore 64s sold in its first five years. "Computer game makers [are] scared stiff", the magazine said, stating that Nintendo's popularity caused most competitors to have poor sales during the previous Christmas and resulted in serious financial problems for some.
In June 1989, Nintendo of America's vice president of marketing, Peter Main, said that the Famicom was present in 37% of Japan's households. By 1990, 30% of American households owned the NES, compared to 23% for all personal computers. By 1990, the NES had outsold all previously released consoles worldwide. The slogan for this brand was "It can't be beaten." In Europe and South America, the NES was outsold by Sega's Master System, while the Nintendo Entertainment System was not available in the Soviet Union.
As the 1990s dawned, gamers predicted that competition from technologically superior systems such as the 16-bit Sega Mega Drive/Genesis would mean the immediate end of the NES’s dominance. Instead, during the first year of Nintendo's successor console the Super Famicom (named Super Nintendo Entertainment System outside Japan), the Famicom remained the second highest-selling video game console in Japan, outselling the newer and more powerful NEC PC Engine and Sega Mega Drive by a wide margin. The console remained popular in Japan and North America until late 1993, when the demand for new NES software abruptly plummeted. The final Famicom game released in Japan is Takahashi Meijin no Bōken Jima IV (Adventure Island IV), while in North America, Wario's Woods is the final licensed game. The last game to be released in Europe was "The Lion King" in 1995. In the wake of ever decreasing sales and the lack of new software titles, Nintendo of America officially discontinued the NES by 1995. Nintendo kept producing new Famicom units in Japan until September 25, 2003, and continued to repair Famicom consoles until October 31, 2007, attributing the discontinuation of support to insufficient supplies of parts.
Legacy
The NES was released after the "video game crash" of the early 1980s, when many retailers and adults regarded electronic games as a passing fad, so many believed at first that the NES would soon fade. Before the NES/Famicom, Nintendo was known as a moderately successful Japanese toy and playing card manufacturer, but the popularity of the NES/Famicom helped the company grow into an internationally recognized name almost synonymous with video games and set the stage for Japanese dominance of the video game industry. With the NES, Nintendo also changed the relationship between console manufacturers and third-party software developers by restricting developers from publishing and distributing software without licensed approval. This led to higher quality software titles, which helped change the attitude of a public that had grown weary from poorly produced titles for earlier game systems.
The NES hardware was also very influential. Nintendo chose the name "Nintendo Entertainment System" for the US market and redesigned the system so it would not give the appearance of a child's toy. The front-loading cartridge input allowed it to be used more easily in a TV stand with other entertainment devices, such as a videocassette recorder.National Academy of Television Arts And Sciences.
The system's hardware limitations led to game design similarities that still influence video game design and culture and many prominent game franchises originated on the NES, including Nintendo's own Super Mario Bros., The Legend of Zelda and Metroid, Capcom's Mega Man franchise, Konami's Castlevania franchise, Square's Final FantasyKohler (2004), p. 95. and Enix's Dragon QuestKohler (2004), p. 222. franchises.
NES imagery, especially its controller, has become a popular motif for a variety of products, including Nintendo's own Game Boy Advance. Clothing, accessories, and food items adorned with NES-themed imagery are still produced and sold in stores.
On July 14, 2016, Nintendo announced the November 2016 launch of a miniature replica of the NES, titled Nintendo Entertainment System: NES Classic Edition in the United States and Nintendo Classic Mini: Nintendo Entertainment System in Europe and Australia. The console includes 30 permanently inbuilt games from the vintage NES library, including the Super Mario Bros. and The Legend of Zelda series. The system features HDMI display output and a new replica controller, which can also connect to the Wii Remote for use with Virtual Console games.
Discontinuation
On August 14, 1995, Nintendo discontinued the Nintendo Entertainment System in both North America and Europe.
The Famicom was originally discontinued in September 2003. Nintendo offered repair service for the Famicom in Japan until 2007.
Games
The Nintendo Entertainment System offered a number of groundbreaking titles. Super Mario Bros. pioneered side-scrollers while The Legend of Zelda helped popularize battery-backed save functionality.
Game Pak
Image: North American and PAL NES cartridges (or "Game Paks") are significantly larger than Japanese Famicom cartridges.
The NES uses a 72-pin design, as compared with 60 pins on the Famicom. To reduce costs and inventory, some early games released in North America were simply Famicom cartridges attached to an adapter to fit inside the NES hardware. Originally, NES cartridges were held together with five small slotted screws. Games released after 1987 were redesigned slightly to incorporate two plastic clips molded into the plastic itself, removing the need for the top two screws.
The back of the cartridge bears a label with handling instructions. Production and software revision codes were imprinted as stamps on the back label to correspond with the software version and producer. All licensed NTSC and PAL cartridges are a standard shade of gray plastic, with the exception of The Legend of Zelda and Zelda II: The Adventure of Link, which were manufactured in gold-plastic carts. Unlicensed carts were produced in black, robin egg blue, and gold, and are all slightly different shapes than standard NES cartridges. Nintendo also produced yellow-plastic carts for internal use at Nintendo Service Centers, although these "test carts" were never made available for purchase. All licensed US cartridges were made by Nintendo, Konami and Acclaim. For promotion of DuckTales: Remastered, Capcom sent 150 limited-edition gold NES cartridges with the original game, featuring the Remastered art as the sticker, to different gaming news agencies. The instruction label on the back included the opening lyric from the show's theme song, "Life is like a hurricane".
Japanese (Famicom) cartridges are shaped slightly differently. Unlike NES games, official Famicom cartridges were produced in many colors of plastic. Adapters, similar in design to the popular accessory Game Genie, are available that allow Famicom games to be played on an NES. In Japan, several companies manufactured the cartridges for the Famicom. This allowed these companies to develop their own customized chips designed for specific purposes, such as chips that increased the quality of sound in their games.
Third-party licensing
Image: The Famicom Family mark started appearing in games and peripherals released from 1988 and onward that were approved by Nintendo for compatibility with official Famicom consoles and derivatives.
Nintendo's near monopoly on the home video game market left it with a degree of influence over the industry. Unlike Atari, which never actively courted third-party developers (and even went to court in an attempt to force Activision to cease production of Atari 2600 games), Nintendo had anticipated and encouraged the involvement of third-party software developers, although strictly on Nintendo's terms.GameSpy.com – Article. Web.archive.org (March 20, 2008). Retrieved on August 23, 2013. Some of the Nintendo platform-control measures were adopted by later console manufacturers such as Sega, Sony, and Microsoft, although not as stringently.
To this end, a 10NES authentication chip was placed in every console and another was placed in every officially licensed cartridge. If the console's chip could not detect a counterpart chip inside the cartridge, the game would not load. Nintendo portrayed these measures as intended to protect the public against poor-quality games, and placed a golden seal of approval on all licensed games released for the system.
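Conceptually, the lock chip in the Control Deck and the key chip in the Game Pak behave like two peers computing the same deterministic sequence and comparing results; when the results diverge, or no key responds, the lock holds the console in reset. The following sketch is only a conceptual illustration of that lock-and-key idea; the stream generator, seed values and cycle count are invented for the example and do not reflect the actual, proprietary 10NES algorithm.

```python
# Conceptual illustration of a lock-and-key handshake in the style of the
# 10NES scheme: both chips run the same deterministic stream and compare
# outputs each cycle. The generator below is invented for illustration and
# is NOT the real 10NES algorithm.

def keystream(seed):
    """A simple deterministic stream both chips can compute (illustrative)."""
    state = seed
    while True:
        state = (state * 1103515245 + 12345) & 0xFFFF
        yield state & 0xF            # both sides compare small values each cycle

def console_boot(cartridge_stream, cycles=10, seed=0x1234):
    lock = keystream(seed)
    for _ in range(cycles):
        if cartridge_stream is None or next(lock) != next(cartridge_stream):
            return "reset"           # lock chip holds the console in reset
    return "boot"

licensed = keystream(0x1234)         # key chip seeded identically: values match
print(console_boot(licensed))        # -> "boot"
print(console_boot(None))            # no key chip present -> "reset"
print(console_boot(keystream(0x9999)))  # wrong stream -> "reset"
```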
Nintendo was not as restrictive as Sega, which did not permit third-party publishing until Mediagenic in late summer 1988. Nintendo's intention was to reserve a large part of NES game revenue for itself. Nintendo required that they be the sole manufacturer of all cartridges, and that the publisher had to pay in full before the cartridges for that game be produced. Cartridges could not be returned to Nintendo, so publishers assumed all the risk. As a result, some publishers lost more money due to distress sales of remaining inventory at the end of the NES era than they ever earned in profits from sales of the games. Because Nintendo controlled the production of all cartridges, it was able to enforce strict rules on its third-party developers, which were required to sign a contract by Nintendo that would obligate these parties to develop exclusively for the system, order at least 10,000 cartridges, and only make five games per year. A 1988 shortage of DRAM and ROM chips also reportedly caused Nintendo to only permit 25% of publishers' requests for cartridges. This was an average figure, with some publishers receiving much higher amounts and others almost none. GameSpy noted that Nintendo's "iron-clad terms" made the company many enemies during the 1980s. Some developers tried to circumvent the five game limit by creating additional company brands like Konami's Ultra Games label; others tried circumventing the 10NES chip.
Nintendo was accused of antitrust behavior because of the strict licensing requirements. The United States Department of Justice and several states began probing Nintendo's business practices, leading to the involvement of Congress and the Federal Trade Commission (FTC). The FTC conducted an extensive investigation which included interviewing hundreds of retailers. During the FTC probe, Nintendo changed the terms of its publisher licensing agreements to eliminate the two-year rule and other restrictive terms. Nintendo and the FTC settled the case in April 1991, with Nintendo required to send vouchers giving a $5 discount off a new game to every person who had purchased an NES title between June 1988 and December 1990. GameSpy remarked that Nintendo's punishment was particularly weak given the case's findings, although it has been speculated that the FTC did not want to damage the video game industry in the United States.
With the NES near the end of its life, many third-party publishers such as Electronic Arts supported upstart competing consoles with less strict licensing terms, such as the Sega Genesis and then the PlayStation, which eroded and then overtook Nintendo's dominance in the home console market, respectively. Consoles from Nintendo's rivals in the post-SNES era had always enjoyed much stronger third-party support than Nintendo, which relied more heavily on first-party games.
Unlicensed games
Companies that refused to pay the licensing fee or were rejected by Nintendo found ways to circumvent the console's authentication system. Most of these companies created circuits that used a voltage spike to temporarily disable the 10NES chip. A few unlicensed games released in Europe and Australia came in the form of a dongle to connect to a licensed game, in order to use the licensed game's 10NES chip for authentication. To combat unlicensed games, Nintendo of America threatened retailers who sold them with losing their supply of licensed titles and multiple revisions were made to the NES PCBs to prevent unlicensed games from working.
Atari Games took a different approach with their line of NES products, Tengen. The company attempted to reverse engineer the lockout chip to develop its own "Rabbit" chip. Tengen also obtained a description of the lockout chip from the United States Patent and Trademark Office by falsely claiming that it was required to defend against present infringement claims. Nintendo successfully sued Tengen for copyright infringement. Tengen's antitrust claims against Nintendo were never decided.
Color Dreams produced Christian video games under the subsidiary name Wisdom Tree. They were never sued by Nintendo as the company probably feared a public relations backlash.
Emulation
The NES can be emulated on many other systems, most notably the PC. The first emulator was the Japanese-only Pasofami. It was soon followed by iNES, which was available in English and was cross-platform, in 1996. It was described as being the first NES emulation software that could be used by a non-expert.Fayzullin, Marat "iNES". Retrieved on January 10, 2015. NESticle, a popular MS-DOS emulator, was released on April 3, 1997. There have since been many other emulators. The Virtual Console for the Wii, Nintendo 3DS and Wii U also offers emulation of many NES games.
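At the heart of every NES emulator is an interpreter for the console's 6502-derived CPU: fetch an opcode, decode it, update registers and memory, and repeat. The fragment below is a deliberately tiny sketch of that fetch-decode-execute loop covering just three real 6502 opcodes (LDA immediate, NOP and BRK); a complete emulator additionally models the full instruction set, the PPU, the APU, cartridge mappers and cycle-accurate timing.

```python
# Tiny fetch-decode-execute sketch in the spirit of an NES CPU core.
# Only three real 6502 opcodes are handled: 0xA9 LDA #imm, 0xEA NOP, 0x00 BRK.

def run(program):
    memory = bytearray(0x10000)        # 64 KB address space
    memory[:len(program)] = program
    a, pc = 0, 0                       # accumulator and program counter
    while True:
        opcode = memory[pc]; pc += 1   # fetch
        if opcode == 0xA9:             # LDA #imm: load immediate into A
            a = memory[pc]; pc += 1
        elif opcode == 0xEA:           # NOP: do nothing
            pass
        elif opcode == 0x00:           # BRK: stop (simplified; no interrupt vector)
            return a
        else:
            raise NotImplementedError(f"opcode {opcode:#04x} not modeled")

print(run(bytes([0xA9, 0x42, 0xEA, 0x00])))  # loads 0x42, prints 66
```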
Game rentals
As the Nintendo Entertainment System grew in popularity and entered millions of American homes, some small video rental shops began buying their own copies of NES games, and renting them out to customers for around the same price as a video cassette rental for a few days. Nintendo received no profit from the practice beyond the initial sale of the game, and unlike movie rentals, a newly released game could hit store shelves and be available for rent on the same day. Nintendo took steps to stop game rentals, but did not take any formal legal action until Blockbuster Video began to make game rentals a large-scale service. Nintendo claimed that allowing customers to rent games would significantly hurt sales and drive up the cost of games.The Morning Call – Article. Retrieved on August 26, 2013. Nintendo lost the lawsuit,1UP.com – Article. Retrieved on August 26, 2013. but did win on a claim of copyright infringement.SunSentinel – Article. Retrieved on August 26, 2013. Blockbuster was banned from including original, copyrighted instruction booklets with their rented games. In compliance with the ruling, Blockbuster produced their own short instructions—usually in the form of a small booklet, card, or label stuck on the back of the rental box—that explained the game's basic premise and controls. Video rental shops continued the practice of renting video games and still do today.
There were some risks with renting cartridge-based games. Most rental shops did not clean the connectors and they would become dirty over time. Renting and using a cartridge with dirty connectors posed a problem for consoles, especially the Nintendo Entertainment System which was particularly susceptible to operation problems and failures when its internal connectors became dirty (see the Design flaws section below).
Hardware
Configurations
Although the Japanese Famicom, North American and European NES versions included essentially the same hardware, there were certain key differences among the systems.
The original Japanese Famicom was predominantly white plastic, with dark red trim. It featured a top-loading cartridge slot, grooves on both sides of the deck in which the hardwired game controllers could be placed when not in use, and a 15-pin expansion port located on the unit's front panel for accessories.
The original NES, meanwhile, featured a front-loading cartridge slot covered by a small, hinged door that could be opened to insert or remove a cartridge and closed at other times. It featured a more subdued gray, black, and red color scheme. An expansion port was found on the bottom of the unit, and the cartridge connector pinout was changed.
In the UK, Italy and Australia, which share the PAL A region, two versions of the NES were released: the "Mattel Version" and the "NES Version". When the NES was first released in those countries, it was distributed by Mattel, and Nintendo decided to use a lockout chip specific to those countries, different from the chip used in other European countries. When Nintendo took over European distribution in 1990, it produced consoles that were then labelled "NES Version"; therefore, the only differences between the two are the text on the front flap and the texture on the top and bottom of the casing.
Image: The NES-101 control deck alongside its similarly redesigned NES-039 game controller.
In October 1993, Nintendo redesigned the NES to follow many of the same design cues as the newly introduced Super Nintendo Entertainment System and the Japanese Super Famicom. Like the SNES, the NES-101 model loaded cartridges through a covered slot on top of the unit replacing the complicated mechanism of the earlier design. For this reason the NES-101 is known informally as the "top-loader" among Nintendo fans.
Image: The HVC-101 control deck alongside its similarly redesigned HVC-102 game controller.
In December 1993, the Famicom received a similar redesign. It also loads cartridges through a covered slot on the top of the unit and uses non-hardwired controllers. Because the HVC-101 used composite video output instead of being RF only like the HVC-001, Nintendo marketed the newer model as the AV Famicom. Since the new controllers do not have microphones on them like the second controller of the original console, certain games, such as the Disk System version of The Legend of Zelda and Raid on Bungeling Bay, have tricks that cannot be replicated when played on an HVC-101 Famicom without a modified controller. The HVC-101 Famicom is compatible with most NES controllers due to having the same controller port. In October 1987, Nintendo had also released a 3D graphics-capable headset called the Famicom 3D System (HVC-031). This peripheral accessory was never released outside Japan.
Design flaws
Image: The VCR-like loading mechanism of the NES led to problems over time. The design wore connector pins out quickly and could easily become dirty, resulting in difficulties with the NES reading game carts.
When Nintendo released the NES in the US, the design styling was deliberately different from that of other game consoles. Nintendo wanted to distinguish its product from those of competitors and to avoid the generally poor reputation that game consoles had acquired following the video game crash of 1983. One result of this philosophy was to disguise the cartridge slot design as a front-loading zero insertion force (ZIF) cartridge socket, designed to resemble the front-loading mechanism of a VCR. The newly designed connector worked quite well when both the connector and the cartridges were clean and the pins on the connector were new. Unfortunately, the ZIF connector was not truly zero insertion force. When a user inserted the cartridge into the NES, the force of pressing the cartridge down and into place bent the contact pins slightly, as well as pressing the cartridge’s ROM board back into the cartridge itself. Frequent insertion and removal of cartridges caused the pins to wear out from repeated usage over the years and the ZIF design proved more prone to interference by dirt and dust than an industry-standard card edge connector. These design issues were not alleviated by Nintendo’s choice of materials; the console slot nickel connector springs would wear due to design and the game cartridge copper connectors were also prone to tarnishing. Many players would try to alleviate issues in the game caused by this corrosion by blowing into the cartridges, then reinserting them, which actually hurt the copper connectors by speeding up the tarnishing.
thumb|right|The 10NES authentication chip contributed to the system's reliability problems. The circuit was ultimately removed from the remodeled NES 2.
Lockout
The Famicom contained no lockout hardware and, as a result, unlicensed cartridges (both legitimate and bootleg) were extremely common throughout Japan and the Far East. The original NES (but not the top-loading NES-101) contained the 10NES lockout chip, which significantly increased the challenges faced by unlicensed developers. Tinkerers at home in later years discovered that disassembling the NES and cutting the fourth pin of the lockout chip would change the chip’s mode of operation from "lock" to "key", removing all effects and greatly improving the console’s ability to play legal games, as well as bootlegs and converted imports. NES consoles sold in different regions had different lockout chips, so games marketed in one region would not work on consoles from another region. Known regions are: USA/Canada (3193 lockout chip), most of Europe (3195), Asia (3196) and UK, Italy and Australia (3197). Since two types of lockout chip were used in Europe, European NES game boxes often had an "A" or "B" letter on the front, indicating whether the game is compatible with UK/Italian/Australian consoles (A), or the rest of Europe (B). Rest-of-Europe games typically had text on the box stating "This game is not compatible with the Mattel or NES versions of the Nintendo Entertainment System". Similarly, UK / Italy / Australia games stated "This game is only compatible with the Mattel or NES versions of the Nintendo Entertainment System".
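The regional pairings described above can be summarized as a simple lookup: a cartridge only authenticates on a console whose lockout chip belongs to the same region family. The sketch below is purely illustrative; the chip numbers come from the paragraph above, while the function and dictionary names are invented for the example, and nothing here models the actual 10NES lock/key handshake.

```python
# Illustrative region-compatibility sketch (chip numbers from the text above).
# The real 10NES lock/key handshake is not modelled; only the rule that a
# cartridge works when its chip family matches the console's.
LOCKOUT_CHIPS = {
    "USA/Canada": 3193,
    "Most of Europe": 3195,
    "Asia": 3196,
    "UK/Italy/Australia": 3197,
}

def cartridge_works(console_region: str, game_region: str) -> bool:
    """True when the game's lockout chip matches the console's."""
    return LOCKOUT_CHIPS[console_region] == LOCKOUT_CHIPS[game_region]

if __name__ == "__main__":
    print(cartridge_works("USA/Canada", "USA/Canada"))              # True
    print(cartridge_works("Most of Europe", "UK/Italy/Australia"))  # False: the "A"/"B" split
```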
Pirate cartridges for the NES were rare, but Famicom ones were common and widespread in Asia. Most were produced in Hong Kong or Taiwan, and they usually featured a variety of small (32k or less) games which were selected from a menu and bank switched. Some were also hacks of existing games (especially Super Mario Bros.), and a few were cartridge conversions of Famicom Disk System titles such as the Japanese SMB2.
Problems with the 10NES lockout chip frequently resulted in the console's most infamous problem: the blinking red power light, in which the system appears to turn itself on and off repeatedly because the 10NES would reset the console once per second. The lockout chip required constant communication with the chip in the game to work. Dirty, aging and bent connectors would often disrupt the communication, resulting in the blink effect. Alternatively, the console would turn on but show only a solid white, gray, or green screen. Users attempted to solve this problem by blowing air onto the cartridge connectors, inserting the cartridge just far enough to get the ZIF to lower, licking the edge connector, slapping the side of the system after inserting a cartridge, shifting the cartridge from side to side after insertion, pushing the ZIF up and down repeatedly, holding the ZIF down lower than it should have been, and cleaning the connectors with alcohol. These attempted solutions became notable in their own right and are often remembered alongside the NES, though many of the most common fixes risked damaging the cartridge, the system, or both. In 1989, Nintendo released an official NES Cleaning Kit to help users clean malfunctioning cartridges and consoles.
With the release of the top-loading NES-101 (NES 2) toward the end of the NES's lifespan, Nintendo resolved the problems by switching to a standard card edge connector and eliminating the lockout chip. All of the Famicom systems used standard card edge connectors, as did Nintendo’s subsequent game consoles, the Super Nintendo Entertainment System and the Nintendo 64.
In response to these hardware flaws, "Nintendo Authorized Repair Centers" sprang up across the U.S. According to Nintendo, the authorization program was designed to ensure that the machines were properly repaired. Nintendo would ship the necessary replacement parts only to shops that had enrolled in the authorization program. In practice, the authorization process consisted of nothing more than paying a fee to Nintendo for the privilege. More recently, many websites have emerged to offer Nintendo repair parts, guides, and services that replace those formerly offered by the authorized repair centers.
Famicom 3D System
Nintendo released a stereoscopic 3D headset peripheral called the Famicom 3D System. It was never released outside Japan; it was a commercial failure, partly because it caused headaches and nausea in players.
Famicom Modem
Nintendo released a modem peripheral called the Famicom Modem. It was not intended for children; instead, adults could use it for services such as betting on horse races, checking stock information, banking, and more.
Technical specifications
thumb|right|The motherboard of the NES. The two largest chips are the Ricoh-produced CPU and PPU.
For its central processing unit (CPU), the NES uses an 8-bit microprocessor produced by Ricoh based on a MOS Technology 6502 core.
The NES contains 2 kB of onboard work RAM. A game cartridge may contain expanded RAM to increase this amount. The size of NES games varies from 8 kB (Galaxian) to 1 MB (Metal Slader Glory), but 128 to 384 kB were the most common sizes.
The NES uses a custom-made Picture Processing Unit (PPU) developed by Ricoh. All variations of the PPU feature 2 kB of video RAM, 256 bytes of on-die "object attribute memory" (OAM) to store the positions, colors, and tile indices of up to 64 sprites on the screen, and 28 bytes of on-die palette RAM to allow selection of background and sprite colors. The console's 2 kB of onboard RAM may be used for tile maps and attributes on the NES board and 8 kB of tile pattern ROM or RAM may be included on a cartridge. The system has an available color palette of 48 colors and 6 grays. Up to 25 simultaneous colors may be used without writing new values mid-frame: a background color, four sets of three tile colors and four sets of three sprite colors. The NES palette is based on NTSC rather than RGB values. A total of 64 sprites may be displayed onscreen at a given time without reloading sprites mid-screen. The standard display resolution of the NES is 256 horizontal pixels by 240 vertical pixels.
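The figure of 25 simultaneous colors quoted above follows directly from the palette layout described: one shared background color plus four background sub-palettes and four sprite sub-palettes of three colors each. A short, purely illustrative sketch of that arithmetic, using the figures given in the paragraph (the four-bytes-per-sprite division is implied by 256 bytes of OAM for 64 sprites):

```python
# Worked arithmetic for the PPU limits quoted above.
backdrop = 1                    # one shared background color
tile_colors = 4 * 3             # four sets of three tile colors
sprite_colors = 4 * 3           # four sets of three sprite colors
print(backdrop + tile_colors + sprite_colors)   # 25 simultaneous colors

oam_bytes = 256                 # on-die object attribute memory
max_sprites = 64                # sprites displayable without mid-screen reloading
print(oam_bytes // max_sprites) # 4 bytes of OAM per sprite (implied by the figures)
```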
Video output connections varied from one model of the console to the next. The original HVC-001 model of the Family Computer featured only radio frequency (RF) modulator output. When the console was released in North America and Europe, support for composite video through RCA connectors was added in addition to the RF modulator. The HVC-101 model of the Famicom dropped the RF modulator entirely and adopted composite video output via a proprietary 12-pin "multi-out" connector first introduced for the Super Famicom/Super Nintendo Entertainment System. Conversely, the North American re-released NES-101 model most closely resembled the original HVC-001 model Famicom, in that it featured RF modulator output only. Finally, the PlayChoice-10 utilized an inverted RGB video output.
The stock NES supports a total of five sound channels: two pulse channels with four pulse width settings, a triangle wave generator, a noise generator (often used for percussion), and a fifth channel that plays low-quality digital samples.
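To illustrate what a pulse channel with a selectable pulse width means in practice, the following sketch generates one period of a pulse wave at each setting. The specific duty-cycle values used (12.5%, 25%, 50% and 75%) are the ones commonly documented for the NES sound hardware and should be read as an assumption here; the text above only states that there are four settings.

```python
# Minimal sketch: one period of a pulse wave at a selectable duty cycle.
# The four duty values are commonly documented ones, assumed for illustration.
DUTY_CYCLES = [0.125, 0.25, 0.50, 0.75]

def pulse_period(duty: float, samples_per_period: int = 16) -> list[int]:
    """Return one period as 1s (output high) and 0s (output low)."""
    high = round(duty * samples_per_period)
    return [1] * high + [0] * (samples_per_period - high)

if __name__ == "__main__":
    for duty in DUTY_CYCLES:
        print(f"{duty:>5.3f}:", "".join(str(s) for s in pulse_period(duty)))
```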
The NES supports expansion chips contained in certain cartridges to add sound channels and help with data processing. Developers can add these chips to their games, such as the Konami VRC6, Konami VRC7, Sunsoft 5B, Namco 163, and two more by Nintendo itself: the Nintendo FDS wave generator (a modified Ricoh RP2C33 chip with single-cycle wave table-lookup sound support), and the Nintendo Memory Management Controller 5 (MMC5).
Accessories
thumb|right|In addition to featuring a revised color scheme that matched the more subdued tones of the console itself, NES controllers could be unplugged. They nevertheless lacked the microphone featured in Famicom controllers.
Controllers
The game controller used for both the NES and the Famicom featured an oblong brick-like design with a simple four button layout: two round buttons labeled "A" and "B", a "START" button and a "SELECT" button. Additionally, the controllers utilized the cross-shaped joypad, designed by Nintendo employee Gunpei Yokoi for Nintendo Game & Watch systems, to replace the bulkier joysticks on earlier gaming consoles’ controllers.
The original model Famicom featured two game controllers, both of which were hardwired to the back of the console. The second controller lacked the START and SELECT buttons, but featured a small microphone. Relatively few games made use of this feature. The earliest Famicom units had square A and B buttons. These were changed to the circular design because the square buttons could catch in the controller casing when pressed down, and because of hardware glitches that occasionally caused the system to freeze during play.
The NES dropped the hardwired controllers, instead featuring two custom 7-pin ports on the front of the console. Also in contrast to the Famicom, the controllers included with the NES were identical and swappable, and neither controller possessed the microphone that was present on the Famicom model. Both controllers included the START and SELECT buttons, allowing some NES localizations of games, such as The Legend of Zelda, to use the START button on the second controller to save the game without dying first. However, the NES controllers lacked the microphone, which was used on the Famicom version of Zelda to kill certain enemies.
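Functionally, each controller reports its eight buttons to the console one bit at a time after the console latches ("strobes") the current button states, which is how software emulators typically model it. The sketch below is a minimal emulator-style model; the button report order used (A, B, Select, Start, Up, Down, Left, Right) is the commonly documented one and is an assumption here, as are the class and method names, since the article does not describe the electrical protocol.

```python
# Minimal emulator-style model of an NES controller as a latch-and-shift device.
# Button order is the commonly documented A, B, Select, Start, Up, Down, Left,
# Right; names and structure are illustrative assumptions.
BUTTON_ORDER = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

class Controller:
    def __init__(self) -> None:
        self.pressed: set[str] = set()   # buttons currently held down
        self._shift: list[int] = []      # latched states awaiting serial reads

    def strobe(self) -> None:
        """Latch the current button states, as the console does before reading."""
        self._shift = [1 if b in self.pressed else 0 for b in BUTTON_ORDER]

    def read_bit(self) -> int:
        """Return the next latched button state, one bit per read."""
        return self._shift.pop(0) if self._shift else 1

if __name__ == "__main__":
    pad = Controller()
    pad.pressed = {"A", "Right"}
    pad.strobe()
    print([pad.read_bit() for _ in range(8)])   # [1, 0, 0, 0, 0, 0, 0, 1]
```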
thumb|right|The NES Zapper, a light gun accessory
A number of special controllers designed for use with specific games were released for the system, though very few such devices proved particularly popular. Such devices included, but were not limited to, the Zapper (a light gun), the R.O.B., and the Power Pad. The original Famicom featured a deepened DA-15 expansion port on the front of the unit, which was used to connect most auxiliary devices. On the NES, these special controllers were generally connected to one of the two control ports on the front of the console.
Nintendo also made two turbo controllers for the NES, the NES Advantage and the NES Max. Both had a turbo feature, in which a single press of a button registered as multiple rapid presses, allowing players to fire much faster in shooter games. The NES Advantage had two knobs that adjusted the firing rate of the turbo buttons, as well as a "Slow" button that slowed down compatible games by rapidly pausing them; the "Slow" button did not work with games that had a pause menu or pause screen, and it could interfere with jumping and shooting. The NES Max also had the turbo feature, though it was not adjustable, and it lacked the "Slow" button; its wing-like shape made it easier to hold than the Advantage, and it also improved on the joystick. Turbo features were also included on the NES Satellite, the NES Four Score, and the U-Force. Other accessories include the Power Pad and the Power Glove, which was featured in the movie The Wizard.
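The turbo behaviour described above amounts to automatically re-sending press-and-release events while a button is physically held, so the game sees rapid tapping. A minimal sketch of that idea follows; the frame counts and toggle rates are arbitrary illustrations, not measurements of the NES Advantage or NES Max.

```python
# Minimal autofire sketch: while the physical button is held, the state the
# game sees toggles at a configurable rate, imitating rapid tapping.
# Rates below are arbitrary examples, not the real controllers' settings.
def turbo_stream(frames_held: int, toggle_every: int) -> list[int]:
    """Button state reported to the game on each frame the button is held."""
    return [1 if (frame // toggle_every) % 2 == 0 else 0 for frame in range(frames_held)]

if __name__ == "__main__":
    print(turbo_stream(12, toggle_every=1))  # fastest: alternates every frame
    print(turbo_stream(12, toggle_every=3))  # slower: three frames on, three off
```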
Near the end of the NES's lifespan, upon the release of the AV Famicom and the top-loading NES 2, the design of the game controllers was modified slightly. Though the original button layout was retained, the redesigned device abandoned the brick shell in favor of a dog bone shape. In addition, the AV Famicom joined its international counterpart and dropped the hardwired controllers in favor of detachable controller ports. The controllers included with the Famicom AV had cables which were 90 cm (3 feet) long, compared to the standard 180 cm (6 feet) of NES controllers.
The original NES controller has become one of the most recognizable symbols of the console. Nintendo has mimicked the look of the controller in several other products, from promotional merchandise to limited edition versions of the Game Boy Advance.
Japanese accessories
thumb|right|The Japanese Famicom has BASIC support with the Family BASIC keyboard.
A number of peripheral devices and software packages were released for the Famicom. Few of these devices were ever released outside Japan.
Family BASIC is an implementation of BASIC for the Famicom, packaged with a keyboard. Similar in concept to the Atari 2600 BASIC cartridge, it allows the user to program their own games, which can be saved on an included cassette recorder. Nintendo of America rejected releasing Famicom BASIC in the US because they did not think it fit their primary marketing demographic of children.
The Famicom Modem connected a Famicom to a now defunct proprietary network in Japan which provided content such as financial services. A dialup modem was never released for NES.
Family Computer Disk System
thumb|The Famicom Disk System was a peripheral available only for the Japanese Famicom that used games stored on "Disk Cards" with a 3" Quick Disk mechanism.
In 1986, Nintendo released the Famicom Disk System (FDS) in Japan, a type of floppy drive that uses a proprietary double-sided 5 cm (2") disk and plugs into the cartridge port. It contains RAM for the game to load into and an extra single-cycle wavetable-lookup sound chip. The disks were originally obtained from kiosks in malls and other public places where buyers could select a title and have it written to the disk. This process cost less than buying cartridges, and users could take the disk back to a vending booth and have it rewritten with a new game. The disks were used both for storing the game and for saving progress; total capacity was 128 kB (64 kB per side).
A variety of games for the FDS were released by Nintendo (including some, like Super Mario Bros., which had already been released on cartridge) and by third-party companies such as Konami and Taito. A few unlicensed titles were made as well. Its limitations quickly became apparent as larger ROM chips were introduced, allowing cartridges with more than 128 kB of space. More advanced memory management chips (MMC) soon appeared and the FDS quickly became obsolete. Nintendo also charged developers considerable amounts of money to produce FDS games, and many refused to develop for it, instead continuing to make cartridge titles. Many FDS disks have no dust covers (except in some unlicensed and bootleg variants) and are easily prone to getting dirt on the media. In addition, the drive uses a belt that breaks frequently and requires invasive replacement. After only two years, the FDS was discontinued, although vending booths remained in place until 1993 and Nintendo continued to service drives, and to rewrite and offer replacement disks, until 2003.
Nintendo of America initially planned to bring the FDS to the United States, but rejected the idea after considering the numerous problems encountered with them in Japan. Many FDS games such as Castlevania, Zelda, and Bubble Bobble were sold in the US as cartridge titles, with simplified sound and the disk save function replaced by passwords or battery save systems.
Hardware clones
thumb|right|Pirated clones of NES hardware remained in production for many years after the original had been discontinued. Some clones play cartridges from multiple systems, such as this FC Twin that plays NES and SNES games.
A thriving market of unlicensed NES hardware clones emerged at the height of the console's popularity. Initially, such clones were popular in markets where Nintendo never issued a legitimate version of the console. In particular, the Dendy, an unlicensed hardware clone produced in Taiwan and sold in the former Soviet Union, emerged as the most popular video game console of its time in that region and enjoyed a degree of fame roughly equivalent to that experienced by the NES/Famicom in North America and Japan. A Famicom clone was marketed in Argentina under the name "Family Game", resembling the original hardware design. The Micro Genius (Simplified Chinese: 小天才) was marketed in Southeast Asia as an alternative to the Famicom; Samurai was the popular PAL alternative to the NES; and in Central Europe, especially Poland, the Pegasus was available (Pegasus IQ-502: Polish review of the most popular NES/Famicom clone, the Pegasus IQ-502). Samurai was also available in India in the early 1990s, marking the first instance of console gaming in India.
The unlicensed clone market has flourished following Nintendo's discontinuation of the NES. Some of the more exotic of these resulting systems have gone beyond the functionality of the original hardware and have included variations such as a portable system with a color LCD (e.g. PocketFami). Others have been produced with certain specialized markets in mind, such as an NES clone that functions as a rather primitive personal computer, which includes a keyboard and basic word processing software. These unauthorized clones have been helped by the invention of the so-called NES-on-a-chip.
As was the case with unlicensed software titles, Nintendo has typically gone to the courts to prohibit the manufacture and sale of unlicensed cloned hardware. Many of the clone vendors have included built-in copies of licensed Nintendo software, which constitutes copyright infringement in most countries.
Although most hardware clones were not produced under license by Nintendo, certain companies were granted licenses to produce NES-compatible devices. The Sharp Corporation produced at least two such clones: the Twin Famicom and the SHARP 19SC111 television. The Twin Famicom was compatible with both Famicom cartridges and Famicom Disk System disks. It was available in two colors (red and black) and used hardwired controllers (as did the original Famicom), but it featured a different case design. The SHARP 19SC111 television was a television which included a built-in Famicom. A similar licensing deal was reached with Hyundai Electronics, who licensed the system under the name Comboy in the South Korean market. This deal with Hyundai was made necessary because of the South Korean government's wide ban on all Japanese "cultural products", which remained in effect until 1998 and ensured that the only way Japanese products could legally enter the South Korean market was through licensing to a third-party (non-Japanese) distributor (see also Japan–Korea disputes).
thumb|Dimensions of the Nintendo Entertainment System
NES Test Station
thumb|The NES Test Station (lower left), SNES Counter Tester (lower right), SNES test cartridge (upper right), and the original TV that came with the unit (upper left).
The NES Test Station was a diagnostics machine for the Nintendo Entertainment System introduced in 1988.
It was an NES-based unit designed for testing NES hardware, components and games. It was provided only for use in World of Nintendo boutiques as part of the Nintendo World Class Service program. Visitors could bring items to test with the station and be assisted by a store technician or employee.
The NES Test Station's front features a Game Pak slot and connectors for testing various components (AC adapter, RF switch, Audio/Video cable, NES Control Deck, accessories and games), with a centrally-located selector knob to choose which component to test. The unit itself weighs approximately 11.7 pounds without a TV. It connects to a television via a combined A/V and RF Switch cable. By actuating the green button, a user can toggle between an A/V Cable or RF Switch connection. The television it is connected to (typically 11" to 14") is meant to be placed atop it.
At the front of the Test Station are three colored switches, from left to right: a green switch for alternating between A/V and RF connections when testing an NES Control Deck, a blue Reset switch, and an illuminated red Power switch. The system can test:
thumb|NES test station AC adapter Pass or Fail test demonstration.
Game Paks (When set to this, the test station would run like a normal NES.)
Control Deck and Accessories (NES controllers, the NES Zapper, R.O.B. and Power Pad)
AV Cables
AC Adapters
RF Switches
Upon connecting an RF switch, AV cable, or AC adapter to the test station, the system displays a 'Pass' or 'Fail' result.
There was a manual included with the test station to help the user understand how to use the equipment, or how to make repairs. The manual came in a black binder with a Nintendo World Class Service logo on the front. Nintendo ordered the older manuals destroyed when an updated manual was issued, due to the manuals' confidential content.
In 1991, Nintendo provided an add-on called the "Super NES Counter Tester" that tests Super Nintendo components and games. The SNES Counter Tester is a standard SNES on a metal fixture with the connection from the back of the SNES re-routed to the front of the unit. These connections may be made directly to the test station or to the TV, depending on what is to be tested.
See also
History of Nintendo
Nintendo Hard
Nintendo World Championships
NES Classic Edition
Notes
References
External links
Video of Nintendo Famicom hardware and features from FamicomDojo.TV
at Nintendo.com (archived versions at the Internet Archive Wayback Machine)
NES games list at Nintendo.com (archived from the original at the Internet Archive Wayback Machine)
NES Classic Edition official website
Category:Nintendo consoles
Category:Home video game consoles
Category:Third-generation video game consoles
Category:1983 in video gaming
Category:1980s in video gaming
Category:1990s in video gaming
Category:1980s toys
Category:1990s toys
Category:Nintendo toys
Category:Computer-related introductions in 1983
Category:Products introduced in 1983
Category:Products introduced in 1985
Category:Products introduced in 1986
Category:1995 disestablishments
Category:2003 disestablishments in Japan
Category:Discontinued products | 18,944,028 | 2017-01 |
Immaculate Conception | {{Infobox saint
|name=Immaculate Conception of Mary
|feast_day=December 8
|venerated_in=Roman Catholic Church, some Oriental Orthodox Churches, Anglican Communion
|image=Murillo immaculate conception.jpg
|imagesize=250px
|caption=La Purísima Inmaculada Concepción by Bartolomé Esteban Murillo, 1678, now in the Museo del Prado, Spain.
|approval=Pope Pius IX
|attributes=crescent moon, halo of twelve stars, blue robe, cherubs, serpent underfoot, Assumption into heaven
|patronage=Korea, Nicaragua, Paraguay, Philippines, Portugal, Spain, United States, Uruguay
}}
The Immaculate Conception, according to the teaching of the Catholic Church, is the conception of the Blessed Virgin Mary free from original sin by virtue of the foreseen merits of her son Jesus Christ. The Catholic Church teaches that Mary was conceived by normal biological means in the womb of her mother, Saint Anne, but God acted upon her soul, keeping it "immaculate".
The Immaculate Conception is commonly confused with the Virgin Birth of Jesus. Jesus's birth is covered by the Doctrine of Incarnation, while the Immaculate Conception deals with the conception of Mary, not that of her son.
Although the belief that Mary was sinless, or conceived with an immaculate soul, has been widely held since Late Antiquity, the doctrine was not dogmatically defined until 1854, by Pope Pius IX in his papal bull Ineffabilis Deus.Catechism of the Catholic Church, 490-493 The Catholic Church celebrates the Feast of the Immaculate Conception on December 8; in many Catholic countries, it is a holy day of obligation or patronal feast, and in some a national public holiday.
Distinctions
Original sin and actual (personal) sin
The defined dogma of the Immaculate Conception regards original sin only, saying that Mary was preserved from any stain (in Latin, macula or labes, the second of these two synonymous words being the one used in the formal definition)."the doctrine which holds that the most Blessed Virgin Mary, in the first instance of her conception, by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of the human race, was preserved free from all stain of original sin" (Encyclical Ineffabilis Deus of Pope Pius IX) The proclaimed Roman Catholic dogma states "that the most Blessed Virgin Mary, in the first instance of her conception, by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of the human race, was preserved free from all stain of original sin." Therefore, being always free from original sin, the doctrine teaches that from her conception Mary received the sanctifying grace that would normally come with baptism after birth.
The definition makes no declaration about the Church's belief that the Blessed Virgin was sinless in the sense of freedom from actual or personal sin. However, the Church holds that Mary was also sinless personally, "free from all sin, original or personal".Encyclical Mystici Corporis, 110 The Council of Trent decreed: "If anyone shall say that a man once justified can sin no more, nor lose grace, and that therefore he who falls and sins was never truly justified; or, on the contrary, that throughout his whole life he can avoid all sins even venial sins, except by a special privilege of God, as the Church holds in regard to the Blessed Virgin: let him be anathema."
Virginal conception
The doctrine of the immaculate conception (Mary being conceived free from original sin) is not to be confused with the virginal conception of her son Jesus. This misunderstanding of the term immaculate conception is frequently met in the mass media. Catholics believe that Mary was not the product of a virginal conception herself but was the daughter of a human father and mother (as the Catholic Encyclopedia of 1913 somewhat coyly puts it, "Her body was formed in the womb of the mother, and the father had the usual share in its formation. The question does not concern the immaculateness of the generative activity of her parents": "Immaculate Conception", Catholic Encyclopedia), traditionally known by the names of Saint Joachim and Saint Anne. In 1677, the Holy See condemned the belief that Mary was virginally conceived, which had been a belief surfacing occasionally since the 4th century. The Church celebrates the Feast of the Immaculate Conception (when Mary was conceived free from original sin) on 8 December, exactly nine months before celebrating the Nativity of Mary. The feast of the Annunciation (which commemorates the virginal conception and the Incarnation of Jesus) is celebrated on 25 March, nine months before Christmas Day (The Catholicism Answer Book by John Trigilio, Kenneth Brighenti, 2007, ISBN 1-4022-0806-5, pp. 59-62; What Every Catholic Should Know about Mary by Terrence J. McNally, ISBN 1-4415-1051-6, pp. 104-108).
Redemption
Another misunderstanding is that, by her immaculate conception, Mary did not need a saviour. When defining the dogma in Ineffabilis Deus, Pope Pius IX explicitly affirmed that Mary was redeemed in a manner more sublime. He stated that Mary, rather than being cleansed after sin, was completely prevented from contracting Original Sin in view of the foreseen merits of Jesus Christ, the Savior of the human race. In , Mary proclaims: "My spirit has rejoiced in God my Saviour." This is referred to as Mary's pre-redemption by Christ. Since the Second Council of Orange against semi-pelagianism, the Catholic Church has taught that even had man never sinned in the Garden of Eden and was sinless, he would still require God's grace to remain sinless.Council of Orange II, Canon 19 "That no one is saved except by God's mercy. Even if human nature remained in that integrity in which it was formed, it would in no way save itself without the help of its Creator; therefore, since without the grace of God it cannot guard the health which it received, how without the grace of God will it be able to recover what it has lost?"Theology for Beginners by Francis Joseph Sheed 1958 ISBN 0-7220-7425-5 pages 134-138
History
A feast of the Conception of the Most Holy and All Pure Mother of God was celebrated in Syria on 8 December perhaps as early as the 5th century. Note that the title of achrantos (spotless, immaculate, all-pure) refers to the holiness of Mary, not specifically to the holiness of her conception: "The celebration of the Mother of God as immaculate (achrantos) is a clear and universal recognition of her exceptional and iconic sanctity. Orthodoxy did not follow the path of Roman Catholicism in moving towards a recognition of her Immaculate Conception" ([https://books.google.ie/books?id=KLIFfmipXcoC&pg=PA218&dq=mcguckin+achrantos&hl=en&ei=aMIqTo-jOI_oOYWLwc4K&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCoQ6AEwAA#v=onepage&q&f=false John Anthony McGuckin, The Orthodox Church: An Introduction to Its History, Doctrine, and Spiritual Culture (Blackwell 2011), ISBN 978-1-4443-3731-0, p. 218).
thumb|left|An 11th-century Eastern Orthodox icon of the Theotokos Panachranta, i.e. the "all immaculate" Mary (Raymond Burke, 2008, Mariology: A Guide for Priests, Deacons, Seminarians, and Consecrated Persons, Queenship Publishing, ISBN 1-57918-355-7).
Mary's complete sinlessness and concomitant exemption from any taint from the first moment of her existence was a doctrine familiar to Greek theologians of Byzantium. Beginning with St. Gregory Nazianzen, his explanation of the "purification" of Jesus and Mary at the circumcision (Luke 2:22) prompted him to consider the primary meaning of "purification" in Christology (and by extension in Mariology) to refer to a perfectly sinless nature that manifested itself in glory in a moment of grace (e.g., Jesus at his Baptism). St. Gregory Nazianzen designated Mary as "prokathartheisa (prepurified)." Gregory likely attempted to solve the riddle of the Purification of Jesus and Mary in the Temple through considering the human natures of Jesus and Mary as equally holy and therefore both purified in this manner of grace and glory.Patrologia Graeca 36: 326B 41-42; idem, 36: 633C 7-8 Gregory's doctrines surrounding Mary's purification were likely related to the burgeoning commemoration of the Mother of God in and around Constantinople very close to the date of Christmas.Brian Daley, Gregory of Nazianzus (The Early Christian Church Fathers), New York 2006, 115-118. Nazianzen's title of Mary at the Annunciation as "prepurified" was subsequently adopted by all theologians interested in his Mariology to justify the Byzantine equivalent of the Immaculate Conception. This is especially apparent in the Fathers St. Sophronios of Jerusalem and St. John Damascene, who will be treated below in this article at the section on Church Fathers. About the time of Damascene, the public celebration of the "Conception of St. Ann [i.e., of the Theotokos in her womb]" was becoming popular. After this period, the "purification" of the perfect natures of Jesus and Mary would not only mean moments of grace and glory at the Incarnation and Baptism and other public Byzantine liturgical feasts, but purification was eventually associated with the feast of Mary's very conception (along with her Presentation in the Temple as a toddler) by Orthodox authors of the 2nd millennium (e.g., St. Nicholas CabasilasNicolas Cabasilas (†1371?). Homélies sur la Nativité, L’Annonciation et la Dormition de la Sainte Vierge, ed. M. Jugie (Patrologia Orientalis 19), Turnhout 1990, pp. 456-512 (ch. 10, lines 1-8): "If there are some of the holy doctors who say that the Virgin is ‘prepurified (προκεκαθάρθαι)’ by the Spirit, then it is yet necessary to think that ‘purification (κάθαρσιν)’ (i.e. an addition of graces) is intended by these authors, and these [doctors] say that this is the way the angels are ‘purified,’ with respect to whom there is nothing knavish." and Joseph Bryennius).Joseph Bryennius, Ιὠσῆφ Μαναχοῦ τοῦ Βρυεννίου τὰ εὐρεθέντα, vol. 3, ed. E. Bulgaris (Thessaloniki: 1990), 31: "He says: ‘Yet, on one hand, how did another Mother of God not come about?’ But, on the other hand: ‘Had she some sort of virtue/excellence, because of which she was honored above all women?’ First, another woman was not chosen over her, because while God foreknew all women, he sanctified (ἡγίασεν) the future woman from her mother’s womb, purer (καθαρωτέραν) than other women, who were going to come to exist; but he eschewed all unworthy persons with respect to her, as is reasonable. But she procured for herself the excellence superior to all men and [procured for her] to be prepared as a containing receptacle of the divinity, which [same receptacle] was prepurified (τὸ προκαθαρθῆναι) by the Holy Spirit; O what a marvel, indeed!"
Church Fathers
It is admitted that the doctrine as defined by Pius IX was not explicitly mooted before the 12th century. It is also agreed that "no direct or categorical and stringent proof of the dogma can be brought forward from Scripture". But it is claimed that the doctrine is implicitly contained in the teaching of the Fathers. Their expressions on the subject of the sinlessness of Mary are, it is pointed out, so ample and so absolute that they must be taken to include original sin as well as actual. Thus in the first five centuries such epithets as "in every respect holy", "in all things unstained", "super-innocent", and "singularly holy" are applied to her; she is compared to Eve before the fall, as ancestress of a redeemed people; she is "the earth before it was accursed". The well-known words of St. Augustine (d. 430) may be cited: "As regards the mother of God," he says, "I will not allow any question whatever of sin." It is true that he is here speaking directly of actual or personal sin. But his argument is that all men are sinners; that they are so through original depravity; that this original depravity may be overcome by the grace of God, and he adds that he does not know but that Mary may have had sufficient grace to overcome sin "of every sort" (omni ex parte).
Although the doctrine of Mary's Immaculate Conception appears only later among Latin (and particularly Frankish) theologians,Brian Reynolds, Gateway to Heaven: Marian Doctrine and Devotion Image and Typology in the Patristic and Medieval Periods, vol. 1 (NY: New City Press, 2012), 348-353. it became ever more manifest among Byzantine theologians reliant on Gregory Nazianzen's Mariology in the Medieval or Byzantine East. Although hymnographers and scholars, like the Emperor Justinian I, were accustomed to call Mary "prepurified" in their poetic and credal statements, the first point of departure for more fully commenting on Nazianzen's meaning occurs in Sophronius of Jerusalem.Sophronios of Jerusalem, In Sanctissimae Deiparae Annuntiationem (Patrologia Graeca 87.3:3248A 24): "Οὐδεὶς κατά σε μεμακάρισται, οὐδεὶς κατά σε καθαγίασται· οὐδεὶς κατά σε μεμεγάλυνται, οὐδεὶς κατά σε προκεκάθαρται· οὐδεὶς κατά σε περιηύγασται, οὐδεὶς κατά σε ἐκπεφώτισται." N.B., oudeis kata se prokekathartai was rendered in Latin as nemo, sicut tu, purificante gratia praeoccupatus est." In other places Sophronius explains that the Theotokos was already immaculate, when she was "purified" at the Annunciation and goes so far as to note that John the Baptist is literally "holier than all 'Men' born of woman" since Mary's surpassing holiness signifies that she was holier than even John after his sanctification in utero.Sophronios of Jerusalem, In Sanctissimae Deiparae Annuntiationem (Patrologia Graeca, 87.3: 3273D 43): "Πνεῦμα ἅγιον ἐπὶ σὲ, τὴν ἀμόλυντον, κάτεισι, καθαρωτέραν σε ποιησόμενον, καὶ καρπογόνον σοι παρεξόμενον δύναμιν."; idem, Encomium in S. Iohannem Baptistam, PG 87:3332C Sophronius' teaching is augmented and incorporated by St. John Damascene (d. 749/750). John, besides many passages wherein he extolls the Theotokos for her purification at the Annunciation, grants her the unique honor of "purifying the waters of baptism by touching them." This honor was most famously and firstly attributed to Christ, especially in the legacy of Nazianzen. As such, Nazianzen's assertion of parallel holiness between the prepurified Mary and purified Jesus of the New Testament is made even more explicit in Damascene in his discourse on Mary's holiness to also imitate Christ's baptism at the Jordan."The air, the fiery ether, the sky would have been made holy by the ascent of her spirit, as earth was sanctified by the deposition of her body. Even water had its share in the blessing: for she was washed in pure water, which did not so much cleanse her as it was itself consecrated." See John Damascene, On the Holy and Glorious Dormition and Transformation of Our Lady Mary, Mother of God and Ever-Virgin by Our Holy Father John, Monk of Damascus and Son of Mansour. Homily 2, in On the Dormition of Mary: Early Patristic Homilies, tr. B. Daley (Crestwood, NY :1998), 215. The Damascene's hymnongraphy and De fide Orthodoxa explicitly use Mary's "pre purification" as a key to understanding her absolute holiness and unsullied human nature. In fact, Damascene (along with Nazianzen) serves as the source for nearly all subsequent promotion of Mary's complete holiness from her Conception by the "all pure seed" of Joachim and the womb "wider than heaven" of St. Ann.Christiaan Kappes, The Immaculate Conception: Why Thomas Aquinas Denied, While John Duns Scotus, Gregory Palamas, and Mark Eugenicus Professed the Absolute Immaculate Existence of Mary (Bedford, MA: Academy of the Immaculate, 2014), 39-61
Feast day
By 750, the feast of her conception was widely celebrated in the Byzantine East, under the name of the Conception (active) of Saint Anne. In the West it was known as the feast of the Conception (passive) of Mary, and was associated particularly with the Normans, whether these introduced it directly from the East (Francis X. Weiser, The Holyday Book, Harcourt, Brace and Co. 1956) or took it from English usage (Michael Kunzler, The Church's Liturgy, Continuum International 2002, ISBN 978-0-8264-1353-6, pp. 434-435). The spread of the feast, by now with the adjective "Immaculate" attached to its title, met opposition on the part of some, on the grounds that sanctification was possible only after conception (Frederick Holweck, "Immaculate Conception" in The Catholic Encyclopedia, 1910). Critics included Saints Bernard of Clairvaux, Albertus Magnus and Thomas Aquinas. Other theologians defended the expression "Immaculate Conception", pointing out that sanctification could be conferred at the first moment of conception in view of the foreseen merits of Christ, a view held especially by Franciscans (Matthew Bunson, OSV's Encyclopedia of Catholic History, Our Sunday Visitor 2004, ISBN 978-1-59276-026-8, p. 455).
William of Ware and Blessed John Duns Scotus pointed out that Mary’s Immaculate Conception enhances Jesus’ redemptive work.Foley OFM, Leonard. "Solemnity of the Immaculate Conception", Saint of the Day, (revised by Pat McCloskey OFM), AmericanCatholic.org One of the chief proponents of the doctrine was the Hungarian Franciscan Pelbartus Ladislaus of Temesvár.
Z. J. Kosztolnyik, Some Hungarian Theologians in the Late Renaissance, Church History. Volume: 57. Issue: 1, 1988. Z. J. Kosztolnyik, Pelbartus of Temesvar: a Francican Preacher and Writer of the Late Middle Ages in Hungary, Vivarium, 5/1967. Kenan B. Osborne, O.F.M., The History of Franciscan Theology, The Franciscan Institute St. Bonaventure, New York, 1994. Franklin H. Littell (ed.), Reformation Studies, John Knox Press, Richmond, Virginia, 1962.
On 28 February 1476, Pope Sixtus IV authorized those dioceses that wished to introduce the feast to do so, and introduced it to his own diocese of Rome in 1477, with a specially composed Mass and Office of the feast (John D. Bryant, The Immaculate Conception of the Blessed Virgin Mary, Mother of God (Boston 1855), p. 166). With his bull Cum praeexcelsa of 28 February 1477, in which he referred to the feast as that of the Conception of Mary without using the word "Immaculate", he granted indulgences to those who would participate in the specially composed Mass or Office on the feast itself or during its octave, and he used the word "immaculate" of Mary, but applied instead the adjective "miraculous" to her conception. On 4 September 1483, referring to the feast as that of "the Conception of Immaculate Mary ever Virgin", he condemned both those who called it mortally sinful and heretical to hold that the "glorious and immaculate mother of God was conceived without the stain of original sin" and those who called it mortally sinful and heretical to hold that "the glorious Virgin Mary was conceived with original sin", since, he said, "up to this time there has been no decision made by the Roman Church and the Apostolic See." This decree was reaffirmed by the Council of Trent.
Pope Pius V, while including the feast in the Tridentine Calendar, removed the adjective "Immaculate" and suppressed the existing special Mass for the feast, directing that the Mass for the Nativity of Mary (with the word "Nativity" replaced by "Conception") be used instead.Decree of Our Most Holy Father Pope Paul V in Favor of the Immaculate Conception of the Blessed Virgin Mary, Mother of God. Lima, Peru : 1618. World Digital Library. Part of that earlier Mass was revived in the Mass that Pope Pius IX ordered to be used on the feast and that is still in use.
On 6 December 1708, Pope Clement XI made the feast of the Conception of Mary, at that time still using the Nativity of Mary formula for the Mass, a Holy Day of Obligation. Until Pope Pius X reduced the number of Holy Days of Obligation to eight in 1911, there were 36 such days in the course of the year, apart from Sundays. Writers such as Sarah Jane Boss interpret the existence of the feast as a strong indication of the Church's traditional belief in the Immaculate Conception (Mary by Sarah Jane Boss, Neil Warmsley, 2004, ISBN 0-8264-5788-6, p. 139).
thumb|The procession of the Quadrittu of the Immaculate Conception on 7 December in Saponara, Sicily
Definition of the dogma
thumb|220px|Altar of the Immaculata by Joseph Lusenberg, 1876. Saint Antony's Church, Urtijëi, Italy.
During the reign of Pope Gregory XVI the bishops in various countries began to press for a definition as dogma of the teaching of Mary's immaculate conception."These petitions were renewed in these our own times; they were especially brought to the attention of Gregory XVI" (Ineffabilis Deus).
In 1839 Mariano Spada (1796 - 1872), professor of theology at the Roman College of Saint Thomas, published Esame Critico sulla dottrina dell’ Angelico Dottore S. Tommaso di Aquino circa il Peccato originale, relativamente alla Beatissima Vergine Maria [A critical examination of the doctrine of St. Thomas Aquinas, the Angelic Doctor, regarding original sin with respect to the Most Blessed Virgin Mary], in which Aquinas is interpreted not as treating the question of the Immaculate Conception later formulated in the papal bull Ineffabilis Deus but rather the sanctification of the fetus within Mary's womb. Spada furnished an interpretation whereby Pius IX was relieved of the problem of seeming to foster a doctrine not in agreement with the Aquinas' teaching.3-11-2013; Il mistero di Maria: teologia, storia, devozione by Giuseppe Damigella, p. 175: "Pio IX si senti' sollevato dal peso teologico di dover sostenere una dottrina non fondata nel pensiero di san Tommaso, il cui insegnamento era allora, come oggi, ritenuto "sicuro"." Cf. A. Andaloro, "P. Mariano Spada, o.p. interprete di San Tommaso sull'Immucolata Concezione," Catania, 1958 Pope Pius IX would later appoint Spada Master of the Sacred Palace in 1867.
Pius IX, at the beginning of his pontificate, and again after 1851, appointed commissions to investigate the whole subject, and he was advised that the doctrine was one which could be defined and that the time for a definition was opportune.
It was not until 1854 that Pope Pius IX, with the support of the overwhelming majority of Roman Catholic bishops, whom he had consulted between 1851–1853, promulgated the papal bull Ineffabilis Deus (Latin for "Ineffable God"), which defined ex cathedra the dogma of the Immaculate Conception:The Creeds of Christendom by Philip Schaff 2009 ISBN 1-115-46834-0 page 211
The dogma was defined in accordance with the conditions of papal infallibility, which would be defined in 1870 by the First Vatican Council.
The papal definition of the dogma declares with absolute certainty and authority that Mary possessed sanctifying grace from the first instant of her existence and was free from the lack of grace caused by the original sin at the beginning of human history. Mary's salvation was won by her son Jesus Christ through his passion, death, and resurrection and was not due to her own merits.Jenny Schroedel, The Everything Mary Book (Adams 2006 ISBN 1-59337-713-4) pp. 180-181"Mark Miravalle, 1993, Introduction to Mary, Queenship Publishing ISBN 978-1-882972-06-7 page 64-70
Later developments
For the Roman Catholic Church the dogma of the Immaculate Conception gained additional significance from the reputed apparitions of Our Lady of Lourdes in 1858. At Lourdes a 14-year-old girl, Bernadette Soubirous, claimed that a beautiful woman appeared to her and said, "I am the Immaculate Conception". Many believe the woman to have been the Blessed Virgin Mary and pray to her as such.Vatican website
Pope Pius IX defined the dogma of the Immaculate Conception "not so much because of proofs in Scripture or ancient tradition, but due to a profound sensus fidelium and the Magisterium".
Speaking of the witness of the Church Fathers in claiming for Mary titles such as "Free from all contagion of sin", Pope Pius XII wrote:
The Roman Catholic tradition has a well-established philosophy for the study of the Immaculate Conception and the veneration of the Blessed Virgin Mary in the field of Mariology, with Pontifical schools such as the Marianum specifically devoted to this.Centers of Marian Study Publisher’s Notice in the Second Italian Edition (1986), reprinted in English Edition, Gabriel Roschini, O.S.M. (1989). The Virgin Mary in the Writings of Maria Valtorta (English Edition). Kolbe's Publication Inc. ISBN 2-920285-08-4
According to Bernard Ullathorne, a 19th-century English Roman Catholic prelate, "the expressions - The Immaculate Conception - The Immaculate Preservation - The Immunity - and Exception from original sin, are all phrases which bear the same signification, and are used equally to express one and the same mystery."Ullathorne, William Bernard, The immaculate conception of the Mother of God, an exposition, 1855.
Medieval dispute about the doctrine
It seems to have been St Bernard of Clairvaux who, in the 12th century, explicitly raised the question of the Immaculate Conception. A feast of the Conception of the Blessed Virgin had already begun to be celebrated in some churches of the West. St Bernard blames the canons of the metropolitan church of Lyon for instituting such a festival without the permission of the Holy See. In doing so, he takes occasion to repudiate altogether the view that the conception of Mary was sinless. It is doubtful, however, whether he was using the term "conception" in the same sense in which it is used in the definition of Pope Pius IX. Bernard would seem to have been speaking of conception in the active sense of the mother's cooperation, for in his argument he says: "How can there be absence of sin where there is concupiscence (libido)?" and stronger expressions follow, showing that he is speaking of the mother and not of the child.
Saint Thomas Aquinas refused to concede the Immaculate Conception, on the ground that, unless the Blessed Virgin had at one time or other been one of the sinful, she could not justly be said to have been redeemed by Christ.
Saint Bonaventure (d. 1274), second only to Saint Thomas in his influence on the Christian schools of his age, hesitated to accept it for a similar reason. He believed that Mary was completely free from sin, but that she was not given this grace at the instant of her conception.
The celebrated John Duns Scotus (d. 1308), a Friar Minor like Saint Bonaventure, argued, on the contrary, that from a rational point of view it was certainly as little derogatory to the merits of Christ to assert that Mary was by him preserved from all taint of sin, as to say that she first contracted it and then was delivered. Proposing a solution to the theological problem of reconciling the doctrine with that of universal redemption in Christ, he argued that Mary's immaculate conception did not remove her from redemption by Christ; rather it was the result of a more perfect redemption granted her because of her special role in salvation history.Encyclopedia of theology: a concise Sacramentum mundi by Karl Rahner 2004 ISBN 0-86012-006-6 pages 896-898
The arguments of Scotus, combined with a better acquaintance with the language of the early Fathers, gradually prevailed in the schools of the Western Church. In 1387 the university of Paris strongly condemned the opposite view.
Scotus's arguments remained controversial, however, particularly among the Dominicans, who were willing enough to celebrate Mary's sanctificatio (being made free from sin) but, following the Dominican Thomas Aquinas' arguments, continued to insist that her sanctification could not have occurred until after her conception.
Popular opinion remained firmly behind the celebration of Mary's conception. In 1439, the Council of Basel, which is not reckoned an ecumenical council, stated that belief in the immaculate conception of Mary is in accord with the Catholic faith. By the end of the 15th century the belief was widely professed and taught in many theological faculties, but such was the influence of the Dominicans, and the weight of the arguments of Thomas Aquinas (who had been canonised in 1323 and declared "Doctor Angelicus" of the Church in 1567) that the Council of Trent (1545–63)—which might have been expected to affirm the doctrine—instead declined to take a position.
The papal bull defining the dogma, Ineffabilis Deus, mentioned in particular the patristic interpretation of Genesis 3:15 as referring to a woman, Mary, who would be eternally at enmity with the evil serpent and would completely triumph over him. It said the Fathers saw foreshadowings of Mary's "wondrous abundance of divine gifts and original innocence" "in that ark of Noah, which was built by divine command and escaped entirely safe and sound from the common shipwreck of the whole world; in the ladder which Jacob saw reaching from the earth to heaven, by whose rungs the angels of God ascended and descended, and on whose top the Lord himself leaned; in that bush which Moses saw in the holy place burning on all sides, which was not consumed or injured in any way but grew green and blossomed beautifully; in that impregnable tower before the enemy, from which hung a thousand bucklers and all the armor of the strong; in that garden enclosed on all sides, which cannot be violated or corrupted by any deceitful plots; in that resplendent city of God, which has its foundations on the holy mountains; in that most august temple of God, which, radiant with divine splendours, is full of the glory of God; and in very many other biblical types of this kind."
The bull recounts that the Fathers interpreted the angel's address to Mary, "highly favoured one" or "full of grace", as indicating that "she was never subject to the curse and was, together with her Son, the only partaker of perpetual benediction"; they "frequently compare her to Eve while yet a virgin, while yet innocent, while yet incorrupt, while not yet deceived by the deadly snares of the most treacherous serpent".
Patronages
A number of countries are considered to be under the patronage of the Immaculate Conception by pontifical decree.
These include Argentina, Brazil, Korea, Nicaragua, Paraguay, Philippines, Spain (old kingdoms and the present state), the United States and Uruguay.
By royal decree under the House of Braganza, it is the principal Patroness of Portugal.
Other churches
For differing reasons, belief in Mary's immaculate conception in the Catholic doctrinal form is not part of the official doctrines of the Eastern Orthodox, Oriental Orthodox, Anglican and Protestant churches.
Eastern and Oriental Orthodox
thumb|The Immaculate Conception is also portrayed by artists in the Orthodox Church, for example Holy Mary in Perlez, Vojvodina, Serbia.
Contemporary Eastern Orthodox Christians often object to the dogmatic declaration of her immaculate conception as an "over-elaboration" of the faith and because they see it as too closely connected with a particular interpretation of the doctrine of ancestral sin.John Meyendorff, The Orthodox Church: Its Past and Its Role in the World Today (St Vladimir's Seminary Press 1996 ISBN 978-0-913836-81-1), p. 181 All the same, the historical and authentic tradition of Mariology in Byzantium took its historical point of departure from Sophronios, Damascene, and their imitators. The most famous Eastern Orthodox theologian to imply Mary's Immaculate Conception was St. Gregory Palamas. Though many passages from his works were long known to extol and attribute to Mary a Christlike holiness in her human nature, traditional objections to Palamas' disposition toward the Immaculate Conception typically rely on a poor understanding of his doctrine of "the purification of Mary" at the Annunciation.See https://www.academia.edu/4375213/The_Immaculate_Conception_Why_Thomas_Aquinas_Denied_While_John_Duns_Scotus_Gregory_Palamas_and_Mark_Eugenicus_Professed_the_Absolute_Immaculate_Existence_of_Mary , pp. 69-81 Not only did he explicitly cite St. Gregory Nazianzen for his understanding of Jesus' purification at His baptism and Mary's at the Annunciation, but Theophanes of Nicaea, Joseph Bryennius, and Gennadios Scholarios all explicitly placed Mary's Conception as the first moment of her all-immaculate participation in the divine energies to such a degree that she was always completely without spot and graced.Christiaan Kappes, The Immaculate Conception: Why Thomas Aquinas Denied, While John Duns Scotus, Gregory Palamas, and Mark Eugenicus Professed the Absolute Immaculate Existence of Mary (Bedford, MA: Academy of the Immaculate, 2014), 69-92; 157-169 In addition to Emperor Manuel II and Gennadius Scholarius, St. Mark of Ephesus also fervently defended Mary's title as "prepurified" against the Dominican, Manuel Calecas, who was perhaps promoting thomistic Mariology that denied Mary's all-holiness from the first moment of her existence.Mark of Ephesus, On the Distinction between Essence and Energy: First Antirrhetic against Manuel Kalekas. Editio princeps, ed. M. Pilavakis (Unpublished doctoral dissertation), London 1987: "But He did so with her, after he prepurified (προκαθαρθείσῃ) her through a most profuse grace by means of the protecting Holy Spirit and divine power [...]"
In the tradition of Ethiopian Orthodoxy, the Kebra Nagast says:
Old Catholic
While Old Catholics do not reject the Immaculate Conception of Mary, and some of their parishes venerate Mary as immaculately conceived and celebrate the feast of her Immaculate Conception, they do not accept its definition as a dogma, since they reject papal infallibility and with it the Pope's authority to define dogma.
Protestantism
Martin Luther, who initiated the Protestant Reformation, said: "Mother Mary, like us, was born in sin of sinful parents, but the Holy Spirit covered her, sanctified and purified her so that this child was born of flesh and blood, but not with sinful flesh and blood. The Holy Spirit permitted the Virgin Mary to remain a true, natural human being of flesh and blood, just as we. However, he warded off sin from her flesh and blood so that she became the mother of a pure child, not poisoned by sin as we are. For in that moment when she conceived, she was a holy mother filled with the Holy Spirit and her fruit is a holy pure fruit, at once God and truly man, in one person."[17] Some Lutherans, such as the members of the Anglo-Lutheran Catholic Church, support the doctrine.
Most Protestants reject the doctrine because they do not consider the development of dogmatic theology to be authoritative apart from biblical exegesis, and because the doctrine of the Immaculate Conception is not taught in the Bible.The Protestant faith by George Wolfgang Forell 1962 ISBN 0-8006-1095-4 page 23 The formal pronouncement of Mary's Immaculate Conception by the Catholic Church in 1854 alienated some Protestant churches partly due to its implication that not all have sinned.Jesus in history, thought, and culture: an encyclopedia, Volume 1 by James Leslie Houlden 2003 ISBN 1-57607-856-6 page
Anglicanism
Belief in Mary's immaculate conception is not a doctrine within Anglicanism, although it is shared by many Anglo-Catholics.Our Lady Saint Mary by J.G.H. Barry, 2008, ISBN 0-554-24332-6, pages 25-27 In the Church of England's Common Worship prayer book, 8 December is designated a Lesser Festival of the Conception of the Blessed Virgin Mary (without the adjective "immaculate").
The report "Mary: Faith and Hope in Christ", by the Anglican-Roman Catholic International Commission, concluded that the teaching about Mary in the two definitions of the Assumption and the Immaculate Conception can be said to be consonant with the teaching of the Scriptures and the ancient common traditions.Ecumenical Affairs - Dialogues - Anglican Roman Catholic Paragraph 78 - Accessed 8 December 2008 But the report expressed concerns that the Roman Catholic dogmatic definitions of these concepts implies them to be "revealed by God", stating: "The question arises for Anglicans, however, as to whether these doctrines concerning Mary are revealed by God in a way which must be held by believers as a matter of faith."Ecumenical Affairs - Dialogues - Anglican Roman Catholic Paragraph 60 - Accessed 8 December 2008
Other than Anglo-Catholics, most Anglicans reject the doctrine that Mary was sinless and conceived without original sin, often on the grounds that it is not found in Holy Scripture and that it conflicts with the redemptive role of Jesus Christ, whose merits are held to apply to all human beings.
Islam
thumb|Manuscript of Chapter 19 (Sūratu Maryam) from a 9th-century Qur'an, Turkey.
Official Islamic teachings hold the Virgin Mary in high regard as a sublime model of both purity and piety. An entire chapter (sura) of the Qur'an is dedicated to her nobility, holiness, and fiat of obedience to God. In Islamic circles and discussions she is often given prominent status as the supreme feminine model of sanctity and maternal virtue.
Some Western writers claim that the immaculate conception of Mary is a teaching of Islam. Thus, commenting in 1734 on the passage in the Qur'an, "I have called her Mary; and I commend her to thy protection, and also her issue, against Satan driven away with stones", George Sale stated: "It is not improbable that the pretended immaculate conception of the virgin Mary is intimated in this passage. For according to a tradition of Mohammed, every person that comes into the world, is touched at his birth by the devil, and therefore cries out, Mary and her son only excepted; between whom, and the evil spirit God placed a veil, so that his touch did not reach them. And for this reason they say, neither of them were guilty of any sin, like the rest of the children of Adam."George Sale, Koran, commonly called the Alcoran of Mohammed, chapter 3, p. 39
Others have rejected the claim that the doctrine of the Immaculate Conception exists in Islam. The Quranic account does not reserve an immaculate conception exclusively for Mary, since in Islam every human child is born pure and immaculate;http://sunnah.com/urn/44530 Sahih Bukhari Kitabul Tafseer her sinless birth is thus independent of the Christian doctrine of original sin, as no such doctrine exists in Islam.English 5 Volume Commentary https://www.alislam.org/quran/tafseer/?page=386®ion=E1Cleo McNelly Kearns, The Virgin Mary, Monotheism and Sacrifice (Cambridge University Press 2008 ISBN 978-0-52187156-3), p. 254 Moreover, Hannah's prayer in the Quran for her child to remain protected from Satan (Shayṭān) was said after the child had already been born, not before, and expresses a natural concern any righteous parent would have. The Muslim tradition or hadith which states that the only children born without the "touch of Satan" were Mary and JesusBukhari, Anbiya, 44; Muslim, Fada'il, trad. 146, 147 should therefore not be taken in isolation from the Quran; it is to be interpreted within the specific context of exonerating Mary and her child from the charges made against them, and is not a general statement. The specific mention of Mary and Jesus in this hadith may also be taken to represent a class of people, in keeping with the Arabic language and the Quranic verse "[O Satan] surely thou shalt have no power over My servants, except such of the erring ones as choose to follow thee" (15:42).
Further claims were made that the Roman Catholic Church derives its doctrine from the Islamic teaching. In volume 5 of his Decline and Fall of the Roman Empire, published in 1788, Edward Gibbon wrote: "The Latin Church has not disdained to borrow from the Koran the immaculate conception of his virgin mother." That he was speaking of her immaculate conception by her mother, not of her own virginal conception of Jesus, is shown by his footnote: "In the xiith century the immaculate conception was condemned by St. Bernard as a presumptuous novelty."Edward Gibbon, The History of the Decline and Fall of the Roman Empire, vol. V, chapter 50 In the aftermath of the definition of the dogma in 1854, this charge was repeated: "Strange as it may appear, that the doctrine which the church of Rome has promulgated, with so much pomp and ceremony, 'for the destruction of all heresies, and the confirmation of the faith of her adherents', should have its origin in the Mohametan Bible; yet the testimony of such authorities as Gibbon, and Sale, and Forster, and Gagnier, and Maracci, leave no doubt as to the marvellous fact."
Without making Islamic belief the origin of the doctrine defined in 1854, a similarity between the two has been noted also by Roman Catholic writers such as Thomas Patrick Hughes,Thomas Patrick Hughes, A Dictionary of Islam. First published London 1885; reprinted by Asian Educational Services 2001. ISBN 8120606728, ISBN 9788120606722. Entry "IMMACULATE CONCEPTION". (Google Books) Quote: "This doctrine was asserted by Muhammad (Mishkāt, book i., ch. iii., pt. 1)." William Bernard Ullathorne,William Bernard Ullathorne. The Immaculate Conception of the Mother of God: An Exposition. First published 1855; ISBN 1-110-89977-7. Chapter XIV, "Mahomet and Martin Luther". (Google Books) Giancarlo Finazzo.Giancarlo Finazzo. "The Virgin Mary in the Koran". L'Osservatore Romano Weekly Edition in English, 13 April 1978, page 4. Quote: "The dogma of the Immaculate Conception ... is univocally recognized by the Islamic religion."
Prayers and hymns
The Roman Missal and the Roman Rite Liturgy of the Hours naturally include references to Mary's immaculate conception in the feast of the Immaculate Conception. An example is the antiphon that begins: "Tota pulchra es, Maria, et macula originalis non est in te" (You are all beautiful, Mary, and the original stain [of sin] is not in you. Your clothing is white as snow, and your face is like the sun. You are all beautiful, Mary, and the original stain [of sin] is not in you. You are the glory of Jerusalem, you are the joy of Israel, you give honour to our people. You are all beautiful, Mary.)The text (in Latin) is given at Tota Pulchra Es - GMEA Honor Chorus. On the basis of the original Gregorian chant music, polyphonic settings have been composed by Anton Bruckner, Pablo Casals, Maurice Duruflé, Grzegorz Gerwazy Gorczycki, Ola Gjeilo, José Maurício Nunes Garcia, and Nikolaus Schapfl.
Other prayers honouring Mary's immaculate conception are in use outside the formal liturgy. The hymn Immaculate Mary, addressed to Mary as the Immaculately Conceived One, is closely associated with Lourdes. The Immaculata prayer, composed by Saint Maximilian Kolbe, is a prayer of entrustment to Mary as the Immaculata. A novena of prayers, with a specific prayer for each of the nine days, has been composed under the title of the Immaculate Conception Novena.
Artistic representations
thumb|right|Swiss emblem 16th century
The 1476 extension of the feast of the Immaculate Conception to the entire Latin Church reduced the likelihood of controversy for the artist or patron in depicting an image, so that emblems depicting The Immaculate Conception began to appear.
Many artists in the 15th century faced the problem of how to depict an abstract idea such as the Immaculate Conception, and the problem was not fully solved for 150 years. The Italian Renaissance artist Piero di Cosimo was among those artists who tried new solutions, but none of these became generally adopted so that the subject matter would be immediately recognisable to the faithful.
The definitive iconography for the Immaculate Conception, drawing on the emblem tradition, seems to have been finally established by the master and then father-in-law of Diego Velázquez, the painter and theorist Francisco Pacheco. Pacheco's iconography influenced other Spanish artists such as Bartolomé Murillo, Diego Velázquez, and Francisco Zurbarán, who each produced a number of artistic masterpieces based on the use of these same symbols.Ésotérisme, gnoses & imaginaire symbolique: mélanges offerts à Antoine Faivre by Richard Caron, Antoine Faivre 2001 ISBN 90-429-0955-2 page 676Divine Mirrors: The Virgin Mary in the Visual Arts by Melissa R. Katz and Robert A. Orsi 2001 ISBN 0-19-514557-7 page 98
The popularity of this particular representation of The Immaculate Conception spread across the rest of Europe, and has since remained the best known artistic depiction of the concept: in a heavenly realm, moments after her creation, the spirit of Mary (in the form of a young woman) looks up in awe at (or bows her head to) God. The moon is under her feet and a halo of twelve stars surrounds her head, possibly a reference to "a woman clothed with the sun" from Revelation 12:1-2. Additional imagery may include clouds, a golden light, and cherubs. In some paintings the cherubim are holding lilies and roses, flowers often associated with Mary.Our Lady in Art by Katherine Lee Rawlings Jenner 2009 ISBN 1-103-32689-9 pages 3-9
See also
thumb|right|250px|Immaculate Conception celebration in Guatemala.
Cathedral of the Immaculate Conception (disambiguation)
Congregation of the Immaculate Conception
Feast of the Immaculate Conception
Immaculate Mary
Immaculata prayer
Miraculous medal
Marian doctrines of the Catholic Church
Mother of God (Roman Catholic)
Original sin
Patronages of the Immaculate Conception
Perpetual virginity of Mary
Roman Catholic Marian art
Virgin birth of Jesus
Bibliography
Le Franc, Martin. The Conception of Mary -- A Rhyming Translation of Book V of Le Champion des Dames by Martin Le Franc (1410-1461). Ed. and trans. Steven Millen Taylor. Lewiston, NY: The Edwin Mellen Press, 2010.
References
External links
The Immaculate Conception in Art (Painting)
Ineffabilis Deus (Apostolic Constitution of Pope Pius IX defining the dogma of the Immaculate Conception)
Godzinki: The Little Hours of the Immaculate Conception
St. Alphonsus Liguori's writing on the Immaculate Conception in his book The Glories of Mary
Catholic Encyclopedia entry on the Immaculate Conception
Catholic Encyclopedia entry on Original Sin
The Immaculate Conception. A study by a Melkite archimandrite
The Immaculate Conception of the Mother of God based on Juniper Carol's Mariology and William Bernard Ullathorne's book
"St. Augustine and Original Sin" — a short article on the different understandings of Original Sin in Eastern and Western Christianity, without distinguishing Protestant theology from Roman Catholic. The latter holds that "original sin does not have the character of a personal fault in any of Adam's descendants" (Catechism of the Catholic Church'', 405).
Mark I. Miravalle (editor), "Mariology: A Guide for Priests, Deacons, Seminarians, and Consecrated Persons"
Original Sin According To St. Paul by John S. Romanides
Catechism of the Catholic Church "Conceived by the Power of the Holy Spirit and Born of the Virgin Mary"
Category:Anglican Mariology
Category:Christian terminology
Category:Christian miracle narrative
Category:Scotism
Category:Articles containing video clips | 15,256 | 2017-01 |
Southeast Asia | Southeast Asia or Southeastern Asia is a subregion of Asia, consisting of the countries that are geographically south of China, east of India, west of New Guinea and north of Australia. The region lies near the intersection of geological plates, with heavy seismic and volcanic activity. Southeast Asia consists of two geographic regions:
Mainland Southeast Asia, also known historically as Indochina, comprising Vietnam, Laos, Cambodia, Thailand, Myanmar (Burma), and West Malaysia.
Maritime Southeast Asia, comprising Indonesia, East Malaysia, Singapore, Philippines, East Timor, Brunei, Cocos (Keeling) Islands, and Christmas Island.
Divisions
Political
Definitions of "Southeast Asia" vary, but most definitions include the area represented by the countries (sovereign states and dependent territories) listed below. All of the states are members of the Association of Southeast Asian Nations (ASEAN), while East Timor is an observer state. The area, together with part of South Asia, was widely known as the East Indies or simply the Indies until the 20th century. Sovereignty issues exist over some territories in the South China Sea. Papua New Guinea has stated that it might join ASEAN, and is currently an observer.Papua New Guinea asks RP support for Asean membership bid. Retrieved July 8, 2009.Somare seeks PGMA's support for PNG's ASEAN membership bid. Retrieved July 8, 2009.
Sovereign states
State | Area (km2) | Population (2016) | Density (/km2) | GDP (nominal), USD (2016) | GDP (nominal) per capita, USD (2016) | HDI (2014) | Capital
Brunei | 5,765 | 453,000 | 78 | 17,105,000,000 | $37,627 | 0.856 | Bandar Seri Begawan
Cambodia | 181,035 | 15,561,000 | 85 | 17,291,000,000 | $1,330 | 0.555 | Phnom Penh
East Timor | 14,874 | 1,172,000 | 75 | 4,382,000,000 | $1,130 | 0.595 | Dili
Indonesia | 1,904,569 | 251,490,000 | 132 | 895,677,000,000 | $3,347 | 0.684 | Jakarta
Laos | 236,800 | 6,557,000 | 30 | 11,206,000,000 | $2,007 | 0.575 | Vientiane
Malaysia | 329,847 | 30,751,602 | 91 | 367,712,000,000 | $23,800 | 0.779 | Kuala Lumpur *
Myanmar (Burma) | 676,000 | 51,419,000 | 98 | 63,881,000,000 | $1,065 | 0.536 | Nay Pyi Daw
Philippines | 342,353 | 102,904,637 | 338 | 369,188,000,000 | $3,077 | 0.668 | Manila
Singapore | 724 | 5,554,000 | 7,671 | 289,086,000,000 | $54,717 | 0.912 | Singapore (city-state)
Thailand | 513,120 | 65,236,000 | 127 | 437,344,000,000 | $7,345 | 0.726 | Bangkok
Vietnam | 331,210 | 94,444,200 | 279 | 187,848,000,000 | $2,043 | 0.666 | Hanoi
* Administrative centre in Putrajaya.
Dependent territories
thumb|250px|right|UNSD statistical division for Asia, based on statistical convenience rather than implying any assumption regarding political or other affiliation of countries or territories.
Territory | Area (km2) | Population | Density (/km2) | Capital
Christmas Island | 135 | 1,402 | 10.4 | Flying Fish Cove
Cocos (Keeling) Islands | 14 | 596 | 42.6 | West Island (Pulau Panjang)
Administrative subdivisions
Territory | Area (km2) | Population | Density (/km2) | Capital
Andaman and Nicobar Islands | 8,250 | 379,944 (population data as per the Indian Census) | 46 | Port Blair
Geography
thumb|Relief map of Southeast Asia.
Southeast Asia is geographically divided into two subregions, namely Mainland Southeast Asia (or Indochina) and Maritime Southeast Asia (or the similarly defined Malay Archipelago) ().
Mainland Southeast Asia includes:
Vietnam
Laos
Cambodia
Thailand
Myanmar (Burma)
Peninsular Malaysia
Maritime Southeast Asia includes:
Indonesia
Philippines
East Malaysia
Brunei
Singapore
East Timor
The Andaman and Nicobar Islands of India are geographically considered part of Southeast Asia. Eastern Bangladesh and the Seven Sister States of India are culturally part of Southeast Asia and are sometimes considered both South Asian and Southeast Asian; the Seven Sister States are also geographically part of the region. The rest of the island of New Guinea, which is not part of Indonesia (namely Papua New Guinea), is sometimes included, as are Palau, Guam, and the Northern Mariana Islands, which were all part of the Spanish East Indies.
The eastern half of Indonesia and East Timor (east of the Wallace Line) are considered to be biogeographically part of Oceania.
History
Prehistory
thumb|A troupe of Bahau Dayak performers during the Hudoq festival (Harvest festival) in Samarinda, East Kalimantan, Indonesia.
thumb|Balinese small familial house shrines to honor the households' ancestor in Bali island, Indonesia.
Homo sapiens reached the region by around 45,000 years ago,Demeter F, et al. (2012) Anatomically modern human in Southeast Asia (Laos) by 46 ka. Proc Natl Acad Sci USA 109(36):14375–14380. having moved eastwards from the Indian subcontinent. Homo floresiensis also lived in the area up until 12,000 years ago, when they became extinct. Austronesian people, who form the majority of the modern population in Indonesia, Malaysia, Brunei, East Timor, and the Philippines, may have migrated to Southeast Asia from Taiwan. They arrived in Indonesia around 2000 BC, and as they spread through the archipelago, they often settled along coastal areas and confined indigenous peoples such as Negritos of the Philippines or Papuans of New Guinea to inland regions.
Studies presented by HUGO (Human Genome Organization), based on genetic studies of the various peoples of Asia, point empirically to another migration, from the south, which first entered Southeast Asia and then travelled slowly northwards, rather than the other way around.
Solheim and others have shown evidence for a Nusantao (Nusantara) maritime trading network ranging from Vietnam to the rest of the archipelago as early as 5000 BC to 1 AD.Solheim, Journal of East Asian Archaeology, 2000, 2:1–2, pp. 273–284(12) The peoples of Southeast Asia, especially those of Austronesian descent, have been seafarers for thousands of years, some reaching the island of Madagascar. Their vessels, such as the vinta, were ocean-worthy. Magellan's voyage records how much more manoeuvrable their vessels were, as compared to the European ships.Laurence Bergreen, Over the Edge of the World: Magellan's Terrifying Circumnavigation of the Globe, HarperCollins Publishers, 2003, hardcover 480 pages, ISBN 0-06-621173-5
Passage through the Indian Ocean aided the colonisation of Madagascar by the Austronesian people, as well as commerce between West Asia and Southeast Asia. Gold from Sumatra is thought to have reached as far west as Rome, while a slave from the Sulu Sea was believed to have been used in Magellan's voyage as a translator.
Originally most people were animists. Animism was later largely replaced by Hinduism, and Theravada Buddhism followed in 525. In the 15th century, Islamic influences began to enter, forcing the last Hindu court in Indonesia to retreat to Bali.
In Mainland Southeast Asia, Burma, Cambodia and Thailand retained the Theravada form of Buddhism, brought to them from Sri Lanka. This type of Buddhism was fused with the Hindu-influenced Khmer culture.
Indianised kingdoms
thumb|right|250px|upright|Angkor Wat in Siem Reap, Cambodia
Very little is known about Southeast Asian religious beliefs and practices before the advent of Indian merchants and religious influences from the 2nd century BCE onwards. Prior to the 13th century CE, Hinduism and Buddhism were the main religions in Southeast Asia.
The Jawa Dwipa Hindu kingdom in Java and Sumatra existed around 200 BCE. The history of the Malay-speaking world began with the advent of Indian influence, which dates back to at least the 3rd century BCE. Indian traders came to the archipelago both for its abundant forest and maritime products and to trade with merchants from China, who also discovered the Malay world at an early date. Both Hinduism and Buddhism were well established in the Malay Peninsula by the beginning of the 1st century CE, and from there spread across the archipelago.
Cambodia was first influenced by Hinduism during the beginning of the Funan kingdom. Hinduism was one of the Khmer Empire's official religions. Cambodia is the home to one of the only two temples dedicated to Brahma in the world. Angkor Wat is also a famous Hindu temple of Cambodia.
The Champa civilisation was located in what is today central Vietnam, and was a highly Indianised Hindu Kingdom. The Vietnamese launched a massive conquest against the Cham people during the 1471 Vietnamese invasion of Champa, ransacking and burning Champa, slaughtering thousands of Cham people, and forcibly assimilating them into Vietnamese culture.
The Majapahit Empire was an Indianised kingdom based in eastern Java from 1293 to around 1500. Its greatest ruler was Hayam Wuruk, whose reign from 1350 to 1389 marked the empire's peak when it dominated other kingdoms in the southern Malay Peninsula, Borneo, Sumatra, and Bali. Various sources such as the Nagarakertagama also mention that its influence spanned over parts of Sulawesi, Maluku, and some areas of western New Guinea and the Philippines, making it the largest empire to ever exist in Southeast Asian history.
The Cholas excelled in maritime activity in both military and the mercantile fields. Their raids of Kedah and the Srivijaya, and their continued commercial contacts with the Chinese Empire, enabled them to influence the local cultures. Many of the surviving examples of the Hindu cultural influence found today throughout Southeast Asia are the result of the Chola expeditions.The great temple complex at Prambanan in Indonesia exhibit a number of similarities with the South Indian architecture. See Nilakanta Sastri, K.A. The CōĻas, 1935 pp 709
Spread of Islam
thumb|left|Kampung Laut Mosque in Tumpat is one of the oldest mosques in Malaysia, dating to the early 18th century.
In the 11th century, a turbulent period occurred in the history of Maritime Southeast Asia. The Indian Chola navy crossed the ocean and attacked the Srivijaya kingdom of Sangrama Vijayatungavarman in Kadaram (Kedah), the capital of the powerful maritime kingdom was sacked and the king was taken captive. Along with Kadaram, Pannai in present-day Sumatra and Malaiyur and the Malayan peninsula were attacked too. Soon after that, the king of Kedah Phra Ong Mahawangsa became the first ruler to abandon the traditional Hindu faith, and converted to Islam with the Sultanate of Kedah established in year 1136. Samudera Pasai converted to Islam in the year 1267, the King of Malacca Parameswara married the princess of Pasai, and the son became the first sultan of Malacca. Soon, Malacca became the center of Islamic study and maritime trade, and other rulers followed suit. Indonesian religious leader and Islamic scholar Hamka (1908–1981) wrote in 1961: "The development of Islam in Indonesia and Malaya is intimately related to a Chinese Muslim, Admiral Zheng He."Chinese Muslims in Malaysia, History and Development by Rosey Wang Ma
thumb|right|Children studying Qur'an in Java, Indonesia, during colonial period.
There are several theories about the Islamisation process in Southeast Asia. The first is trade: the expansion of trade among West Asia, India and Southeast Asia helped the spread of the religion as Muslim traders from Southern Yemen (Hadramout) brought Islam to the region with their large volume of trade. Many settled in Indonesia, Singapore, and Malaysia. This is evident in the Arab-Indonesian, Arab-Singaporean, and Arab-Malay populations who were at one time very prominent in each of their countries. The second theory is the role of missionaries or Sufis: the Sufi missionaries played a significant role in spreading the faith by introducing Islamic ideas to the region. Finally, the ruling classes embraced Islam, which further aided the permeation of the religion throughout the region. The ruler of the region's most important port, the Malacca Sultanate, embraced Islam in the 15th century, heralding a period of accelerated conversion to Islam throughout the region as Islam provided a positive force among the ruling and trading classes.
Trade and colonisation
China
Records from Magellan's voyage show that Brunei possessed more cannon than the European ships, indicating that the Chinese had been trading with them.
Malaysian legend has it that a Chinese Ming emperor sent a princess, Hang Li Po, to Malacca, with a retinue of 500, to marry Sultan Mansur Shah after the emperor was impressed by the wisdom of the sultan. Hang Li Po's well (constructed 1459) is now a tourist attraction there, as is Bukit Cina, where her retinue settled.
The strategic value of the Strait of Malacca, which was controlled by Sultanate of Malacca in the 15th and early 16th century, did not go unnoticed by Portuguese writer Duarte Barbosa, who in 1500 wrote "He who is lord of Malacca has his hand on the throat of Venice".
From 111 BC to 938 AD northern Vietnam was under Chinese rule. Vietnam was successfully governed by a series of Chinese dynasties including the Han, Eastern Han, Eastern Wu, Cao Wei, Jin, Liu Song, Southern Qi, Liang, Sui, Tang, and Southern Han.
Europe
thumb|right|Strait of Malacca (narrows)
thumb|Duit, a coin minted by the VOC, 1646-1667. 2 kas, 2 duit.
Western influence started to enter in the 16th century, with the arrival of the Portuguese in Malacca, Maluku and the Philippines, the latter being settled by the Spanish years later. Over the following centuries the Dutch established the Dutch East Indies, the French established French Indochina, and the British established the Strait Settlements. By the 19th century, all Southeast Asian countries were colonised except for Thailand.
European explorers were reaching Southeast Asia from the west and from the east. Regular trade between the ships sailing east from the Indian Ocean and south from mainland Asia provided goods in return for natural products, such as honey and hornbill beaks from the islands of the archipelago.
Before the eighteenth and nineteenth centuries, the Europeans were mostly interested in expanding trade links. For the majority of the population in each country there was comparatively little interaction with Europeans, and traditional social routines and relationships continued. For most, life based on subsistence-level agriculture, fishing and, in less developed societies, hunting and gathering was still hard.
Europeans brought Christianity, allowing Christian missionaries to become widespread. Thailand also allowed Western scientists to enter the country to help develop its education system, and began sending members of the royal family and Thai scholars to Europe and Russia for higher education.
Japan
During World War II, Imperial Japan invaded most of the former western colonies. The Shōwa occupation regime committed violent actions against civilians such as the Manila massacre and the implementation of a system of forced labour, such as the one involving 4 to 10 million romusha in Indonesia.Library of Congress, 1992, "Indonesia: World War II and the Struggle For Independence, 1942–50; The Japanese Occupation, 1942–45" Access date: 9 February 2007. A later UN report stated that four million people died in Indonesia as a result of famine and forced labour during the Japanese occupation.John W. Dower War Without Mercy: Race and Power in the Pacific War (1986; Pantheon; ISBN 0-394-75172-8) The Allied powers who defeated Japan in the South-East Asian theatre of World War II then contended with nationalists to whom the occupation authorities had granted independence.
Past
Trade among Southeast Asian countries has a long tradition. The consequences of colonial rule, struggle for independence and in some cases war influenced the economic attitudes and policies of each country until today.
Present
Most countries in the region enjoy national autonomy. Democratic forms of government and the recognition of human rights are taking root. ASEAN provides a framework for the integration of commerce, and regional responses to international concerns.
Conflicting claims over the Spratly Islands are made by Brunei, China, Malaysia, Philippines, Taiwan, and Vietnam.
Geography
Indonesia is the largest country in Southeast Asia and is also the largest archipelago in the world by size (according to the CIA World Factbook). Geologically, the Indonesian Archipelago is one of the most volcanically active regions in the world. Geological uplifts in the region have also produced some impressive mountains, culminating in Puncak Jaya in Papua, Indonesia, on the island of New Guinea; it is the only place where ice glaciers can be found in Southeast Asia. The highest mountain in Southeast Asia is Hkakabo Razi, at 5,967 meters, in northern Burma; it shares the same range as its parent peak, Mount Everest.
The South China Sea is the major body of water within Southeast Asia. The Philippines, Vietnam, Malaysia, Brunei, Indonesia, and Singapore, have integral rivers that flow into the South China Sea.
Mayon Volcano, despite being dangerously active, holds the record for the world's most perfect cone, built up by past and continuing eruptions.Davis, Lee (1992). Natural disasters: from the Black Plague to the eruption of Mt. Pinatubo. New York, NY: Facts on File Inc.. pp. 300–301.
Boundaries
Southeast Asia is bounded to the southeast by the Australian continent, a boundary which runs through Indonesia. A cultural touch point, however, lies between Papua New Guinea and the Indonesian provinces of Papua and West Papua, which share the island of New Guinea with Papua New Guinea.
Climate
thumb|Southeast Asia map of Köppen climate classification.
The climate in Southeast Asia is mainly tropical: hot and humid all year round with plentiful rainfall. Northern Vietnam and the Myanmar Himalayas are the only regions in Southeast Asia that feature a subtropical climate, which has a cold winter with snow. The majority of Southeast Asia has a wet and dry season caused by a seasonal shift in winds, or monsoon. The tropical rain belt causes additional rainfall during the monsoon season. The rain forest is the second largest on earth (with the Amazon being the largest). An exception to this type of climate and vegetation is the mountain areas in the northern region, where high altitudes lead to milder temperatures and a drier landscape. Other areas fall outside this climate type because they are desert-like.
Environment
thumb|Komodo dragon in Komodo National Park, Indonesia
The vast majority of Southeast Asia falls within the warm, humid tropics, and its climate generally can be characterised as monsoonal.
The animals of Southeast Asia are diverse; on the islands of Borneo and Sumatra, the orangutan, the Asian elephant, the Malayan tapir, the Sumatran rhinoceros and the Bornean clouded leopard can also be found. Six subspecies of the binturong or bearcat exist in the region, though the one endemic to the island of Palawan is now classed as vulnerable.
Tigers of three different subspecies are found on the island of Sumatra (the Sumatran tiger), in peninsular Malaysia (the Malayan tiger), and in Indochina (the Indochinese tiger); all of which are endangered species.
The Komodo dragon is the largest living species of lizard and inhabits the islands of Komodo, Rinca, Flores, and Gili Motang in Indonesia.
thumb|left|The Philippine eagle
The Philippine eagle is the national bird of the Philippines. It is considered by scientists as the largest eagle in the world, and is endemic to the Philippines' forests.
The wild Asian water buffalo, and on various islands related dwarf species of Bubalus such as anoa were once widespread in Southeast Asia; nowadays the domestic Asian water buffalo is common across the region, but its remaining relatives are rare and endangered.
The mouse deer, a small tusked deer as large as a toy dog or cat, mostly can be found on Sumatra, Borneo (Indonesia) and in Palawan Islands (Philippines). The gaur, a gigantic wild ox larger than even wild water buffalo, is found mainly in Indochina. There is very little scientific information available regarding Southeast Asian amphibians.
Birds such as the peafowl and drongo live in this subregion as far east as Indonesia. The babirusa, a four-tusked pig, can be found in Indonesia as well. The hornbill was prized for its beak and used in trade with China. The horn of the rhinoceros, not part of its skull, was prized in China as well.
thumb|right|Wallace's hypothetical line divides the Indonesian Archipelago into two types of fauna, Australasian and Southeast Asian. The deep water of the Lombok Strait between the islands of Bali and Lombok formed a water barrier even when lower sea levels linked the now-separated islands and landmasses on either side.
The Indonesian Archipelago is split by the Wallace Line. This line runs along what is now known to be a tectonic plate boundary, and separates Asian (Western) species from Australasian (Eastern) species. The islands between Java/Borneo and Papua form a mixed zone, where both types occur, known as Wallacea. As the pace of development accelerates and populations continue to expand in Southeast Asia, concern has increased regarding the impact of human activity on the region's environment. A significant portion of Southeast Asia, however, has not changed greatly and remains an unaltered home to wildlife. The nations of the region, with only few exceptions, have become aware of the need to maintain forest cover not only to prevent soil erosion but to preserve the diversity of flora and fauna. Indonesia, for example, has created an extensive system of national parks and preserves for this purpose. Even so, such species as the Javan rhinoceros face extinction, with only a handful of the animals remaining in western Java.
The shallow waters of the Southeast Asian coral reefs have the highest levels of biodiversity for the world's marine ecosystems, where coral, fish and molluscs abound. According to Conservation International, marine surveys suggest that the marine life diversity in the Raja Ampat (Indonesia) is the highest recorded on Earth. Diversity is considerably greater than any other area sampled in the Coral Triangle composed of Indonesia, Philippines, and Papua New Guinea. The Coral Triangle is the heart of the world's coral reef biodiversity, the Verde Passage is dubbed by Conservation International as the world's "center of the center of marine shorefish biodiversity". The whale shark, the world's largest species of fish and 6 species of sea turtles can also be found in the South China Sea and the Pacific Ocean territories of the Philippines.
The trees and other plants of the region are tropical; in some countries where the mountains are tall enough, temperate-climate vegetation can be found. These rainforest areas are currently being logged-over, especially in Borneo.
While Southeast Asia is rich in flora and fauna, Southeast Asia is facing severe deforestation which causes habitat loss for various endangered species such as orangutan and the Sumatran tiger. Predictions have been made that more than 40% of the animal and plant species in Southeast Asia could be wiped out in the 21st century.Biodiversity wipeout facing Southeast Asia, New Scientist, 23 July 2003 At the same time, haze has been a regular occurrence. The two worst regional hazes were in 1997 and 2006 in which multiple countries were covered with thick haze, mostly caused by "slash and burn" activities in Sumatra and Borneo. In reaction, several countries in Southeast Asia signed the ASEAN Agreement on Transboundary Haze Pollution to combat haze pollution.
The 2013 Southeast Asian Haze saw API levels reach a hazardous level in some countries. Muar experienced the highest API level of 746 on 23 June 2013 at around 7 am.2013 Southeast Asian haze#Air Pollution Index readings
Economy
thumb|The Keppel Container Terminal in the Port of Singapore. The Port of Singapore is the busiest transshipment and container port in the world, and is an important transportation and shipping hub in Southeast Asia.
Even prior to the penetration of European interests, Southeast Asia was a critical part of the world trading system. A wide range of commodities originated in the region, but especially important were spices such as pepper, ginger, cloves, and nutmeg. The spice trade initially was developed by Indian and Arab merchants, but it also brought Europeans to the region. First Spaniards (Manila galleon) and Portuguese, then the Dutch, and finally the British and French became involved in this enterprise in various countries. The penetration of European commercial interests gradually evolved into annexation of territories, as traders lobbied for an extension of control to protect and expand their activities. As a result, the Dutch moved into Indonesia, the British into Malaya and parts of Borneo, the French into Indochina, and the Spanish and the US into the Philippines. An economic effect of this imperialism was the shift in the production of commodities. For example, the rubber plantations of Malaysia, Java, Vietnam and Cambodia, the tin mining of Malaya, the rice fields of the Mekong Delta in Vietnam and Irrawaddy River delta in Burma, were a response to powerful market demands.
The overseas Chinese community has played a large role in the development of the economies in the region. These business communities are connected through the bamboo network, a network of overseas Chinese businesses operating in the markets of Southeast Asia that share common family and cultural ties. The origins of Chinese influence can be traced to the 16th century, when Chinese migrants from southern China settled in Indonesia, Thailand, and other Southeast Asian countries. Chinese populations in the region saw a rapid increase following the Communist Revolution in 1949, which forced many refugees to emigrate outside of China.
The region's economy greatly depends on agriculture; rice and rubber have long been prominent exports. Manufacturing and services are becoming more important. An emerging market, Indonesia is the largest economy in this region. Newly industrialised countries include Indonesia, Malaysia, Thailand, and the Philippines, while Singapore and Brunei are affluent developed economies. The rest of Southeast Asia is still heavily dependent on agriculture, but Vietnam is notably making steady progress in developing its industrial sectors. The region notably manufactures textiles, electronic high-tech goods such as microprocessors and heavy industrial products such as automobiles. Oil reserves in Southeast Asia are plentiful.
Seventeen telecommunications companies contracted to build the Asia-America Gateway submarine cable to connect Southeast Asia to the US. This is to avoid disruption of the kind caused by the cutting of the undersea cable from Taiwan to the US in the 2006 Hengchun earthquake.
Tourism has been a key factor in economic development for many Southeast Asian countries, especially Cambodia. According to UNESCO, "tourism, if correctly conceived, can be a tremendous development tool and an effective means of preserving the cultural diversity of our planet."Background overview of The National Seminar on Sustainable Tourism Resource Management, Phnom Penh, 9–10 June 2003. Since the early 1990s, "even the non-ASEAN nations such as Cambodia, Laos, Vietnam and Burma, where the income derived from tourism is low, are attempting to expand their own tourism industries."Hitchcock, Michael, et al. Tourism in South-East Asia. New York: Routledge, 1993 In 1995, Singapore was the regional leader in tourism receipts relative to GDP at over 8%. By 1998, those receipts had dropped to less than 6% of GDP while Thailand and Lao PDR increased receipts to over 7%. Since 2000, Cambodia has surpassed all other ASEAN countries and generated almost 15% of its GDP from tourism in 2006.WDI Online
Indonesia is the only member of G-20 major economies and is the largest economy in the region.What is the G-20, www.g20.org. Retrieved 6 October 2009. Indonesia's estimated gross domestic product (nominal) for 2008 was US$511.7 billion with estimated nominal per capita GDP was US$2,246, and per capita GDP PPP was US$3,979 (international dollars).
Stock markets in Southeast Asia have performed better than other bourses in the Asia-Pacific region in 2010, with the Philippines' PSE leading the way with 22 percent growth, followed by Thailand's SET with 21 percent and Indonesia's JKSE with 19 percent.Bull Market Lifts PSE Index to Top Rank Among Stock Exchanges in Asia | Manila Bulletin. Mb.com.ph (24 September 2010). Retrieved on 17 October 2011.
Demographics
thumb|Pie chart showing the distribution of population among the nations of Southeast Asia
Southeast Asia has an area of approximately 4,000,000 km2 (1.6 million square miles). As of 2013, around 625 million people lived in the region, more than a fifth of them (143 million) on the Indonesian island of Java, the most densely populated large island in the world. Indonesia is the most populous country with 255 million people as of 2015, and also the 4th most populous country in the world. The distribution of the religions and people is diverse in Southeast Asia and varies by country. Some 30 million overseas Chinese also live in Southeast Asia, most prominently in Christmas Island, Indonesia, Malaysia, the Philippines, Singapore, and Thailand, and also, as the Hoa, in Vietnam. People of Southeast Asian origins are known as Southeast Asians or Aseanites.
Ethnic groups
thumb|right|Ati woman in Aklan; the Negritos were the earliest inhabitants of Southeast Asia.
thumb|A Native Indonesian Balinese girl wearing kebaya during a traditional ceremony.
In modern times, the Javanese are the largest ethnic group in Southeast Asia, with more than 100 million people, mostly concentrated in Java, Indonesia. In Burma, the Burmese account for more than two-thirds of the ethnic stock in this country, while ethnic Thais and Vietnamese account for about four-fifths of the respective populations of those countries. Indonesia is clearly dominated by the Javanese and Sundanese ethnic groups, while Malaysia is split between half Malays and one-quarter Chinese. Within the Philippines, the Visayan (mainly Cebuanos and Hiligaynons), Tagalog, Ilocano and Bicolano groups are significant.
Religion
thumb|Thai Theravada Buddhists in Chiang Mai, Thailand.
right|thumb|Roman Catholic Cathedral-Basilica of the Immaculate Conception, metropolitan see of the Archbishop of Manila, Philippines.
thumb|right|Sultan Omar Ali Saifuddin Mosque in Brunei, an Islamic country with Syariah rule.
Countries in Southeast Asia practice many different religions. Islam is the most practised faith, numbering approximately 240 million adherents, or about 40% of the entire population, concentrated in Indonesia, Brunei, Malaysia, Southern Thailand and in the Southern Philippines. Indonesia is the most populous Muslim-majority country around the world.
Buddhism is predominant in Vietnam, Thailand, Laos, Cambodia, Burma and Singapore. Ancestor worship and Confucianism are also widely practised in Vietnam and Singapore.
Christianity is predominant in the Philippines, eastern Indonesia, East Malaysia and East Timor. The Philippines has the largest Roman Catholic population in Asia. East Timor is also predominantly Roman Catholic due to a history of Portuguese rule.
The religious composition for each country is as follows: Some values are taken from the CIA World Factbook:
No individual Southeast Asian country is religiously homogeneous. In the world's most populous Muslim nation, Indonesia, Hinduism is dominant on islands such as Bali. Christianity also predominates in the rest of the Philippines, as well as in New Guinea and Timor. Pockets of Hindu population can also be found around Southeast Asia, in Singapore, Malaysia and elsewhere. Garuda (Sanskrit: Garuḍa), the mythical bird who is the mount (vahanam) of Vishnu, is a national symbol in both Thailand and Indonesia; in the Philippines, gold images of Garuda have been found on Palawan; gold images of other Hindu gods and goddesses have also been found on Mindanao. Balinese Hinduism is somewhat different from Hinduism practised elsewhere, as Animism and local culture are incorporated into it. Christians can also be found throughout Southeast Asia; they are in the majority in East Timor and the Philippines, Asia's largest Christian nation. In addition, there are also older tribal religious practices in remote areas of Sarawak in East Malaysia, the highland Philippines and Papua in eastern Indonesia. In Burma, Sakka (Indra) is revered as a nat. In Vietnam, Mahayana Buddhism is practised, which is influenced by native animism but with strong emphasis on ancestor worship.
Country | Religions
Andaman and Nicobar Islands | Hinduism (69%), Christianity, Islam, Sikhism and others
Brunei | Islam (67%), Buddhism, Christianity, others (indigenous beliefs, etc.)
Myanmar (Burma) | Buddhism (89%), Islam, Christianity, Hinduism, Animism, others
Cambodia | Buddhism (97%), Islam, Christianity, Animism, others
Christmas Island | Buddhism (75%), Islam, Christianity
Cocos (Keeling) Islands | Islam (80%), others
East Timor | Roman Catholicism (97%), Islam, Protestantism, Buddhism, Hinduism
Indonesia | Islam (87.18%), Protestantism, Roman Catholicism, Hinduism, Buddhism, othersIndonesia – The World Factbook
Laos | Buddhism (67%), Animism, Christianity, others
Malaysia | Islam (60.4%), Buddhism, Christianity, Hinduism, Animism
Philippines | Roman Catholicism (80%), Islam (11%),http://www.ncmf.gov.ph/ Other Christian (3%), Buddhism (2%), Animism (1.25%), others (0.35%)
Singapore | Buddhism, Christianity, Islam, Taoism, Hinduism, others
Thailand | Buddhism (93.83%), Islam (4.56%), Christianity (0.8%), Hinduism (0.011%), others (0.079%)
Vietnam | Vietnamese folk religion (45.3%), Buddhism (16.4%), Christianity (8.2%), Other (0.4%), Unaffiliated (29.6%)http://www.pewforum.org/2012/12/18/table-religious-composition-by-country-in-percentages/
Languages
Each of the languages has been influenced by cultural pressures due to trade, immigration, and historical colonisation.
The language composition for each country is as follows: (official languages are in bold.)
Country | Languages
Andaman and Nicobar Islands | Bengali, Hindi, English, Tamil, Telugu, Malayalam, Shompen, A-Pucikwar, Aka-Jeru, Aka-Bea, Aka-Bo, Aka-Cari, Aka-Kede, Aka-Kol, Aka-Kora, Aka-Bale, Jangil, Jarawa, Oko-Juwoi, Önge, Sentinelese, Camorta, Car, Chaura, Katchal, Nancowry, Southern Nicobarese, Teressa
Brunei | Malay, English, Indonesian, Chinese, indigenous Bornean dialectsCIA – The World Factbook – Brunei. Cia.gov. Retrieved on 17 October 2011.
Myanmar (Burma) | Burmese, Shan, Kayin (Karen), Rakhine, Kachin, Chin, Mon, Kayah, Chinese and other ethnic languages
Cambodia | Khmer, Thai, English, French, Vietnamese, Cham, Chinese, othersCIA – The World Factbook – Cambodia. Cia.gov. Retrieved on 17 October 2011.
Christmas Island | English, Chinese, MalayCIA – The World Factbook – Christmas Island. Cia.gov. Retrieved on 17 October 2011.
Cocos (Keeling) Islands | English, Cocos MalayCIA – The World Factbook – Cocos (Keeling) Islands. Cia.gov. Retrieved on 17 October 2011.
East Timor | Tetum, Portuguese, Indonesian, English, Mambae, Makasae, Tukudede, Bunak, Galoli, Kemak, Fataluku, Baikeno, othersCIA – The World Factbook – East Timor. Cia.gov. Retrieved on 17 October 2011.
Indonesia | Indonesian, Javanese, English, Dutch, Sundanese, Batak, Minangkabau, Buginese, Banjar, Papuan, Dayak, Acehnese, Ambonese, Balinese, Betawi, Madurese, Musi, Manado, Sasak, Makassarese, Batak Dairi, Karo, Mandailing, Jambi Malay, Mongondow, Gorontalo, Ngaju, Nias, North Moluccan, Uab Meto, Bima, Manggarai, Toraja-Sa'dan, Komering, Tetum, Rejang, Muna, Sumbawa, Bangka Malay, Osing, Gayo, Bungku-Tolaki languages, Moronene, Bungku, Bahonsuai, Kulisusu, Wawonii, Mori Bawah, Mori Atas, Padoe, Tomadino, Lewotobi, Tae', Mongondow, Lampung, Tolaki, Ma'anyan, Simeulue, Gayo, Buginese, Mandar, Minahasan, Enggano, Ternate, Tidore, Mairasi, East Cenderawasih Language, Lakes Plain Languages, Tor-Kwerba, Nimboran, Skou/Sko, Border languages, Senagi, Pauwasi, Mandarin, Hokkien, Cantonese, Hakka, Teochew, Tamil, Punjabi, Bengali, Arabic
Laos | Lao, Thai, Vietnamese, Hmong, Miao, Mien, Dao, Shan, French, English and othersCIA – The World Factbook – Laos. Cia.gov. Retrieved on 17 October 2011.
Malaysia | Malay, English, Indonesian, Mandarin, Tamil, Kedah Malay, Sabah Malay, Brunei Malay, Kelantan Malay, Pahang Malay, Acehnese, Javanese, Minangkabau, Banjar, Buginese, Hakka, Cantonese, Hokkien, Teochew, Foochownese, Telugu, Hindi, Bengali, Punjabi, Sinhalese, Malayalam, Arabic, Brunei Bisaya, Okolod, Kota Marudu Talantang, Kelabit, Lotud, Terengganu Malay, Semelai, Thai, Iban, Kadazan, Dusun, Kristang, Bajau, Jakun, Mah Meri, Batek, Melanau, Semai, Temuan, Temiar, Penan, Tausug, Iranun and othersCIA – The World Factbook – Malaysia. Cia.gov. Retrieved on 17 October 2011. See: Languages of Malaysia
Philippines | Filipino, English, Tagalog, Visayan (Aklanon, Cebuano, Kinaray-a, Capiznon, Hiligaynon, Waray, Masbateño, Romblomanon, Cuyonon, Surigaonon, Butuanon, Tausug), Ivatan, Ilocano, Ibanag, Pangasinan, Kapampangan, Bicolano, Sama-Bajaw, Maguindanao, Maranao, Chavacano
Singapore | English, Chinese, Malay, Tamil, Bengali, Arabic, Urdu, Indonesian, Hokkien, Teochew, Cantonese, Hakka, Telugu, Malayalam, Hindi, Persian, Javanese, Japanese, Korean, Dutch, Singlish creole and others
Thailand | Thai, Teochew, Minnan, Hakka, Yuehai, English, Malay, Bengali, Hindi, Urdu, Arabic, Lao, Khmer, Isaan, Shan, Lue, Phutai, Mon, Mein, Hmong, Karen, Burmese and othersCIA – The World Factbook – Thailand. Cia.gov. Retrieved on 17 October 2011.
Vietnam | Vietnamese, English, Khmer, French, Cantonese, Hmong, Tai, Cham and othersCIA – The World Factbook – Vietnam. Cia.gov. Retrieved on 17 October 2011.
Indonesia has over 700 languages in over 17,000 islands across the archipelago, making Indonesia the second most linguistically diverse country on the planet, slightly behind Papua New Guinea. The official language of Indonesia is Indonesian (Bahasa Indonesia), widely used in educational, political, economic, and other formal situations. In daily activities and informal situations, most Indonesians speak in their local language(s). For more details, see: Languages of Indonesia.
The Philippines has more than a hundred native languages, most without official recognition from the national government. Spanish and Arabic are on a voluntary and optional basis. Malaysian, Indonesian, Standard Chinese, Lan-nang (Min Nan), Cantonese, Hakka, Japanese and Korean are also spoken in the Philippines due to immigration, geographic proximity and historical ties. See: Languages of the Philippines
Cities
Jabodetabek (Jakarta/West Java/Banten). Jabodetabek is an abbreviation of Jakarta, Bogor, Depok, Tangerang, and Bekasi, which are the satellite cities of the Special Capital Region of Jakarta.
Metro Manila (Manila/Quezon City/Makati/Taguig/Pasay and 12 others)
Bangkok Metropolitan Region (Bangkok/Nonthaburi/Samut Prakan/Pathum Thani/Samut Sakhon/Nakhon Pathom)
Greater Kuala Lumpur/Klang Valley (Kuala Lumpur/Selangor)
Greater Penang (Penang/Kedah)
Ho Chi Minh City Metropolitan Area (Ho Chi Minh City/Vung Tau)
Yangon Region (Yangon/Thanlyin)
Hanoi Capital Region (Hanoi/Hai Phong/Ha Long)
Gerbangkertosusila (Surabaya/Sidoarjo/Gresik/Mojokerto/Lamongan/Bangkalan)
Bandung Metropolitan Area (Bandung/Cimahi)
Metro Cebu (Cebu City/Mandaue/Lapu-Lapu City/Talisay City and 11 others)
Metro Davao (Davao City/Digos/Tagum/Island Garden City of Samal)
Culture
thumb|left| Rice field at Tonlé Sap in Cambodia
thumb|right|A paddy field in Vietnam.
The culture in Southeast Asia is very diverse: on mainland Southeast Asia, the culture is a mix of Indochinese (Burma, Cambodia, Laos and Thailand) and Chinese (Vietnam) influences, while in Indonesia, the Philippines, Singapore and Malaysia the culture is a mix of indigenous Austronesian, Indian, Islamic, Western, and Chinese cultures. Brunei also shows a strong influence from Arabia. Singapore and Vietnam show more Chinese influence,http://unesdoc.unesco.org/images/0014/001478/147804eb.pdf in that Singapore, although geographically a Southeast Asian nation, is home to a large Chinese majority, and Vietnam was in China's sphere of influence for much of its history. Indian influence in Singapore is only evident through the Tamil migrants,http://www.microsite.nl.sg/PDFs/BiblioAsia/BIBA_0303Oct07a.pdf who influenced, to some extent, the cuisine of Singapore. Throughout Vietnam's history, it has had no direct influence from India, only indirect contact through the Thai, Khmer and Cham peoples.
Rice paddy agriculture has existed in Southeast Asia for thousands of years, ranging across the subregion. Some dramatic examples of these rice paddies populate the Banaue Rice Terraces in the mountains of Luzon in the Philippines. Maintenance of these paddies is very labour-intensive. The rice paddies are well-suited to the monsoon climate of the region.
Stilt houses can be found all over Southeast Asia, from Thailand and Vietnam, to Borneo, to Luzon in the Philippines, to Papua New Guinea. The region has diverse metalworking traditions, especially in Indonesia. These include weaponry, such as the distinctive kris, and musical instruments, such as the gamelan.
Influences
The region's chief cultural influences have been from some combination of Islam, India, and China. Diverse cultural influence is pronounced in the Philippines, derived particularly from the period of the Spanish and American rule, contact with Indian-influenced cultures, and the Chinese and Japanese trading era.
As a rule, the peoples who ate with their fingers were more likely influenced by the culture of India, for example, than the culture of China, where the peoples ate with chopsticks; tea, as a beverage, can be found across the region. The fish sauces distinctive to the region tend to vary.
Arts
thumb|The Royal Ballet of Cambodia (Paris, France 2010)
The arts of Southeast Asia have affinity with the arts of other areas. Dance in much of Southeast Asia includes movement of the hands as well as the feet, to express the emotion of the dance and the meaning of the story that the dancer tells the audience. Most Southeast Asian courts introduced dance; in particular, the Cambodian royal ballet represented it in the early 7th century, before the Khmer Empire, and was highly influenced by Indian Hinduism. The Apsara Dance, famous for its strong hand and foot movements, is a great example of Hindu symbolic dance.
Puppetry and shadow plays were also a favoured form of entertainment in past centuries, a famous one being Wayang from Indonesia. The arts and literature in parts of Southeast Asia are quite influenced by Hinduism, which was brought to the region centuries ago. Indonesia, despite conversion to Islam, which opposes certain forms of art, has retained many forms of Hindu-influenced practices, culture, art and literature. Examples are the Wayang Kulit (shadow puppet) theatre and literature like the Ramayana. The wayang kulit show was recognized by UNESCO on November 7, 2003, as a Masterpiece of the Oral and Intangible Heritage of Humanity.
It has been pointed out that Khmer and Indonesian classical arts were concerned with depicting the life of the gods, but to the Southeast Asian mind the life of the gods was the life of the peoples themselves—joyous, earthy, yet divine. The Tai, coming late into Southeast Asia, brought with them some Chinese artistic traditions, but they soon shed them in favour of the Khmer and Mon traditions, and the only indications of their earlier contact with Chinese arts were in the style of their temples, especially the tapering roof, and in their lacquerware.
Music
Traditional music in Southeast Asia is as varied as its many ethnic and cultural divisions. Main styles of traditional music can be seen: Court music, folk music, music styles of smaller ethnic groups, and music influenced by genres outside the geographic region.
Of the court and folk genres, gong-chime ensembles and orchestras make up the majority (the exception being lowland areas of Vietnam). Gamelan and Angklung orchestras from Indonesia, Piphat /Pinpeat ensembles of Thailand and Cambodia and the Kulintang ensembles of the southern Philippines, Borneo, Sulawesi and Timor are the three main distinct styles of musical genres that have influenced other traditional musical styles in the region. String instruments also are popular in the region.
On November 18, 2010, UNESCO officially recognized the angklung as a Masterpiece of the Oral and Intangible Heritage of Humanity, and encouraged the Indonesian people and government to safeguard, transmit and promote its performance and to encourage the craftsmanship of angklung making.
Writing
thumb|right|The Terengganu Inscription Stone in Malaysia, inscribed in 1303, is the oldest written artifact bearing the Jawi script.
The history of Southeast Asia has led to a wealth of different authors, from both within and without writing about the region.
Originally, Indians taught the native inhabitants about writing. This is shown through Brahmic forms of writing present in the region, such as the Balinese script written on split palm leaves called lontar.
The antiquity of this form of writing extends to before the invention of paper around the year 100 in China. Each palm leaf section held only several lines, written longitudinally across the leaf, and bound by twine to the other sections. The outer portion was decorated. The alphabets of Southeast Asia tended to be abugidas until the arrival of the Europeans, who used words that also ended in consonants, not just vowels. Other forms of official documents, which did not use paper, included Javanese copperplate scrolls. This material would have been more durable than paper in the tropical climate of Southeast Asia.
In Malaysia, Brunei, and Singapore, the Malay language is now generally written in the Latin script. The same phenomenon is present in Indonesian, although different spelling standards are utilised (e.g. 'Teksi' in Malay and 'Taksi' in Indonesian for the word 'Taxi').
The use of Chinese characters, in the past and present, is only evident in Vietnam and, more recently, Singapore and Malaysia. The adoption of Chinese characters in Vietnam dates back to around 111 BC, when it was occupied by the Chinese. A Vietnamese script called Chu Nom used modified Chinese characters to express the Vietnamese language. Both classical Chinese and Chu Nom were used up until the early 20th century.
However, the use of the Chinese script has been in decline, especially in Singapore and Malaysia, as the younger generations favour the Latin script.
See also
List of Southeast Asian leaders
South Asia
Northeast Asia
Southeast Asia Treaty Organization
Tiger Cub Economies
References
Tiwari, Rajnish (2003): Post-crisis Exchange Rate Regimes in Southeast Asia (PDF), Seminar Paper, University of Hamburg.
Further reading
Osborne, Milton (2010; first published in 1979). Southeast Asia: An Introductory History. Allen & Unwin. ISBN 978-1-74237-302-7
Fletcher, Banister; Cruickshank, Dan (1996; first published in 1896). Sir Banister Fletcher's a History of Architecture, Architectural Press, 20th edition. ISBN 0-7506-2267-9. Cf. Part Four, Chapter 27.
Farah, Paolo Davide (2015). Energy Investments and Environmental Concerns in Southeast Asia, in: Paolo Davide Farah & Piercarlo Rossi, Energy: Policy, Legal and Social-Economic Issues under the Dimensions of Sustainability and Security, World Scientific Reference on Globalisation in Eurasia and the Pacific Rim, Imperial College Press (London, UK) & World Scientific Publishing, Nov. 2015.
External links
Topography of Southeast Asia in detail (PDF) (previous version)
CityMayors.com article
Southeast Asian Archive at the University of California, Irvine
Southeast Asia Digital Library at Northern Illinois University
"Documenting the Southeast Asian Refugee Experience", exhibit at the University of California, Irvine, Library
Southeast Asia Visions, a collection of historical travel narratives Cornell University Library Digital Collection
Official website of the ASEAN Tourism Association
Southeast Asia Time Lapse Video
Art of Island Southeast Asia, a full text exhibition catalogue from The Metropolitan Museum of Art
Category:Regions of Asia
Rajasthan

Rajasthan (literally, "Land of Kings")Tara Boland-Crewe, David Lea, The Territories and States of India, p. 208. is India's largest state by area ( or 10.4% of India's total area). It is located on the western side of the country, where it comprises most of the wide and inhospitable Thar Desert (also known as the "Rajasthan Desert" and "Great Indian Desert") and shares a border with the Pakistani provinces of Punjab to the northwest and Sindh to the west, along the Sutlej-Indus river valley. Elsewhere it is bordered by the other Indian states: Punjab to the north; Haryana and Uttar Pradesh to the northeast; Madhya Pradesh to the southeast; and Gujarat to the southwest. Rajasthan is an economically backward region of India and has the highest percentage of unemployed youth in North India.
Rajasthan is divided into 9 regions: Ajmer State, Hadoti, Dhundhar, Gorwar, Shekhawati, Mewar, Marwar, Vagad and Mewat, which are equally rich in their heritage and artistic contributions. These regions have a parallel history which goes along with that of the state.
Major features include the ruins of the Indus Valley Civilization at Kalibanga; the Dilwara Temples, a Jain pilgrimage site at Rajasthan's only hill station, Mount Abu, in the ancient Aravalli mountain range; and, in eastern Rajasthan, the Keoladeo National Park near Bharatpur, a World Heritage Site known for its bird life. Rajasthan is also home to two national tiger reserves, the Ranthambore National Park in Sawai Madhopur and Sariska Tiger Reserve in Alwar.
The state was formed on 30 March 1949 when Rajputana, the name adopted by the British Raj for its dependencies in the region, was merged into the Dominion of India. Its capital and largest city is Jaipur, also known as the Pink City, located on the state's eastern side. Other important cities are Jodhpur, Udaipur, Bikaner, Kota and Ajmer.
Etymology
The first mention of the name "Rajasthan" appears in the 1829 publication Annals and Antiquities of Rajast'han or the Central and Western Rajpoot States of India, while the earliest known record of "Rajputana" as a name for the region is in George Thomas's 1800 memoir Military Memories. John Keay, in his book India: A History, stated that "Rajputana" was coined by the British in 1829. John Briggs, translating Ferishta's history of early Islamic India, used the phrase "Rajpoot (Rajput) princes" rather than "Indian princes".
History
Ancient
Parts of what is now Rajasthan belonged to the Vedic Civilisation and the Indus Valley Civilization. Kalibangan, in Hanumangarh district, was a major provincial capital of the Indus Valley Civilization.
Matsya, a state of the Vedic civilisation of India, is said to have roughly corresponded to the former state of Jaipur in Rajasthan and included the whole of Alwar with portions of Bharatpur. The capital of Matsya was at Viratanagar (modern Bairat), which is said to have been named after its founder, king Virata.
BhargavaSudhir Bhargava, "Location of Brahmavarta and Drishadwati river is important to find earliest alignment of Saraswati river", Seminar, Saraswati river-a perspective, 20–22 Nov. 2009, Kurukshetra University, Kurukshetra, organised by: Saraswati Nadi Shodh Sansthan, Haryana, Seminar Report: pages 114-117 identifies the two districts of Jhunjhunu and Sikar, and parts of Jaipur district, along with the Haryana districts of Mahendragarh and Rewari, as part of the Vedic state of Brahmavarta. Bhargava also locates the present-day Sahibi River as the Vedic Drishadwati River, which along with the Saraswati River formed the borders of Brahmavarta.Manusmriti Manu and Bhrigu narrated the Manusmriti to a congregation of seers in this area. The ashrams of the Vedic seers Bhrigu and his son Chayvan Rishi, for whom Chyawanprash was formulated, were near Dhosi Hill, part of which lies in Dhosi village of Jhunjhunu district of Rajasthan and part in Mahendragarh district of Haryana.
The Western Kshatrapas (35–405 CE), the Saka rulers of the western part of India, were successors to the Indo-Scythians and were contemporaries of the Kushans, who ruled the northern part of the Indian subcontinent. The Indo-Scythians invaded the area of Ujjain and established the Saka era (with their calendar), marking the beginning of the long-lived Saka Western Satraps state."The dynastic art of the Kushans", John Rosenfield, p 130.
Classical
Gurjars
Gurjars ruled this part of the country through many dynasties, and the region was known as Gurjaratra. Up to the tenth century, almost the whole of North India acknowledged the supremacy of the Gurjars, with their seat of power at Kannauj.
Gurjara-Pratihara
The Gurjar Pratihar Empire acted as a barrier against Arab invaders from the 8th to the 11th century. The chief accomplishment of the Gurjara Pratihara empire lies in its successful resistance to foreign invasions from the west, starting in the days of Junaid. Historian R. C. Majumdar says that this was openly acknowledged by the Arab writers. He further notes that historians of India have wondered at the slow progress of Muslim invaders in India, as compared with their rapid advance in other parts of the world. Now there seems little doubt that it was the power of the Gurjara Pratihara army that effectively barred the progress of the Arabs beyond the confines of Sindh, their first conquest, for nearly 300 years.
Medieval and Early Modern
Historical tribes
Traditionally the Rajputs, Jats, Meenas, Rebaris, Gurjars, Bhils, Rajpurohits, Charans, Yadavs, Bishnois, Sermals, PhulMali (Saini) and other tribes made a great contribution to building the state of Rajasthan. All these tribes suffered great difficulties in protecting their culture and the land. Millions of them were killed trying to protect their land. A number of Gurjars were exterminated in the Bhinmal and Ajmer areas fighting the invaders. Bhils once ruled Kota. Meenas were rulers of Bundi, Hadoti and the Dhundhar region.
Major Rulers
thumb|left|upright|A portrait of Hem Chandra Vikramaditya from the 1910s.
Hem Chandra Vikramaditya, the Hindu Emperor, was born in the village of Machheri in Alwar District in 1501. He won 22 battles against Afghans, from Punjab to Bengal, including the states of Ajmer and Alwar in Rajasthan, and defeated Akbar's forces twice, at Agra and Delhi, in 1556 at the Battle of DelhiBhardwaj, K. K. "Hemu-Napoleon of Medieval India", Mittal Publications, New Delhi, p.25 before acceding to the throne of Delhi and establishing the "Hindu Raj" in North India, albeit for a short duration, from Purana Quila in Delhi. Hem Chandra was killed on the battlefield at the Second Battle of Panipat, fighting against the Mughals, on 5 November 1556.
thumb|upright|left|Maharana Pratap Singh, legendary sixteenth-century Rajput ruler of Mewar.
Maharana Pratap of Mewar resisted Akbar in the famous Battle of Haldighati (1576) and later operated from hilly areas of his kingdom. The Bhils were Maharana's main allies during these wars. Most of these attacks were repulsed even though the Mughal forces outnumbered Mewar Rajputs in all the wars fought between them. The Haldighati war was fought between 10,000 Mewaris and a 100,000-strong Mughal force (including many Rajputs like Kachwahas from Dhundhar).
The Jat king Maharaja Suraj Mal (February 1707 – 25 December 1765), or Sujan Singh, was ruler of Bharatpur in Rajasthan. A contemporary historian described him as "the Plato of the Jat people" and a modern writer as the "Jat Odysseus", because of his political sagacity, steady intellect and clear vision.R.C.Majumdar, H.C.Raychaudhury, Kalikaranjan Datta: An Advanced History of India, fourth edition, 1978, ISBN 0-333-90298-X, Page-535
Rajput era
Rajput families rose to prominence in the 6th century CE. The Rajputs put up a valiant resistance to the Islamic invasions and protected this land with their warfare and chivalry for more than 500 years. They also resisted Mughal incursions into India, thus slowing the Mughal advance into the Indian subcontinent. Later, the Mughals, through skilled warfare, were able to get a firm grip on northern India, including Rajasthan. Mewar led other kingdoms in its resistance to outside rule. Most notably, Rana Sanga fought the Battle of Khanua against Babur, the founder of the Mughal empire.
left|thumb|Hawa Mahal ("Palace of Winds") in Jaipur
Over the years, the Mughals began to have internal disputes which greatly distracted them at times. The Mughal Empire continued to weaken, and with the decline of the Mughal Empire in the 18th century, Rajputana came under the suzerainty of the Marathas.
The Marathas, who were Hindus from the state of what is now Maharashtra, ruled Rajputana for most of the eighteenth century. The Maratha Empire, which had replaced the Mughal Empire as the overlord of the subcontinent, was finally replaced by the British Empire in 1818.
Following their rapid defeat, the Rajput kings concluded treaties with the British in the early 19th century, accepting British suzerainty and control over their external affairs in return for internal autonomy.
thumb|right|The Mehrangarh Fort at Jodhpur was built by Rao Jodha in 1459.
Modern
Modern Rajasthan includes most of Rajputana, which comprises the erstwhile nineteen princely states, two chiefships, and the British district of Ajmer-Merwara. Marwar (Jodhpur), Bikaner, Mewar (Chittorgarh), Alwar and Dhundhar (Jaipur) were some of the main Rajput princely states. Bharatpur and Dholpur were Jat princely states whereas Tonk was a princely state under a Muslim Nawab.
Rajasthan's formerly independent kingdoms created a rich architectural and cultural heritage, seen even today in their numerous forts and palaces (Mahals and Havelis), which are enriched by features of Islamic and Jain architecture.
The development of frescos in Rajasthan is linked with the history of the Marwaris (Jodhpur-pali), who played a crucial role in the economic development of the region. Many wealthy families throughout Indian history have links to Marwar. These include the legendary Birla, Bajaj, Dalmia, and Mittal families.
Geography
The geographic features of Rajasthan are the Thar Desert and the Aravalli Range, which runs through the state from southwest to northeast, almost from one end to the other, for more than . Mount Abu lies at the southwestern end of the range, separated from the main ranges by the West Banas River, although a series of broken ridges continues into Haryana in the direction of Delhi, where it can be seen as outcrops in the form of the Raisina Hill and the ridges farther north. About three-fifths of Rajasthan lies northwest of the Aravallis, leaving two-fifths to the east and south.
thumb|left|Camel ride in the Thar Desert near Jaisalmer.
The northwestern portion of Rajasthan is generally sandy and dry. Most of this region is covered by the Thar Desert, which extends into adjoining portions of Pakistan. The Aravalli Range does not intercept the moisture-giving southwest monsoon winds off the Arabian Sea, as it lies in a direction parallel to that of the incoming monsoon winds, leaving the northwestern region in a rain shadow. The Thar Desert is thinly populated; the town of Jodhpur is the largest city in the desert and is known as the gateway to the Thar. The desert includes major districts such as Jodhpur, Jaisalmer, Barmer, Bikaner and Nagaur. The area is also important from a defence point of view: Jodhpur air base is India's largest air base, military and BSF bases are also situated here, and a civil airport is located in Jodhpur. The Northwestern thorn scrub forests lie in a band around the Thar Desert, between the desert and the Aravallis. This region receives less than 400 mm of rain in an average year. Temperatures can sometimes exceed 54 °C (129 °F) in the summer months and drop below freezing in the winter. The Godwar, Marwar and Shekhawati regions lie in the thorn scrub forest zone, along with the city of Jodhpur. The Luni River and its tributaries are the major river system of the Godwar and Marwar regions, draining the western slopes of the Aravallis and emptying southwest into the great Rann of Kutch wetland in neighbouring Gujarat. This river is saline in its lower reaches and remains potable only up to Balotra in Barmer district. The Ghaggar River, which originates in Haryana, is an intermittent stream that disappears into the sands of the Thar Desert in the northern corner of the state and is seen as a remnant of the ancient Saraswati river.
The Aravalli Range and the lands to the east and southeast of the range are generally more fertile and better watered. This region is home to the Kathiarbar-Gir dry deciduous forests ecoregion, with tropical dry broadleaf forests that include teak, Acacia, and other trees. The hilly Vagad region, home to the cities of Dungarpur and Banswara lies in southernmost Rajasthan, on the border with Gujarat and Madhya Pradesh. With the exception of Mount Abu, Vagad is the wettest region in Rajasthan, and the most heavily forested. North of Vagad lies the Mewar region, home to the cities of Udaipur and Chittaurgarh. The Hadoti region lies to the southeast, on the border with Madhya Pradesh. North of Hadoti and Mewar lies the Dhundhar region, home to the state capital of Jaipur. Mewat, the easternmost region of Rajasthan, borders Haryana and Uttar Pradesh. Eastern and southeastern Rajasthan is drained by the Banas and Chambal rivers, tributaries of the Ganges.
thumb|left|Hills around Jaipur, viewed from Jaigarh Fort.
The Aravalli Range runs across the state from the southwest peak Guru Shikhar (Mount Abu), which is in height, to Khetri in the northeast. This range divides the state into 60% in the northwest of the range and 40% in the southeast. The northwest tract is sandy and unproductive with little water, but improves gradually from desert land in the far west and northwest to comparatively fertile and habitable land towards the east. The area includes the Thar Desert. The south-eastern area, higher in elevation (100 to 350 m above sea level) and more fertile, has a very diversified topography. In the south lies the hilly tract of Mewar. In the southeast, a large area within the districts of Kota and Bundi forms a tableland. To the northeast of these districts is a rugged region (badlands) following the line of the Chambal River. Farther north the country levels out; the flat plains of the northeastern Bharatpur district are part of an alluvial basin. Merta City lies at the geographical centre of Rajasthan.
{| class="toccolours" style="margin:1em; float:right; width:25%;"
|+ State symbols of Rajasthan|-
| Formation day| 1 November
|-
| State animal| Chinkara and Camel
|-
| State bird| Godavan (great Indian bustard)
|-
| State flower| Flower – Rohida
|-
| State Tree'| Khejri
|}
Flora and fauna
150px|thumb|upright|right|The great Indian bustard has been classed as critically endangered since 2011.
Though a large percentage of the total area is desert with little forest cover, Rajasthan has a rich and varied flora and fauna. The natural vegetation is classed as Northern Desert Thorn Forest (Champion 1936). These occur in small clumps scattered in a more or less open form. The density and size of patches increase from west to east following the increase in rainfall.
The Desert National Park in Jaisalmer, spread over an area of , is an excellent example of the ecosystem of the Thar Desert and its diverse fauna. Seashells and massive fossilised tree trunks in this park record the geological history of the desert. The region is a haven for migratory and resident birds of the desert. One can see many eagles, harriers, falcons, buzzards, kestrels and vultures. Short-toed eagles (Circaetus gallicus), tawny eagles (Aquila rapax), spotted eagles (Aquila clanga), laggar falcons (Falco jugger) and kestrels are the commonest of these.
The Ranthambore National Park located in Sawai Madhopur, one of the finest tiger reserves in the country, became a part of Project Tiger in 1973.
The Dhosi Hill, located in the district of Jhunjhunu and known as 'Chayvan Rishi's Ashram', where 'Chyawanprash' was formulated for the first time, has unique and rare herbs growing on it.
The Sariska Tiger Reserve located in Alwar district, from Delhi and from Jaipur, covers an area of approximately . The area was declared a national park in 1979.
Tal Chhapar Sanctuary is a very small sanctuary in Sujangarh, Churu District, from Jaipur in the Shekhawati region. This sanctuary is home to a large population of blackbuck. Desert foxes and the caracal, an apex predator also known as the desert lynx, can also be spotted, along with birds such as the partridge and sand grouse. The great Indian bustard, known locally as the godavan, which is the state bird, has been classed as critically endangered since 2011.
Wildlife protection
320x240px|thumbnail|right|Reclining tiger, Ranthambore National Park
Rajasthan is also noted for its national parks and wildlife sanctuaries. There are four national parks and wildlife sanctuaries: Keoladeo National Park of Bharatpur, Sariska Tiger Reserve of Alwar, Ranthambore National Park of Sawai Madhopur, and Desert National Park of Jaisalmer. A national-level institute, the Arid Forest Research Institute (AFRI), an autonomous institute of the ministry of forestry, is situated in Jodhpur and works continuously on desert flora and their conservation.
Ranthambore National Park is known worldwide for its tiger population and is considered by both wilderness lovers and photographers as one of the best places in India to spot tigers. At one point, due to poaching and negligence, tigers became extinct at Sariska, but five tigers have since been relocated there. Prominent among the wildlife sanctuaries are Mount Abu Sanctuary, Bhensrod Garh Sanctuary, Darrah Sanctuary, Jaisamand Sanctuary, Kumbhalgarh Wildlife Sanctuary, Jawahar Sagar Sanctuary, and Sita Mata Wildlife Sanctuary.
Communication
Major ISP and telecom companies are present in Jaipur, including Airtel, Data Infosys Limited, Reliance Limited, RAILTEL, Software Technology Parks of India (STPI), Tata Telecom and Vodafone. Data Infosys was the first Internet Service Provider (ISP) to bring the internet to Rajasthan, in April 1999, and OASIS was the first private mobile telephone company; it was later taken over by Airtel.
Government and politics
The politics of Rajasthan is dominated mainly by the Bharatiya Janata Party and the Indian National Congress. The Chief Minister, serving her second term, is Vasundhara Raje.
Administrative divisions
thumb|right|The Jain temple at Ranakpur is in Pali district.
Rajasthan is divided into 33 districts within seven divisions:
The seven divisions are Jaipur, Jodhpur, Ajmer, Udaipur, Bikaner, Kota and Bharatpur.
Economy
Rajasthan's economy is primarily agricultural and pastoral. Wheat and barley are cultivated over large areas, as are pulses, sugarcane, and oilseeds. Cotton and tobacco are the state's cash crops. Rajasthan is among the largest producers of edible oils in India and the second largest producer of oilseeds. Rajasthan is also the biggest wool-producing state in India and the main opium producer and consumer. There are mainly two crop seasons. The water for irrigation comes from wells and tanks. The Indira Gandhi Canal irrigates northwestern Rajasthan.
thumb|left|A marble quarry in Kishangarh Ajmer.
The main industries are mineral based, agriculture based, and textile based. Rajasthan is the second largest producer of polyester fibre in India. The Pali and Bhilwara districts produce more cloth than Bhiwandi, Maharashtra; Bhilwara is the largest centre for the production and export of suitings, while Pali is the largest centre for the production and export of cotton and polyester blouse pieces and rubia. Several prominent chemical and engineering companies are located in the city of Kota, in southern Rajasthan. Rajasthan is pre-eminent in quarrying and mining in India. The Taj Mahal was built from white marble mined in the town of Makrana. The state is the second largest source of cement in India. It has rich salt deposits at Sambhar, copper mines at Khetri, Jhunjhunu, and zinc mines at Dariba, Zawar mines and Rampura Agucha (opencast) near Bhilwara. Dimensional stone mining is also undertaken in Rajasthan. Jodhpur sandstone, termed "chittar patthar", is mostly used in monuments, important buildings and residential buildings. Jodhpur leads in the handicraft and guar gum industries.
Rajasthan is also part of the Delhi–Mumbai Industrial Corridor and is set to benefit economically. The state gets 39% of the DMIC, with the major districts of Jaipur, Alwar, Kota and Bhilwara benefiting.
thumb|The Indira Gandhi Canal passes through the Thar Desert near Ramgarh, Jaisalmer.
Crude oil
Rajasthan earns Rs. 150 million (approx. US$2.5 million) per day as revenue from the crude oil sector. This is expected to reach Rs. 250 million per day in 2013, an increase of Rs. 100 million, or more than 66 percent. The government of India has given permission to extract 300,000 barrels of crude per day from the Barmer region, where production is currently 175,000 barrels per day. Once this limit is achieved, Rajasthan will become the leader in crude extraction in the country; Bombay High currently leads with a production of 250,000 barrels of crude per day. Once the limit of 300,000 barrels per day is reached, the overall production of the country will increase by 15 percent. Cairn India carries out the exploration and extraction of crude oil in Rajasthan.
Rajasthan has rich reserves of limestone. Niki Chemical Industries, Jodhpur, is one of the largest manufacturers of slaked lime (hydrated lime, Ca(OH)2).
Transport
thumb|Jaipur Airport|185x185px
thumb|Road Tunnel in Jaipur Rajasthan
thumb|left|NH 8 between Udaipur and Ahmedabad.|179x179px
Rajasthan is connected by many national highways, the most renowned being NH 8, India's first 4–8 lane highway. Rajasthan also has an inter-city surface transport system in terms of both railways and a bus network. All chief cities are connected by air, rail and road.
Air
There are three main airports in Rajasthan (Jaipur International Airport, Jodhpur Airport and Udaipur Airport), as well as the recently opened Bikaner Airport. These airports connect Rajasthan with major cities of India such as Delhi and Mumbai. There are two other airports, in Jaisalmer and Kota, but they are not yet open for commercial/civilian flights. One more airport, Kishangarh Airport at Kishangarh, Ajmer, is being constructed by the Airports Authority of India.
Rail
Rajasthan is connected with the main cities of India by rail. Jaipur, Jodhpur, Kota, Bharatpur, Bikaner, Ajmer, Alwar, Abu Road and Udaipur are the principal railway stations in Rajasthan. Kota City is the only electrified section, served by three Rajdhani Expresses and trains to all major cities of India. There is also an international railway, the Thar Express, from Jodhpur (India) to Karachi (Pakistan); however, it is not open to foreign nationals.
Road
Rajasthan is well connected to the main cities of the country, including Delhi, Ahmedabad and Indore, by state and national highways, and is served by the Rajasthan State Road Transport Corporation (RSRTC) and private operators.
Demographics
thumb|Children performing for Independence Day in village in Alwar district, Rajasthan
According to the final results of the 2011 Census of India, Rajasthan has a total population of 68,548,437. Rajasthan's population is made up mainly of Hindus, who account for 87.45% of the population. Muslims make up 10.08%, Sikhs 1.27% and Jains 1% of the population. The state is also populated by Sindhis, who came to Rajasthan from Sindh province (now in Pakistan) during the partition of India in 1947.
Hindi is the official and the most widely spoken language in the state (91% of the population as per the 2001 census), followed by Bhili (5%), Punjabi (2%), and Urdu (2%).
Culture
Rajasthan is culturally rich and has artistic and cultural traditions which reflect the ancient Indian way of life. There is a rich and varied folk culture from the villages, which is often depicted and is symbolic of the state. Highly cultivated classical music and dance, with their own distinct styles, are part of the cultural tradition of Rajasthan. The music has songs that depict day-to-day relationships and chores, often focused around fetching water from wells or ponds.
left|thumb|Special Jodhpuri Mirchi vada
Rajasthani cooking was influenced by both the war-like lifestyles of its inhabitants and the availability of ingredients in this arid region. Food that could last for several days and could be eaten without heating was preferred. The scarcity of water and of fresh green vegetables has had its effect on the cooking. The cuisine is known for its snacks such as Bikaneri bhujia. Other famous dishes include bajre ki roti (millet bread) and lashun ki chutney (hot garlic paste), mawa kachori, mirchi bada, pyaaj kachori and ghevar from Jodhpur, Alwar ka mawa (milk cake), malpuas from Pushkar and rasgullas from Bikaner. Originating in the Marwar region of the state is the concept of the Marwari Bhojnalaya, or vegetarian restaurant, today found in many parts of India, which offers the vegetarian food of the Marwari people.
Dal-baati-churma is very popular in Rajasthan. The traditional way to serve it is to first coarsely mash the baati and then pour pure ghee on top of it. It is served with daal (lentils) and spicy garlic chutney, and also with besan (gram flour) ki kadi. It is commonly served at all festivities, including religious occasions, wedding ceremonies and birthday parties in Rajasthan. Dal-baati-churma is a combination of three different food items: daal (lentils), baati and churma (a sweet). It is a typical Rajasthani dish.
thumb|right|"Up-down" dolls are found in the roadside shops of Jaisalmer.
The Ghoomar dance from Jodhpur Marwar and Kalbeliya dance of Jaisalmer have gained international recognition. Folk music is a large part of Rajasthani culture. Kathputli, Bhopa, Chang, Teratali, Ghindr, Kachchhighori, and Tejaji are examples of traditional Rajasthani culture. Folk songs are commonly ballads which relate heroic deeds and love stories; and religious or devotional songs known as bhajans and banis which are often accompanied by musical instruments like dholak, sitar, and sarangi are also sung.
thumb|right|Traditional musical instruments of Rajasthan
Rajasthan is known for its traditional, colourful art. Block prints, tie-and-dye prints, Bagru prints, Sanganer prints and zari embroidery are major export products from Rajasthan. Handicraft items such as wooden furniture and crafts, carpets and blue pottery are commonly found here. Shopping reflects the colourful culture; Rajasthani clothes have a lot of mirror work and embroidery. A traditional Rajasthani dress for women comprises an ankle-length skirt and a short top, also known as a lehenga or chaniya choli. A piece of cloth is used to cover the head, both for protection from heat and for maintenance of modesty. Rajasthani dresses are usually designed in bright colours such as blue, yellow and orange.
left|thumb|A traditional folk singer practising in front of Jodhpur fort.
The main religious festivals are Deepawali, Holi, Gangaur, Teej, Gogaji, Shri Devnarayan Jayanti, Makar Sankranti and Janmashtami, as the main religion is Hinduism. Rajasthan's desert festival is held once a year during winter. Dressed in costumes, the people of the desert dance and sing ballads. There are fairs with snake charmers, puppeteers, acrobats and folk performers. Camels play a role in this festival.
Spirit possession has been documented in modern Rajasthan. Some of the spirits possessing Rajasthanis are seen as good and beneficial while others are seen as malevolent. The good spirits include murdered royalty, the underworld god Bhaironji, and Muslim saints. Bad spirits include perpetual debtors who die in debt, stillborn infants, deceased widows, and foreign tourists. The possessed individual is referred to as a ghorala ("mount"). Possession, even if it is by a benign spirit, is regarded as undesirable, as it entails loss of self-control and violent emotional outbursts.Jeffrey G. Snodgrass, "Imitation Is Far More than the Sincerest of Flattery: The Mimetic Power of Spirit Possession in Rajasthan, India," Cultural Anthropology, Vol. 17, No. 1 (Feb. 2002), pp. 32–64.
Education
thumb|Children at a non-formal education centre
thumb|AIIMS Campus at Jodhpur.
In recent years, Rajasthan has worked to improve the state of its education. The state government has been making sustained efforts to raise the education standard.
In recent decades, the literacy rate of Rajasthan has increased significantly. In 1991, the state's literacy rate was only 38.55% (54.99% male and 20.44% female). In 2001, the literacy rate increased to 60.41% (75.70% male and 43.85% female). This was the highest leap in the percentage of literacy recorded in India (the rise in female literacy being 23%). At the 2011 census, Rajasthan had a literacy rate of 67.06% (80.51% male and 52.66% female). Although Rajasthan's literacy rate is below the national average of 74.04% and its female literacy rate is the lowest in the country, the state has been praised for its efforts and achievements in raising male and female literacy rates.
In Rajasthan, Jodhpur and Kota are the two major educational hubs. Kota is known for its coaching institutes, which prepare students for various competitive exams, including medical and engineering entrance exams, and is popularly referred to as the "coaching capital of India". Jodhpur is home to many higher educational institutions such as IIT, AIIMS, National Law University, Sardar Patel Police University, National Institute of Fashion Technology and MBM Engineering College. Other major educational institutions are Birla Institute of Technology and Science Pilani, Malaviya National Institute of Technology Jaipur, IIM Udaipur, AIIMS Jodhpur and LNMIIT. Rajasthan has nine universities and more than 250 colleges, 55,000 primary and 7,400 secondary schools. There are 41 engineering colleges with an annual enrolment of about 11,500 students. In addition, there are 41 private universities, such as Singhania University (Pacheri Bari), Amity University Rajasthan (Jaipur), Mewar University (Chittorgarh), OPJS University (Churu), Mody University of Technology and Science (Lakshmangarh, a women's university in Sikar district) and RNB Global University (Bikaner). The state has 23 polytechnic colleges and 152 Industrial Training Institutes (ITIs) that impart vocational training.
In 2009, the Central University of Rajasthan, a central university fully funded by the Government of India, was established near Kishangarh in Ajmer district.
The acclaimed Mayo College, an all-boys boarding school, is located in Ajmer district. Many notable alumni of the school hold important positions in organisations worldwide.
In rural areas of Rajasthan, the literacy rate is 76.16% for males and 45.8% for females. When the governor of Rajasthan set a minimum educational qualification for the village panchayat elections, the move was debated by all parties except the BJP.
Tourism
thumb|An aerial view of the Blue City, Jodhpur
Rajasthan attracted 14 percent of total foreign visitors during 2009–2010, which is the fourth highest among Indian states. It also ranks fourth in domestic tourist visits. Tourism is a flourishing industry in Rajasthan. The palaces of Jaipur and Ajmer-Pushkar, the lakes of Udaipur, the desert forts of Jodhpur, Taragarh Fort (Star Fort) in Ajmer, and Bikaner and Jaisalmer rank among the most preferred destinations in India for many tourists, both Indian and foreign. Tourism accounts for eight percent of the state's domestic product. Many old and neglected palaces and forts have been converted into heritage hotels. Tourism has increased employment in the hospitality sector.
thumb|left|Pushkar Lake, a sacred Hindu lake, is surrounded by fifty-two bathing ghats.
Rajasthan is famous for its forts, carved temples and decorated havelis, which were built by Rajput kings in pre-Muslim-era Rajasthan. Rajasthan's Jaipur Jantar Mantar, Mehrangarh Fort and stepwell of Jodhpur, Dilwara Temples, Chittorgarh Fort, Lake Palace, miniature paintings in Bundi, and numerous city palaces and havelis are part of the architectural heritage of India. Jaipur, the Pink City, is noted for its ancient houses made of a type of sandstone dominated by a pink hue. In Jodhpur, most houses are painted blue. At Ajmer, there is the white marble Bara-dari on the Anasagar lake. Jain temples dot Rajasthan from north to south and east to west. The Dilwara Temples of Mount Abu, the Ranakpur Temple dedicated to Lord Adinath in Pali District, the Jain temples in the fort complexes of Chittor, Jaisalmer and Kumbhalgarh, the Lodurva Jain temples, the Mirpur Jain Temple, the Sarun Mata Temple at Kotputli, the Bhandasar and Karni Mata Temples of Bikaner and Mandore of Jodhpur are some of the best examples.
See also
Outline of Rajasthan
List of people from Rajasthan
Tourism in Rajasthan
References
Further reading
Bhattacharya, Manoshi. 2008. The Royal Rajputs: Strange Tales and Stranger Truths. Rupa & Co, New Delhi.
Gahlot, Sukhvirsingh. 1992. RAJASTHAN: Historical & Cultural. J. S. Gahlot Research Institute, Jodhpur.
Somani, Ram Vallabh. 1993. History of Rajasthan. Jain Pustak Mandir, Jaipur.
Tod, James & Crooke, William. 1829. Annals and Antiquities of Rajast'han or the Central and Western Rajpoot States of India,. Numerous reprints, including 3 Vols. Reprint: Low Price Publications, Delhi. 1990. ISBN 81-85395-68-3 (set of 3 vols.)
Mathur, P.C., 1995. Social and Economic Dynamics of Rajasthan Politics (Jaipur, Aaalekh)
External links
Government
Official Site of the Government of Rajasthan, India
Official Tourism Site of Rajasthan, India
General information
Rajasthan, Encyclopædia Britannica entry
Category:States and territories of India
Category:States and territories established in 1950
Category:1950 establishments in India
Mammal

Mammals are any vertebrates within the class Mammalia (from Latin mamma "breast"), a clade of endothermic amniotes distinguished from reptiles and birds by the possession of a neocortex (a region of the brain), hair, three middle ear bones and mammary glands. The sister group of mammals may be the extinct Haldanodon. The mammals represent the only living Synapsida, which together with the Sauropsida form the Amniota clade. The mammals consist of the Yinotheria, including the monotremes, and the Theriiformes, including the therians.
Mammals include the largest animals on the planet, the great whales, as well as some of the most intelligent, such as elephants, primates and cetaceans. The basic body type is a terrestrial quadruped, but some mammals are adapted for life at sea, in the air, in trees, underground or on two legs. The largest group of mammals, the placentals, have a placenta, which enables the feeding of the fetus during gestation.
Mammals range in size from the bumblebee bat to the blue whale. With the exception of the five species of monotreme (egg-laying mammals), all modern mammals give birth to live young. Most mammals, including the six most species-rich orders, belong to the placental group. The three largest orders in number of species are Rodentia: mice, rats, porcupines, beavers, capybaras and other gnawing mammals; Chiroptera: bats; and Soricomorpha: shrews, moles and solenodons. The next three biggest orders, depending on the biological classification scheme used, are the Primates including the great apes and monkeys; the Cetartiodactyla including whales and even-toed ungulates; and the Carnivora which includes cats, dogs, weasels, bears and seals.
All female mammals nurse their young with milk, secreted from the mammary glands. According to Mammal Species of the World, 5,416 species were known in 2006. These were grouped in 1,229 genera, 153 families and 29 orders. In 2008 the International Union for Conservation of Nature (IUCN) completed a five-year, 1,700-scientist Global Mammal Assessment for its IUCN Red List, which counted 5,488 species. In some classifications, extant mammals are divided into two subclasses: the Prototheria, that is, the order Monotremata; and the Theria, or the infraclasses Metatheria and Eutheria. The marsupials constitute the crown group of the Metatheria, and include all living metatherians as well as many extinct ones; the placentals are the crown group of the Eutheria. While mammal classification at the family level has been relatively stable, several contending classifications regarding the higher levels—subclass, infraclass and order, especially of the marsupials—appear in contemporaneous literature. Much of the changes reflect the advances of cladistic analysis and molecular genetics. Findings from molecular genetics, for example, have prompted adopting new groups, such as the Afrotheria, and abandoning traditional groups, such as the Insectivora.
The early synapsid mammalian ancestors were sphenacodont pelycosaurs, a group that included the non-mammalian Dimetrodon. At the end of the Carboniferous period, this group diverged from the sauropsid line that led to today's reptiles and birds. The line following the stem group Sphenacodontia split off several diverse groups of non-mammalian synapsids—sometimes referred to as mammal-like reptiles—before giving rise to the proto-mammals (Therapsida) in the early Mesozoic era. The modern mammalian orders arose in the Paleogene and Neogene periods of the Cenozoic era, after the extinction of non-avian dinosaurs, and have been among the dominant terrestrial animal groups from 66 million years ago to the present.
In human culture, domesticated mammals played a major role in the Neolithic revolution, causing farming to replace hunting and gathering, and leading to a major restructuring of human societies with the first civilizations. They provided, and continue to provide, power for transport and agriculture, as well as various commodities such as meat, dairy products, wool, and leather. Mammals are hunted or raced for sport, and are used as model organisms in science. Mammals have been depicted in art since Palaeolithic times, and appear in literature, film, mythology, and religion.
Classification
thumb|300px|The orders Rodentia (blue), Chiroptera (red) and Soricomorpha (yellow) together comprise over 70% of mammal species.
Mammal classification has been through several iterations since Carl Linnaeus initially defined the class. No classification system is universally accepted; McKenna & Bell (1997) and Wilson & Reeder (2005) provide useful recent compendiums. George Gaylord Simpson's "Principles of Classification and a Classification of Mammals" (AMNH Bulletin v. 85, 1945) provides systematics of mammal origins and relationships that were universally taught until the end of the 20th century. Since Simpson's classification, the paleontological record has been recalibrated, and the intervening years have seen much debate and progress concerning the theoretical underpinnings of systematization itself, partly through the new concept of cladistics. Though field work gradually made Simpson's classification outdated, it remains the closest thing to an official classification of mammals.
Most mammals, including the six most species-rich orders, belong to the placental group. The three largest orders in numbers of species are Rodentia: mice, rats, porcupines, beavers, capybaras and other gnawing mammals; Chiroptera: bats; and Soricomorpha: shrews, moles and solenodons. The next three biggest orders, depending on the biological classification scheme used, are the Primates including the great apes and monkeys; the Cetartiodactyla including whales and even-toed ungulates; and the Carnivora which includes cats, dogs, weasels, bears and seals. According to Mammal Species of the World, 5,416 species were identified in 2006. These were grouped into 1,229 genera, 153 families and 29 orders. In 2008, the International Union for Conservation of Nature (IUCN) completed a five-year Global Mammal Assessment for its IUCN Red List, which counted 5,488 species.
Definitions
The word "mammal" is modern, from the scientific name Mammalia coined by Carl Linnaeus in 1758, derived from the Latin mamma ("teat, pap"). In an influential 1988 paper, Timothy Rowe defined Mammalia phylogenetically as the crown group of mammals, the clade consisting of the most recent common ancestor of living monotremes (echidnas and platypuses) and therian mammals (marsupials and placentals) and all descendants of that ancestor. Since this ancestor lived in the Jurassic period, Rowe's definition excludes all animals from the earlier Triassic, despite the fact that Triassic fossils in the Haramiyida have been referred to the Mammalia since the mid-19th century. If Mammalia is considered as the crown group, its origin can be roughly dated as the first known appearance of animals more closely related to some extant mammals than to others. Ambondro is more closely related to monotremes than to therian mammals while Amphilestes and Amphitherium are more closely related to the therians; as fossils of all three genera are dated about in the Middle Jurassic, this is a reasonable estimate for the appearance of the crown group.
T. S. Kemp has provided a more traditional definition: "synapsids that possess a dentary–squamosal jaw articulation and occlusion between upper and lower molars with a transverse component to the movement" or, equivalently in Kemp's view, the clade originating with the last common ancestor of Sinoconodon and living mammals. The earliest known synapsid satisfying Kemp's definitions is Tikitherium, dated , so the appearance of mammals in this broader sense can be given this Late Triassic date.
McKenna/Bell classification
In 1997, the mammals were comprehensively revised by Malcolm C. McKenna and Susan K. Bell, which has resulted in the McKenna/Bell classification. Their 1997 book, Classification of Mammals above the Species Level, is a comprehensive work on the systematics, relationships and occurrences of all mammal taxa, living and extinct, down through the rank of genus, though molecular genetic data challenge several of the higher level groupings. The authors worked together as paleontologists at the American Museum of Natural History, New York. McKenna inherited the project from Simpson and, with Bell, constructed a completely updated hierarchical system, covering living and extinct taxa that reflects the historical genealogy of Mammalia.
Extinct groups are represented by a dagger (†).
Class Mammalia
Subclass Prototheria: monotremes: echidnas and the platypus
Subclass Theriiformes: live-bearing mammals and their prehistoric relatives
Infraclass †Allotheria: multituberculates
Infraclass †Eutriconodonta: eutriconodonts
Infraclass Holotheria: modern live-bearing mammals and their prehistoric relatives
Superlegion †Kuehneotheria
Supercohort Theria: live-bearing mammals
Cohort Marsupialia: marsupials
Magnorder Australidelphia: Australian marsupials and the monito del monte
Magnorder Ameridelphia: New World marsupials. Now considered paraphyletic, with shrew opossums being closer to australidelphians.
Cohort Placentalia: placentals
Magnorder Xenarthra: xenarthrans
Magnorder Epitheria: epitheres
Superorder †Leptictida
Superorder Preptotheria
Grandorder Anagalida: lagomorphs, rodents and elephant shrews
Grandorder Ferae: carnivorans, pangolins, †creodonts and relatives
Grandorder Lipotyphla: insectivorans
Grandorder Archonta: bats, primates, colugos and treeshrews
Grandorder Ungulata: ungulates
Order Tubulidentata incertae sedis: aardvark
Mirorder Eparctocyona: †condylarths, whales and artiodactyls (even-toed ungulates)
Mirorder †Meridiungulata: South American ungulates
Mirorder Altungulata: perissodactyls (odd-toed ungulates), elephants, manatees and hyraxes
Molecular classification of placentals
Molecular studies based on DNA analysis have suggested new relationships among mammal families over the last few years. Most of these findings have been independently validated by retrotransposon presence/absence data. Classification systems based on molecular studies reveal three major groups or lineages of placental mammals—Afrotheria, Xenarthra and Boreoeutheria—which diverged in the Cretaceous. The relationships between these three lineages is contentious, and all three possible different hypotheses have been proposed with respect to which group is basal. These hypotheses are Atlantogenata (basal Boreoeutheria), Epitheria (basal Xenarthra) and Exafroplacentalia (basal Afrotheria). Boreoeutheria in turn contains two major lineages—Euarchontoglires and Laurasiatheria.
Estimates for the divergence times between these three placental groups range from 105 to 120 million years ago, depending on the type of DNA used (such as nuclear or mitochondrial) and varying interpretations of paleogeographic data.
Cladogram based on Tarver et al. (2016)
Group I: Superorder Afrotheria
Clade Afroinsectiphilia
Order Macroscelidea: elephant shrews (Africa)
Order Afrosoricida: tenrecs and golden moles (Africa)
Order Tubulidentata: aardvark (Africa south of the Sahara)
Clade Paenungulata
Order Hyracoidea: hyraxes or dassies (Africa, Arabia)
Order Proboscidea: elephants (Africa, Southeast Asia)
Order Sirenia: dugong and manatees (cosmopolitan tropical)
Group II: Superorder Xenarthra
Order Pilosa: sloths and anteaters (neotropical)
Order Cingulata: armadillos and extinct relatives (Americas)
Group III: Magnaorder Boreoeutheria
Superorder: Euarchontoglires (Supraprimates)
Grandorder Euarchonta
Order Scandentia: treeshrews (Southeast Asia).
Order Dermoptera: flying lemurs or colugos (Southeast Asia)
Order Primates: lemurs, bushbabies, monkeys, apes, humans (cosmopolitan)
Grandorder Glires
Order Lagomorpha: pikas, rabbits, hares (Eurasia, Africa, Americas)
Order Rodentia: rodents (cosmopolitan)
Superorder: Laurasiatheria
Order Eulipotyphla: shrews, hedgehogs, moles, solenodons
Clade Ferungulata
Order Cetartiodactyla: cetaceans (whales, dolphins and porpoises) and even-toed ungulates, including pigs, cattle, deer and giraffes
Clade Pegasoferae
Order Chiroptera: bats (cosmopolitan)
Clade Zooamata
Order Perissodactyla: odd-toed ungulates, including horses, donkeys, zebras, tapirs and rhinoceroses
Clade Ferae
Order Pholidota: pangolins or scaly anteaters (Africa, South Asia)
Order Carnivora: carnivores (cosmopolitan), including cats and dogs
Taxonomy and phylogeny
Origins
Synapsida, a clade that contains mammals and their extinct relatives, originated during the Pennsylvanian subperiod, when they split from reptilian and avian lineages. Crown group mammals evolved from earlier mammaliaforms during the Early Jurassic. The cladogram takes Mammalia to be the crown group.
Evolution from amniotes
thumb|The original synapsid skull structure contains one temporal opening behind the orbitals, in a fairly low position on the skull (lower right in this image). This opening might have assisted in containing the jaw muscles of these organisms which could have increased their biting strength.
The first fully terrestrial vertebrates were amniotes. Like their amphibious tetrapod predecessors, they had lungs and limbs. Amniotic eggs, however, have internal membranes that allow the developing embryo to breathe but keep water in. Hence, amniotes can lay eggs on dry land, while amphibians generally need to lay their eggs in water.
The first amniotes apparently arose in the Pennsylvanian subperiod of the Carboniferous. They descended from earlier reptiliomorph amphibious tetrapods, which lived on land that was already inhabited by insects and other invertebrates as well as ferns, mosses and other plants. Within a few million years, two important amniote lineages became distinct: the synapsids, which would later include the common ancestor of the mammals; and the sauropsids, which now include turtles, lizards, snakes, crocodilians, dinosaurs and birds. Synapsids have a single hole (temporal fenestra) low on each side of the skull. One synapsid group, the pelycosaurs, included the largest and fiercest animals of the early Permian. Nonmammalian synapsids are sometimes called "mammal-like reptiles".
Therapsids descended from pelycosaurs in the Middle Permian, about 265 million years ago, and became the dominant land vertebrates. They differ from basal eupelycosaurs in several features of the skull and jaws, including larger skulls and incisors that are equal in size in therapsids, but not in eupelycosaurs. The therapsid lineage leading to mammals went through a series of stages, beginning with animals that were very similar to their pelycosaur ancestors and ending with probainognathian cynodonts, some of which could easily be mistaken for mammals. Those stages were characterized by:
The gradual development of a bony secondary palate.
Progression towards an erect limb posture, which would increase the animals' stamina by avoiding Carrier's constraint. But this process was slow and erratic: for example, all herbivorous nonmammaliaform therapsids retained sprawling limbs (some late forms may have had semierect hind limbs); Permian carnivorous therapsids had sprawling forelimbs, and some late Permian ones also had semisprawling hindlimbs. In fact, modern monotremes still have semisprawling limbs.
The dentary gradually became the main bone of the lower jaw which, by the Triassic, progressed towards the fully mammalian jaw (the lower jaw consisting only of the dentary) and middle ear (which is constructed from bones that were previously used to form the jaw joint of reptiles).
First mammals
The Permian–Triassic extinction event, which was a prolonged event due to the accumulation of several extinction pulses, ended the dominance of carnivorous therapsids. In the early Triassic, most medium to large land carnivore niches were taken over by archosaurs which, over an extended period (35 million years), came to include the crocodylomorphs, the pterosaurs and the dinosaurs; however, large cynodonts like Trucidocynodon and traversodontids still occupied large sized carnivorous and herbivorous niches respectively. By the Jurassic, the dinosaurs had come to dominate the large terrestrial herbivore niches as well.
The first mammals (in Kemp's sense) appeared in the Late Triassic epoch (about 225 million years ago), 40 million years after the first therapsids. They expanded out of their nocturnal insectivore niche from the mid-Jurassic onwards; The Jurassic Castorocauda, for example, had adaptations for swimming, digging and catching fish. Most, if not all, are thought to have remained nocturnal (the Nocturnal bottleneck), accounting for much of the typical mammalian traits. The majority of the mammal species that existed in the Mesozoic Era were multituberculates, eutriconodonts and spalacotheriids. The earliest known metatherian is Sinodelphys, found in 125 million-year-old Early Cretaceous shale in China's northeastern Liaoning Province. The fossil is nearly complete and includes tufts of fur and imprints of soft tissues.
thumb|Restoration of Juramaia sinensis, the oldest known Eutherian (160 mya)
The oldest known fossil among the Eutheria ("true beasts") is the small shrewlike Juramaia sinensis, or "Jurassic mother from China", dated to 160 million years ago in the late Jurassic. A later eutherian, Eomaia, dated to 125 million years ago in the early Cretaceous, possessed some features in common with the marsupials but not with the placentals, evidence that these features were present in the last common ancestor of the two groups but were later lost in the placental lineage. In particular, the epipubic bones extend forwards from the pelvis. These are not found in any modern placental, but they are found in marsupials, monotremes, nontherian mammals and Ukhaatherium, an early Cretaceous animal in the eutherian order Asioryctitheria. This also applies to the multituberculates. They are apparently an ancestral feature, which subsequently disappeared in the placental lineage. These epipubic bones seem to function by stiffening the muscles during locomotion, reducing the amount of space being presented, which placentals require to contain their fetus during gestation periods. A narrow pelvic outlet indicates that the young were very small at birth and therefore pregnancy was short, as in modern marsupials. This suggests that the placenta was a later development.
The earliest known monotreme was Teinolophos, which lived about 120 million years ago in Australia. Monotremes have some features which may be inherited from the original amniotes such as the same orifice to urinate, defecate and reproduce (cloaca) – as lizards and birds also do – and they lay eggs which are leathery and uncalcified.
Earliest appearances of features
Hadrocodium, whose fossils date from approximately 195 million years ago, in the early Jurassic, provides the first clear evidence of a jaw joint formed solely by the squamosal and dentary bones; there is no space in the jaw for the articular, a bone involved in the jaws of all early synapsids.
The earliest clear evidence of hair or fur is in fossils of Castorocauda and Megaconus, from 164 million years ago in the mid-Jurassic. In the 1950s, it was suggested that the foramina (passages) in the maxillae and premaxillae (bones in the front of the upper jaw) of cynodonts were channels which supplied blood vessels and nerves to vibrissae (whiskers) and so were evidence of hair or fur; it was soon pointed out, however, that foramina do not necessarily show that an animal had vibrissae, as the modern lizard Tupinambis has foramina that are almost identical to those found in the nonmammalian cynodont Thrinaxodon. Popular sources, nevertheless, continue to attribute whiskers to Thrinaxodon. Studies on Permian coprolites suggest that non-mammalian synapsids of the epoch already had fur, setting the evolution of hairs possibly as far back as dicynodonts.
When endothermy first appeared in the evolution of mammals is uncertain, though it is generally agreed to have first evolved in non-mammalian therapsids. Modern monotremes have lower body temperatures and more variable metabolic rates than marsupials and placentals, but there is evidence that some of their ancestors, perhaps including ancestors of the therians, may have had body temperatures like those of modern therians. Likewise, some modern therians like afrotheres and xenarthrans have secondarily developed lower body temperatures.
The evolution of erect limbs in mammals is incomplete — living and fossil monotremes have sprawling limbs. The parasagittal (nonsprawling) limb posture appeared sometime in the late Jurassic or early Cretaceous; it is found in the eutherian Eomaia and the metatherian Sinodelphys, both dated to 125 million years ago. Epipubic bones, a feature that strongly influenced the reproduction of most mammal clades, are first found in Tritylodontidae, suggesting that it is a synapomorphy between them and mammaliformes. They are omnipresent in non-placental mammaliformes, though Megazostrodon and Erythrotherium appear to have lacked them.
It has been suggested that the original function of lactation (milk production) was to keep eggs moist. Much of the argument is based on monotremes, the egg-laying mammals.
Rise of the mammals
Therian mammals took over the medium- to large-sized ecological niches in the Cenozoic, after the Cretaceous–Paleogene extinction event emptied ecological space once filled by non-avian dinosaurs and other groups of reptiles, as well as various other mammal groups. Then mammals diversified very quickly; both birds and mammals show an exponential rise in diversity. For example, the earliest known bat dates from about 50 million years ago, only 16 million years after the extinction of the dinosaurs.
Molecular phylogenetic studies initially suggested that most placental orders diverged about 100 to 85 million years ago and that modern families appeared in the period from the late Eocene through the Miocene. However, no placental fossils have been found from before the end of the Cretaceous. The earliest undisputed fossils of placentals come from the early Paleocene, after the extinction of the dinosaurs. In particular, scientists have identified an early Paleocene animal named Protungulatum donnae as one of the first placental mammals; however, it has since been reclassified as a non-placental eutherian. Recalibrations of genetic and morphological diversity rates have suggested a terminal Maastrichtian origin for placentals, and a Paleocene origin for most modern clades.
The earliest known ancestor of primates is Archicebus achilles from around 55 million years ago. This tiny primate weighed 20–30 grams (0.7–1.1 ounce) and could fit within a human palm.
Anatomy and morphology
Distinguishing features
Living mammal species can be identified by the presence of sweat glands, including those that are specialized to produce milk to nourish their young. In classifying fossils, however, other features must be used, since soft tissue glands and many other features are not visible in fossils.
Many traits shared by all living mammals appeared among the earliest members of the group:
Jaw joint - The dentary (the lower jaw bone, which carries the teeth) and the squamosal (a small cranial bone) meet to form the joint. In most gnathostomes, including early therapsids, the joint consists of the articular (a small bone at the back of the lower jaw) and quadrate (a small bone at the back of the upper jaw).
Middle ear - In crown-group mammals, sound is carried from the eardrum by a chain of three bones, the malleus, the incus and the stapes. Ancestrally, the malleus and the incus are derived from the articular and the quadrate bones that constituted the jaw joint of early therapsids.
Tooth replacement - Teeth are replaced once or (as in toothed whales and murid rodents) not at all, rather than being replaced continually throughout life.
Prismatic enamel - The enamel coating on the surface of a tooth consists of prisms, solid, rod-like structures extending from the dentin to the tooth's surface.
Occipital condyles - Two knobs at the base of the skull fit into the topmost neck vertebra; most other tetrapods, in contrast, have only one such knob.
For the most part, these characteristics were not present in the Triassic ancestors of the mammals. Nearly all mammal groups possess an epipubic bone, the exception being modern placentals.
Biological systems
thumb|left|Raccoon lungs being inflated manually.
The majority of mammals have seven cervical vertebrae (bones in the neck), including bats, giraffes, whales and humans. The exceptions are the manatee and the two-toed sloth, which have just six, and the three-toed sloth which has nine cervical vertebrae. All mammalian brains possess a neocortex, a brain region unique to mammals. Placental mammals have a corpus callosum, unlike monotremes and marsupials.
The lungs of mammals are spongy and honeycombed. Breathing is mainly achieved with the diaphragm, which divides the thorax from the abdominal cavity, forming a dome convex to the thorax. Contraction of the diaphragm flattens the dome, increasing the volume of the lung cavity. Air enters through the oral and nasal cavities, and travels through the larynx, trachea and bronchi, and expands the alveoli. Relaxing the diaphragm has the opposite effect, decreasing the volume of the lung cavity, causing air to be pushed out of the lungs. During exercise, the abdominal wall contracts, increasing pressure on the diaphragm, which forces air out quicker and more forcefully. The rib cage is able to expand and contract the chest cavity through the action of other respiratory muscles. Consequently, air is sucked into or expelled out of the lungs, always moving down its pressure gradient. This type of lung is known as a bellows lung due to its resemblance to blacksmith bellows.
The mammalian heart has four chambers, two upper atria, the receiving chambers, and two lower ventricles, the discharging chambers. The heart has four valves, which separate its chambers and ensure that blood flows in the correct direction through the heart (preventing backflow). After gas exchange in the pulmonary capillaries (blood vessels in the lungs), oxygen-rich blood returns to the left atrium via one of the four pulmonary veins. Blood flows nearly continuously back into the atrium, which acts as the receiving chamber, and from here through an opening into the left ventricle. Most blood flows passively into the heart while both the atria and ventricles are relaxed, but toward the end of the ventricular relaxation period, the left atrium will contract, pumping blood into the ventricle. The heart also requires nutrients and oxygen found in blood like other muscles, and is supplied via coronary arteries.
The integumentary system is made up of three layers: the outermost epidermis, the dermis and the hypodermis. The epidermis is typically 10 to 30 cells thick; its main function is to provide a waterproof layer. Its outermost cells are constantly lost; its bottommost cells are constantly dividing and pushing upward. The middle layer, the dermis, is 15 to 40 times thicker than the epidermis. The dermis is made up of many components, such as bony structures and blood vessels. The hypodermis is made up of adipose tissue, which stores lipids and provides cushioning and insulation. The thickness of this layer varies widely from species to species; marine mammals require a thick hypodermis (blubber) for insulation, and right whales have the thickest blubber. Although other animals have features such as whiskers, feathers, setae, or cilia that superficially resemble it, no animals other than mammals have hair. It is a definitive characteristic of the class. Though some mammals have very little, careful examination reveals the characteristic, often in obscure parts of their bodies.
Herbivores have developed a diverse range of physical structures to facilitate the consumption of plant material. To break up intact plant tissues, mammals have developed teeth structures that reflect their feeding preferences. For instance, frugivores (animals that feed primarily on fruit) and herbivores that feed on soft foliage have low-crowned teeth specialized for grinding foliage and seeds. Grazing animals that tend to eat hard, silica-rich grasses, have high-crowned teeth, which are capable of grinding tough plant tissues and do not wear down as quickly as low-crowned teeth. Most carnivorous mammals have carnassialiforme teeth (of varying length depending on diet), long canines and similar tooth replacement patterns.
The stomach of Artiodactyls is divided into four sections: the rumen, the reticulum, the omasum and the abomasum (only ruminants have a rumen). After the plant material is consumed, it is mixed with saliva in the rumen and reticulum and separates into solid and liquid material. The solids lump together to form a bolus (or cud), which is regurgitated. When the bolus enters the mouth, the fluid is squeezed out with the tongue and swallowed again. Ingested food passes to the rumen and reticulum where cellulolytic microbes (bacteria, protozoa and fungi) produce cellulase, which is needed to break down the cellulose in plants. Perissodactyls, in contrast to the ruminants, store digested food that has left the stomach in an enlarged cecum, where it is fermented by bacteria. Carnivora have a simple stomach adapted to digest primarily meat, as compared to the elaborate digestive systems of herbivorous animals, which are necessary to break down tough, complex plant fibers. The caecum is either absent or short and simple, and the large intestine is not sacculated or much wider than the small intestine.
thumb|Bovine kidney
The mammalian excretory system involves many components. Like most other land animals, mammals are ureotelic, and convert ammonia into urea, which is done by the liver as part of the urea cycle. Bilirubin, a waste product derived from blood cells, is passed through bile and urine with the help of enzymes excreted by the liver. The passing of bilirubin via bile through the intestinal tract gives mammalian feces a distinctive brown coloration. Distinctive features of the mammalian kidney include the presence of the renal pelvis and renal pyramids, and of a clearly distinguishable cortex and medulla, which is due to the presence of elongated loops of Henle. Only the mammalian kidney has a bean shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds, cetaceans and bears. Most adult placental mammals have no remaining trace of the cloaca. In the embryo, the cloaca divides into a posterior region that becomes part of the anus, and an anterior region that has different fates depending on the sex of the individual: in females, it develops into the vestibule that receives the urethra and vagina, while in males it forms the entirety of the penile urethra. However, the tenrecs, golden moles, and some shrews retain a cloaca as adults. (Biological Reviews - Cambridge Journals) In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. Monotremes, whose name translates from Greek as "single hole", have a true cloaca.
Sound production
thumb|300px|A diagram of ultrasonic signals emitted by a bat, and the echo from a nearby object
As in all other tetrapods, mammals have a larynx that can quickly open and close to produce sounds, and a supralaryngeal vocal tract which filters this sound. The lungs and surrounding musculature provide the air and pressure required to phonate. The larynx controls the pitch and volume of sound, but the strength the lungs exert to exhale also contributes to volume. More primitive mammals, such as the echidna, can only hiss, as sound is achieved solely through exhaling through a partially closed larynx. Other mammals phonate using vocal folds, as opposed to the vocal cords seen in birds and reptiles. The movement or tenseness of the vocal folds can result in many sounds such as purring and screaming. Mammals can change the position of the larynx, allowing them to breathe through the nose while swallowing through the mouth, and to create both oral and nasal sounds; nasal sounds, such as a dog whine, are generally soft sounds, and oral sounds, such as a dog bark, are generally loud. Some mammals have a large larynx and, thus, a low-pitched voice, namely the hammer-headed bat (Hypsignathus monstrosus) where the larynx can take up the entirety of the thoracic cavity while pushing the lungs, heart, and trachea into the abdomen. Large vocal pads can also lower the pitch, as in the low-pitched roars of big cats. The production of infrasound is possible in some mammals such as the African elephant (Loxodonta spp.) and baleen whales. Small mammals with small larynxes have the ability to produce ultrasound, which can be detected by modifications to the middle ear and cochlea. Ultrasound is inaudible to birds and reptiles, which might have been important during the Mesozoic, wherein birds and reptiles were the dominant predators. This private channel is used by some rodents in, for example, mother-to-pup communication, and by bats when echolocating. Toothed whales also use echolocation, but, as opposed to the vocal membrane that extends upward from the vocal folds, they have a melon to manipulate sounds. Some mammals, namely the primates, have air sacs attached to the larynx, which may function to increase the volume of sound.
The vocal production system is controlled by the cranial nerve nucleus in the brain, and supplied by the recurrent laryngeal nerve and the superior laryngeal nerve, branches of the vagus nerve. The vocal tract is supplied by the hypoglossal nerve and facial nerves. Electrical stimulation of the periaqueductal gray (PAG) region of the mammalian midbrain elicits vocalizations. The ability to learn new vocalizations is exemplified only in humans, seals, cetaceans, and possibly bats; in humans, this is the result of a direct connection between the motor cortex, which controls movement, and the motor neurons in the spinal cord.
Coloration
thumb|A leopard's disruptively colored coat provides camouflage for this ambush predator.
Mammalian coats or pelage are colored for a variety of reasons, the major selective pressures including camouflage, sexual selection, communication and physiological processes such as temperature regulation. Camouflage is a powerful influence in a large number of mammals, as it helps to conceal individuals from predators or prey. Aposematism, warning off possible predators, is the most likely explanation of the black-and-white pelage of many mammals which are able to defend themselves, such as in the foul-smelling skunk and the powerful and aggressive honey badger. In arctic and subarctic mammals such as the arctic fox (Alopex lagopus), collared lemming (Dicrostonyx groenlandicus), stoat (Mustela erminea), and snowshoe hare (Lepus americanus), seasonal color change between brown in summer and white in winter is driven largely by camouflage. Differences in female and male coat color may indicate nutrition and hormone levels, important in mate selection. Some arboreal mammals, notably primates and marsupials, have shades of violet, green, or blue skin on parts of their bodies, indicating some distinct advantage in their largely arboreal habitat due to convergent evolution. The green coloration of sloths, however, is the result of a symbiotic relationship with algae. Coat color is sometimes sexually dimorphic, as in many primate species.
Reproductive system
thumb|left|Goat kids stay with their mother until they are weaned.
thumb|Due to the presence of epipubic bones, non-placental mammals cannot expand their abdomen, being thus forced to give birth to (or lay eggs that hatch into) fetus-like larvae. Echidna "puggle" (a) compared to various "joeys": Virginia opossum (b), Gray short-tailed opossum (c), Eastern quoll (d), Koala (e), Brushtail possum (f) and Southern brown bandicoot (g).
Most mammals are viviparous, giving birth to live young. However, the five species of monotreme, the platypus and the four species of echidna, lay eggs. The monotremes have a sex determination system different from that of most other mammals. In particular, the sex chromosomes of a platypus are more like those of a chicken than those of a therian mammal.
The mammary glands of mammals are specialized to produce milk, the primary source of nutrition for newborns. The monotremes branched early from other mammals and do not have the nipples seen in most mammals, but they do have mammary glands. The young lick the milk from a mammary patch on the mother's belly.
Viviparous mammals are in the subclass Theria; those living today are in the marsupial and placental infraclasses. Marsupials have a short gestation period, typically shorter than their estrous cycle, and give birth to an undeveloped newborn that then undergoes further development; in many species, this takes place within a pouch-like sac, the marsupium, located in the front of the mother's abdomen. This is the plesiomorphic condition among viviparous mammals; the presence of epipubic bones in all non-placental mammals prevents the expansion of the torso needed for full pregnancy. Even non-placental eutherians probably reproduced this way. The placentals give birth to relatively complete and developed young, usually after long gestation periods. They get their name from the placenta, which connects the developing fetus to the uterine wall to allow nutrient uptake.
Endothermy
Nearly all mammals are endothermic ("warm-blooded"). Most mammals also have hair to help keep them warm. Like birds, mammals can forage or hunt in weather and climates too cold for ectothermic ("cold-blooded") reptiles and insects. Endothermy requires plenty of food energy, so mammals eat more food per unit of body weight than most reptiles. Small insectivorous mammals eat prodigious amounts for their size. A rare exception, the naked mole-rat produces little metabolic heat, so it is considered an operational poikilotherm. Birds are also endothermic, so endothermy is not unique to mammals.
Behavior
Communication and vocalization
thumb|upright|Vervet monkeys use at least four distinct alarm calls for different predators.
Many mammals communicate by vocalizing. Vocal communication serves many purposes, including in mating rituals, as warning calls, to indicate food sources, and for social purposes. Males often call during mating rituals to ward off other males and to attract females, as in the roaring of lions and red deer. The songs of the humpback whale may be signals to females; they have different dialects in different regions of the ocean. Social vocalizations include the territorial calls of gibbons, and the use of frequency in greater spear-nosed bats to distinguish between groups. The vervet monkey gives a distinct alarm call for each of at least four different predators, and the reactions of other monkeys vary according to the call. For example, if an alarm call signals a python, the monkeys climb into the trees, whereas the eagle alarm causes monkeys to seek a hiding place on the ground. Prairie dogs similarly have complex calls that signal the type, size, and speed of an approaching predator. Elephants communicate socially with a variety of sounds including snorting, screaming, trumpeting, roaring and rumbling. Some of the rumbling calls are infrasonic, below the hearing range of humans, and can be heard by other elephants over long distances at still times near sunrise and sunset.
Mammals signal by a variety of means. Many give visual anti-predator signals, as when deer and gazelle stot, honestly indicating their fit condition and their ability to escape, or when white-tailed deer and other prey mammals flag with conspicuous tail markings when alarmed, informing the predator that it has been detected. Many mammals make use of scent-marking, sometimes possibly to help defend territory, but probably with a range of functions both within and between species. Microbats and toothed whales including oceanic dolphins vocalize both socially and in echolocation.
Feeding
thumb|The insectivorous giant anteater eats some 30,000 insects per day.
To maintain a high constant body temperature is energy expensive – mammals therefore need a nutritious and plentiful diet. While the earliest mammals were probably predators, different species have since adapted to meet their dietary requirements in a variety of ways. Some eat other animals – this is a carnivorous diet (and includes insectivorous diets). Other mammals, called herbivores, eat plants, which contain complex carbohydrates such as cellulose. An herbivorous diet includes subtypes such as granivory (seed eating), folivory (leaf eating), frugivory (fruit eating), nectarivory (nectar eating), gummivory (gum eating) and mycophagy (fungus eating). The digestive tract of an herbivore is host to bacteria, housed either in the multichambered stomach or in a large cecum, that ferment these complex substances and make them available for digestion. Some mammals are coprophagous, consuming feces to absorb the nutrients not digested when the food was first ingested. An omnivore eats both prey and plants. Carnivorous mammals have a simple digestive tract because the proteins, lipids and minerals found in meat require little in the way of specialized digestion. Exceptions to this include baleen whales, which also house gut flora in a multi-chambered stomach, like terrestrial herbivores.
The size of an animal is also a factor in determining diet type (Allen's rule). Since small mammals have a high ratio of heat-losing surface area to heat-generating volume, they tend to have high energy requirements and a high metabolic rate. Mammals below a certain body weight are mostly insectivorous because they cannot tolerate the slow, complex digestive process of an herbivore. Larger animals, on the other hand, generate more heat and less of this heat is lost. They can therefore tolerate either a slower collection process (those that prey on larger vertebrates) or a slower digestive process (herbivores). Furthermore, mammals above a certain body weight usually cannot collect enough insects during their waking hours to sustain themselves. The only large insectivorous mammals are those that feed on huge colonies of insects (ants or termites).
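The scaling argument above can be made concrete with a toy calculation. The following is a minimal sketch, assuming idealized spherical body shapes and arbitrary example radii rather than measurements of real species:

```python
# Minimal sketch: surface-area-to-volume ratio for idealized spherical "bodies".
# Radii are arbitrary example values, not data for real animals.
import math

def surface_to_volume_ratio(radius_cm: float) -> float:
    surface = 4.0 * math.pi * radius_cm ** 2          # cm^2
    volume = (4.0 / 3.0) * math.pi * radius_cm ** 3   # cm^3
    return surface / volume                           # simplifies to 3 / radius

for radius in (1.0, 5.0, 25.0):  # loosely shrew-, rat- and badger-sized
    print(f"radius {radius:5.1f} cm -> surface:volume = {surface_to_volume_ratio(radius):.2f} per cm")
```

Because the ratio falls off as 3 divided by the radius, halving an animal's linear size roughly doubles its surface-to-volume ratio, which is why the smallest mammals lose heat fastest and need the richest diets relative to their size.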
Some mammals are omnivores and display varying degrees of carnivory and herbivory, generally leaning in favor of one more than the other. Since plants and meat are digested differently, there is a preference for one over the other, as in bears where some species may be mostly carnivorous and others mostly herbivorous. They are grouped into three categories: mesocarnivory (50-70% meat), hypercarnivory (70% and greater of meat), and hypocarnivory (50% or less of meat). The dentition of hypocarnivores consists of dull, triangular carnassial teeth meant for grinding food. Hypercarnivores, however, have conical teeth and sharp carnassials meant for slashing, and in some cases strong jaws for bone-crushing, as in the case of hyenas, allowing them to consume bones; some extinct groups, notably the Machairodontinae, had saber-shaped canines.
Some physiological carnivores consume plant matter and some physiological herbivores consume meat. From a behavioral aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. Physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. Thus, such animals are still able to be classified as carnivores and herbivores when they are just obtaining nutrients from materials originating from sources that do not seemingly complement their classification. For example, it is well documented that some ungulates, such as giraffes, camels, and cattle, will gnaw on bones to consume particular minerals and nutrients. Also, cats, which are generally regarded as obligate carnivores, occasionally eat grass to regurgitate indigestible material (such as hairballs), to aid with hemoglobin production, and as a laxative.
Many mammals, in the absence of sufficient food requirements in an environment, suppress their metabolism and conserve energy in a process known as hibernation. In the period preceding hibernation, larger mammals, such as bears, become polyphagic to increase fat stores, whereas smaller mammals prefer to collect and stash food. The slowing of the metabolism is accompanied by a decreased heart and respiratory rate, as well as a drop in internal temperatures, which can be around ambient temperature in some cases. For example, the internal temperatures of hibernating arctic ground squirrels can drop below freezing, although the head and neck always stay above freezing. A few mammals in hot environments aestivate in times of drought or extreme heat, namely the fat-tailed dwarf lemur (Cheirogaleus medius).
Intelligence
In intelligent mammals, such as primates, the cerebrum is larger relative to the rest of the brain. Intelligence itself is not easy to define, but indications of intelligence include the ability to learn, matched with behavioral flexibility. Rats, for example, are considered to be highly intelligent, as they can learn and perform new tasks, an ability that may be important when they first colonize a fresh habitat. In some mammals, food gathering appears to be related to intelligence: a deer feeding on plants has a smaller brain than a cat, which must think to outwit its prey.
thumb|A bonobo fishing for termites with a stick
Tool use by animals may indicate different levels of learning and cognition. The sea otter uses rocks as essential and regular parts of its foraging behaviour (smashing abalone from rocks or breaking open shells), with some populations spending 21% of their time making tools. Other tool use, such as chimpanzees using twigs to "fish" for termites, may be developed by watching others use tools and may even be a true example of animal teaching. Tools may even be used in solving puzzles in which the animal appears to experience a "Eureka moment". Other mammals that do not use tools, such as dogs, can also experience a Eureka moment.
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalisation quotient that can be used as another indication of animal intelligence. Sperm whales have the largest brain mass of any animal on Earth.
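The allometric comparison described here can be written as a short calculation. The sketch below is illustrative only: it assumes Jerison's commonly cited scaling constant of 0.12 with a 2/3 exponent, and the example masses are placeholders rather than measured data.

```python
# Minimal sketch of an encephalization quotient (EQ): observed brain mass divided
# by the brain mass expected for the animal's body mass under an allometric fit.
# The constant 0.12 and the 2/3 exponent follow Jerison's classic formulation;
# treat them, and the example masses below, as assumptions for illustration.
def expected_brain_mass_g(body_mass_g: float, k: float = 0.12, exponent: float = 2.0 / 3.0) -> float:
    return k * body_mass_g ** exponent

def encephalization_quotient(brain_mass_g: float, body_mass_g: float) -> float:
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Hypothetical example: a 65 kg mammal with a 1.3 kg brain.
print(round(encephalization_quotient(brain_mass_g=1300, body_mass_g=65000), 2))
```

An EQ near 1 means a brain about as large as expected for the body mass, while values well above 1 indicate a relatively enlarged brain.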
Self-awareness appears to be a sign of abstract thinking. Self-awareness, although not well-defined, is believed to be a precursor to more advanced processes such as metacognitive reasoning. The traditional method for measuring this is the mirror test, which determines if an animal possesses the ability of self-recognition. Mammals that have 'passed' the mirror test include Asian elephants (some pass, some do not); chimpanzees; bonobos; orangutans; humans, from 18 months (mirror stage); bottlenose dolphins; killer whales; and false killer whales.
Social structure
thumb|Female elephants live in stable groups, along with their offspring.
Eusociality is the highest level of social organization. These societies have an overlap of adult generations, the division of reproductive labor and cooperative caring of young. Eusocial behavior is usually found in insects, such as bees, ants and termites, but it is also demonstrated in two rodent species: the naked mole-rat and the Damaraland mole-rat.
Presociality is when animals exhibit more than just sexual interactions with members of the same species, but fall short of qualifying as eusocial. That is, presocial animals can display communal living, cooperative care of young, or primitive division of reproductive labor, but they do not display all of the three essential traits of eusocial animals. Humans and some species of Callitrichidae (marmosets and tamarins) are unique among primates in their degree of cooperative care of young. Harry Harlow set up an experiment with rhesus monkeys, presocial primates, in 1958; the results from this study showed that social encounters are necessary in order for the young monkeys to develop both mentally and sexually.
A fission–fusion society is a society that changes frequently in its size and composition, with the full set of members forming a permanent social group called the "parent group". Permanent social networks consist of all individual members of a community and often vary to track changes in their environment. In a fission–fusion society, the main parent group can fracture (fission) into smaller stable subgroups or individuals to adapt to environmental or social circumstances. For example, a number of males may break off from the main group in order to hunt or forage for food during the day, but at night they may return to join (fusion) the primary group to share food and partake in other activities. Many mammals exhibit this, such as primates (for example orangutans and spider monkeys), elephants, spotted hyenas, lions, and dolphins.
Solitary animals defend a territory and avoid social interactions with members of their species, except during breeding season. This is to avoid resource competition, as two individuals of the same species would occupy the same niche, and to prevent depletion of food. A solitary animal, while foraging, can also be less conspicuous to predators or prey.
thumb|left|Red kangaroos "boxing" for dominance
In a hierarchy, individuals are either dominant or submissive. A despotic hierarchy is where one individual is dominant while the others are submissive, as in wolves and lemurs, and a pecking order is a linear ranking of individuals where there is a top individual and a bottom individual. Pecking orders may also be ranked by sex, where the lowest individual of a sex has a higher ranking than the top individual of the other sex, as in hyenas. Dominant individuals, or alphas, have a high chance of reproductive success, especially in harems where one or a few males (resident males) have exclusive breeding rights to females in a group. Non-resident males can also be accepted in harems, but some species, such as the common vampire bat (Desmodus rotundus), may be more strict.
When two animals mate, they both share an interest in the success of the offspring, though often to different extremes, unless the male and female are perfectly monogamous, meaning that they mate for life and take no other partners (even after the original mate's death), as with wolves, Eurasian beavers, and otters. The amount of parental care will vary. There are three types of polygamy: either one or multiple dominant males have breeding rights with multiple females (polygyny), multiple males mate with one female (polyandry), or multiple males have exclusive relations with multiple females (polygynandry). It is much more common for polygynous mating to happen, which, excluding leks, is estimated to occur in up to 90% of mammals. Polygynous mating may occur in harems, wherein one or a few males protect their harem of females from other males who would otherwise mate with the females, as in elephant seals; or it may take the form of lek mating, in which males congregate around females and try to attract them with various courtship displays and vocalizations, as in harbor seals.
Locomotion
Terrestrial
thumb|Running gait. Photographs by Eadweard Muybridge, 1887
Most vertebrates—the amphibians, the reptiles and some mammals such as humans and bears—are plantigrade, walking on the whole of the underside of the foot. Many mammals, such as cats and dogs, are digitigrade, walking on their toes, the greater stride length allowing more speed. Digitigrade mammals are also often adept at quiet movement. Some animals such as horses are unguligrade, walking on the tips of their toes. This even further increases their stride length and thus their speed. A few mammals, namely the great apes, are also known to walk on their knuckles, at least for their front legs. Giant anteaters and platypuses are also knuckle-walkers.
Animals will use different gaits for different speeds, terrain and situations. For example, horses show four natural gaits: the slowest is the walk, followed by three faster gaits which, from slowest to fastest, are the trot, the canter and the gallop. Animals may also have unusual gaits that are used occasionally, such as for moving sideways or backwards. For example, the main human gaits are bipedal walking and running, but they employ many other gaits occasionally, including a four-legged crawl in tight spaces. Mammals show a vast range of gaits, the order that they place and lift their appendages in locomotion. Gaits can be grouped into categories according to their patterns of support sequence. For quadrupeds, there are three main categories: walking gaits, running gaits and leaping gaits. Walking is the most common gait, where some feet are on the ground at any given time, and found in almost all legged animals. Running is considered to occur when at some points in the stride all feet are off the ground in a moment of suspension.
Arboreal
thumb|left|upright|Gibbons are very good brachiators because their elongated limbs enable them to easily swing and grasp on to branches.
Arboreal animals frequently have elongated limbs that help them cross gaps, reach fruit or other resources, test the firmness of support ahead and, in some cases, brachiate. Many arboreal species, such as tree porcupines, silky anteaters, spider monkeys and possums, use prehensile tails to grasp branches. In the spider monkey, the tip of the tail has either a bare patch or adhesive pad, which provides increased friction. Claws can be used to interact with rough substrates and re-orient the direction of forces the animal applies. This is what allows squirrels to climb tree trunks that are so large as to be essentially flat from the perspective of such a small animal. However, claws can interfere with an animal's ability to grasp very small branches, as they may wrap too far around and prick the animal's own paw. Frictional gripping is used by primates, relying upon hairless fingertips. Squeezing the branch between the fingertips generates frictional force that holds the animal's hand to the branch. However, this type of grip depends upon the angle of the frictional force, thus upon the diameter of the branch, with larger branches resulting in reduced gripping ability. To control descent, especially down large diameter branches, some arboreal animals such as squirrels have evolved highly mobile ankle joints that permit rotating the foot into a 'reversed' posture. This allows the claws to hook into the rough surface of the bark, opposing the force of gravity. Small size provides many advantages to arboreal species, such as increasing the relative size of branches to the animal, lower center of mass, increased stability, lower mass (allowing movement on smaller branches) and the ability to move through more cluttered habitat. Size relating to weight affects gliding animals such as the sugar glider. Some species of primate, bat and all species of sloth achieve passive stability by hanging beneath the branch. Both pitching and tipping become irrelevant, as the only method of failure would be losing their grip.
Aerial
thumb|300px|Slow-motion and normal speed of Egyptian fruit bats flying
Bats are the only mammals that can truly fly. They fly through the air at a constant speed by moving their wings up and down (usually with some fore-aft movement as well). Because the animal is in motion, there is some airflow relative to its body which, combined with the velocity of the wings, generates a faster airflow moving over the wing. This generates a lift force vector pointing forwards and upwards, and a drag force vector pointing rearwards and upwards. The upwards components of these counteract gravity, keeping the body in the air, while the forward component provides thrust to counteract both the drag from the wing and from the body as a whole.
The wings of bats are much thinner and consist of more bones than those of birds, allowing bats to maneuver more accurately and fly with more lift and less drag. By folding the wings inwards towards their body on the upstroke, they use 35% less energy during flight than birds. The membranes are delicate, ripping easily; however, the tissue of the bat's membrane is able to regrow, such that small tears can heal quickly. The surface of their wings is equipped with touch-sensitive receptors on small bumps called Merkel cells, also found on human fingertips. These sensitive areas are different in bats, as each bump has a tiny hair in the center, making it even more sensitive and allowing the bat to detect and collect information about the air flowing over its wings, and to fly more efficiently by changing the shape of its wings in response.
Fossorial
Fossorial creatures live in subterranean environments. Many fossorial mammals, such as shrews, hedgehogs and moles, were classified under the now-obsolete order Insectivora. Fossorial mammals have a fusiform body, thickest at the shoulders and tapering off at the tail and nose. Unable to see in the dark burrows, most have degenerated eyes, but degeneration varies between species; pocket gophers, for example, are only semi-fossorial and have very small yet functional eyes; in the fully fossorial marsupial mole the eyes are degenerated and useless; talpa moles have vestigial eyes; and the cape golden mole has a layer of skin covering the eyes. External ear flaps are also very small or absent. Truly fossorial mammals have short, stout legs as strength is more important than speed to a burrowing mammal, but semi-fossorial mammals have cursorial legs. The front paws are broad and have strong claws to help in loosening dirt while excavating burrows, and the back paws have webbing, as well as claws, which aids in throwing loosened dirt backwards. Most have large incisors to prevent dirt from flying into their mouth.
Aquatic
thumb|A pod of short-beaked common dolphins swimming
Fully aquatic mammals, the cetaceans and sirenians, have lost their legs and have a tail fin to propel themselves through the water. Flipper movement is continuous. Whales swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin to prevent themselves from turning upside-down in the water. The flukes of sirenians are raised up and down in long strokes to move the animal forward, and can be twisted to turn. The forelimbs are paddle-like flippers which aid in turning and slowing.
Semi-aquatic mammals, like pinnipeds, have two pairs of flippers on the front and back, the fore-flippers and hind-flippers. The elbows and ankles are enclosed within the body.Berta, pp. 62–64. Pinnipeds have several adaptions for reducing drag. In addition to their streamlined bodies, they have smooth networks of muscle bundles in their skin that may increase laminar flow and make it easier for them to slip through water. They also lack arrector pili, so their fur can be streamlined as they swim. They rely on their fore-flippers for locomotion in a wing-like manner similar to penguins and sea turtles. Fore-flipper movement is not continuous, and the animal glides between each stroke. Compared to terrestrial carnivorans, the fore-limbs are reduced in length, which gives the locomotor muscles at the shoulder and elbow joints greater mechanical advantage; the hind-flippers serve as stabilizers. Other semi-aquatic mammals include beavers, hippopotamuses, otters and platypuses. Hippos are very large semi-aquatic mammals, and their barrel-shaped bodies have graviportal skeletal structures, adapted to carrying their enormous weight, and their specific gravity allows them to sink and move along the bottom of a river.
Mammals and humans
In human culture
thumb|Upper Paleolithic cave painting of a variety of large mammals, Lascaux, c. 17,300 years old
Non-human mammals play a wide variety of roles in human culture. They are the most popular of pets, with tens of millions of dogs, cats and other animals including rabbits and mice kept by families around the world. Mammals such as mammoths, horses and deer are among the earliest subjects of art, being found in Upper Paleolithic cave paintings such as at Lascaux. Major artists such as Albrecht Dürer, George Stubbs and Edwin Landseer are known for their portraits of mammals. Many species of mammals have been hunted for sport and for food; deer and wild boar are especially popular as game animals. Mammals such as horses and dogs are widely raced for sport, often combined with betting on the outcome. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. Mammals further play a wide variety of roles in literature, film, mythology, and religion.
Uses and importance
thumb|upright|left|Cattle have been kept for milk for thousands of years.
Domestic mammals form a large part of the livestock raised for meat across the world. They include (2011) around 1.4 billion cattle, 1.2 billion sheep, 1 billion domestic pigs, and (1985) over 700 million rabbits. Working domestic animals including cattle and horses have been used for work and transport from the origins of agriculture, their numbers declining with the arrival of mechanised transport and agricultural machinery. In 2004 they still provided some 80% of the power for the mainly small farms in the third world, and some 20% of the world's transport, again mainly in rural areas. In mountainous regions unsuitable for wheeled vehicles, pack animals continue to transport goods.
Mammal skins provide leather for shoes, clothing and upholstery. Wool from mammals including sheep, goats and alpacas has been used for centuries for clothing.Quiggle, Charlotte. "Alpaca: An Ancient Luxury." Interweave Knits Fall 2000: 74-76. Mammals serve a major role in science as experimental animals, both in fundamental biological research, such as in genetics, and in the development of new medicines, which must be tested exhaustively to demonstrate their safety. Millions of mammals, especially mice and rats, are used in experiments each year. A knockout mouse is a genetically modified mouse with an inactivated gene, replaced or disrupted with an artificial piece of DNA. They enable the study of sequenced genes whose functions are unknown.Y Zan et al., Production of knockout rats using ENU mutagenesis and a yeast-based screening assay, Nat. Biotechnol. (2003). A small percentage of the mammals are non-human primates, used in research for their similarity to humans.
Charles Darwin, Jared Diamond and others have noted the importance of domesticated mammals in the Neolithic development of agriculture and of civilization, causing farmers to replace hunter-gatherers around the world. This transition from hunting and gathering to herding flocks and growing crops was a major step in human history. The new agricultural economies, based on domesticated mammals, caused "radical restructuring of human societies, worldwide alterations in biodiversity, and significant changes in the Earth's landforms and its atmosphere... momentous outcomes".
Hybrids
Hybrids are offspring resulting from the breeding of two genetically distinct individuals, which usually will result in a high degree of heterozygosity, though hybrid and heterozygous are not synonymous. The deliberate or accidental hybridizing of two or more species of closely related animals through captive breeding is a human activity which has been in existence for millennia and has grown for economic purposes. Hybrids between different subspecies within a species (such as between the Bengal tiger and Siberian tiger) are known as intra-specific hybrids. Hybrids between different species within the same genus (such as between lions and tigers) are known as interspecific hybrids or crosses. Hybrids between different genera (such as between sheep and goats) are known as intergeneric hybrids. Natural hybrids occur in hybrid zones, where two populations of the same species or of closely related species live in the same or adjacent areas and interbreed with each other. Some hybrids have been recognized as species, such as the red wolf (though this is controversial).
Artificial selection, the deliberate selective breeding of domestic animals, is being used to breed back recently extinct animals in an attempt to achieve an animal breed with a phenotype that resembles that extinct wildtype ancestor. A breeding-back (intraspecific) hybrid may be very similar to the extinct wildtype in appearance, ecological niche and to some extent genetics, but the initial gene pool of that wild type is lost forever with its extinction. As a result, bred-back breeds are at best vague look-alikes of extinct wildtypes, as Heck cattle are of the aurochs.
Notes
See also
List of recently extinct mammals – during recorded history
List of prehistoric mammals
List of monotremes and marsupials
List of placental mammals
List of mammal genera – living mammals
List of mammalogists
Lists of mammals by population size
Lists of mammals by region
List of threatened mammals of the United States
Mammals described in the 2000s
Mammals in culture
Prehistoric mammals
References
Further reading
External links
BBC Wildlife Finder – video clips from the BBC's natural history archive
Biodiversitymapping.org – All mammal orders in the world with distribution maps
Paleocene Mammals, a site covering the rise of the mammals, paleocene-mammals.de
Evolution of Mammals, a brief introduction to early mammals, enchantedlearning.com
Mammal Species, collection of information sheets about various mammal species, learnanimals.com
European Mammal Atlas EMMA from Societas Europaea Mammalogica, European-mammals.org
Marine Mammals of the World—An overview of all marine mammals, including descriptions, both fully aquatic and semi-aquatic, noaa.gov
Mammalogy.org The American Society of Mammalogists was established in 1919 for the purpose of promoting the study of mammals, and this website includes a mammal image library
Communication | Communication (from Latin commūnicāre, meaning "to share") is the act of conveying intended meanings from one entity or group to another through the use of mutually understood signs and semiotic rules.
The basic steps of communication, illustrated by the sketch after this list, are:
The forming of communicative intent.
Message composition.
Message encoding and decoding.
Transmission of the encoded message as a sequence of signals using a specific channel or medium.
Reception of signals.
Reconstruction of the original message.
Interpretation and making sense of the reconstructed message.
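As a purely illustrative sketch of these steps (not a model drawn from the communication literature), the following code composes a message, encodes it as a sequence of signals, passes it through a stand-in channel, and reconstructs it on the receiving side; the channel here is hypothetical and noiseless:

```python
# Illustrative sketch of encode -> transmit -> receive -> reconstruct.
# The "channel" is just an in-memory list of byte values standing in for a
# physical medium such as sound, light or radio.
def encode(message: str) -> list[int]:
    return list(message.encode("utf-8"))     # message composition and encoding

def transmit(signals: list[int]) -> list[int]:
    return signals.copy()                    # transmission over a noiseless stand-in channel

def reconstruct(signals: list[int]) -> str:
    return bytes(signals).decode("utf-8")    # reception and reconstruction of the message

received = reconstruct(transmit(encode("an example message")))
print(received)  # interpretation and sense-making remain with the human receiver
```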
The study of communication can be divided into:
Information theory, which studies the quantification, storage, and communication of information in general (a small worked example follows this list);
Communication studies which concerns human communication;
Biosemiotics which examines the communication of organisms in general.
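As a small worked example of the quantification of information mentioned above (a minimal sketch; the message string is arbitrary), the Shannon entropy of a text gives its average information content in bits per character:

```python
# Minimal sketch: Shannon entropy H = -sum(p * log2(p)) over character frequencies.
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(f"{shannon_entropy('hello world'):.3f} bits per character")
```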
The channel of communication can be visual, auditory, tactile (such as in Braille) and haptic, olfactory, kinesic, electromagnetic, or biochemical. Human communication is unique for its extensive use of abstract language.
Non-verbal
Nonverbal communication describes the process of conveying meaning in the form of non-word messages. Examples of nonverbal communication include haptic communication, chronemic communication, gestures, body language, facial expressions, eye contact, and how one dresses. Nonverbal communication also relates to intent of a message. Examples of intent are voluntary, intentional movements like shaking a hand or winking, as well as involuntary, such as sweating. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, tempo, and stress. There may even be a pheromone component. Research has shown that up to 55% of human communication may occur through non-verbal facial expressions, and a further 38% through para-language.Mehrabian, A. (1972). Nonverbal communication. Transaction Publishers. It affects communication most at the subconscious level and establishes trust. Likewise, written texts include nonverbal elements such as handwriting style, spatial arrangement of words and the use of emoticons to convey emotion.
Nonverbal communication demonstrates one of Watzlawick's laws: you cannot not communicate. Once proximity has formed awareness, living creatures begin interpreting any signals received.Watzlawick, Paul (1970s) opus Some of the functions of nonverbal communication in humans are to complement and illustrate, to reinforce and emphasize, to replace and substitute, to control and regulate, and to contradict the denotative message.
Verbal
Verbal communication is the spoken conveying of a message. Human language can be defined as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also refers to common properties of languages. Language learning normally occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication with others around them. Languages tend to share certain properties, although there are exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages.
Written communication and its historical development
Over time the forms of and ideas about communication have evolved through the continuing progression of technology. Advances include communications psychology and media psychology, an emerging field of study.
The progression of written communication can be divided into three "information communication revolutions":
Written communication first emerged through the use of pictographs. The pictograms were made in stone, hence written communication was not yet mobile. Pictograms began to develop standardized and simplified forms.
The next step occurred when writing began to appear on paper, papyrus, clay, wax, and other media with common shared writing systems, leading to adaptable alphabets. Communication became mobile.
The final stage is characterized by the transfer of information through controlled waves of electromagnetic radiation (i.e., radio, microwave, infrared) and other electronic signals.
Communication is thus a process by which meaning is assigned and conveyed in an attempt to create shared understanding. Gregory Bateson called it "the replication of tautologies in the universe" (Bateson, Gregory (1960) Steps to an Ecology of Mind). This process, which requires a vast repertoire of skills in interpersonal processing, listening, observing, speaking, questioning, analyzing, gestures, and evaluating, enables collaboration and cooperation.
Business
Business communication is used for a wide variety of activities including, but not limited to: strategic communications planning, media relations, public relations (which can include social media, broadcast and written communications, and more), brand management, reputation management, speech-writing, customer-client relations, and internal/employee communications.
Companies with limited resources may choose to engage in only a few of these activities, while larger organizations may employ a full spectrum of communications. Since it is difficult to develop such a broad range of skills, communications professionals often specialize in one or two of these areas but usually have at least a working knowledge of most of them. By far, the most important qualifications communications professionals can possess are excellent writing ability, good 'people' skills, and the capacity to think critically and strategically.
Political
Communication is one of the most relevant tools in political strategies, including persuasion and propaganda. In mass media research and online media research, the effort of the strategist is to achieve a precise decoding, avoiding "message reactance", that is, message refusal. The reaction to a message is also described in terms of approach to the message, as follows:
In "radical reading" the audience rejects the meanings, values, and viewpoints built into the text by its makers. Effect: message refusal.
In "dominant reading", the audience accepts the meanings, values, and viewpoints built into the text by its makers. Effect: message acceptance.
In "subordinate reading" the audience accepts, by and large, the meanings, values, and worldview built into the text by its makers. Effect: obey to the message.Danesi, Marcel (2009), Dictionary of Media and Communications. M.E.Sharpe, Armonk, New York.
Holistic approaches are used by communication campaign leaders and communication strategists in order to examine all the options, "actors" and channels that can generate change in the semiotic landscape, that is, change in perceptions, change in credibility, change in the "memetic background", change in the image of movements, of candidates, players and managers as perceived by key influencers that can have a role in generating the desired "end-state".
The modern political communication field is highly influenced by the framework and practices of "information operations" doctrines that derive their nature from strategic and military studies. According to this view, what is really relevant is the concept of acting on the Information Environment. The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive.Chairman of the Joint Chiefs of Staff, U.S. Army (2012). Information Operations. Joint Publication 3-13. Joint Doctrine Support Division, 116 Lake View Parkway, Suffolk, VA., Available in http://www.dtic.mil/doctrine/new_pubs/jp3_13.pdf
Family
Family communication is the study of the communication perspective in a broadly defined family, with intimacy and trusting relationship.Turner, L. H., & West, R. L. (2013). Perspectives on family communication. Boston, MA: McGraw-Hill. The main goal of family communication is to understand the interactions of family and the pattern of behaviors of family members in different circumstances. Open and honest communication creates an atmosphere that allows family members to express their differences as well as love and admiration for one another. It also helps to understand the feelings of one another.
Family communication study looks at topics such as family rules, family roles or family dialectics and how those factors could affect the communication between family members. Researchers develop theories to understand communication behaviors. Family communication study also digs deep into certain time periods of family life such as marriage, parenthood or divorce and how communication stands in those situations. It is important for family members to understand communication as a trusted way of building a well-constructed family.
Interpersonal
In simple terms, interpersonal communication is the communication between one person and another (or others). It is often referred to as face-to-face communication between two (or more) people. Both verbal and nonverbal communication, or body language, play a part in how one person understands another. In verbal interpersonal communication there are two types of messages being sent: a content message and a relational message. Content messages are messages about the topic at hand and relational messages are messages about the relationship itself. This means that relational messages come across in how one says something and it demonstrates a person’s feelings, whether positive or negative, towards the individual they are talking to, indicating not only how they feel about the topic at hand, but also how they feel about their relationship with the other individual.
Barriers to effectiveness
Barriers to effective communication can retard or distort the message and intention of the message being conveyed, which may result in failure of the communication process or an effect that is undesirable. These include filtering, selective perception, information overload, emotions, language, silence, communication apprehension, gender differences and political correctness (Robbins, S., Judge, T., Millett, B., & Boyle, M. (2011). Organisational Behaviour. 6th ed. Pearson, French's Forest, NSW p315-317).
This also includes a lack of expressing "knowledge-appropriate" communication, which occurs when a person uses ambiguous or complex legal words, medical jargon, or descriptions of a situation or environment that is not understood by the recipient.
Physical barriers- Physical barriers are often due to the nature of the environment. An example of this is the natural barrier which exists if staff are located in different buildings or on different sites. Likewise, poor or outdated equipment, particularly the failure of management to introduce new technology, may also cause problems. Staff shortages are another factor which frequently causes communication difficulties for an organization.
System design- System design faults refer to problems with the structures or systems in place in an organization. Examples might include an organizational structure which is unclear and therefore makes it confusing to know whom to communicate with. Other examples could be inefficient or inappropriate information systems, a lack of supervision or training, and a lack of clarity in roles and responsibilities which can lead to staff being uncertain about what is expected of them.
Attitudinal barriers- Attitudinal barriers come about as a result of problems with staff in an organization. They may be brought about, for example, by poor management, lack of consultation with employees, or personality conflicts which can result in people delaying or refusing to communicate. They may also stem from the personal attitudes of individual employees, whether due to lack of motivation, dissatisfaction at work brought about by insufficient training to carry out particular tasks, or simply resistance to change rooted in entrenched attitudes and ideas.
Ambiguity of words/phrases- Words that sound the same but have different meanings can convey a different message altogether. The communicator must therefore ensure that the receiver receives the intended meaning; it is better to avoid such words by using alternatives whenever possible.
Individual linguistic ability- The use of jargon, or of difficult or inappropriate words, can prevent recipients from understanding the message. Poorly explained or misunderstood messages can also result in confusion. However, research in communication has shown that confusion can lend legitimacy to research when persuasion fails.What Should Be Included in a Project Plan - Retrieved December 18, 2009
Physiological barriers- These may result from individuals' personal discomfort, caused—for example—by ill health, poor eyesight or hearing difficulties.
Bypassing- This happens when the communicators (the sender and the receiver) do not attach the same symbolic meanings to their words, that is, when the sender expresses a thought or word but the receiver interprets it with a different meaning. Examples include "ASAP" and "rest room".
Technological multi-tasking and absorbency- With a rapid increase in technologically-driven communication in the past several decades, individuals are increasingly faced with condensed communication in the form of e-mail, text, and social updates. This has, in turn, led to a notable change in the way younger generations communicate and perceive their own self-efficacy to communicate and connect with others. With the ever-constant presence of another "world" in one's pocket, individuals are multi-tasking both physically and cognitively as constant reminders of something else happening somewhere else bombard them. Though perhaps too new of an advancement to yet see long-term effects, this is a notion currently explored by such figures as Sherry Turkle.
Fear of being criticized- This is a major factor that prevents good communication. Simple practices can improve communication skills and build effective communicators; for example, reading an article from a newspaper, or summarizing news from television, and presenting it in front of a mirror can boost confidence and improve language and vocabulary.
Gender barriers- Most communicators, whether aware of it or not, often have a set agenda, and this is very notable between the genders. For example, many women are found to be more critical in addressing conflict, while men are more likely than women to withdraw from conflict. This comparison shows not only that many factors affect communication between two specific genders, but also that there is room for improvement as well as established guidelines for all.
Cultural aspects
Cultural differences exist within countries (tribal/regional differences, dialects etc.), between religious groups and in organisations or at an organisational level - where companies, teams and units may have different expectations, norms and idiolects. Families and family groups may also experience the effect of cultural barriers to communication within and between different family members or groups. For example, words, colours and symbols have different meanings in different cultures: in most parts of the world, nodding the head means agreement and shaking the head means no, but this is not universal.Nageshwar Rao, Rajendra P. Das, Communication skills, Himalaya Publishing House, 9789350516669, p.48
Communication is to a great extent influenced by culture and cultural variables.http://www.beyondintractability.org/bi-essay/cross-cultural-communication http://www.studymode.com/essays/Important-Components-Of-Cross-Cultural-Communication-595745.html http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/313/155 Understanding the cultural aspects of communication means having knowledge of different cultures in order to communicate effectively across them. Cultural aspects of communication are of great relevance in today's globalised world; they are the cultural differences that influence communication across borders. The impact of cultural differences on the components of communication is explained below:
1) Verbal communication refers to a form of communication which uses spoken and written words to express and transfer views and ideas. Language is the most important tool of verbal communication, and it is the area where cultural differences play their greatest role. Countries have different languages, and knowledge of the languages of different countries is needed for a better understanding of different cultures.
2) Non-verbal communication is a very wide concept that includes all the other forms of communication which do not use written or spoken words. Non-verbal communication takes the following forms:
Paralinguistics covers the vocal elements of communication other than the actual language, such as tone, pitch and other vocal cues, including sounds made in the throat; all of these are greatly influenced by cultural differences across borders.
Proxemics deals with the concept of space in communication. It describes four zones of space, namely intimate, personal, social and public. This concept differs between cultures, as the permissible space varies in different countries.
Artifactics studies the non-verbal signals or communication which emerge from personal accessories such as dress or fashion accessories, and it varies with culture as people of different countries follow different dress codes.
Chronemics deals with the time aspects of communication, including the importance given to time. Some issues illustrating this concept are pauses, silences and response lag during an interaction. This aspect of communication is also influenced by cultural differences, as it is well known that cultures differ greatly in the value they give to time.
Kinesics mainly deals with body language such as postures, gestures, head nods and leg movements. In different countries, the same gestures and postures are used to convey different messages; sometimes a kinesic cue indicating something good in one country may have a negative meaning in another culture.
So, in order to communicate effectively across the world, it is desirable to have a knowledge of the cultural variables affecting communication.
According to Michael Walsh and Ghil'ad Zuckermann, Western conversational interaction is typically "dyadic", between two particular people, where eye contact is important and the speaker controls the interaction; and "contained" in a relatively short, defined time frame. However, traditional Aboriginal conversational interaction is "communal", broadcast to many people, eye contact is not important, the listener controls the interaction; and "continuous", spread over a longer, indefinite time frame.
Nonhuman
Every information exchange between living organisms, i.e. the transmission of signals involving a living sender and receiver, can be considered a form of communication; even primitive creatures such as corals are competent to communicate. Nonhuman communication also includes cell signaling, cellular communication, and chemical transmissions between primitive organisms like bacteria and within the plant and fungal kingdoms.
Animals
The broad field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication, called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century, prior understanding in diverse areas such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized. Special fields of animal communication, such as vibrational communication, have been investigated in more detail.Randall J.A. (2014). Vibrational Communication: Spiders to Kangaroo Rats. In: Witzany, G. (ed). Biocommunication of Animals, Springer, Dordrecht. pp. 103-133. ISBN 978-94-007-7413-1.
Plants and fungi
Communication is observed within the plant organism, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots communicate with rhizome bacteria, fungi, and insects within the soil. These interactions are governed by syntactic, pragmatic, and semantic rules, and are possible because of the decentralized "nervous system" of plants. The original meaning of the word "neuron" in Greek is "vegetable fiber" and recent research has shown that most of the microorganism plant communication processes are neuron-like. Plants also communicate via volatiles when exposed to herbivory attack behavior, thus warning neighboring plants. In parallel they produce other volatiles to attract parasites which attack these herbivores. In stress situations plants can overwrite the genomes they inherited from their parents and revert to that of their grand- or great-grandparents.
Fungi communicate to coordinate and organize their growth and development, such as the formation of mycelia and fruiting bodies. Fungi communicate with their own and related species as well as with non-fungal organisms in a great variety of symbiotic interactions, especially with bacteria, unicellular eukaryotes, plants and insects, through biochemicals of biotic origin. These biochemicals trigger the fungal organism to react in a specific manner, while the same chemical molecules, when not part of biotic messages, do not trigger a reaction. This implies that fungal organisms can differentiate between molecules taking part in biotic messages and similar molecules that are irrelevant in the situation. So far five different primary signalling molecules are known to coordinate different behavioral patterns such as filamentation, mating, growth, and pathogenicity. Behavioral coordination and the production of signaling substances are achieved through interpretation processes that enable the organism to distinguish self from non-self, to recognize a biotic indicator and biotic messages from similar, related, or non-related species, and even to filter out "noise", i.e. similar molecules without biotic content.Witzany, G (ed) (2012). Biocommunication of Fungi. Springer. ISBN 978-94-007-4263-5
Bacteria quorum sensing
Communication is not a tool used only by humans, plants and animals, but it is also used by microorganisms like bacteria. The process is called quorum sensing. Through quorum sensing, bacteria are able to sense the density of cells, and regulate gene expression accordingly. This can be seen in both gram positive and gram negative bacteria.
This was first observed by Fuqua et al. in marine microorganisms like V. harveyi and V. fischeri.Anand, Sandhya. Quorum Sensing- Communication Plan For Microbes. Article dated 2010-12-28, retrieved on 2012-04-03.
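As a purely illustrative sketch (the secretion rate, threshold and population sizes below are invented for demonstration and are not taken from any study), the density-dependent switch at the heart of quorum sensing can be modelled as cells whose combined signal concentration must pass a threshold before a group behaviour is expressed:

def autoinducer_concentration(cell_count, secretion_rate=1.0):
    # Each cell secretes signalling molecules; concentration grows with cell density.
    return cell_count * secretion_rate

def behaviour_expressed(cell_count, threshold=100.0):
    # The group behaviour switches on only when the sensed concentration
    # of signalling molecules passes the density threshold.
    return autoinducer_concentration(cell_count) >= threshold

for n in (10, 50, 100, 500):        # illustrative population sizes
    state = "expressed" if behaviour_expressed(n) else "silent"
    print(n, "cells ->", state)

Below the assumed threshold the simulated population stays "silent"; once enough cells are present the behaviour switches on, mirroring how bacteria regulate gene expression only at sufficient density.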
Models
[Images: Shannon and Weaver model of communication; communication major dimensions scheme; interactional model of communication; Berlo's sender-message-channel-receiver model of communication; transactional model of communication; communication code scheme; linear communication model.]
The first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949.Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, Illinois: University of Illinois Press. The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear the other person. Shannon and Weaver also recognized that often there is static that interferes with one listening to a telephone conversation, which they deemed noise.
In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (such as spoken language) from an emitter/sender/encoder to a destination/receiver/decoder. This common conception views communication simply as a means of sending and receiving information. The strengths of this model are simplicity, generality, and quantifiability. Claude Shannon and Warren Weaver structured this model around the following elements:
An information source, which produces a message.
A transmitter, which encodes the message into signals.
A channel, to which signals are adapted for transmission.
A noise source, which distorts the signal while it propagates through the channel.
A receiver, which 'decodes' (reconstructs) the message from the signal.
A destination, where the message arrives.
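Purely as an illustration of the elements listed above (this sketch is not part of Shannon and Weaver's work; the 8-bit encoding, the flip probability and the example message are assumptions chosen for demonstration), the transmission model can be written as a tiny simulation in which a source message is encoded into bits, passed through a channel that randomly flips bits as a noise source, and decoded at the destination:

import random

def encode(message):
    # Transmitter: encode each character of the message into 8 bits.
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def channel(signal, flip_probability):
    # Channel plus noise source: each bit is flipped with the given probability.
    return [bit ^ 1 if random.random() < flip_probability else bit for bit in signal]

def decode(signal):
    # Receiver: reconstruct characters from groups of 8 bits.
    chars = []
    for i in range(0, len(signal), 8):
        byte = signal[i:i + 8]
        chars.append(chr(int("".join(str(b) for b in byte), 2)))
    return "".join(chars)

random.seed(0)                      # for a reproducible illustration
original = "hello"                  # the information source produces a message
received = decode(channel(encode(original), flip_probability=0.02))
print(original, "->", received)     # the destination; may differ because of noise

Running the sketch with a higher flip probability makes the received message diverge further from the original, which is exactly the technical problem described below.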
Shannon and Weaver argued that there were three levels of problems for communication within this theory.
The technical problem: how accurately can the message be transmitted?
The semantic problem: how precisely is the meaning 'conveyed'?
The effectiveness problem: how effectively does the received meaning affect behavior?
Daniel Chandler critiques the transmission model by stating:Daniel Chandler, "The Transmission Model of Communication", Aber.ac.uk
It assumes communicators are isolated individuals.
No allowance for differing purposes.
No allowance for differing interpretations.
No allowance for unequal power relations.
No allowance for situational contexts.
In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication.Berlo, D. K. (1960). The process of communication. New York, New York: Holt, Rinehart, & Winston. The Sender-Message-Channel-Receiver Model of communication separated the model into clear parts and has been expanded upon by other scholars.
Communication is usually described along a few major dimensions: message (what type of things are communicated), source/emitter/sender/encoder (by whom), form (in which form), channel (through which medium), and destination/receiver/target/decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that a message has (both desired and undesired) on the target of the message.Schramm, W. (1954). How communication works. In W. Schramm (Ed.), The process and effects of communication (pp. 3–26). Urbana, Illinois: University of Illinois Press. Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings).
Communication can be seen as processes of information transmission with three levels of semiotic rules:
Pragmatic (concerned with the relations between signs/expressions and their users)
Semantic (study of relationships between signs and symbols and what they represent) and
Syntactic (formal properties of signs and symbols).
Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk, both secondary phenomena that followed the primary acquisition of communicative competences within social interactions.
In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication.Barnlund, D. C. (2008). A transactional model of communication. In. C. D. Mortensen (Eds.), Communication theory (2nd ed., pp47-57). New Brunswick, New Jersey: Transaction. The basic premise of the transactional model of communication is that individuals are simultaneously engaging in the sending and receiving of messages.
In a slightly more complex form a sender and a receiver are linked reciprocally. This second attitude of communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed as a conduit; a passage in which information travels from one individual to another and this information becomes separate from the communication itself. A particular instance of communication is called a speech act. The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender; which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two code books are, at the very least, similar if not identical. Although something like code books is implied by the model, they are nowhere represented in the model, which creates many conceptual difficulties.
Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. The Canadian media scholar Harold Innis theorized that people use different types of media to communicate, and that the medium they choose offers different possibilities for the shape and durability of society (Wark, McKenzie 1997). His famous example is ancient Egypt, which built itself out of two media with very different properties: stone and papyrus. Papyrus is what he called "space binding": it made possible the transmission of written orders across space and empires and enabled the waging of distant military campaigns and colonial administration. Stone, by contrast, is "time binding": through the construction of temples and the pyramids, authority could be sustained from generation to generation, and through these media rulers could shape communication in their society (Wark, McKenzie 1997).
Noise
In any communication model, noise is interference with the decoding of messages sent over a channel by an encoder. There are many examples of noise:
Environmental noise. Noise that physically disrupts communication, such as standing next to loudspeakers at a party, or the noise from a construction site next to a classroom making it difficult to hear the professor.
Physiological-impairment noise. Physical maladies that prevent effective communication, such as actual deafness or blindness preventing messages from being received as they were intended.
Semantic noise. Different interpretations of the meanings of certain words. For example, the word "weed" can be interpreted as an undesirable plant in a yard, or as a euphemism for marijuana.
Syntactical noise. Mistakes in grammar can disrupt communication, such as abrupt changes in verb tense during a sentence.
Organizational noise. Poorly structured communication can prevent the receiver from accurate interpretation. For example, unclear and badly stated directions can make the receiver even more lost.
Cultural noise. Stereotypical assumptions can cause misunderstandings, such as unintentionally offending a non-Christian person by wishing them a "Merry Christmas".
Psychological noise. Certain attitudes can also make communication difficult. For instance, great anger or sadness may cause someone to lose focus on the present moment. Disorders such as autism may also severely hamper effective communication.Roy M. Berko, et al., Communicating. 11th ed. (Boston, MA: Pearson Education, Inc., 2010) 9-12
To counter communication noise, redundancy and acknowledgement must often be used. Acknowledgements are messages from the addressee informing the originator that his/her communication has been received and is understood.North Atlantic Treaty Organization, Nato Standardization Agency AAP-6 - Glossary of terms and definitions, p 43. Message repetition and feedback about the message received are necessary in the presence of noise to reduce the probability of misunderstanding.
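A minimal sketch of why repetition and acknowledgement help, assuming an invented binary channel with a 10% flip probability, a simple majority-vote repetition scheme and an illustrative echo-back loop rather than any particular standard or protocol:

import random

def noisy_send(bit, flip_probability=0.1):
    # The channel delivers the bit, occasionally flipped by noise.
    return bit ^ 1 if random.random() < flip_probability else bit

def send_with_redundancy(bit, copies=5):
    # Redundancy: repeat the bit and let the receiver take a majority vote.
    received = [noisy_send(bit) for _ in range(copies)]
    return 1 if sum(received) > copies // 2 else 0

def send_with_acknowledgement(bit, max_attempts=3):
    # Acknowledgement: the addressee echoes what it received; the originator
    # repeats the message until the echo matches what was sent.
    for _ in range(max_attempts):
        echoed = noisy_send(noisy_send(bit))    # message out, acknowledgement back
        if echoed == bit:
            return True                         # originator treats this as "understood"
    return False

random.seed(1)
trials = 10000
plain = sum(noisy_send(1) == 1 for _ in range(trials)) / trials
redundant = sum(send_with_redundancy(1) == 1 for _ in range(trials)) / trials
acked = sum(send_with_acknowledgement(1) for _ in range(trials)) / trials
print("correct without redundancy:", round(plain, 3))
print("correct with 5-fold repetition:", round(redundant, 3))
print("confirmed within 3 attempts:", round(acked, 3))

With these assumed numbers, a single transmission is received correctly about 90% of the time, five-fold repetition raises that to roughly 99%, and the acknowledgement loop gives the originator explicit feedback that the message got through.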
As academic discipline
See also
Advice
Augmentative and alternative communication
Communication rights
Data communication
Four Cs of 21st century learning
Human communication
Inter Mirifica
Intercultural communication
Ishin-denshin
Proactive communications
Sign system
Small talk
SPEAKING
Telecommunication
Telepathy
Understanding
21st century skills
Assertion Theory
References
Further reading
Innis, Harold. Empire and Communications. Rev. by Mary Q. Innis; foreword by Marshall McLuhan. Toronto, Ont.: University of Toronto Press, 1972. xii, 184 p. N.B.: "Here he [i.e. Innis] develops his theory that the history of empires is determined to a large extent by their means of communication."—From the back cover of the book's pbk. ed. ISBN 0-8020-6119-2 pbk
Greeks | The Greeks or Hellenes ( ) are an ethnic group native to Greece, Cyprus, southern Albania, Turkey, Sicily, Egypt and, to a lesser extent, other countries surrounding the Mediterranean Sea. They also form a significant diaspora, with Greek communities established around the world..
Greek colonies and communities have been historically established on the shores of the Mediterranean Sea and Black Sea, but the Greek people have always been centered on the Aegean and Ionian seas, where the Greek language has been spoken since the Bronze Age... Until the early 20th century, Greeks were distributed between the Greek peninsula, the western coast of Asia Minor, the Black Sea coast, Cappadocia in central Anatolia, Egypt, the Balkans, Cyprus, and Constantinople. Many of these regions coincided to a large extent with the borders of the Byzantine Empire of the late 11th century and the Eastern Mediterranean areas of ancient Greek colonization.. The cultural centers of the Greeks have included Athens, Thessalonica, Alexandria, Smyrna, and Constantinople at various periods.
Most ethnic Greeks live nowadays within the borders of the modern Greek state and Cyprus. The Greek genocide and population exchange between Greece and Turkey nearly ended the three millennia-old Greek presence in Asia Minor. Other longstanding Greek populations can be found from southern Italy to the Caucasus and southern Russia and Ukraine and in the Greek diaspora communities in a number of other countries. Today, most Greeks are officially registered as members of the Greek Orthodox Church.CIA World Factbook on Greece: Greek Orthodox 98%, Greek Muslim 1.3%, other 0.7%.
Greeks have greatly influenced and contributed to culture, arts, exploration, literature, philosophy, politics, architecture, music, mathematics, science and technology, business, cuisine, and sports, both historically and contemporarily.
History
[Image: A reconstruction of the 3rd millennium BC "Proto-Greek area", by Vladimir I. Georgiev.]
The Greeks speak the Greek language, which forms its own unique branch within the Indo-European family of languages, the Hellenic. They are part of a group of pre-modern ethnicities, described by Anthony D. Smith as an "archetypal diaspora people".
Origins
The Proto-Greeks probably arrived at the area now called Greece, in the southern tip of the Balkan peninsula, at the end of the 3rd millennium BC. The sequence of migrations into the Greek mainland during the 2nd millennium BC has to be reconstructed on the basis of the ancient Greek dialects, as they presented themselves centuries later, and is therefore subject to some uncertainties. There were at least two migrations: the first, of the Ionians and Aeolians, resulted in Mycenaean Greece by the 16th century BC, and the second, the Dorian invasion, around the 11th century BC, displaced the Arcadocypriot dialects, which descended from the Mycenaean period. Both migrations occur at incisive periods, the Mycenaean at the transition to the Late Bronze Age and the Doric at the Bronze Age collapse.
An alternative hypothesis has been put forth by linguist Vladimir Georgiev, who places Proto-Greek speakers in northwestern Greece by the Early Helladic period (3rd millennium BC), i.e. towards the end of the European Neolithic.Vladimir I. Georgiev, for example, placed Proto-Greek in northwestern Greece during the Late Neolithic period. () Linguists Russell Gray and Quentin Atkinson in a 2003 paper using computational methods on Swadesh lists have arrived at a somewhat earlier estimate, around 5000 BC for Greco-Armenian split and the emergence of Greek as a separate linguistic lineage around 4000 BC.; .
Mycenaean
In 1600 BC, the Mycenaean Greeks borrowed from the Minoan civilization its syllabic writing system (i.e. Linear A) and developed their own syllabic script known as Linear B, providing the first and oldest written evidence of Greek.. The Mycenaeans quickly penetrated the Aegean Sea and, by the 15th century BC, had reached Rhodes, Crete, Cyprus and the shores of Asia Minor.; ; .
Around 1200 BC, the Dorians, another Greek-speaking people, followed from Epirus.. Traditionally, historians have believed that the Dorian invasion caused the collapse of the Mycenaean civilization, but it is likely the main attack was made by seafaring raiders (Sea Peoples) who sailed into the eastern Mediterranean around 1180 BC.. The Dorian invasion was followed by a poorly attested period of migrations, appropriately called the Greek Dark Ages, but by 800 BC the landscape of Archaic and Classical Greece was discernible..
The Greeks of classical antiquity idealized their Mycenaean ancestors and the Mycenaean period as a glorious era of heroes, closeness of the gods and material wealth.; . The Homeric Epics (i.e. Iliad and Odyssey) were especially and generally accepted as part of the Greek past and it was not until the 19th century that scholars began to question Homer's historicity. As part of the Mycenaean heritage that survived, the names of the gods and goddesses of Mycenaean Greece (e.g. Zeus, Poseidon and Hades) became major figures of the Olympian Pantheon of later antiquity.; .
Classical
[Image: Hoplites fighting. Detail from an Attic black-figure hydria, ca. 560 BC–550 BC. Louvre, Paris.]
The ethnogenesis of the Greek nation is linked to the development of Pan-Hellenism in the 8th century BC. According to some scholars, the foundational event was the Olympic Games in 776 BC, when the idea of a common Hellenism among the Greek tribes was first translated into a shared cultural experience and Hellenism was primarily a matter of common culture. The works of Homer (i.e. Iliad and Odyssey) and Hesiod (i.e. Theogony) were written in the 8th century BC, becoming the basis of the national religion, ethos, history and mythology.; . The Oracle of Apollo at Delphi was established in this period..
The classical period of Greek civilization covers a time spanning from the early 5th century BC to the death of Alexander the Great, in 323 BC (some authors prefer to split this period into "Classical", from the end of the Greco-Persian Wars to the end of the Peloponnesian War, and "Fourth Century", up to the death of Alexander). It is so named because it set the standards by which Greek civilization would be judged in later eras. The Classical period is also described as the "Golden Age" of Greek civilization, and its art, philosophy, architecture and literature would be instrumental in the formation and development of Western culture.
While the Greeks of the classical era understood themselves to belong to a common Hellenic genos,. their first loyalty was to their city and they saw nothing incongruous about warring, often brutally, with other Greek city-states.; . The Peloponnesian War, the large scale civil war between the two most powerful Greek city-states Athens and Sparta and their allies, left both greatly weakened.
[Image: Alexander the Great, whose conquests led to the Hellenistic Age.]
Most of the feuding Greek city-states were, in some scholars' opinions, united under the banner of Philip's and Alexander the Great's Pan-Hellenic ideals, though others might generally opt, rather, for an explanation of "Macedonian conquest for the sake of conquest" or at least conquest for the sake of riches, glory and power and view the "ideal" as useful propaganda directed towards the city-states.
In any case, Alexander's toppling of the Achaemenid Empire, after his victories at the battles of the Granicus, Issus and Gaugamela, and his advance as far as modern-day Pakistan and Tajikistan,. provided an important outlet for Greek culture, via the creation of colonies and trade routes along the way. While the Alexandrian empire did not survive its creator's death intact, the cultural implications of the spread of Hellenism across much of the Middle East and Asia were to prove long lived as Greek became the lingua franca, a position it retained even in Roman times.. Many Greeks settled in Hellenistic cities like Alexandria, Antioch and Seleucia. Two thousand years later, there are still communities in Pakistan and Afghanistan, like the Kalash, who claim to be descended from Greek settlers..
Hellenistic
[Image: The major Hellenistic realms c. 300 BC; the Ptolemaic Kingdom (dark blue) and the Seleucid Empire (yellow).]
[Image: Bust of Cleopatra VII. Altes Museum, Berlin.]
The Hellenistic civilization was the next period of Greek civilization, the beginnings of which are usually placed at Alexander's death.. This Hellenistic age, so called because it saw the partial Hellenization of many non-Greek cultures, lasted until the conquest of Egypt by Rome in 30 BC.
This age saw the Greeks move towards larger cities and a reduction in the importance of the city-state. These larger cities were parts of the still larger Kingdoms of the Diadochi.. Greeks, however, remained aware of their past, chiefly through the study of the works of Homer and the classical authors.. An important factor in maintaining Greek identity was contact with barbarian (non-Greek) peoples, which was deepened in the new cosmopolitan environment of the multi-ethnic Hellenistic kingdoms. This led to a strong desire among Greeks to organize the transmission of the Hellenic paideia to the next generation. Greek science, technology and mathematics are generally considered to have reached their peak during the Hellenistic period.
In the Indo-Greek and Greco-Bactrian kingdoms, Greco-Buddhism was spreading and Greek missionaries would play an important role in propagating it to China.. Further east, the Greeks of Alexandria Eschate became known to the Chinese people as the Dayuan..
Roman Empire
Following the time of the conquest of the last of the independent Greek city-states and Hellenistic (post-Alexandrine) kingdoms, almost all of the world's Greek speakers lived as citizens or subjects of the Roman Empire. Despite their military superiority, the Romans admired and became heavily influenced by the achievements of Greek culture, hence Horace's famous statement: Graecia capta ferum victorem cepit ("Greece, although captured, took its wild conqueror captive")..
In the religious sphere, this was a period of profound change. The spiritual revolution that took place, saw a waning of the old Greek religion, whose decline beginning in the 3rd century BC continued with the introduction of new religious movements from the East. The cults of deities like Isis and Mithra were introduced into the Greek world. Greek-speaking communities of the Hellenized East were instrumental in the spread of early Christianity in the 2nd and 3rd centuries,. and Christianity's early leaders and writers (notably Saint Paul) were generally Greek-speaking,. though none were from Greece. However, Greece itself had a tendency to cling to paganism and was not one of the influential centers of early Christianity: in fact, some ancient Greek religious practices remained in vogue until the end of the 4th century,. with some areas such as the southeastern Peloponnese remaining pagan until well into the 10th century AD..
Byzantine Empire
Of the new eastern religions introduced into the Greek world, the most successful was Christianity. From the early centuries of the Common Era, the Greeks self-identified as Romaioi ("Romans"), as well as Graikoi ("Greeks");; ; . by that time, the name Hellenes denoted pagans but was revived as an ethnonym in the 11th century.. While ethnic distinctions still existed in the Roman Empire, they became secondary to religious considerations and the renewed empire used Christianity as a tool to support its cohesion and promoted a robust Roman national identity.. Concurrently, the secular, urban civilization of Late Antiquity survived in the Eastern Mediterranean along with the Greco-Roman educational system; the Greeks' essential values were drawn from both Christianity and the Homeric tradition of their classical ancestors...
[Image: Scenes of marriage and family life in Constantinople.]
The Eastern Roman Empire (today conventionally named the Byzantine Empire, a name not used during its own time) became increasingly influenced by Greek culture after the 7th century when Emperor Heraclius ( 610–641 AD) decided to make Greek the empire's official language... Certainly from then on, but likely earlier, the Greek and Roman cultures were virtually fused into a single Greco-Roman world. Although the Latin West recognized the Eastern Empire's claim to the Roman legacy for several centuries, after Pope Leo III crowned Charlemagne, king of the Franks, as the "Roman Emperor" on 25 December 800, an act which eventually led to the formation of the Holy Roman Empire, the Latin West started to favour the Franks and began to refer to the Eastern Roman Empire largely as the Empire of the Greeks (Imperium Graecorum).; Annales Fuldenses, 389: "Mense lanuario c. epiphaniam Basilii, Graecorum imperatoris, legati cum muneribus et epistolis ad Hludowicum regem Radasbonam venerunt ...".: "The Frankish court no longer regarded the Byzantine Empire as holding valid claims of universality; instead it was now termed the 'Empire of the Greeks'."
{| class="toccolours" style="float:right; margin-left:1em; margin-right:0; font-size:75%; background:#j7dbf9; color:black; width:20em; max-width:40%;" cellspacing="5"
|-
| style="text-align: left;" | "Much of what we know of antiquity – especially of Hellenic and Roman literature and of Roman law — would have been lost for ever but for the scholars and scribes and copyists of Constantinople."
|-
| style="text-align: left;" | John J. Norwich
|}
These Byzantine Greeks were largely responsible for the preservation of the literature of the classical era... Byzantine grammarians were those principally responsible for carrying, in person and in writing, ancient Greek grammatical and literary studies to the West during the 15th century, giving the Italian Renaissance a major boost.. The Aristotelian philosophical tradition was nearly unbroken in the Greek world for almost two thousand years, until the Fall of Constantinople in 1453.
To the Slavic world, Roman-era Greeks contributed by the dissemination of literacy and Christianity. The most notable example of the latter was the work of the two Byzantine Greek brothers, the monks Saints Cyril and Methodius from the port city of Thessalonica, in Greek Macedonia, who are credited today with formalizing the first Slavic alphabet.
A distinct Greek political identity re-emerged in the 11th century in educated circles and became more forceful after the fall of Constantinople to the Crusaders of the Fourth Crusade in 1204, so that when the empire was revived in 1261, it became in many ways a Greek national state. That new notion of nationhood engendered a deep interest in the classical past culminating in the ideas of the Neoplatonist philosopher Gemistus Pletho, who abandoned Christianity. However, it was the combination of Orthodox Christianity with a specifically Greek identity that shaped the Greeks' notion of themselves in the empire's twilight years. The interest in the Classical Greek heritage was complemented by a renewed emphasis on Greek Orthodox identity, which was reinforced in the late Medieval and Ottoman Greeks' links with their fellow Orthodox Christians in the Russian Empire. These were further strengthened following the fall of the Empire of Trebizond in 1461, after which and until the second Russo-Turkish War of 1828–29 hundreds of thousands of Pontic Greeks fled or migrated from the Pontic Alps and Armenian Highlands to southern Russia and the Russian South Caucasus (see also Greeks in Russia, Greeks in Armenia, Greeks in Georgia, and Caucasian Greeks).See for example Anthony Bryer, 'The Empire of Trebizond and the Pontus' (Variourum, 1980), and his 'Migration and Settlement in the Caucasus and Anatolia' (Variourum, 1988), and other works listed in Caucasian Greeks and Pontic Greeks.
Ottoman Empire
[Image: Engraving of a Greek merchant by Cesare Vecellio (16th century).]
Following the Fall of Constantinople on 29 May 1453, many Greeks sought better employment and education opportunities by leaving for the West, particularly Italy, Central Europe, Germany and Russia. Greeks are greatly credited for the European cultural revolution later called the Renaissance. In Greek-inhabited territory itself, Greeks came to play a leading role in the Ottoman Empire, due in part to the fact that the central hub of the empire, politically, culturally, and socially, was based on Western Thrace and Greek Macedonia, both in Northern Greece, and was of course centred on the mainly Greek-populated, former Byzantine capital, Constantinople. As a direct consequence of this situation, Greek-speakers came to play a hugely important role in the Ottoman trading and diplomatic establishment, as well as in the church. Added to this, in the first half of the Ottoman period men of Greek origin made up a significant proportion of the Ottoman army, navy, and state bureaucracy, having been levied as adolescents (along with especially Albanians and Serbs) into Ottoman service through the devshirme. Many Ottomans of Greek (or Albanian or Serb) origin were therefore to be found within the Ottoman forces which governed the provinces, from Ottoman Egypt to Ottoman-occupied Yemen and Algeria, frequently as provincial governors.
For those that remained under the Ottoman Empire's millet system, religion was the defining characteristic of national groups (milletler), so the exonym "Greeks" (Rumlar from the name Rhomaioi) was applied by the Ottomans to all members of the Orthodox Church, regardless of their language or ethnic origin. The Greek speakers were the only ethnic group to actually call themselves Romioi, (as opposed to being so named by others) and, at least those educated, considered their ethnicity (genos) to be Hellenic. There were, however, many Greeks who escaped the second-class status of Christians inherent in the Ottoman millet system, according to which Muslims were explicitly awarded senior status and preferential treatment. These Greeks either emigrated, particularly to their fellow Greek Orthodox protector, the Russian Empire, or simply converted to Islam, often only very superficially and whilst remaining crypto-Christian. The most notable examples of large-scale conversion to Turkish Islam among those today defined as Greek Muslims—excluding those who had to convert as a matter of course on being recruited through the devshirme—were to be found in Crete (Cretan Turks), Greek Macedonia (for example among the Vallahades of western Macedonia), and among Pontic Greeks in the Pontic Alps and Armenian Highlands. Several Ottoman sultans and princes were also of part Greek origin, with mothers who were either Greek concubines or princesses from Byzantine noble families, one famous example being sultan Selim the Grim ( 1517–1520), whose mother Gülbahar Hatun was a Pontic Greek.
The roots of Greek success in the Ottoman Empire can be traced to the Greek tradition of education and commerce exemplified in the Phanariotes. It was the wealth of the extensive merchant class that provided the material basis for the intellectual revival that was the prominent feature of Greek life in the half century and more leading to the outbreak of the Greek War of Independence in 1821. Not coincidentally, on the eve of 1821, the three most important centres of Greek learning were situated in Chios, Smyrna and Aivali, all three major centres of Greek commerce. Greek success was also favoured by Greek domination of the Christian Orthodox church.
Modern
[Image: The cover of Hermes o Logios, a Greek literary publication of the late 18th and early 19th century with major contribution to the Modern Greek Enlightenment.]
The relationship between ethnic Greek identity and Greek Orthodox religion continued after the creation of the modern Greek nation-state in 1830. According to the second article of the first Greek constitution of 1822, a Greek was defined as any native Christian resident of the Kingdom of Greece, a clause removed by 1840. A century later, when the Treaty of Lausanne was signed between Greece and Turkey in 1923, the two countries agreed to use religion as the determinant for ethnic identity for the purposes of population exchange, although most of the Greeks displaced (over a million of the total 1.5 million) had already been driven out by the time the agreement was signed.While Greek authorities signed the agreement legalizing the population exchange this was done on the insistence of Mustafa Kemal Atatürk and after a million Greeks had already been expelled from Asia Minor ().; ; ; . The Greek genocide, in particular the harsh removal of Pontian Greeks from the southern shore area of the Black Sea, contemporaneous with and following the failed Greek Asia Minor Campaign, was part of this process of Turkification of the Ottoman Empire and the placement of its economy and trade, then largely in Greek hands under ethnic Turkish control..
Identity
The terms used to define Greekness have varied throughout history but were never limited or completely identified with membership to a Greek state.. By Western standards, the term Greeks has traditionally referred to any native speakers of the Greek language, whether Mycenaean, Byzantine or modern Greek... Byzantine Greeks self-identified as Romaioi ("Romans"), Graikoi ("Greeks") and Christianoi ("Christians") since they were the political heirs of imperial Rome, the descendants of their classical Greek forebears and followers of the Apostles;; ; ; . during the mid-to-late Byzantine period (11th–13th century), a growing number of Byzantine Greek intellectuals deemed themselves Hellenes although for most Greek-speakers, "Hellene" still meant pagan.. On the eve of the Fall of Constantinople the Last Emperor urged his soldiers to remember that they were the descendants of Greeks and Romans.
Before the establishment of the modern Greek nation-state, the link between ancient and modern Greeks was emphasized by the scholars of the Greek Enlightenment, especially by Rigas Feraios. In his "Political Constitution", he addresses the nation as "the people descendant of the Greeks".Feraios, Rigas. New Political Constitution of the Inhabitants of Rumeli, Asia Minor, the Islands of the Aegean, and the Principalities of Moldavia and Wallachia. The modern Greek state was created in 1829, when the Greeks liberated a part of their historic homelands, the Peloponnese, from the Ottoman Empire.. The large Greek diaspora and merchant class were instrumental in transmitting the ideas of western romantic nationalism and philhellenism, which together with the conception of Hellenism, formulated during the last centuries of the Byzantine Empire, formed the basis of the Diafotismos and the current conception of Hellenism.
The Greeks today are a nation in the meaning of an ethnos, defined by possessing Greek culture and having a Greek mother tongue, not by citizenship, race, and religion or by being subjects of any particular state.. In ancient and medieval times and to a lesser extent today the Greek term was genos, which also indicates a common ancestry..
Names
[Image: Map showing the major regions of mainland ancient Greece, and adjacent "barbarian" lands.]
Throughout the centuries, Greeks and Greek-speakers have developed and used different names to refer to themselves collectively. The term Achaeans (Ἀχαιοί) constitutes one of the collective names for the Greeks in Homer's Iliad and Odyssey (the Homeric "long-haired Achaeans" would have been a part of the Mycenaean civilization that dominated Greece from 1600 BC until 1100 BC). The other common names are Danaans (Δαναοί) and Argives (Ἀργεῖοι) while Panhellenes (Πανέλληνες) and Hellenes (Ἕλληνες) both appear only once in the Iliad;See Iliad, II.2.530 for "Panhellenes" and Iliad II.2.653 for "Hellenes". all of the aforementioned terms were used synonymously to denote a common Greek civilizational identity. In the historical period, Herodotus identified the Achaeans of the northern Peloponnese as descendants of the earlier, Homeric Achaeans.Herodotus. Histories, 7.94 and 8.73.
Homer refers to the "Hellenes" () as a relatively small tribe settled in Thessalic Phthia, with its warriors under the command of Achilleus.Homer. Iliad, 2.681–685 The Parian Chronicle says that Phthia was the homeland of the Hellenes and that this name was given to those previously called Greeks ().The Parian Marble, Entry #6: "From when Hellen [son of] Deuc[alion] became king of [Phthi]otis and those previously called Graekoi were named Hellenes." In Greek mythology, Hellen, the patriarch of the Hellenes who ruled around Phthia, was the son of Pyrrha and Deucalion, the only survivors after the Great Deluge.Pseudo-Apollodorus. Bibliotheca. The Greek philosopher Aristotle names ancient Hellas as an area in Epirus between Dodona and the Achelous river, the location of the Great Deluge of Deucalion, a land occupied by the Selloi and the "Greeks" who later came to be known as "Hellenes".Aristotle. Meteorologica, 1.14: "The deluge in the time of Deucalion, for instance took place chiefly in the Greek world and in it especially about ancient Hellas, the country about Dodona and the Achelous." In the Homeric tradition, the Selloi were the priests of Dodonian Zeus.Homer. Iliad, 16.233–16.235: "King Zeus, lord of Dodona ... you who hold wintry Dodona in your sway, where your prophets the Selloi dwell around you."
In the Hesiodic Catalogue of Women, Graecus is presented as the son of Zeus and Pandora II, sister of Hellen the patriarch of the Hellenes.Hesiod. Catalogue of Women, Fragment 5. According to the Parian Chronicle, when Deucalion became king of Phthia, the Graikoi (Γραικοί) were named Hellenes. Aristotle notes in his Meteorologica that the Hellenes were related to the Graikoi.
Continuity
[Image: Family group on a funerary stele from Athens, National Archaeological Museum, Athens.]
The most obvious link between modern and ancient Greeks is their language, which has a documented tradition from at least the 14th century BC to the present day, albeit with a break during the Greek Dark Ages (lasting from the 11th to the 8th century BC).. Scholars compare its continuity of tradition to Chinese alone. Since its inception, Hellenism was primarily a matter of common culture and the national continuity of the Greek world is a lot more certain than its demographic.. Yet, Hellenism also embodied an ancestral dimension through aspects of Athenian literature that developed and influenced ideas of descent based on autochthony. During the later years of the Eastern Roman Empire, areas such as Ionia and Constantinople experienced a Hellenic revival in language, philosophy, and literature and on classical models of thought and scholarship. This revival provided a powerful impetus to the sense of cultural affinity with ancient Greece and its classical heritage. Throughout their history, the Greeks have retained their language and alphabet, certain values and cultural traditions, customs, a sense of religious and cultural difference and exclusion (the word barbarian was used by 12th-century historian Anna Komnene to describe non-Greek speakers),Anna Comnena. Alexiad, Books 1–15. a sense of Greek identity and common sense of ethnicity despite the undeniable socio-political changes of the past two millennia. In recent anthropological studies, both ancient and modern Greek osteological samples were analyzed demonstrating a bio-genetic affinity and continuity shared between both groups.
Demographics
Today, Greeks are the majority ethnic group in the Hellenic Republic, where they constitute 93% of the country's population, and the Republic of Cyprus where they make up 78% of the island's population (excluding Turkish settlers in the occupied part of the country). Greek populations have not traditionally exhibited high rates of growth; a large percentage of Greek population growth since Greece's foundation in 1832 was attributed to annexation of new territories, as well as the influx of 1.5 million Greek refugees after the 1923 population exchange between Greece and Turkey. About 80% of the population of Greece is urban, with 28% concentrated in the city of Athens.
Greeks from Cyprus have a similar history of emigration, usually to the English-speaking world because of the island's colonization by the British Empire. Waves of emigration followed the Turkish invasion of Cyprus in 1974, while the population decreased between mid-1974 and 1977 as a result of emigration, war losses, and a temporary decline in fertility. After the ethnic cleansing of a third of the Greek population of the island in 1974,; ; ; . there was also an increase in the number of Greek Cypriots leaving, especially for the Middle East, which contributed to a decrease in population that tapered off in the 1990s. Today more than two-thirds of the Greek population in Cyprus is urban.
There is a sizeable Greek minority of approximately 200,000 people in Albania. The Greek minority of Turkey, which numbered upwards of 200,000 people after the 1923 exchange, has now dwindled to a few thousand, after the 1955 Constantinople Pogrom and other state sponsored violence and discrimination. This effectively ended, though not entirely, the three-thousand-year-old presence of Hellenism in Asia Minor.. There are smaller Greek minorities in the rest of the Balkan countries, the Levant and the Black Sea states, remnants of the Old Greek Diaspora (pre-19th century).
Diaspora
[Image: Zach Galifianakis, American stand-up comedian and actor of Greek ancestry.]
The total number of Greeks living outside Greece and Cyprus today is a contentious issue. Where Census figures are available, they show around 3 million Greeks outside Greece and Cyprus. Estimates provided by the SAE - World Council of Hellenes Abroad put the figure at around 7 million worldwide. According to George Prevelakis of Sorbonne University, the number is closer to just below 5 million. Integration, intermarriage, and loss of the Greek language influence the self-identification of the Omogeneia. Important centres of the New Greek Diaspora today are London, New York, Melbourne and Toronto. In 2010, the Hellenic Parliament introduced a law that enables Diaspora Greeks in Greece to vote in the elections of the Greek state. This law was later repealed in early 2014.
Ancient
[Image: Greek colonization in antiquity.]
In ancient times, the trading and colonizing activities of the Greek tribes and city states spread the Greek culture, religion and language around the Mediterranean and Black Sea basins, especially in Sicily and southern Italy (also known as Magna Graecia), Spain, the south of France and the Black Sea coasts.. Under Alexander the Great's empire and successor states, Greek and Hellenizing ruling classes were established in the Middle East, India and in Egypt. The Hellenistic period is characterized by a new wave of Greek colonization that established Greek cities and kingdoms in Asia and Africa.. Under the Roman Empire, easier movement of people spread Greeks across the Empire and in the eastern territories, Greek became the lingua franca rather than Latin. The modern-day Griko community of southern Italy, numbering about 60,000, may represent a living remnant of the ancient Greek populations of Italy.
Modern
[Image: Distribution of ethnic groups in 1918, National Geographic.]
[Image: Greek Diaspora (20th century).]
During and after the Greek War of Independence, Greeks of the diaspora were important in establishing the fledgling state, raising funds and awareness abroad.. Greek merchant families already had contacts in other countries and during the disturbances many set up home around the Mediterranean (notably Marseilles in France, Livorno in Italy, Alexandria in Egypt), Russia (Odessa and Saint Petersburg), and Britain (London and Liverpool) from where they traded, typically in textiles and grain.. Businesses frequently comprised the extended family, and with them they brought schools teaching Greek and the Greek Orthodox Church.
As markets changed and they became more established, some families grew their operations to become shippers, financed through the local Greek community, notably with the aid of the Ralli or Vagliano Brothers.. With economic success, the Diaspora expanded further across the Levant, North Africa, India and the USA..
In the 20th century, many Greeks left their traditional homelands for economic reasons resulting in large migrations from Greece and Cyprus to the United States, Great Britain, Australia, Canada, Germany, and South Africa, especially after the Second World War (1939–1945), the Greek Civil War (1946–1949), and the Turkish Invasion of Cyprus in 1974..
While official figures remain scarce, polls and anecdotal evidence point to renewed Greek emigration as a result of the Greek financial crisis. According to data published by the Federal Statistical Office of Germany in 2011, 23,800 Greeks emigrated to Germany, a significant increase over the previous year. By comparison, about 9,000 Greeks emigrated to Germany in 2009 and 12,000 in 2010.
Culture
Greek culture has evolved over thousands of years, beginning in the Mycenaean civilization, continuing through the classical era, the Hellenistic period and the Roman and Byzantine periods, and was profoundly affected by Christianity, which it in turn influenced and shaped. Ottoman Greeks had to endure several centuries of adversity that culminated in genocide in the 20th century. The Diafotismos is credited with revitalizing Greek culture and giving birth to the synthesis of ancient and medieval elements that characterize it today.
Language
[Image: Ancient Greek ostracon bearing the name of Cimon. Museum of the Ancient Agora, Athens.]
Most Greeks speak the Greek language, an Indo-European language that forms a branch of its own, with its closest relations being Armenian (see Graeco-Armenian) and the Indo-Iranian languages (see Graeco-Aryan). It has one of the longest documented histories of any language, and Greek literature has a continuous history of over 2,500 years. Several notable literary works, including the Homeric epics, Euclid's Elements and the New Testament, were originally written in Greek.
Greek demonstrates several linguistic features that are shared with other Balkan languages, such as Albanian, Bulgarian and Eastern Romance languages (see Balkan sprachbund), and has absorbed many foreign words, primarily of Western European and Turkish origin. Because of the movements of Philhellenism and the Diafotismos in the 19th century, which emphasized the modern Greeks' ancient heritage, these foreign influences were excluded from official use via the creation of Katharevousa, a somewhat artificial form of Greek purged of all foreign influence and words, as the official language of the Greek state. In 1976, however, the Hellenic Parliament voted to make the spoken Dimotiki the official language, making Katharevousa obsolete.
Modern Greek has, in addition to Standard Modern Greek or Dimotiki, a wide variety of dialects of varying levels of mutual intelligibility, including Cypriot, Pontic, Cappadocian, Griko and Tsakonian (the only surviving representative of ancient Doric Greek). Yevanic is the language of the Romaniotes, and survives in small communities in Greece, New York and Israel. In addition to Greek, many Greeks in Greece and the Diaspora are bilingual in other languages or dialects such as English, Arvanitika/Albanian, Aromanian, Macedonian Slavic, Russian and Turkish.
Religion
Most Greeks are Christians, belonging to the Greek Orthodox Church. During the first centuries after Jesus Christ, the New Testament was written in Koine Greek, which remains the liturgical language of the Greek Orthodox Church, and most of the early Christians and Church Fathers were Greek-speaking. There are small groups of ethnic Greeks adhering to other Christian denominations like Greek Catholics, Greek Evangelicals and Pentecostals, and groups adhering to other religions, including Romaniote and Sephardic Jews and Greek Muslims. About 2,000 Greeks are members of Hellenic Polytheistic Reconstructionism congregations.
Greek-speaking Muslims live mainly outside Greece in the contemporary era. There are both Christian and Muslim Greek-speaking communities in Lebanon and Syria, while in the Pontus region of Turkey there is a large community of indeterminate size who were spared from the population exchange because of their religious affiliation.
Arts
Greek art has a long and varied history. Greeks have contributed to the visual, literary and performing arts. In the West, classical Greek art was influential in shaping the Roman and later the modern Western artistic heritage. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece played an important role in the art of the Western world. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, whose influence reached as far as Japan.
Byzantine Greek art, which grew from classical art and adapted the pagan motifs in the service of Christianity, provided a stimulus to the art of many nations. Its influences can be traced from Venice in the West to Kazakhstan in the East. In turn, Greek art was influenced by eastern civilizations (i.e. Egypt, Persia, etc.) during various periods of its history.
Notable modern Greek artists include Renaissance painter Dominikos Theotokopoulos (El Greco), Panagiotis Doxaras, Nikolaos Gyzis, Nikiphoros Lytras, Yannis Tsarouchis, Nikos Engonopoulos, Constantine Andreou and Jannis Kounellis; sculptors such as Leonidas Drosis, Georgios Bonanos, Yannoulis Chalepas and Joannis Avramidis; conductor Dimitri Mitropoulos; soprano Maria Callas; composers such as Mikis Theodorakis, Nikos Skalkottas, Iannis Xenakis, Manos Hatzidakis, Eleni Karaindrou, Yanni and Vangelis; Nana Mouskouri, one of the best-selling singers worldwide; and poets such as Kostis Palamas, Dionysios Solomos, Angelos Sikelianos and Yannis Ritsos. The Alexandrian Constantine P. Cavafy and Nobel laureates Giorgos Seferis and Odysseas Elytis are among the most important poets of the 20th century. The novel is also represented by Alexandros Papadiamantis and Nikos Kazantzakis.
Notable Greek actors include Marika Kotopouli, Melina Mercouri, Ellie Lambeti, Academy Award winner Katina Paxinou, Dimitris Horn, Manos Katrakis and Irene Papas. Alekos Sakellarios, Michael Cacoyannis and Theo Angelopoulos are among the most important directors.
Science
[Image: Aristarchus of Samos was the first known individual to propose a heliocentric system, in the 3rd century BC.]
The Greeks of the Classical and Hellenistic eras made seminal contributions to science and philosophy, laying the foundations of several western scientific traditions, such as astronomy, geography, historiography, mathematics, medicine and philosophy. The scholarly tradition of the Greek academies was maintained during Roman times with several academic institutions in Constantinople, Antioch, Alexandria and other centers of Greek learning, while Byzantine science was essentially a continuation of classical science. Greeks have a long tradition of valuing and investing in paideia (education). Paideia was one of the highest societal values in the Greek and Hellenistic world, while the first European institution described as a university was founded in 5th-century Constantinople and operated in various incarnations until the city's fall to the Ottomans in 1453. The University of Constantinople was Christian Europe's first secular institution of higher learning, since no theological subjects were taught, and, considering the original meaning of the word university as a corporation of students, it can be regarded as the world's first university as well.
As of 2007, Greece had the eighth highest percentage of tertiary enrollment in the world (with the percentages for female students being higher than for male), while Greeks of the Diaspora are equally active in the field of education. Hundreds of thousands of Greek students attend western universities every year, while the faculty lists of leading Western universities contain a striking number of Greek names. Notable modern Greek scientists include Dimitrios Galanos, Georgios Papanikolaou (inventor of the Pap test), Nicholas Negroponte, Constantin Carathéodory, Manolis Andronikos, Michael Dertouzos, John Argyris, Panagiotis Kondylis, John Iliopoulos (2007 Dirac Prize for his contributions to the physics of the charm quark, a major contribution to the birth of the Standard Model, the modern theory of elementary particles), Joseph Sifakis (2007 Turing Award, the "Nobel Prize" of computer science), Christos Papadimitriou (2002 Knuth Prize, 2012 Gödel Prize), Mihalis Yannakakis (2005 Knuth Prize) and Dimitri Nanopoulos.
Symbols
[Image: The flag of the Greek Orthodox Church is based on the coat of arms of the Palaiologoi, the last dynasty of the Byzantine Empire.]
[Image: Traditional Greek flag.]
The most widely used symbol is the flag of Greece, which features nine equal horizontal stripes of blue alternating with white representing the nine syllables of the Greek national motto Eleftheria i Thanatos (Freedom or Death), which was the motto of the Greek War of Independence. The blue square in the upper hoist-side corner bears a white cross, which represents Greek Orthodoxy. The Greek flag is widely used by the Greek Cypriots, although Cyprus has officially adopted a neutral flag to ease ethnic tensions with the Turkish Cypriot minority (see flag of Cyprus).
The pre-1978 (and first) flag of Greece, which features a Greek cross (crux immissa quadrata) on a blue background, is widely used as an alternative to the official flag, and they are often flown together. The national emblem of Greece features a blue escutcheon with a white cross surrounded by two laurel branches. A common design involves the current flag of Greece and the pre-1978 flag of Greece with crossed flagpoles and the national emblem placed in front.
Another highly recognizable and popular Greek symbol is the double-headed eagle, the imperial emblem of the last dynasty of the Eastern Roman Empire and a common symbol in Asia Minor and, later, Eastern Europe. It is not part of the modern Greek flag or coat of arms, although it is officially the insignia of the Greek Army and the flag of the Church of Greece. It had been incorporated in the Greek coat of arms between 1925 and 1926.
Surnames and personal names
Greek surnames began to appear in the 9th and 10th centuries, at first among ruling families, eventually supplanting the ancient tradition of using the father's name as a disambiguator. Nevertheless, Greek surnames are most commonly patronymics, such as those ending in the suffix -opoulos or -ides, while others derive from trade professions, physical characteristics, or a location such as a town, village, or monastery. Commonly, Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case. Occasionally (especially in Cyprus), some surnames end in -ou, indicating the genitive case of a patronymic name. Many surnames end in suffixes that are associated with a particular region, such as -akis (Crete), -eas or -akos (Mani Peninsula), -atos (island of Cephalonia), and so forth. In addition to a Greek origin, some surnames have Turkish or Latin/Italian origins, especially among Greeks from Asia Minor and the Ionian Islands, respectively. Female surnames end in a vowel and are usually the genitive form of the corresponding male surname, although this usage is not followed in the diaspora, where the male version of the surname is generally used.
With respect to personal names, the two main influences are Christianity and classical Hellenism; ancient Greek nomenclatures were never forgotten but have become more widely bestowed from the 18th century onwards. As in antiquity, children are customarily named after their grandparents, with the first born male child named after the paternal grandfather, the second male child after the maternal grandfather, and similarly for female children. Personal names are often familiarized by a diminutive suffix, such as -akis for male names and -itsa or -oula for female names. Greeks generally do not use middle names, instead using the genitive of the father's first name as a middle name. This usage has been passed on to the Russians and other East Slavs (otchestvo).
Sea
[Image: Aristotle Onassis, the best known Greek shipping magnate.]
The traditional Greek homelands have been the Greek peninsula and the Aegean Sea, Southern Italy (Magna Graecia), the Black Sea, the Ionian coasts of Asia Minor and the islands of Cyprus and Sicily. In Plato's Phaidon, Socrates remarks, "we (Greeks) live around a sea like frogs around a pond", when describing to his friends the Greek cities of the Aegean (Plato, Phaidon 109c: "ὥσπερ περὶ τέλμα μύρμηκας ἢ βατράχους περὶ τὴν θάλατταν οἰκοῦντας."). This image is attested by the map of the Old Greek Diaspora, which corresponded to the Greek world until the creation of the Greek state in 1832. The sea and trade were natural outlets for Greeks since the Greek peninsula is rocky and does not offer good prospects for agriculture.
Notable Greek seafarers include people such as Pytheas of Marseilles, Scylax of Caryanda who sailed to Iberia and beyond, Nearchus, the 6th-century merchant and later monk Cosmas Indicopleustes ("Cosmas who Sailed to India"), and the explorer of the Northwestern Passage, Apostolos Valerianos, also known as Juan de Fuca. In later times, the Byzantine Greeks plied the sea-lanes of the Mediterranean and controlled trade until an embargo imposed by the Byzantine emperor on trade with the Caliphate opened the door for the later Italian pre-eminence in trade.
The Greek shipping tradition recovered during Ottoman rule when a substantial merchant middle class developed, which played an important part in the Greek War of Independence. Today, Greek shipping continues to prosper to the extent that Greece has the largest merchant fleet in the world, while many more ships under Greek ownership fly flags of convenience. The most notable shipping magnate of the 20th century was Aristotle Onassis, others being Yiannis Latsis, George Livanos, and Stavros Niarchos.
Physical appearance
A 2013 study on the prediction of hair and eye colour from DNA in a sample of 119 Greek individuals reported the following self-reported phenotype frequencies: for hair colour, 11 were blond, 45 dark blond/light brown, 49 dark brown, 3 brown red/auburn and 11 black-haired; for eye colour, 13 had blue, 15 intermediate (green, heterochromia) and 91 brown eyes.
Another study, from 2012, included 150 dental school students from the University of Athens. Light hair colour (blonde/light ash brown) was predominant in 10.7% of the students, 36% had medium hair colour (light brown/medium darkest brown), 32% darkest brown and 21% black (15.3% off black, 6% midnight black). In conclusion, the hair colour of young Greeks is mostly brown, ranging from light to dark brown, with significant minorities having black and blonde hair. The same study also showed that the eye colour of the students was 14.6% blue/green, 28% medium (light brown) and 57.4% dark brown.
Timeline
The history of the Greek people is closely associated with the history of Greece, Cyprus, Constantinople, Asia Minor and the Black Sea. During the Ottoman rule of Greece, a number of Greek enclaves around the Mediterranean were cut off from the core, notably in Southern Italy, the Caucasus, Syria and Egypt. By the early 20th century, over half of the overall Greek-speaking population was settled in Asia Minor (now Turkey), while later that century a huge wave of migration to the United States, Australia, Canada and elsewhere created the modern Greek diaspora.
c. 3rd millennium BC: Proto-Greek tribes from around the Southern Balkans/Aegean are generally thought to have arrived in the Greek mainland.
16th century BC: Decline of the Minoan civilization, possibly because of the eruption of Thera. Emergence of the Achaeans and formation of the Mycenaean civilization, the first Greek-speaking civilization.
13th century BC: First colonies established in Asia Minor.
11th century BC: The Mycenaean civilization ends in the presumed Dorian invasion. The Greek Dark Ages begin. Dorians move into peninsular Greece. Achaeans flee to the Aegean Islands, Asia Minor and Cyprus.
9th century BC: Major colonization of Asia Minor and Cyprus by the Greek tribes.
8th century BC: First major colonies established in Sicily and Southern Italy. The first Pan-Hellenic festival, the Olympic games, is held in 776 BC. The emergence of Pan-Hellenism marks the ethnogenesis of the Greek nation.
6th century BC: Colonies established across the Mediterranean Sea and the Black Sea.
5th century BC: Defeat of the Persians and emergence of the Delian League in Ionia, the Black Sea and the Aegean perimeter culminate in the Athenian Empire and the Classical Age of Greece; the period ends with Athens' defeat by Sparta at the close of the Peloponnesian War.
4th century BC: Rise of Theban power and defeat of the Spartans; campaign of Alexander the Great; Greek colonies established in newly founded cities of Ptolemaic Egypt and Asia.
2nd century BC: Conquest of Greece by the Roman Empire. Migrations of Greeks to Rome.
4th century AD: Eastern Roman Empire. Migrations of Greeks throughout the Empire, mainly towards Constantinople.
7th century: Slavic conquest of several parts of Greece; Greek migrations to Southern Italy; Roman emperors capture the main Slavic bodies and transfer them to Cappadocia. The Bosphorus is re-populated by Macedonian and Cypriot Greeks.
8th century: Roman dissolution of surviving Slavic settlements in Greece and full recovery of the Greek peninsula.
9th century: Retro-migrations of Greeks from all parts of the Empire (mainly from Southern Italy and Sicily) into parts of Greece that were depopulated by the Slavic invasions (mainly the western Peloponnesus and Thessaly).
13th century: The Roman Empire dissolves; Constantinople is taken by the Fourth Crusade, becoming the capital of the Latin Empire. It is liberated after a long struggle by the Empire of Nicaea, but fragments remain separated. Migrations between Asia Minor, Constantinople and mainland Greece take place.
15th–19th centuries: Conquest of Constantinople by the Ottoman Empire. Greek diaspora into Europe begins. Ottoman settlements in Greece. Phanariot Greeks occupy high posts in Eastern European millets.
1830s: Creation of the Modern Greek State. Immigration to the New World begins. Large-scale migrations from Constantinople and Asia Minor to Greece take place.
1913: European Ottoman lands partitioned; unorganized migrations of Greeks, Bulgarians and Turks towards their respective states.
1914–1923: Greek genocide; hundreds of thousands of Ottoman Greeks are estimated to have died during this period.
1919: Treaty of Neuilly; Greece and Bulgaria exchange populations, with some exceptions.
1922: The Destruction of Smyrna (modern-day Izmir); more than 40 thousand Greeks killed; end of significant Greek presence in Asia Minor.
1923: Treaty of Lausanne; Greece and Turkey agree to exchange populations, with the limited exceptions of the Greeks in Constantinople, Imbros and Tenedos and the Muslim minority of Western Thrace. 1.5 million Asia Minor and Pontic Greeks settle in Greece, and some 450 thousand Muslims settle in Turkey.
1940s: Hundreds of thousands of Greeks die from starvation during the Axis occupation of Greece.
1947: The communist regime in Romania begins evictions of the Greek community; approx. 75,000 migrate.
1948: Greek Civil War. Tens of thousands of Greek communists and their families flee into Eastern Bloc nations. Thousands settle in Tashkent.
1950s: Massive emigration of Greeks to West Germany, the United States, Australia, Canada, and other countries.
1955: Istanbul Pogrom against Greeks. Exodus of Greeks from the city accelerates; fewer than 2,000 remain today.
1958: The large Greek community in Alexandria flees Nasser's regime in Egypt.
1960s: The Republic of Cyprus is created as an independent state under Greek, Turkish and British protection. Economic emigration continues.
1974: Turkish invasion of Cyprus. Almost all Greeks living in Northern Cyprus flee to the south and the United Kingdom.
1980s: Many civil war refugees are allowed to re-emigrate to Greece. Retro-migration of Greeks from Germany begins.
1990s: Collapse of the Soviet Union. Approximately 340,000 ethnic Greeks migrate from Georgia, Armenia, southern Russia, and Albania to Greece.
early 2000s: Some statistics show the beginning of a trend of reverse migration of Greeks from the United States and Australia.
2010s: Over 200,000 people, particularly young skilled individuals, emigrate to other EU states due to high unemployment (see also Greek government-debt crisis).
See also
Antiochian Greeks
Arvanites
Cappadocian Greeks
Caucasian Greeks
Greek Cypriots
Greek Diaspora
Griko people
Macedonians (Greeks)
Maniots
Greek Muslims
Northern Epirotes
Pelasgians
Pontic Greeks
Romaniotes
Sarakatsani
List of ancient Greeks
List of Greeks
List of Greek Americans
External links
Omogenia
World Council of Hellenes Abroad (SAE), Umbrella Diaspora Organization
Religious
Ecumenical Patriarchate of Constantinople
Greek Orthodox Patriarchate of Alexandria
Greek Orthodox Patriarchate of Antioch
Greek Orthodox Patriarchate of Jerusalem
Church of Cyprus
Church of Greece
Academic
Transnational Communities Programme at the University of Oxford, includes papers on the Greek Diaspora
Greeks on Greekness: The Construction and Uses of the Greek Past among Greeks under the Roman Empire.
The Modern Greek Studies Association is a scholarly organization for modern Greek studies in North America, which publishes the Journal of Modern Greek Studies.
Got Greek? Next Generation National Research Study
Waterloo Institute for Hellenistic Studies
Trade organizations
Hellenic Canadian Board of Trade
Hellenic Canadian Lawyers Association
Hellenic Canadian Congress of British Columbia
Hellenic-American Chamber of Commerce
Hellenic-Argentine Chamber of Industry and Commerce (C.I.C.H.A.)
Charitable organizations
AHEPA - American Hellenic Educational Progressive Association
Hellenic Heritage Foundation
Hellenic Home for the Aged
Hellenic Hope Center
Hellenic Scholarships
Category:Ethnic groups in Europe
Category:Ethnic groups in Greece
Category:Ancient peoples of Europe | 42,056 | 2017-01 |
Chihuahua (state) | Chihuahua (), officially the Free and Sovereign State of Chihuahua (), is one of the 31 states which, with Mexico City, comprise the 32 Federal Entities of Mexico. Its capital city is Chihuahua City.
It is located in Northwestern Mexico and is bordered by the states of Sonora to the west, Sinaloa to the southwest, Durango to the south, and Coahuila to the east. To the north and northeast, it has a long border with the U.S. adjacent to the U.S. states of New Mexico and Texas.
Chihuahua is the largest state in Mexico by area, with an area of , slightly larger than the United Kingdom. The state is consequently known under the nickname El Estado Grande ("The Big State").
Although Chihuahua is primarily identified with its namesake, the Chihuahuan Desert, it has more forests than any other state in Mexico, with the exception of Durango. Due to its varied climate, the state has a large variety of fauna and flora. The state is mostly characterized by rugged mountainous terrain and wide river valleys. The Sierra Madre Occidental mountain range, an extension of the Rocky Mountains, dominates the state's terrain and is home to the state's greatest attraction, Las Barrancas del Cobre, or Copper Canyon, a canyon system larger and deeper than the Grand Canyon. On the slopes of the Sierra Madre Occidental (around the regions of Casas Grandes, Cuauhtémoc and Parral), there are vast prairies of short yellow grass, the source of the bulk of the state's agricultural production. Most of the inhabitants live along the Rio Grande Valley and the Conchos River Valley.
The etymology of the name Chihuahua has long been disputed by historians and linguists. The most widely accepted theory holds that the name derives from a Nahuatl term meaning "the place where the water of the rivers meets" (i.e., "confluence", cf. Koblenz).
Chihuahua has a diversified state economy. The three most important economic centers in the state are Ciudad Juárez, an international manufacturing center; Chihuahua, the state capital; and Delicias, the state's main agricultural hub. Today Chihuahua serves as an important commercial route, prospering from billions of dollars of international trade as a result of NAFTA. On the other hand, the state suffers from the fallout of illicit trade and activities, especially at the border.
History
Prehistory
[Image: Paquimé artifact found at Casas Grandes.]
The earliest evidence of human inhabitants of modern-day Chihuahua was discovered in the area of Samalayuca and Rancho Colorado. Clovis points found in northeastern Chihuahua have been dated from 12,000 BC to 7000 BC. It is thought that these inhabitants were hunter-gatherers. Inhabitants of the state later developed farming with the domestication of corn. An archeological site in northern Chihuahua known as Cerro Juanaqueña revealed squash cultivation, irrigation techniques, and ceramic artifacts dating to around 2000 BC.
[Image: Cliff dwellings at Las Jarillas Cave, part of the Cuarenta Casas archeological site.]
Between AD 300 and 1300, in the northern part of the state along the wide, fertile valley of the San Miguel River, the Casas Grandes (Big Houses) culture developed into an advanced civilization. The Casas Grandes civilization is part of a major prehistoric archaeological culture known as Mogollon, which is related to the Ancestral Pueblo culture. Paquimé was the center of the Casas Grandes civilization. Extensive archaeological evidence shows commerce, agriculture, and hunting at Paquimé and Cuarenta Casas (Forty Houses).
La Cueva de las Ventanas (The Cave of Windows), a series of cliff dwellings along an important trade route, and Las Jarillas Cave, scattered along the canyons of the Sierra Madre in northwestern Chihuahua, date to between AD 1205 and 1260 and belong to the Paquimé culture. Cuarenta Casas is thought to have been a branch settlement of Paquimé that protected the trade route from attack. Archaeologists believe the civilization began to decline during the 13th century, and by the 15th century the inhabitants of Paquimé sought refuge in the Sierra Madre Occidental, while others are thought to have emigrated north and joined the Ancestral Pueblo peoples. According to anthropologists, current native tribes (Yaqui, Mayo, Opata, and Tarahumara) are descendants of the Casas Grandes culture.
During the 14th century, in the northeastern part of the state, nomadic tribes known as the Jornada hunted bison along the Rio Grande; they left numerous rock paintings throughout the northeastern part of the state. When the Spanish explorers reached this area they found their descendants, the Suma and Manso tribes. In the southern part of the state, in a region known as Aridoamerica, the Chichimeca people survived by hunting, gathering, and farming between AD 300 and 1300. The Chichimeca are the ancestors of the Tepehuan people.
Colonial Era
Nueva Vizcaya (New Biscay) was the first province of northern New Spain to be explored and settled by the Spanish. Around 1528, a group of Spanish explorers, led by Álvar Núñez Cabeza de Vaca, first entered the territory of what is now Chihuahua. The conquest of the territory lasted nearly one century and encountered fierce resistance from the Conchos tribe, but the desire of the Spanish Crown to transform the region into a bustling mining center led to a strong strategy to control the area.
[Image: Antonio de Deza y Ulloa, the founder of Chihuahua City.]
In 1562 Francisco de Ibarra headed a personal expedition in search of the mythical cities of Cibola and Quivira; he traveled through the present-day state of Chihuahua. Francisco de Ibarra is thought to have been the first European to see the ruins of Paquimé. In 1564 Rodrigo de Río de Loza, a lieutenant under Francisco de Ibarra, stayed behind after the expedition and found gold at the foot of the mountains of the Sierra Madre Occidental; in 1567 he founded the first Spanish city in the region, Santa Bárbara, by bringing 400 European families to the settlement. A few years later, in 1569, Franciscan missionaries led by Fray Agustín Rodríguez from the coast of Sinaloa and the state of Durango founded the first mission in the state in Valle de San Bartolomé (present-day Valle de Allende). Fray Agustín Rodríguez evangelized the native population until 1581. Between 1586 and 1588 an epidemic caused a temporary exodus of the small population in the territory of Nueva Vizcaya.
Santa Bárbara became the launching place for expeditions into New Mexico by Spanish conquistadors like Antonio de Espejo, Gaspar Castaño, Antonio Gutiérrez de Umaña, Francisco Leyba de Bonilla, and Vicente de Zaldívar. Several expeditions were led to find a shorter route from Santa Barbara to New Mexico. In April 1598, Juan de Oñate found a short route from Santa Barbara to New Mexico which came to be called El Paso del Norte (The Northern Pass). The discovery of El Paso Del Norte was important for the expansion of El Camino Real de Tierra Adentro (The Inner Land Royal Road) to link Spanish settlements in New Mexico to Mexico City; El Camino Real de Tierra Adentro facilitated transport of settlers and supplies to New Mexico and Camargo.
[Image: An 18th-century colonial aqueduct built in Chihuahua City.]
In 1631 Juan Rangel de Biezma discovered a rich vein of silver and subsequently established San Jose del Parral near the site. Parral remained an important economic and cultural center for the next 300 years. On December 8, 1659 Fray García de San Francisco founded the mission of Nuestra Señora de Guadalupe de Mansos del Paso del Río del Norte and founded the town El Paso Del Norte (present day Ciudad Juárez) in 1667.
The Spanish society that developed in the region replaced the sparse population of indigenous peoples. The absence of servants and workers forged the spirit of the northern people as self-reliant, creative people who defended their European heritage. In 1680 settlers from Santa Fe, New Mexico sought refuge in El Paso del Norte for twelve years after fleeing attacks from the Pueblo tribes, but returned to Santa Fe in 1692 after Diego de Vargas recaptured the city and its vicinity. In 1709, Antonio de Deza y Ulloa founded the state capital, Chihuahua City; shortly after, the city became the headquarters for the regional mining offices of the Spanish crown, known as the Real de Minas de San Francisco de Cuéllar in honor of the Viceroy of New Spain, Francisco Fernández de la Cueva Enríquez, Duke of Alburquerque and Marquis of Cuéllar.
Mexican War of Independence
[Image: A mural of Miguel Hidalgo y Costilla in the Government Palace of Chihuahua by Aarón Piña Mora.]
During the Napoleonic occupation of Spain, Miguel Hidalgo y Costilla, a Catholic priest of progressive ideas, declared Mexican independence in the small town of Dolores, Guanajuato on September 16, 1810 with a proclamation known as the "Grito de Dolores". Hidalgo built large support among intellectuals, liberal priests and many poor people. Hidalgo fought to protect the rights of the poor and indigenous population. He started on a march to the capital, Mexico City, but retreated back north when faced with the elite of the royal forces at the outskirts of the capital. He established a liberal government from Guadalajara, Jalisco, but was soon forced to flee north by the royal forces that recaptured the city. Hidalgo attempted to reach the United States and gain American support for Mexican independence. Hidalgo reached Saltillo, Coahuila, where he publicly resigned his military post and rejected a pardon offered by Viceroy Francisco Venegas in return for his surrender. A short time later, he and his supporters were captured by the royalist Ignacio Elizondo at the Wells of Baján (Norias de Baján) on March 21, 1811 and taken to the city of Chihuahua. Hidalgo had forced the Bishop of Valladolid, Manuel Abad y Queipo, to rescind the excommunication order he had circulated against him on September 24, 1810. Later, the Inquisition issued an excommunication edict on October 13, 1810 condemning Miguel Hidalgo as a seditionary, apostate, and heretic.
Hidalgo was turned over to the Bishop of Durango, Francisco Gabriel de Olivares, for an official defrocking and excommunication on July 27, 1811. He was then found guilty of treason by a military court and executed by firing squad on July 30 at 7 in the morning. Before his execution, he thanked his jailers, Private Soldiers Ortega and Melchor, in letters for their humane treatment. At his execution, Hidalgo placed his right hand over his heart to show the riflemen where they should aim. He also refused the use of a blindfold. His body, along with the bodies of Allende, Aldama and José Mariano Jiménez, was decapitated, and the heads were put on display on the four corners of the Alhóndiga de Granaditas in Guanajuato. The heads remained there for ten years, until the end of the Mexican War of Independence, to serve as a warning to other insurgents. Hidalgo's headless body was first displayed outside the prison but then buried in the Church of St Francis in Chihuahua. Those remains were transferred to Mexico City in 1824.
[Image: El Templo de San Francisco in Chihuahua.]
Hidalgo's death resulted in a political vacuum on the insurgent side until 1812. The royalist military commander, General Felix Calleja, continued to pursue rebel troops. Insurgent fighting evolved into guerrilla warfare, and eventually the next major insurgent leader, Jose Maria Morelos y Pavon, who had led rebel movements with Hidalgo, became head of the insurgents.
Hidalgo is hailed as the Father of the Nation even though it was Agustin de Iturbide and not Hidalgo who achieved Mexican Independence in 1821. Shortly after gaining independence, the day to celebrate it varied between September 16, the day of Hidalgo's Grito, and September 27, the day Iturbide rode into Mexico City to end the war. Later, political movements would favor the more liberal Hidalgo over the conservative Iturbide, so that eventually September 16, 1810 became the officially recognized day of Mexican independence. The reason for this is that Hidalgo is considered to be "precursor and creator of the rest of the heroes of the (Mexican War of) Independence." Hidalgo has become an icon for Mexicans who resist tyranny in the country. Diego Rivera painted Hidalgo's image in half a dozen murals. José Clemente Orozco depicted him with a flaming torch of liberty and considered the painting among his best work. David Alfaro Siqueiros was commissioned by San Nicolas University in Morelia to paint a mural for a celebration commemorating the 200th anniversary of Hidalgo's birth. The town of his parish was renamed Dolores Hidalgo in his honor and the state of Hidalgo was created in 1869. Every year on the night of 15–16 September, the president of Mexico re-enacts the Grito from the balcony of the National Palace. This scene is repeated by the heads of cities and towns all over Mexico. The remains of Miguel Hidalgo y Costilla lie in the column of the Angel of Independence in Mexico City. Next to it is a lamp lit to represent the sacrifice of those who gave their lives for Mexican Independence.
Constituent legislatures
[Image: Map of Chihuahua in 1824.]
In the constituent legislature, or convention, the conservative and liberal elements formed factions under the nicknames of Chirrines and Cuchas. The military entered as a third party. The elections for the first regular legislature were disputed, and it was not until May 1, 1826, that the body was installed. The liberals gained control and the opposition responded by fomenting a conspiracy. This was promptly stopped with the aid of informers, and more strenuous measures were taken against the conservatives. Extra powers were conferred on the Durango governor, Santiago Baca Ortiz, deputy to the first national congress and leader of the liberal party. (History of the North Mexican States and Texas, Vol. II, 1801–1889, San Francisco, The History Company, 1889, Chapter 24.)
González' rebellion
Opponents continued to plot against the new government. In March 1827, Lieutenant J.M. González proclaimed himself comandante general, arrested the governor, and dissolved the legislature. General Parras was sent to suppress the movement. Comandante general J. J. Ayestarán was replaced by José Figueroa. When elections failed, the government intervened in favor of the Yorkino party, which had elected Vicente Guerrero to the presidency.
Because of the general instability of the federal government during 1828, the installation of the new legislature did not take place until the middle of the following year. It was quickly dissolved by Governor Santiago de Baca Ortiz, who replaced it with one of a more pronounced Yorkino character. When Guerrero's liberal administration was overthrown in December, Gaspar de Ochoa aligned with Anastasio Bustamante and, in February 1830, organized an opposition group that arrested the new governor, F. Elorriaga, along with other prominent Yorkinos. He then summoned the legislature, which had been dissolved by Baca. The civil and military authorities were now headed by J. A. Pescador and Simón Ochoa.
Vicente Guerrero
The general features of the preceding occurrences also applied to Chihuahua, although in a modified form. The first person elected under the new constitution of 1825 was Simón Elías González, who, being in Sonora, was induced to remain there. José Antonio Arcé took his place as ruler in Chihuahua. In 1829, González became general commander of Chihuahua when his term of office on the west coast expired. Arcé was less of a Yorkino than his confrere of Durango. Although unable to resist the popular demand for the expulsion of the Spaniards, he soon quarreled with the legislature, which declared itself firmly for Guerrero; announcing his support of Bustamante's revolution, he suspended, in March 1830, eight members of that body, the vice-governor, and several other officials, and expelled them from the state. The course thus outlined was followed by Governor José Isidro Madero, who succeeded in 1830, associated with J. J. Calvo as general commander, stringent laws being issued against secret societies, which were supposed to be the mainspring of the anti-clerical feeling among liberals.
Durango and Bustamante
The anti-clerical feeling was widespread, and Durango supported the initial reaction against the government at Mexico. In May 1832, José Urrea, a rising officer, supported the restoration of President Pedraza. On July 20, Governor Elorriaga was reinstated, and Baca along with the legislative minority were brought back to form a new legislature, which met on September 1. Chihuahua showed no desire to imitate the revolutionary movement and Urrea prepared to invade the state. Comandante-general J.J.Calvo threatened to retaliate, and a conflict seemed imminent. The entry of General Santa Anna into Mexico brought calm, as the leaders waited for clarity.
Santa Anna
[Image: Santa Anna.]
Bishop José Antonio Laureano de Zubiría of Durango was banished for resisting the law relating to priests and other encroachments on the church; another joined the western states in a short-lived coalition for sustaining the federal system. Chihuahua adopted the Plan of Cuernavaca in July 1834, while President Valentín Gómez Farías was in power. Because the plan was not enforced, the commanding officer, Colonel J. I. Gutiérrez, declared the term of the legislature and governor expired on September 3.
At a convention of citizens called to select a new provisional ruler, Gutiérrez obtained the vote, with P. J. Escalante as his deputy and a council to guide the administration. Santa Anna ordered the reinstatement of Mendarozqueta as comandante general. Gutiérrez yielded, but Escalante refused to surrender office; demonstrations of support ensued, but Escalante yielded when troops were summoned from Zacatecas. A new election brought a new legislature and conforming governors. In September 1835 José Urrea, a federalist army officer, came to power.
Comandante general Simón Elías González was nominated governor, and military command was given to Colonel J. J. Calvo, whose firmness had earned well-merited praise. The state was in the midst of a war with the Apaches, which became the focus of all its energy and resources. After a review of the situation, Simón Elías González declared that the interests of the territory would be best served by uniting the civil and military power, at least while the campaign lasted. He resigned under opposition, but was renominated in 1837.
Mexican–American War
[Image: Battles of the Mexican–American War in Chihuahua.]
The state remained relatively calm compared to the rest of the country, owing to its close ties to the United States, until 1841. In 1843 the state government, anticipating the possibility of war, began to reinforce the defense lines along the political boundary with Texas. It sent supplies of weapons to fully equip the military and took steps to improve efficiency at the presidios. Later, the state organized the Regimen for the Defenders of the Border, made up of light cavalry, four squads of two brigades, and a small force of 14 men and 42 officials, at a cost of 160,603 pesos per year. During the early 1840s, private citizens took it upon themselves to stop the commercial caravans of supplies from the United States, but because the state was so far from the large suppliers in central Mexico, the caravans were allowed to continue in March 1844. Continuing to anticipate a war, the state legislature, by decree of July 11, 1846, enlisted 6,000 men to serve along the border; during that time Ángel Trías rose quickly to power by espousing zealous anti-American rhetoric. Trías took the opportunity to dedicate important state resources to gaining economic concessions from the people and loans from many municipalities in preparation to defend the state; he used all the money he received to equip and organize a large volunteer militia. Ángel Trías also took measures toward state self-sufficiency with regard to the militia, owing to the diminishing financial support from the federal government.
The United States Congress declared war on Mexico on May 13, 1846, after only a few hours of debate. Although President José Mariano Paredes's issuance of a manifesto on May 23 is sometimes considered the declaration of war, Mexico's Congress officially declared war on July 7. After the American invasion of New Mexico, Chihuahua sent 12,000 men led by Colonel Vidal to the border to stop the American military advance into the state. The Mexican forces, impatient to confront the American forces, advanced beyond El Paso del Norte about north along the Rio Grande. The first battle that Chihuahua fought was the Battle of El Bracito: the Mexican forces, consisting of 500 cavalry and 70 infantry, confronted a force of 1,100–1,200 Americans on December 25, 1846. The battle ended badly for the Mexican forces, which were forced to retreat back into the state of Chihuahua. By December 27, 1846, the American forces occupied El Paso del Norte. General Doniphan maintained camp in El Paso del Norte awaiting supplies and artillery, which he received in February 1847.
On February 8, 1847, Doniphan continued his march with 924 men, mostly from Missouri; he accompanied a train of 315 wagons of a large commercial caravan heading to the state capital. Meanwhile, the Mexican forces in the state had time to prepare a defense against the Americans. About north of the capital, where two mountain ranges join from east to west, lies the only pass into the capital; known as Sacramento Pass, this point is now part of present-day Chihuahua City. The Battle of Sacramento was the most important battle fought in the state of Chihuahua because it was the sole defense for the state capital. The battle ended quickly because of devastating defensive errors by the Mexican forces and ingenious strategic moves by the American forces. After their loss at the Battle of Sacramento, the remaining Mexican soldiers retreated south, leaving the city to American occupation. Almost 300 Mexicans were killed in the battle, and almost 300 were wounded. The Americans also confiscated large amounts of Mexican supplies and took 400 Mexican soldiers prisoner. American forces maintained an occupation of the state capital for the rest of the Mexican–American War.
[Image: Battle of the Sacramento River.]
The Treaty of Guadalupe Hidalgo, signed on February 2, 1848, by American diplomat Nicholas Trist and Mexican plenipotentiary representatives Luis G. Cuevas, Bernardo Couto, and Miguel Atristain, ended the war, gave the U.S. undisputed control of Texas, and established the U.S.–Mexican border at the Rio Grande. As news of the peace negotiations reached the state, new calls to arms flared among the people of the state. But when the Mexican officials in Chihuahua heard that General Price was heading back to Mexico from Santa Fe on February 8, 1848 with a large force comprising several companies of infantry, three companies of cavalry and one division of light artillery, Ángel Trías sent a message to Sacramento Pass to ask for the cession of the area, as they understood the war had concluded. General Price, misinterpreting this as a deception by the Mexican forces, continued to advance towards the state capital. On March 16, 1848 Price began negotiations with Ángel Trías, but the Mexican leader responded with an ultimatum to General Price. The American forces engaged the Mexican forces near Santa Cruz de los Rosales on March 16, 1848. The Battle of Santa Cruz de los Rosales was the last battle of the Mexican–American War, and it occurred after the peace treaty was signed. The American forces maintained control over the state capital for three months after the confirmation of the peace treaty. The American presence served to delay the possible secession of the state, which had been discussed at the end of 1847, and the state remained under United States occupation until May 22, 1848.
During the American occupation of the state, the number of Indian attacks was drastically reduced, but in 1848 the attacks resumed to such a degree that the Mexican officials had no choice but to resume military projects to protect Mexican settlements in the state. Through the next three decades the state faced constant attacks from indigenous peoples on Mexican settlements. After the occupation, the people of the state were worried about potential attacks from the hostile indigenous tribes north of the Rio Grande; as a result, by a decree of July 19, 1848, the state established 18 military colonies along the Rio Grande. The new military colonies were to replace the presidios as population centers to prevent future invasions by indigenous tribes; these policies remained prominent in the state until 1883. Eventually the state replaced the old security arrangements with a policy of forming militias in which every Mexican in the state capable of serving, between the ages of 18 and 55, could be enlisted, to fulfill the mandate of having six men defending for every 1,000 residents.
[Image: La Mesilla, a large area that was claimed by the state of Chihuahua.]
La Mesilla
The frontier counties of the state along the border with the United States expected protection from the federal government under Herrera and Arista, but were soon disappointed by the federal government's decision to deploy military forces to other areas of the country due to internal challenges in the state of Jalisco. Ángel Trías led a rebellion that successfully deposed the unpopular conservative Governor Cordero at the end of 1852.
Despite the efforts of strong political forces led by Ángel Trías, the state could not stop President Santa Anna from selling La Mesilla as part of the Gadsden Purchase on December 30, 1853 for 15 million USD. The sale was ratified in the United States on April 25, 1854 and signed by President Franklin Pierce, with final approval action taken by Mexico on June 8, 1854. The citizens of the area held strong anti-American sentiments and raided American settlers and travelers across the area.
The Reform War and the French Intervention
[Image: A mural by Piña in the Government Palace, honouring the liberators Abraham Lincoln, Benito Juárez and Simón Bolívar.]
The state united behind the Plan of Ayutla and ratified the new constitution in 1855. The state was able to survive the Reform War with minimal damage due to its large number of liberal political figures. The 1858 conservative movement did not succeed in the state, even after the successful military campaign of the conservative Zuloaga, whose 1,000 men occupied the cities of Chihuahua and Parral. In August 1859, Zuloaga and his forces were defeated by the liberal Orozco and his forces; Orozco soon after deposed the state governor, but had to flee to Durango two months later. In the late 1860s the conservative General Cajen briefly entered the state after his campaign through the state of Jalisco, helped establish conservative politicians, and drove out the liberal leaders Jesús González Ortega and José María Patoni. Cajen took possession of the state capital and established himself as governor; he brooked no delay in uniting a large force to combat the liberal forces, which he defeated in La Batalla del Gallo. Cajen attained several advantages over the liberals within the state, but soon lost his standing due to a strong resurgence of the liberal forces. The successful liberal leaders José María Patoni of Durango and J. E. Muñoz of Chihuahua quickly strengthened their standing by limiting the political rights of the clergy, implementing the presidential decree. The state elected General Luis Terrazas, a liberal leader, as governor; he would continue to fight small battles within the state to suppress conservative uprisings during 1861.
[Image: Museo Casa Juárez, a 19th-century building in downtown Chihuahua City that served as the de facto National Palace of Mexico.]
As a consequence of the Reform War, the federal government was bankrupt and could not pay its foreign debts to Spain, England, and France. On July 17, 1861, President Juárez decreed a two-year moratorium on payments of the foreign debt. Spain, England, and France did not accept the moratorium by Mexico; they united at the Convention of the Triple Alliance on October 31, 1861, in which they agreed to take possession of several customs stations within Mexico as payment. A delegation of the Triple Alliance arrived in Veracruz in December 1861. President Juárez immediately sent his Foreign Affairs Minister, Manuel Doblado, who was able to reduce the debts through the Pacto de Soledad (Soledad Pact). General Juan Prim of Spain persuaded the English delegation to accept the terms of the Pacto de Soledad, but the French delegation refused.
The liberal political forces maintained strong control over the state government until shortly after the French Intervention, which turned the tables in favor of the conservative forces once again. The intervention had serious repercussions for the state of Chihuahua. President Juárez, in an effort to organize a strong defense against the French, decreed a list of national guard units that every state had to contribute to the Ministry of War and the Navy; Chihuahua was responsible for inducting 2,000 men. Regaining power, Governor Luis Terrazas assigned the First Battalion of Chihuahua for integration into the national army led by General Jesús González Ortega; the battalion was deployed to Puebla. After the defeat of the army in Puebla, the Juárez administration was forced to abandon Mexico City; the president retreated further north, seeking refuge in the state of Chihuahua.
Under threat from the conservative forces, Governor Terrazas was deposed, and the state legislature proclaimed martial law in the state in April 1864, establishing Jesús José Casavantes as the new governor. In response, José María Patoni decided to march to Chihuahua with presidential support. Meanwhile, Maximilian von Habsburg, a younger brother of the Emperor of Austria, was proclaimed Emperor Maximilian I of Mexico on April 10, 1864, with the backing of Napoleon III and a group of Mexican conservatives. Before President Benito Juárez was forced to flee, Congress granted him an emergency extension of his presidency, which would go into effect in 1865 when his term expired and last until 1867. At the same time, the state liberals and conservatives compromised to allow the popular Ángel Trías to take the governorship; by this time the French forces had taken control over the central portions of the country and were making preparations to invade the northern states.
[Image: Overview of military actions.]
The French forces tried to subdue and capture the liberal government based in Saltillo. On September 21, 1864, José María Patoni and Jesús González Ortega lost to the French forces at the Battle of Estanzuelas; the supreme government led by President Juárez was forced to evacuate the city of Saltillo and relocate to Chihuahua. Juárez stopped in Ciudad Jiménez, Valle de Allende, and Hidalgo del Parral, in turn. He decreed Parral the capital of Mexico from October 2–5, 1864 (Sección en INEGI, Estado Chihuahua, municipio Hidalgo del Parral, localidad 0001, enero 7, 2007). Perceiving the threat from the advancing French forces, the president continued his evacuation through Santa Rosalía de Camargo and Santa Cruz de Rosales, and finally to Chihuahua City. On October 12, 1864, the people of the state gave President Juárez an overwhelmingly supportive reception, led by Governor Ángel Trías. On October 15, 1864 the city of Chihuahua was declared the temporary capital of Mexico.
After running imperial military affairs in the states of Coahuila and Durango, General Agustín Enrique Brincourt made preparations to invade the state of Chihuahua. On July 8, 1865 Brincourt crossed the Nazas River in northern Durango, heading toward Chihuahua. On July 22 Brincourt crossed the banks of the Río Florido into Ciudad Jiménez; one day later he arrived at Valle de Allende, where he sent Colonel Pyot with a garrison to take control of Hidalgo del Parral. Brincourt continued through Santa Rosalía de Camargo and Santa Cruz de Rosales. President Juárez remained in the state capital until August 5, 1865, when he left for El Paso del Norte (present-day Ciudad Juárez) due to evidence that the French were about to attack the city. On the same day, the President named General Manuel Ojinaga the new governor and placed him in charge of all the republican forces. Meanwhile, General Villagran surprised the imperial forces in control of Hidalgo del Parral; after a short two-hour battle, Colonel Pyot was defeated and forced to retreat. At the Battle of Parral, the French lost 55 men to the republican forces. On August 13, 1865, the French forces, with an estimated 2,500 men, arrived at the outskirts of Chihuahua City, and on August 15, 1865, General Brincourt defeated the republican forces, taking control of the state capital. Brincourt designated Tomás Zuloaga as Prefect of Chihuahua. Fearing the French would continue their campaign to El Paso del Norte, President Juárez relocated in August 1865 to El Carrizal, a secluded place in the mountains near El Paso del Norte ('Archivo Histórico de Localidades'). It would have been easy for the French forces to continue in pursuit of President Juárez across the border, but they feared altercations with American forces. General François Achille Bazaine ordered the French troops to retreat back to the state of Durango after reaching a point only one day's travel north of Chihuahua City. General Brincourt asked for 1,000 men to be left behind to help maintain control over the state, but his request was denied. After the death of General Ojinaga, the republican government placed General Villagran in charge of the fight against the imperial forces. The French left the state on October 29, 1865. President Juárez returned to Chihuahua City on November 20, 1865 and remained in the city until December 9, 1865, when he returned to El Paso del Norte. Shortly after the president left Chihuahua City, Terrazas was restored as governor of the state on December 11, 1865.
Maximilian was deeply dissatisfied with General Bazaine's decision to abandon the state capital of Chihuahua and immediately ordered Agustín B. Billaut to recapture the city. On December 11, 1865, Billaut with a force of 500 men took control of the city. By January 31, 1866 Billaut was ordered to leave Chihuahua, but he left behind 500 men to maintain control. At the zenith of their power, the imperialist forces controlled all but four states in Mexico; the only states to maintain strong opposition to the French were Guerrero, Chihuahua, Sonora, and Baja California.
thumb|right|200px|The Plaza de Armas and the Cathedral of the Holy Cross, Our Lady of Regla and St Francis of Assisi
President Juárez once again based his government in the state of Chihuahua and it served as the center for the resistance against the French invasion throughout Mexico. On March 25, 1866, a battle ensued in the Plaza de Armas in the center of Chihuahua City between the French imperial forces that were guarding the plaza and the Republican forces led by General Terrazas. Being completely caught off guard, the French imperial forces sought refuge by bunkering themselves in the Cathedral of the Holy Cross, Our Lady of Regla, and St Francis of Assisi and made it almost impossible to penetrate their defenses. General Terrazas then decided to fire a heavy artillery barrage with 8 kg cannonballs. The first cannon fired hit a bell in the tower of the church, instantly breaking it in half; soon after, 200 men of the imperial army forces surrendered. The republican forces had recovered control over the state capital. The bell in the church was declared a historical monument and can be seen today in the Cathedral. By April 1866, the state government had established a vital trading route from Chihuahua City to San Antonio, Texas; the government began to replenish their supplies and reinforce their fight against the Imperial forces.
General Aguirre moved to the deserts of the southeastern portion of the state and defeated the French forces in Parral, led by Colonel Cottret. By the middle of 1866, the state of Chihuahua was declared free of enemy control; Parral was the last French stronghold within the state. On June 17, 1866, President Juárez arrived in Chihuahua City and remained in the capital until December 10, 1866. During his two years in the state of Chihuahua, President Juárez passed ordinances regarding the rights of adjudication of property and nationalized the property of the clergy. The distance of the French forces and their allies allowed the Ministry of War, led by General Negrete, to reorganize the state's national guard into the Patriotic Battalion of Chihuahua, which was deployed to fight in the battle of Matamoros, Tamaulipas against the French. After a series of major defeats and an escalating threat from Prussia, France began pulling troops out of Mexico in late 1866. Disillusioned with the liberal political views of Maximilian, the Mexican conservatives abandoned him, and in 1867 the last of the Emperor's forces were defeated. Maximilian was sentenced to death by a military court; despite national and international pleas for amnesty, Juárez refused to commute the sentence. Maximilian was executed by firing squad on June 19, 1867.
Juárez Government
thumb|200px|right|Monument to Benito Juárez in Ciudad Juárez, Chihuahua
President Benito Juárez was re-elected in the general election of 1867 in which he received strong liberal support, especially in Chihuahua. Luis Terrazas was confirmed by the people of Chihuahua to be governor of the state. But soon after the election, President Juárez had another crisis on his hands; the Juárez administration was suspected of involvement in the assassination of the military chief José María Patoni, who was executed by General Canto in August 1868. General Canto turned himself over to Donato Guerra. Canto was sentenced to death, but his sentence was later changed to 10 years' imprisonment. The sense of injustice gave rise to a new rebellion in 1869 that threatened the federal government. In response, the Juárez administration took drastic measures by temporarily suspending constitutional rights, but the governor of Chihuahua did not support this action. Hostilities continued to increase, especially after the election of 1871, which was perceived to be fraudulent. A new popular leader arose among the rebels, Porfirio Díaz. The federal government was successful in quelling rebellions in Durango and Chihuahua. On July 18, 1872, President Juárez died from a heart attack; soon after, many of his supporters ceased the fighting. Peace returned to Chihuahua and the new government was led by Governor Antonio Ochoa (formerly a co-owner of the Batopilas silver mines) in 1873 after Luis Terrazas finished his term in 1872.
But the peace in the state did not last long; the elections of 1875 caused new hostilities. Ángel Trías led a new movement against the government in June 1875 and maintained control over the government until September 18, 1875, when Donato Guerra, the orchestrator of the Revolution of the North, was captured. Donato Guerra was assassinated in a suburb of Chihuahua City, where he had been incarcerated for conspiring with Ángel Trías. During October 1875 several locations were controlled by rebel forces, but the government finally regained control on November 25, 1875.
Porfiriato
thumb|left|175px|Porfirio Díaz in military uniform
After the death of President Benito Juárez in 1872, the first magistracy of the country was occupied by the vice-president, Sebastián Lerdo de Tejada, who called for new elections. Two candidates were registered: Lerdo de Tejada and General Porfirio Díaz, one of the heroes of the Battle of Puebla, which had taken place on May 5, 1862. Lerdo de Tejada won the election, but lost popularity after he announced his intent to run for re-election. On March 21, 1876, Don Porfirio Díaz rebelled against President Sebastián Lerdo de Tejada. The Plan of Tuxtepec defended the "No Re-election" principle. On June 2, 1876, the garrisons in the state of Chihuahua surrendered to the authority of General Porfirio Díaz; Governor Antonio Ochoa was arrested until all the Lerdista forces were suppressed throughout the state. Porfirio Díaz then helped Trías regain the governorship of the state of Chihuahua, allowing the Plan of Tuxtepec to be implemented. The victory of the Plan of Tuxtepec gave the interim presidency to José María Iglesias and later, as the only candidate, General Porfirio Díaz assumed the presidency on May 5, 1877.
During the first years of the Porfiriato (Porfirio Díaz Era), the Díaz administration had to combat several attacks from the Lerdista forces and the Apache. A new rebellion led by the Lerdista party was orchestrated from exile in the United States. The Lerdista forces were able to temporarily occupy the city of El Paso del Norte until mid-1877. During 1877 the northern parts of the state suffered through a spell of extreme drought, which was responsible for many deaths in El Paso del Norte.
thumb|200px|right|Palacio de Alvarado is the house of Pedro Alvarado Torres, one of the richest silver barons of Mexico during the Porfiriato.
The officials in Mexico City reduced the price of corn from six cents to two cents a pound. The northern portion of the state continued to decline economically, which led to another revolt led by G. Casavantes in August 1879; Governor Trías was accused of misappropriation of funds and inefficient administration of the state. Casavantes took the state capital and occupied it briefly; he was also successful in forcing Governor Trías into exile. Shortly afterwards, the federal government sent an entourage led by Treviño; Casavantes was immediately ordered to resign his position. Casavantes declared political victory as he was able to publicly accuse and depose Governor Trías. At the same time the states of Durango and Coahuila had a military confrontation over territorial claims and water rights; this altercation between the states required additional federal troops to stabilize the area. Later a dispute ensued again among the states of Coahuila, Durango, and Chihuahua over the mountain range area known as Sierra Mojada, when large deposits of gold ore were discovered. The state of Chihuahua officially submitted a declaration of protest in May 1880 that was amicably settled shortly after. Despite the difficulties at the beginning, Díaz was able to secure and stabilize the state, which earned the confidence and support of the people.
During the 1880s, the Díaz administration consolidated several government agencies throughout Mexico to control credit and currency by the creation of the Institution of Credit and Currency. Because Díaz had created such an effective centralized government, he was able to concentrate decision making and maintain control over the economic instability.
thumb|200px|The City Hall of Chihuahua is an example of the neoclassical architecture that was erected during the presidency of Porfirio Díaz.
The Díaz administration made political decisions and took legal measures that allowed the elite throughout Mexico to concentrate the nation's wealth by favoring monopolies. During this time, two-fifths of the state's territory was divided among 17 rich families that owned practically all of the arable land in Chihuahua. The state economy grew at a rapid pace during the Porfiriato; the economy in Chihuahua was dominated by agriculture and mining. The Díaz administration helped Governor Luis Terrazas by funding the Municipal Public Library in Chihuahua City and passing a federal initiative for the construction of the railroad from Chihuahua City to Ciudad Juárez. By 1881, the Central Mexican Railroad was completed, which connected Mexico City to Ciudad Juárez. In 1883 telephone lines were installed throughout the state, allowing communication between Chihuahua City and Aldama. By 1888 the telephone services were extended from the capital to the cities of Julimes, Meoqui, and Hidalgo del Parral; the telecommunication network in the state covered an estimated 3,500 kilometers. The need for laborers to construct the extensive infrastructure projects resulted in a significant Asian immigration, mostly from China. Asian immigrants soon became integral to the state economy by opening restaurants, small grocery stores, and hotels. By the end of the Terrazas term, the state experienced an increase in commerce, mining, and banking. When the banks were nationalized, Chihuahua became the most important banking state in Mexico.
Under Governor Miguel Ahumada, the education system in the state was unified and brought under tighter control by the state government, and the metric system was standardized throughout the state to replace the colonial system of weights and measures. On September 16, 1897, the Civilian Hospital of Chihuahua was inaugurated in Chihuahua City and became known as one of the best in the country. In 1901 the Heroes Theater (Teatro de los Héroes) opened in Chihuahua City. On August 18, 1904, Governor Terrazas was replaced by Governor Enrique C. Creel. From 1907 to 1911, the Creel administration succeeded in advancing the state's legal system, modernizing the mining industry, and raising public education standards. In 1908 the Chihuahuan State Penitentiary was built, and construction on the first large-scale dam project was initiated on the Chuviscar River. During the same time, the streets of Chihuahua City were paved and numerous monuments were built in Chihuahua City and Ciudad Juárez.
Mexican Revolution
200px|right|thumbnail|The government palace, built during the early 20th century, is now a museum.
Díaz created an effective centralized government that helped concentrate wealth and political power among the elite upper class, mostly criollo. The economy was characterized by the construction of factories, roads, dams, and better farms. The Díaz administration passed new land laws that virtually unraveled all the rights previously recognized and the land reforms passed by President Benito Juárez. No peasant or farmer could claim the land he occupied without formal legal title.
left|thumb|200px|Quinta Carolina is an hacienda owned by the Terrazas family.
A handful of families owned large estates (known as haciendas) and controlled the greater part of the land across the state while the vast majority of Chihuahuans were landless. The state economy was largely defined by ranching and mining. At the expense of the working class, the Díaz administration promoted economic growth by encouraging investment from foreign companies from the United Kingdom, France, Imperial Germany and the United States. The proletariat was often exploited, and found no legal protection or political recourse to redress injustices.
Despite the internal stability (known as the paz porfiriana), modernization, and economic growth in Mexico during the Porfiriato from 1876 to 1910, many across the state became deeply dissatisfied with the political system. When Díaz first ran for office, he committed to a strict “No Re-election” policy under which he disqualified himself from serving consecutive terms. Eventually backtracking on many of his initial political positions, Díaz became a de facto dictator. Díaz became increasingly unpopular due to brutal suppression of political dissidents by using the Rurales and manipulating the elections to solidify his political machine. The working class was frustrated with the Díaz regime due to the corruption of the political system that had increased the inequality between the rich and poor. The peasants felt disenfranchised by the policies that promoted the unfair distribution of land, where 95% of the land was owned by the top 5%.
The end of the Porfiriato came in 1910 with the beginning of the Mexican Revolution. Díaz had stated that Mexico was ready for democracy and he would step down to allow other candidates to compete for the presidency, but Díaz decided to run again in 1910 for the last time against Francisco I. Madero. During the campaign Díaz incarcerated Madero on election day in 1910. Díaz was announced the winner of the election by a landslide, triggering the revolution. Madero supporter Toribio Ortega took up arms with a group of followers at Cuchillo Parado, Chihuahua on November 10, 1910.McLynn, Frank. Villa and Zapata p. 24.Womack, John. Zapata and the Mexican Revolution p. 10.Johnson, William. Heroic Mexico p. 41.
thumbnail|left|180px|Pascual Orozco
In response to Madero's call to action, Pascual Orozco (a wealthy mining baron) and Chihuahua Governor Abraham González formed a powerful military union in the north, taking military control of several northern Mexican cities with other revolutionary leaders, including Pancho Villa. Against Madero's wishes, Orozco and Villa fought for and won Ciudad Juárez. After militias loyal to Madero defeated the Mexican federal army, on May 21, 1911, Madero signed the Treaty of Ciudad Juárez with Díaz. It required that Díaz abdicate his rule and be replaced by Madero. Insisting on a new election, Madero won overwhelmingly in late 1911, and he established a liberal democracy and received support from the United States and popular leaders such as Orozco and Villa. Orozco eventually became disappointed with Madero's government and led a rebellion against him. He organized his own army, called "Orozquistas"—also called the Colorados ("Red Flaggers")—after Madero refused to agree to social reforms calling for better working hours, pay and conditions. The rural working class, which had supported Madero, now took up arms against him in support of Orozco.
In March 1912, in Chihuahua, Gen. Pascual Orozco revolted. Immediately President Francisco Madero commanded Gen. Victoriano Huerta of the Federal Army, to put down the Orozco revolt. The governor of Chihuahua mobilized the state militia led by Colonel Pancho Villa to supplement General Huerta. By June, Villa notified Huerta that the Orozco revolt had been put down and that the militia would consider themselves no longer under Huerta's command and would depart. Huerta became furious and ordered that Villa be executed. Raúl Madero, Madero's brother, intervened to save Villa's life. Jailed in Mexico City, Villa fled to the United States.Friedrich Katz, The Life and Times of Pancho Villa 1998, p. 165. Madero's time as leader was short-lived, ended by a coup d'état in 1913 led by Gen. Victoriano Huerta; Orozco sided with Huerta, and Huerta made him one of his generals.Pascual Orozco : Faces of the Revolution : The Storm That Swept Mexico : PBS.
On March 26, 1913, Venustiano Carranza issued the Plan de Guadalupe, which refused to recognize Huerta as president and called for war between the two factions. Soon after the assassination of President Madero, Carranza returned to Mexico to fight Huerta, but with only a handful of comrades. However, by 1913 his forces had swelled into an army of thousands, called the División del Norte (Northern Division). Villa and his army, along with Emiliano Zapata and Álvaro Obregón, united with Carranza to fight against Huerta. In March 1914 Carranza traveled to Ciudad Juárez, which served as the rebellion's capital for the remainder of the struggle with Huerta. In April 1914 U.S. opposition to Huerta reached its peak, and a blockade cut off the regime's ability to resupply from abroad. Carranza, trying to keep his nationalistic credentials, threatened war with the United States. In his spontaneous response to U.S. President Woodrow Wilson, Carranza asked "that the president withdraw American troops from Mexico.”Carothers to Secretary of State, April 22, 1914, Wilson Papers, Ser. 2, as quoted in
thumb|right|Generals Obregon, Villa and Pershing pose after meeting at Ft. Bliss, Texas (Immediately behind Gen. Pershing is his aide, 1st Lt. George S. Patton Jr.).
The situation became so tense that war with the United States seemed imminent. On April 22, 1914, on the initiative of Felix A. Sommerfeld and Sherburne Hopkins, Pancho Villa traveled to Juárez to calm fears along the border and asked President Wilson's emissary George Carothers to tell "Señor Wilson" that he had no problems with the American occupation of Veracruz. Carothers wrote to Secretary William Jennings Bryan: "As far as he was concerned we could keep Vera Cruz [sic] and hold it so tight that not even water could get in to Huerta and . . . he could not feel any resentment". Whether trying to please the U.S. government or through the diplomatic efforts of Sommerfeld and Carothers, or maybe as a result of both, Villa stepped out from under Carranza’s stated foreign policy.Heribert von Feilitzsch, In Plain Sight: Felix A. Sommerfeld, Spymaster in Mexico, 1908 to 1914, Henselstone Verlag, Virginia, 2012, p. 359.
thumbnail|180px|Bronze statue of Villa in Chihuahua, Chihuahua
The uneasy alliance of Carranza, Obregón, Villa, and Zapata eventually led the rebels to victory.Profile of Venustiano Carranza - Venustiano Carranza Biography The fight against Huerta formally ended on August 15, 1914, when Álvaro Obregón signed a number of treaties in Teoloyucan in which the last of Huerta's forces surrendered to him and recognized the constitutional government. On August 20, 1914, Carranza made a triumphal entry into Mexico City. Carranza (supported by Obregón) was now the strongest candidate to fill the power vacuum and set himself up as head of the new government. This government successfully printed money, passed laws, etc.
Villa and Carranza had different political goals, causing Villa to become an enemy of Carranza. After Carranza took control in 1914, Villa and other revolutionaries who opposed him met at what was called the Convention of Aguascalientes. The convention deposed Carranza in favor of Eulalio Gutiérrez. In the winter of 1914 Villa's and Zapata's troops entered and occupied Mexico City. Villa was forced from the city in early 1915; he attacked the forces of Gen. Obregón at the Battle of Celaya and was badly defeated in the bloodiest battle of the revolution, with thousands dead. With the defeat of Villa, Carranza seized power. A short time later the United States recognized Carranza as president of Mexico. Even though Villa's forces were badly depleted by his loss at Celaya, he continued his fight against the Carranza government. Finally, in 1920, Obregón—who had defeated him at Celaya—reached an agreement with Villa to end his rebellion.
Public opinion pressured the U.S. government to bring Villa to justice for the raid on Columbus, New Mexico; U.S. President Wilson sent Gen. John J. Pershing and some 5,000 troops into Mexico in an unsuccessful attempt to capture Villa.Friedrich Katz, The Life and Times of Pancho Villa 1998, p. 569. It was known as the Punitive Expedition. After nearly a year of pursuing Villa, American forces returned to the United States. The American intervention had been limited to the western sierras of Chihuahua. Villa had the advantage of intimately knowing the inhospitable terrain of the Sonoran Desert and the almost impassable Sierra Madre mountains and always managed to stay one step ahead of his pursuers. In 1923 Villa was assassinated by a group of seven gunmen who ambushed him while he was sitting in the back seat of his car in Parral.
Modern
On February 6, 2010, former Governor José Reyes Baeza proposed to move the three State Powers (Executive, Legislative, and Judicial) from Chihuahua to Ciudad Juárez in order to face the insecurity problems in Ciudad Juárez, but that request was rejected by the State Legislature on February 12.
Geography
thumb|260px|right|Wintry landscape at Lake Arareco, in the Tarahumara Mountains.
The state of Chihuahua is the largest state in the country and is known as El Estado Grande (The Big State); it accounts for 12.6% of the land of Mexico. The state is landlocked, bordered by the states of Sonora to the west, Sinaloa to the southwest, Durango to the south, and Coahuila to the east, and by the U.S. states of Texas to the northeast and New Mexico to the north. The state is made up of three geologic regions: Mountains, Plains-Valleys, and Desert, which occur in large bands from west to east. Because of the different geologic regions there are contrasting climates and ecosystems.
left|thumb|200px|Cerro Mohinora is the highest point in Chihuahua
The main mountain range in the state is the Sierra Madre Occidental, reaching a maximum altitude of 10,826 ft (3,300 m) at Cerro Mohinora. Mountains, which include large coniferous forests, account for one third of the state's surface area. The climate in the mountainous regions varies. Chihuahua has more forests than any other state in Mexico, making the area a bountiful source of wood; the mountainous areas are also rich in minerals important to Mexico's mining industry. Precipitation and temperature in the mountainous areas depend on the elevation. Between the months of November and March snow storms are possible in the lower elevations and are frequent in the higher elevations. There are several watersheds located in the Sierra Madre Occidental that supply all of the water that flows through the state; most of the rivers eventually empty into the Río Grande. Temperatures in some canyons in the state reach over 100 °F in the summer, while the same areas rarely drop below 32 °F in the winter. Microclimates found in the heart of the Sierra Madre Occidental in the state could be considered tropical, and wild tropical plants have been found in some canyons. La Barranca del Cobre, or Copper Canyon, is a spectacular canyon system larger and deeper than the Grand Canyon; the canyon also contains Mexico's two tallest waterfalls: Basaseachic Falls and Piedra Volada. There are two national parks found in the mountainous area of the state: Cumbres de Majalca National Park and Basaseachic Falls National Park.
thumb|right|200px|Satellite image of the state of Chihuahua shows the varying terrain from the green alpine mountains in the southwest, to the steppe highlands in the center, to the desert in the east.
left|thumb|200px|Basaseachic Falls in Copper Canyon.
The plains at the foot of the Sierra Madre Occidental form an elongated mesa known as the Altiplanicie Mexicana, which exhibits a steppe climate and serves as a transition zone from the mountain climate in the western part of the state to the desert climate in the eastern side of the state. The steppe zone accounts for a third of the state's area, and it experiences pronounced dry and wet seasons. The pronounced rainy season in the steppe is usually observed in the months of July, August, and September. The steppe also experiences extreme temperatures that often reach over 100 °F in the summer and drop below 32 °F in the winter. The steppe zone is an important agricultural zone due to the extensive development of canals exploiting several rivers that flow down from the mountains. The steppe zone is the most populated area of the state.
The most important river in the state is Río Conchos which is the largest tributary to the Río Grande from the Mexican side; the river descends from the zenith of the Sierra Madre Occidental in the southwest part of the state and winds through the center of the state where the water is exploited in the steppe zone and it eventually empties into the Río Grande in the small desert town of Ojinaga.
The desert zone also accounts for about a third of the state's surface area. The Chihuahuan Desert is an international biome that also extends into the neighboring Mexican state of Coahuila and into the U.S. states of Texas and New Mexico. The desert zone has mainly flat topography with some small mountain ranges that run north to south. The climate within the desert varies slightly. The lower elevations of the desert zone are found in the north along the Rio Grande, which experience hotter temperatures in the summer and winter, while the southern portion of the desert zone experiences cooler temperatures due to its higher elevation. The Samalayuca dunes cover an area of about 150 km2; they are an impressive site of the Chihuahuan Desert and are protected by the state due to unique species of plants and animals.
Climate
thumb|right|200px|Namúrachi is in the semi-arid zone.
The climate in the state depends mainly on the elevation of the terrain. According to the Köppen climate classification, the state has five major climate zones. The Sierra Madre Occidental dominates the western part of the state; there are two main climates in this area: Subtropical Highland (Cfb) and Humid Subtropical (Cwa). There are some microclimates in the state due to the varying topography, mostly found in the western side of the state. The two best-known microclimates are: tropical savanna climate (Aw), in deep canyons located in the extreme southern part of the state, and continental Mediterranean climate (Dsb), in the extremely high elevations of the Sierra Madre Occidental. The satellite image to the right shows that the vegetation is much greener in the west because of the cooler temperatures and larger amounts of precipitation compared to the rest of the state.
In the far eastern part of the state the Chihuahuan Desert dominates due to low precipitation and extremely high temperatures; some areas of the eastern part of the state, such as the sand dunes of Samalayuca, are so dry that no vegetation is found. There are two distinctive climate zones found in the eastern part of the state: Hot Desert (BWh) and Cool Desert (BWk), which are differentiated by average annual temperature due to differences in elevation. There is a transition zone in the middle of the state between the two extremely different climates of the east and west; this zone is the steppe, characterized by a compromise between the juxtaposed climate zones.
thumb|left|200px|Köppen Climate Zones
thumb|right|200px|Altiplanicie Mexicana during the monsoon season.
thumbnail|200px|Chihuahua white pines amid snow.
right|thumb|200px|Dunas de Samalayuca a state protected area south of Ciudad Juárez.
Subtropical Highland (Cfb) most common at elevations above above sea level; this climate zone has warm summers reaching a maximum temperature of and summer lows of . Heavy rainstorms are observed from July to September. Winters are cold reaching a maximum low of and a maximum high of . During the winter months many snowstorms are observed with typically of snow per season.
Humid Subtropical (Cwa) climate is most common at elevations between above sea level; this climate zone has warm humid summers and an average summer temperature of . The summer average precipitation is , mostly in the months of: July, August, and September. From November to March there are many rainstorms and snowstorms caused by high elevation and prominent cold fronts. Winter temperatures can reach a low of .
Semi-arid climate or Steppe (BSk) is most common at elevations between above sea level; this climate zone has an annual average of and maximum temperatures above and lows reaching slightly below , with a wet season in the late summer and fall. Snowfall is rare but possible in the winter and frost is common from December to March. The annual average rainfall in the steppe climate zone is about .
Hot Desert (BWh) is most common at elevations below above sea level; this climate zone tends to have a hot summer at temperatures that often reach . Winter is warm, rarely dropping below . Precipitation averages 6–10 in. per year; most of the moisture falls during the monsoon of late summer.
Cool Desert (BWk) is most common at elevations below above sea level; this climate zone tends to have a mild summer, rarely reaching temperatures over . Winter weather varies from mild to cold depending on northern fronts, often dropping below . Precipitation averages 10–16 in. per year; most of the moisture falls during the monsoon of late summer.
Flora and fauna
thumb|left|200px|Cumbres de Majalca National Park is found in the transition zone from humid subtropical climate to semiarid climate where Pinus ponderosa can be found.
The state has great biological diversity due to the large number of microclimates and its dramatically varying terrain. The flora throughout the Sierra Madre Occidental mountain range varies with elevation. Pine (Pinus) and oak (Quercus) species are usually found at an elevation of 2,000 m (6,560 ft) above sea level. The most common species of flora found in the mountains are: Pinus, Quercus, Abies, Ficus, Vachellia, Ipomoea, Acacia, Lysiloma, Bursera, Vitex, Tabebuia, Sideroxylon, Cordia, Fouquieria, Pithecellobium. The state is home to one of the largest varieties of species of the genus Pinus in the world. The lower elevations have steppe vegetation with a variety of grasses and small bushes. Several species of Juniperus dot the steppe and the transition zone.
According to the World Wide Fund for Nature, the Chihuahuan Desert may be the most biologically diverse desert in the world, whether measured on species richness or endemism, although the region has been heavily degraded over time. Many native species have been replaced with creosote shrubs. The most common desert flora in the state includes: Agave, Larrea, Prosopis, Fouquieria, Dasylirion, Yucca, Poaceae, Lophophora, Opuntia, Echinocereus, Baileya, Chilopsis, Eucnide, and Hylocereus.
thumb|200px|right|American bison Bison bison near Chihuahua City.
The fauna in the state is just as diverse as the flora and varies greatly due to the large contrast in climates. In the mountain zone of the state the most observed mammals are: Mexican fox squirrel (Sciurus nayaritensis), antelope jackrabbit (Lepus alleni), raccoon (Procyon lotor), hooded skunk (Mephitis macroura), wild boar (Sus scrofa), collared peccary (Pecari tajacu), white-tailed deer (Odocoileus virginianus), mule deer Odocoileus hemionus, American bison Bison bison, cougar (Puma concolor), eastern cottontail Sylvilagus floridanus, North American porcupine Erethizon dorsatum, bobcat Lynx rufus, Mexican wolf Canis lupus baileyi, and coyote Canis latrans. American black bear Ursus americanus is also found but in very small numbers. The Mexican wolf, once abundant, has been extirpated. The main cause of degradation has been grazing. Although there are many reptilian species in the mountains, the most observed species include: Northern Mexican pine snake, Pituophis deppei jani, Texas horned lizard (Phrynosoma cornutum), rock rattlesnake (Crotalus lepidus), black-tailed rattlesnake (Crotalus molossus), and plateau tiger salamander Ambystoma velasci, one of possibly many amphibians to be found in the mountains.
The Chihuahuan Desert is home to a diverse ecosystem which is home to a large variety of mammals. The most common mammals in the desert include: Desert cottontail Sylvilagus audubonii, black-tailed jackrabbit Lepus californicus, hooded skunk Mephitis macroura, cactus mouse Peromyscus eremicus, swift fox Vulpes velox, white-throated woodrat Neotoma albigula, pallid bat Antrozous pallidus, and coyote Canis latrans. The most observed reptiles in the desert include: Mohave rattlesnake Crotalus scutulatus, twin-spotted rattlesnake Crotalus pricei, prairie rattlesnake Crotalus viridis, ridge-nosed rattlesnake Crotalus willardi, whip snake Masticophis flagellum, New Mexico whiptail Cnemidophorus neomexicanus, and red-spotted toad Bufo punctatus.
The state is also a host to a large population of birds which include endemic species and migratory species: greater roadrunner Geococcyx californianus, cactus wren Campylorhynchus brunneicapillus, Mexican jay Aphelocoma ultramarina, Steller's jay Cyanocitta stelleri, acorn woodpecker Melanerpes formicivorus, canyon towhee Pipilo fuscus, mourning dove Zenaida macroura, broad-billed hummingbird Cynanthus latirostris, Montezuma quail Cyrtonyx montezumae, mountain trogon Trogon mexicanus, turkey vulture Cathartes aura, and golden eagle Aquila chrysaetos. Trogon mexicanus is an endemic species found in the mountains in Mexico; it is considered an endangered species and has symbolic significance to Mexicans.http://www.mexicodesconocido.com.mx/notas/7235-Candame%F1a-(Chihuahua)
Flora and fauna of Chihuahua (image gallery): Cynomys ludovicianus, Felis concolor, Aphelocoma wollweberi, Bison bison, Aquila chrysaetos, Meleagris gallopavo, Crotalus scutulatus, Antilocapra americana, Ursus americanus, Odocoileus hemionus, Populus tremuloides, Opuntia engelmannii, Agave palmeri, Ariocarpus fissuratus, Pinus engelmannii.
Demography
According to the census by the Instituto Nacional de Estadística y Geografía (INEGI) in 2005, the state population is 3,241,444, making the state the 11th most populated state in Mexico. The census recorded 1,610,275 men and 1,631,169 women. (INEGI: Población total por entidad federativa según sexo, 2000 y 2005) The median age of the population is 25 years. (INEGI: Edad mediana por entidad federativa según sexo, 2000 y 2005) The northern state is placed seventh in the nation regarding quality of life and sixth in terms of life expectancy, at 75.2 years of age.
During the period from 2000 to 2005 it is estimated that 49,722 people left the state for the United States. Some 82,000 people are thought to have immigrated to the state from 2000 to 2005, mainly coming from Veracruz (17.6%), the United States (16.2%), Durango (13.2%), Coahuila (8.0%) and Chiapas (4.5%). It is believed that a large number of undocumented immigrants from Central and South America live in the state, mainly settling in Ciudad Juárez. According to the 2005 census, the population grew 1.06% from 2000 to 2005. The state has an uneven settlement of people and the lowest population density of any Mexican state; according to the 2005 census there were 12 people per km2. (INEGI: Densidad de población por entidad federativa, 2000) Of all the 3,241,444 people in the state, two-thirds (2,072,129) live in the cities of Ciudad Juárez and Chihuahua. Only three other cities have populations over 100,000: Parral 101,147, Cuauhtémoc 105,725, and Delicias 108,187.
Ethnic Groups
thumb|175px|right|Indigenous Tribes
thumb|200px|left|Tarahumara women selling artisanal goods.
thumb|right|175px|Municipality Population Density Data Source: INEGIhttp://www.inegi.org.mx/sistemas/biblioteca/detalle.aspx?c=16632&upc=702825494384&s=est&tg=0&f=2&pf=Pob
The last census in Mexico that asked for an individual's race, which was taken in 1921, indicated that 50.09% of the population identified as Mestizo (mixed Amerindian and European descent). The second-largest group was whites at 36.33% of the population. The third-largest group was the "pure indigenous" population, constituting 12.76% of the population. The remaining 0.82% of the population of Chihuahua was considered "other", i.e., neither Mestizo, indigenous, nor white.http://www.somosprimos.com/schmal/schmal.htm The most important indigenous tribes of the state of Chihuahua are:
Tarahumara: The largest ethnic group of indigenous people in the state. They call themselves Rarámuri, which means "Barefoot Runner". They are famous for their endurance in running long distances. They live in large areas of the Sierra Madre Occidental. Many have migrated to the large cities of the state mainly for economic incentives.http://www.cdi.gob.mx/dmdocuments/tarahumaras.pdf
Tepehuan Del Norte: A tribe linguistically differentiated from the Tepehuan in the state of Durango. The tribe lives near the small towns of Guadalupe y Calvo and Baborigame.
Guarijío: A small tribe linguistically differentiated from the other tribes of the state. Little is known about these indigenous tribes except that they live near the small villages of Chínipas and Uruachi.
Pima: A large ethnic group that lives across extensive areas of northwestern Mexico and southwestern United States. The population of the tribe in the state is small, mostly around the town of Temósachi. Although all the tribe speaks the same language, variant dialects have been discovered between different settlements.http://www.chihuahua.gob.mx/atach2/codesoypc/uploads/Lecturas%20de%20Pol%C3%ADtica%20Social/Etnias%20Iind%C3%ADgenas/Guarij%C3%ADos.pdf
Religion
thumb|left|200px|Plautdietsch speaking Mennonite girl in Cuauhtémoc, Chihuahua.
Although the great majority of residents of the state of Chihuahua are Catholics, there is a large diversity of religions within the state. There are many apostolic churches, Mormon wards, and large Mennonite communities. Those aged 5 years and older report the following religious affiliations: 84.6% are Catholic; 7.1% are Protestant; 2.0% are nondenominational; 5.1% are atheist. Compared to most of Mexico, the state has a higher percentage of Protestants. (INEGI: Volumen y porcentaje de la población de 5 y más años sin religión por entidad federativa, 2000)
During the Mexican Revolution, Álvaro Obregón invited a group of Canadian German-speaking Mennonites to resettle in Mexico. By the late 1920s, some 7,000 had immigrated to Chihuahua State and Durango State, almost all from Canada, only a few from the U.S. and Russia. (Harry Leonard Sawatzky: Sie suchten eine Heimat - deutsch-mennonitische Kolonisierung in Mexico 1922-1984, Marburg 1986, p. 68.) Today, Mexico accounts for about 42% of all Mennonites in Latin America. Mennonites in the country stand out because of their fair skin, hair, and eyes. They are a largely insular community that speaks a form of German and wear traditional clothing. They own their own businesses in various communities in Chihuahua, and account for about half of the state's farm economy, excelling in cheese production.
Main Cities
The state has one city with a population exceeding one million: Ciudad Juárez. Ciudad Juárez is ranked the eighth most populous city in the country, and Chihuahua City is ranked the 16th most populous in Mexico. Chihuahua (along with Baja California) is the only state in Mexico to have two cities ranked in the top 20 most populated. El Paso and Ciudad Juárez comprise one of the largest binational metropolitan areas in the world, with a combined population of 2.4 million. Ciudad Juárez is one of the fastest-growing cities in the world despite being "the most violent zone in the world outside of declared war zones".http://www.chron.com/disp/story.mpl/breaking/6679334.html The Federal Reserve Bank of Dallas reported that in Ciudad Juárez "the average annual growth over the 10-year period 1990–2000 was 5.3 percent. Juárez experienced much higher population growth than the state of Chihuahua and than Mexico as a whole".http://www.dallasfed.org/research/busfront/bus0102.html
Chihuahua City has one of the highest literacy rates in the country at 98%; 35% of the population is aged 14 or below, 60% 15-65, and 5% over 65. The growth rate is 2.4%.
An estimated 76.5% of the population of the state of Chihuahua lives in cities (INEGI: Cuadro resumen), which makes the state one of the most urbanized in Mexico.
thumb|center|900px|A panoramic view of Ciudad Juárez and El Paso, Texas from the north. The Hueco Mountains can be seen toward the east; the Juarez mountains of Mexico can be seen to the south (right of the image).
Education
thumb|right|200px|Quinta Gameros was built in 1907 as a private residence and is now part of the Universidad Autónoma de Chihuahua Campus.
According to the Instituto Nacional de Estadística, Geografía e Informática (INEGI), 95.6% of the population over the age of 15 could read and write Spanish, and 97.3% of children of ages 8–14 could read and write Spanish. An estimated 93.5% of the population aged 6–14 attends an institution of education. An estimated 12.8% of residents of the state have obtained a college degree. ("Perfil Sociodemográfico de Chihuahua", Conteo de Población y Vivienda 2005, Instituto Nacional de Estadística, Geografía e Informática, pp. 32–43, ISBN 978-970-13-4992-2) Average schooling is 8.5 years, which means that in general the average citizen over 15 years of age has gone as far as a second year in secondary education.
Institutions of higher education include:
Instituto Tecnológico de Chihuahua
Instituto Tecnológico de Chihuahua II
Universidad Autónoma de Chihuahua
Instituto Tecnológico y de Estudios Superiores de Monterrey Campus Chihuahua
Universidad La Salle
Universidad Tecnológica de Chihuahua
Government
120px|left|thumbnail|The state legislature
The current government of the state was established officially by the Political Constitution of the United Mexican States in 1917. The state government is divided into three branches: the legislative branch, the judicial branch, and the executive branch. The government is centrally located in the state capital Chihuahua City.
The legislative branch consists of an elected assembly of representatives that forms the state congress. The congress is composed of 33 deputies, of whom 22 are directly elected to represent each of the 22 districts in the state. In addition, 11 deputies are elected by a system of proportional representation through a list of registered political party members. Deputies are elected every three years and cannot be reelected consecutively.
The judicial branch is led by the Supreme Tribunal of Justice which is constituted of 15 magistrate judges. The judges are appointed by the governor and approved by the state congress. The executive branch is headed by the governor of the state, who is elected for one term of six years on the fourth day of October every election year. Governors are not eligible to be reelected due to constitutional one-term limitation.
The state is represented at the federal level in the Congress of the Union by three senators and nine deputies (representatives). The deputies serve three-year terms and are elected in federal elections. The senators serve six-year terms and are elected in federal elections.
Administrative divisions
Chihuahua is subdivided into 67 municipios (municipalities).
Economy
thumbnail|right|Copachisa is an industrial design and construction company based in the city of Chihuahua, Mexico.
The state has the 12th-largest state economy in Mexico, accounting for 2.7% of the country's GDP (INEGI). Chihuahua has the fifth-highest manufacturing GDP in Mexico and ranks second for the most factories funded by foreign investment in the country. The state had an estimated 396 billion pesos (31.1 billion dollars) of annual GDP. According to official federal statistical studies, the service sector accounted for the largest portion of the state economy at 59.28%; the manufacturing and industrial sector is estimated to account for 34.36% of the state's GDP, with the agricultural sector accounting for 6.36%. The manufacturing sector was the principal destination for foreign investment in the state, followed by the mining sector. In 2011, the state received approximately 884 million dollars in remittances from the United States, which was 4.5% of all remittances from the United States to Mexico.
thumbnail|left|Naica Mine is known for its extraordinary selenite crystals and is a major source of lead, zinc, and silver operated by Industrias Peñoles.
During the 1990s, after NAFTA was signed, industrial development grew rapidly with foreign investment. Large factories known as maquiladoras were built to export manufactured goods to the United States and Canada. Today, most of the maquiladoras produce electronics, automobile, and aerospace components. There are more than 406 companies operating under the federal IMMEX or Prosec program in Chihuahua. A large portion of the manufacturing sector consists of 425 factories divided among 25 industrial parks, accounting for 12.47% of the maquiladoras in Mexico and employing 294,026 people in the state. While export-driven manufacturing is one of the most important components of the state's economy, the industrial sector is quite diverse and can be broken down into several sectors: electronics, agro-industrial, wood-based manufacturing, mineral, and biotech. Similar to the rest of the country, small businesses continue to be the foundation of the state's economy, employing the largest portion of the population.
thumb|right|The dairy industry is an important part of the agriculture sector of the economy in the state.
The state's economy employed 786,758 people, which accounted for 3.9% of the country's workforce, with an annual GDP per capita of 136,417 pesos (12,338 dollars). The average employee wage in Chihuahua is approximately 193 pesos per day. The minimum wage in the state is 61.38 pesos (4.66 dollars) per day, except for the municipalities of Guadalupe, Ciudad Juárez, and Praxedis G. Guerrero, which have a minimum wage of 64.76 Mexican pesos (4.92 dollars).
Agriculture is a relatively small component of the state's economy and varies greatly due to the varying climate across the state. The state ranked first in Mexico for the production of the following crops: oats, chile verde, cotton, apples, pecans, and membrillo. The state has an important dairy industry with large milk processors throughout the state. Delicias is home to Alpura, the second-largest dairy company in Mexico. The state has a large logging industry, ranking second in oak and third in pine in Mexico. The mining industry is small but continues to produce large amounts of minerals. The state ranked first in the country for the production of lead, with 53,169 metric tons. Chihuahua ranked second in Mexico for zinc at 150,211 metric tons, silver at 580,271 kg, and gold at 15,221.8 kg.
See also
Chihuahuan Desert
Los Medanos, the Samalayuca Dune Fields
Geography of Mexico
Chihuahua, a dog breed named after the state
Indigenous peoples of Mexico
Casas Grandes
References
External links
Chihuahua state government
Secretariat of Industrial Development of Chihuahua State Government
Chihuahua's municipal governments
Chihuahua photos
Encyclopaedia Britannica, Chihuahua
Chihuahuan Frontier
Category:States of Mexico
Category:Mexican Plateau states
Category:Northwestern Mexico
Category:States and territories established in 1824 | 23,962,301 | 2017-01 |
Database | A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views, and other objects.
The data are typically organized to model aspects of reality in a way that supports processes requiring information, such as modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.
A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, MongoDB, MariaDB, Microsoft SQL Server, Oracle, Sybase, SAP HANA, and IBM DB2. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one DBMS. Database management systems are often classified according to the database model that they support; the most popular database systems since the 1980s have all supported the relational model as represented by the SQL language. Sometimes a DBMS is loosely referred to as a 'database'.
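As a rough illustration of that interoperability idea (a minimal sketch, not ODBC or JDBC themselves), Python's DB-API convention plays a similar role: driver modules for different DBMSs expose the same connect/cursor/execute interface, so application code written against one backend can often be pointed at another. The example below uses the built-in sqlite3 module as a stand-in backend and the hotel-room scenario mentioned above; the table and column names are purely illustrative.

    import sqlite3  # built-in DB-API driver, used here as a stand-in for any SQL DBMS

    # The code below relies only on the generic DB-API shape (connect, cursor,
    # execute, fetch); a different driver module with the same interface could
    # point the same statements at another relational DBMS.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE room (number INTEGER PRIMARY KEY, vacant INTEGER)")
    cur.executemany("INSERT INTO room VALUES (?, ?)", [(101, 1), (102, 0), (103, 1)])
    conn.commit()

    cur.execute("SELECT number FROM room WHERE vacant = 1")  # standard SQL
    print([row[0] for row in cur.fetchall()])                # -> [101, 103]
    conn.close()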
Terminology and overview
Formally, a "database" refers to a set of related data and the way it is organized. Access to this data is usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index). This article is concerned only with databases where the size and usage requirements necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data, which can be classified into four main functional groups (a brief sketch follows the list):
Data definition – Creation, modification and removal of definitions that define the organization of the data.
Update – Insertion, modification, and deletion of the actual data.
Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
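A minimal sketch of how the four groups above surface in practice, using Python's built-in sqlite3 module; the table and statements are illustrative only, and the administration group appears just as a comment because SQLite has no user management:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Data definition: create (and later alter or drop) the structures that hold data.
    cur.execute("CREATE TABLE guest (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    # Update: insert, modify and delete the actual data.
    cur.execute("INSERT INTO guest (name) VALUES ('Ada')")
    cur.execute("UPDATE guest SET name = 'Ada Lovelace' WHERE id = 1")
    cur.execute("DELETE FROM guest WHERE id = 999")  # deletes nothing here; shown for completeness

    # Retrieval: return stored data, possibly reordered or combined with other data.
    cur.execute("SELECT id, name FROM guest ORDER BY name")
    print(cur.fetchall())

    # Administration (users, security, backups, tuning) is expressed differently in each
    # DBMS, e.g. with GRANT/REVOKE statements in server systems; SQLite omits it entirely.
    conn.commit()
    conn.close()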
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Before the inception of the Structured Query Language (SQL), data retrieved from databases was often disparate, redundant and disorderly, since there was no standard method to fetch it and arrange it in a concrete structure.
Since DBMSs comprise a significant economical market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
Applications
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples of database applications include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
General-purpose and special-purpose DBMSs
A DBMS has evolved into a complex software system and its development typically requires thousands of human years of development effort. Some general-purpose DBMSs such as Adabas, Oracle and DB2 have been undergoing upgrades since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the complexity. However, the fact that their development cost can be spread over a large number of users means that they are often the most cost-effective approach. However, a general-purpose DBMS is not always the optimal solution: in some cases a general-purpose DBMS may introduce unnecessary overhead. Therefore, there are many examples of systems that use special-purpose databases. A common example is an email system that performs many of the functions of a general-purpose DBMS such as the insertion and deletion of messages composed of various items of data or associating messages with a particular email address; but these functions are limited to what is required to handle email and don't provide the user with all of the functionality that would be available using a general-purpose DBMS.
Many other databases have application software that accesses the database on behalf of end-users, without exposing the DBMS interface directly. Application programmers may use a wire protocol directly, or more likely through an application programming interface. Database designers and database administrators interact with the DBMS through dedicated interfaces to build and maintain the applications' databases, and thus need some more knowledge and understanding about how DBMSs operate and the DBMSs' external interfaces and tuning parameters.
History
Following the technology progress in the areas of processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. The development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model, epitomized by IBM's IMS system, and the CODASYL model (network model), implemented in a number of products such as IDMS.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the top DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object-relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key-value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
1960s, navigational DBMS
thumb|280px|Basic structure of navigational CODASYL database model
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the "CODASYL approach", and soon a number of commercial products based on this approach entered the market.
The CODASYL approach relied on the "manual" navigation of a linked data set which was formed into a large network. Applications could find records by one of three methods (a minimal sketch illustrating these access paths follows the list below):
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
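The three access paths can be illustrated with a small hypothetical sketch in plain Python; this is an in-memory toy, not an actual CODASYL system, and all record keys and set names are invented for the example.

# Hypothetical sketch of CODASYL-style navigational access (not a real DBMS).
# Records live in a network of explicit links; the application "navigates" them.

records = {}  # CALC-key access: a hash table from key to record

def store(key, data):
    records[key] = {"key": key, "data": data, "sets": {}}

def connect(owner_key, set_name, member_key):
    """Link a member record into a named set owned by another record."""
    records[owner_key]["sets"].setdefault(set_name, []).append(member_key)

# 1. Access by primary (CALC) key, implemented here by Python's dict hashing
store("C42", "customer Ada")
store("O1", "order: 3 widgets")
store("O2", "order: 1 gizmo")

# 2. Navigating a set relationship from owner record to member records
connect("C42", "orders", "O1")
connect("C42", "orders", "O2")
for member in records["C42"]["sets"]["orders"]:
    print("via set:", records[member]["data"])

# 3. Scanning all the records in a sequential order
for rec in records.values():
    print("via scan:", rec["data"])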
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a very straightforward query language. In the end, however, CODASYL was very complex and required significant training and effort to produce useful applications.
IBM also had their own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use.
1970s, relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables (or relations), with optional elements being moved out of the main table to where they would take up room only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing whatever maintenance needed to present a table view to the application/user.
[Image: In the relational model, records are "linked" using virtual keys not stored in the database but defined as needed between the data contained in the records.]
The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other entities in what is known as one-to-many relationship, like a traditional hierarchical model, and many-to-many relationship, like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the application requires.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This simple "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.
Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.
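A minimal sketch of this normalization and key-based "re-linking", using Python's built-in sqlite3 module purely for illustration (the table and column names are invented, and SQL itself only arrived later, as described below): the optional phone numbers live in their own table, and a single set-oriented query reassembles the related data without application-level looping.

import sqlite3

# A minimal sketch of the relational approach: optional data (phone numbers)
# is moved into its own normalized table and re-linked through a key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (login TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE phones (login TEXT REFERENCES users(login), number TEXT);
""")
conn.execute("INSERT INTO users VALUES ('ecodd', 'Edgar Codd')")
conn.execute("INSERT INTO phones VALUES ('ecodd', '555-0100')")
conn.execute("INSERT INTO phones VALUES ('ecodd', '555-0199')")
# No row is created in 'phones' for users who never supplied a number.

# A single set-oriented query re-links the related records by their key.
rows = conn.execute("""
    SELECT users.name, phones.number
    FROM users JOIN phones ON users.login = phones.login
""").fetchall()
print(rows)   # [('Edgar Codd', '555-0100'), ('Edgar Codd', '555-0199')]
conn.close()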
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System.MICRO Information Management System (Version 5.0) Reference Manual, M.A. Kahn, D.L. Rumelhart, and B.L. Bronson, October 1977, Institute of Labor and Industrial Relations (ILIR), University of Michigan and Wayne State University The system remained in production until 1998.
Integrated approach
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).
Late 1970s, SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented on most other DBMSs.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation."Interview with Wayne Ratliff. The FoxPro History. Retrieved on 2013-07-12. dBASE was one of the top-selling software titles in the 1980s and early 1990s.
1990s, object-oriented
The 1990s, along with a rise in object-oriented programming, saw changes in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields.Development of an object-oriented DBMS; Portland, Oregon, United States; Pages: 472–482; 1986; ISBN 0-89791-204-7 The term "object-relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.
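The mismatch, and the kind of mapping an ORM automates, can be sketched in a few lines of Python; this is a hand-written mapping with invented class and table names, not the API of any particular ORM library.

import sqlite3
from dataclasses import dataclass

@dataclass
class Person:          # an in-memory object, as the program sees the data
    login: str
    name: str
    age: int

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (login TEXT PRIMARY KEY, name TEXT, age INTEGER)")

def save(p: Person):
    # "Mapping" the object's attributes onto a flat table row
    conn.execute("INSERT INTO person VALUES (?, ?, ?)", (p.login, p.name, p.age))

def load(login: str) -> Person:
    # Rebuilding the object from the stored row
    row = conn.execute("SELECT login, name, age FROM person WHERE login = ?",
                       (login,)).fetchone()
    return Person(*row)

save(Person("ada", "Ada Lovelace", 36))
print(load("ada"))     # Person(login='ada', name='Ada Lovelace', age=36)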
2000s, NoSQL and NewSQL
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in enterprise database management, where XML is being used as the machine-to-machine data interoperability standard. XML database management systems include the commercial software MarkLogic and Oracle Berkeley DB XML, and the free-to-use Clusterpoint Distributed XML/JSON Database. All are enterprise software database platforms and support industry-standard ACID-compliant transaction processing with strong database consistency characteristics and a high level of database security.
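As a toy illustration of querying by document attributes, the following sketch uses Python's standard xml.etree module on an in-memory document; a real XML DBMS would evaluate similar XPath or XQuery expressions over persistent, indexed documents, and the element names here are invented.

import xml.etree.ElementTree as ET

# Toy illustration of querying by XML attributes (a real XML DBMS would use
# XQuery/XPath over persistent, indexed documents).
doc = ET.fromstring("""
<orders>
  <order id="1" status="shipped"><item>widget</item></order>
  <order id="2" status="open"><item>gizmo</item></order>
</orders>
""")
for order in doc.findall("order[@status='shipped']"):
    print(order.get("id"), order.find("item").text)   # 1 widget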
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally. The most popular NoSQL systems include MongoDB, Couchbase, Riak, Memcached, Redis, CouchDB, Hazelcast, Apache Cassandra, and HBase, which are all open-source software products.
In recent years, there has been a high demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system. Such databases include ScaleBase, Clustrix, EnterpriseDB, MemSQL, NuoDB, and VoltDB.
Research
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, and related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
Examples
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment. SAP HANA is a prominent in-memory database platform. By May 2012, HANA was able to run on servers with 100 TB of main memory powered by IBM, and the company's co-founder claimed that the system was big enough to run the eight largest SAP customers.
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
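A minimal sketch of such a trigger, using SQLite through Python's sqlite3 module (the tables and the rule are invented for the example): every update of a balance automatically appends an audit record.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER,
                            new_balance INTEGER);

    -- An "active" rule: whenever a balance changes, record it automatically.
    CREATE TRIGGER log_balance_change
    AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
print(conn.execute("SELECT * FROM audit_log").fetchall())   # [(1, 100, 250)]
conn.close()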
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are both developed by programmers and later maintained and utilized by (application's) end-users through a web browser and Open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
A deductive database combines logic programming with a relational database, for example by using the Datalog language.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.Graves, Steve. "COTS Databases For Embedded Systems", Embedded Computing Design magazine, January 2007. Retrieved on August 13, 2008.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym to federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows one to model, store, and retrieve (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ)Argumentation in Artificial Intelligence by Iyad Rahwan, Guillermo R. Simari is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business' customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store data with multidimensional features. The queries on such data include location-based queries, like "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
Design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity-relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organisation, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modelling notation used to express that design.)
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like. This is often called physical database design. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
Models
thumb|480px|Collage of five types of database models
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object-relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Associative model
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
External, conceptual, and internal views
[Image: Traditional view of data. Source: itl.nist.gov (1993), Integration Definition for Information Modeling (IDEF1X), 21 December 1993.]
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are of interest to the human resources department. Thus different departments need different views of the company's database.
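The payroll example above can be sketched with SQL views standing in for external views (SQLite here, with invented table and view names): both departments query the same conceptual-level table, but each sees only its own external view of it.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Conceptual level: one employees table holding all attributes.
    CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER,
                            medical_notes TEXT);
    INSERT INTO employees VALUES (1, 'Ada', 90000, 'none');

    -- Two external views of the same data for different departments.
    CREATE VIEW finance_view AS SELECT id, name, salary FROM employees;
    CREATE VIEW hr_view      AS SELECT id, name, medical_notes FROM employees;
""")
print(conn.execute("SELECT * FROM finance_view").fetchall())  # salary, no medical data
print(conn.execute("SELECT * FROM hr_view").fetchall())       # medical data, no salary
conn.close()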
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
Separating the external, conceptual and internal levels was a major feature of the relational database model implementations that dominate 21st century databases.
Languages
Database languages are special-purpose languages, which do one or more of the following:
Data definition language – defines data types and the relationships among them, through operations such as creating, altering, or dropping database objects
Data manipulation language – performs tasks such as inserting, updating, or deleting data occurrences
Query language – allows searching for information and computing derived information
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
Performance, security, and availability
Because of the critical importance of database technology to the smooth running of an enterprise, database systems include complex mechanisms to deliver the required performance, security, and availability, and allow database administrators to control the use of these features.
Storage
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often utilizing the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look in the conceptual and external levels, but in ways that attempt to optimize, as well as possible, the reconstruction of these levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
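The difference between row-oriented and column-oriented layouts can be sketched with plain Python data structures; this is a conceptual illustration only, not how any particular storage engine lays out pages on disk.

# The same three records in two physical layouts.
# Row-oriented: each record's fields are stored contiguously, which suits
# fetching whole records.
row_store = [
    ("ada", "Ada", 36),
    ("alan", "Alan", 41),
    ("grace", "Grace", 85),
]

# Column-oriented: each column is stored contiguously, which suits scanning
# or aggregating a single attribute across many records.
column_store = {
    "login": ["ada", "alan", "grace"],
    "name":  ["Ada", "Alan", "Grace"],
    "age":   [36, 41, 85],
}

# Whole-record lookup is natural in the row store...
print(row_store[1])
# ...while a column aggregate touches only one array in the column store.
print(sum(column_store["age"]) / len(column_store["age"]))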
Materialized views
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expensive computing of them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
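Because SQLite has no native materialized views, the sketch below simulates one: a frequently needed aggregate is stored as an ordinary table and must be explicitly refreshed when the base data change, which illustrates the synchronization overhead mentioned above (all names are invented).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('north', 100), ('north', 150), ('south', 80);
""")

def refresh_summary():
    # "Materialize" a frequently needed query result into its own table.
    conn.executescript("""
        DROP TABLE IF EXISTS sales_by_region;
        CREATE TABLE sales_by_region AS
            SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
    """)

refresh_summary()
print(conn.execute("SELECT * FROM sales_by_region").fetchall())
# Updating the base table makes the materialized copy stale until refreshed.
conn.execute("INSERT INTO sales VALUES ('south', 20)")
refresh_summary()
print(conn.execute("SELECT * FROM sales_by_region").fetchall())
conn.close()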
Replication
Occasionally a database employs storage redundancy by database objects replication (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to a same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
Security
Database security deals with all various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or utilizing specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by specially authorized personnel (authorized by the database owner) using dedicated, protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; e.g., see physical security) and against the unauthorized interpretation of them, or of parts of them, into meaningful information (e.g., by looking at the strings of bits that they comprise and concluding specific valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches.
Transactions and concurrency
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: Atomicity, Consistency, Isolation, and Durability.
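Atomicity in particular can be sketched with Python's sqlite3 module, whose connection context manager commits a transaction on success and rolls it back on failure; the account transfer is an invented example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # one transaction: both updates commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
        raise RuntimeError("simulated crash before commit")
except RuntimeError:
    pass  # the 'with' block rolled the whole transaction back

print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 50)] -- the partial transfer never became visible
conn.close()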
Migration
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to move or migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may be desired that some aspects of the internal architectural level are also maintained. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
Building, maintaining, and tuning
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be utilized for this purpose. A DBMS provides the needed user interfaces to be utilized by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialised and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added to, new related application programs may be written to add to the application's functionality, etc.
Backup and restore
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When this state is needed, i.e., when a database administrator decides to bring the database back to this state (e.g., by specifying a desired point in time when the database was in that state), these files are used to restore it.
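A minimal sketch of backup and restore, using the backup API of Python's sqlite3 module (available since Python 3.7); real deployments add scheduling, incremental or continuous backup, and point-in-time recovery, and would back up to durable files rather than in-memory databases.

import sqlite3

# Live database with some application data.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE notes (body TEXT)")
live.execute("INSERT INTO notes VALUES ('state worth keeping')")
live.commit()

# Occasional backup: copy the current state into a separate backup database.
backup = sqlite3.connect(":memory:")   # in practice this would be a file
live.backup(backup)

# Later, the live data are damaged or erroneously updated...
live.execute("DELETE FROM notes")
live.commit()

# ...and the administrator restores the earlier state from the backup copy.
backup.backup(live)
print(live.execute("SELECT body FROM notes").fetchall())  # [('state worth keeping',)]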
Static analysis
Static analysis techniques for software verification can also be applied to query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular for security purposes, such as fine-grained access control, watermarking, etc.
Other
Other DBMS features might include:
Database logs
Graphics component for producing graphs and charts, especially in a data warehouse system
Query optimizer – Performs query optimization on every query to choose the most efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine; a small sketch of inspecting a chosen plan follows this list.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
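As the sketch referred to above, the following shows how a DBMS can expose the plan its query optimizer chose. SQLite does this with EXPLAIN QUERY PLAN, accessed here through Python's sqlite3 module with invented table names.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total INTEGER);
    CREATE INDEX idx_orders_customer ON orders (customer);
""")
# Ask the optimizer how it would execute the query; with the index in place
# SQLite reports an index search rather than a full table scan.
for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = ?",
        ("acme",)):
    print(row)
conn.close()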
Increasingly, there are calls for a single system and methodology that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some label such offerings "DevOps for Database". Packaged this way, these database management solutions are supposed to be stable, secure, backed up, compliant, testable, and consistent between environments.
See also
Comparison of database tools
Comparison of object database management systems
Comparison of object-relational database management systems
Comparison of relational database management systems
Data hierarchy
Data bank
Data store
Database theory
Database testing
Database-centric architecture
Journal of Database Management
Question-focused dataset
Notes
References
Sources
Further reading
Ling Liu and Tamer M. Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 p., 60 illus. ISBN 978-0-387-49616-0.
Connolly, Thomas and Carolyn Begg. Database Systems. New York: Harlow, 2002.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5
External links
DB File extension – information about files with the DB extension
Orthodox Judaism | Orthodox Judaism is the approach to religious Judaism which subscribes to a tradition of mass revelation and adheres to the interpretation and application of the laws and ethics of the Torah as legislated in the Talmudic texts by the Tannaim and Amoraim. Orthodox Judaism includes movements such as Modern Orthodox Judaism (אורתודוקסיה מודרנית) and Ultra-Orthodox or Haredi Judaism (יהדות חרדית).
As of 2001, Orthodox Jews and Jews affiliated with an Orthodox synagogue accounted for approximately 50% of British Jews (150,000), 26.5% of Israeli Jews (1,500,000),Poll: 7.1 percent of Israeli Jews define themselves as Reform or Conservative Haaretz, 11 June 2013 and 13% of American Jews (529,000).American Jewish Religious Denominations, United Jewish Communities Report Series on the National Jewish Population Survey 2001-01, (Table 2, pg. 9) Among those affiliated to a synagogue body, Orthodox Jews represent 70% of British Jewry,Synagogue membership in the United Kingdom in 2010 and 27% of American Jewry.
Terminology
Orthodoxy is not one single movement or school of thought. There is no single rabbinical body to which all rabbis are expected to belong, or any one organization representing member congregations.
In the 20th century, a segment of the Orthodox population (as represented by the World Agudath Israel movement) disagreed with Modern Orthodoxy and took a stricter approach. Such rabbis viewed innovations and modifications within Jewish law and customs with extreme care and caution. This form of Judaism may be referred to as Haredi Judaism or "Ultra-Orthodox Judaism".
According to the New Jersey Press Association,Josh Lipowsky, "Paper loses 'divisive' term", New Jersey Jewish Standard, February 5, 2009, pp 10. several media entities refrain from using the term "ultra-Orthodox", including the Religion Newswriters Association; JTA, the global Jewish news service; and the Star-Ledger, New Jersey’s largest daily newspaper. Several local Jewish papers, including New York's Jewish Week and Philadelphia's Jewish Exponent have also dropped use of the term. According to Shammai Engelmayer, spiritual leader of Temple Israel Community Center in Cliffside Park and former executive editor of Jewish Week, this leaves "Orthodox" as "an umbrella term that designates a very widely disparate group of people very loosely tied together by some core beliefs."
Theology
Status
A definite and conclusive credo was never formulated in Judaism; the very question whether the faith contains any equivalent of dogma is a matter of intense scholarly controversy and has been so for centuries. Some researchers attempted to argue that the importance of daily practice and punctilious adherence to Jewish Law (Halakha) relegated theoretical issues to an ancillary status. Others dismissed this view entirely, citing the many debates in ancient rabbinic sources which castigated various heresies without any reference to observance.
However, while lacking a uniform doctrine, Orthodox Judaism is basically united in affirming several core beliefs, disavowal of which is considered major blasphemy. As in other aspects, Orthodox positions reflect the mainstream of traditional Rabbinic Judaism through the ages.
Attempts to codify these were undertaken by several medieval authorities, including Saadia Gaon and Joseph Albo. Each composed his own creed. Yet the 13 Fundamentals expounded by Maimonides in his Commentary on the Mishnah, authored in the 1160s, eventually proved the most widely accepted. Various points – for example, Albo listed merely three fundamentals and did not regard the Messiah as a key tenet – the exact formulation, and the status of disbelievers (whether mere errants or heretics who can no longer be considered part of the People Israel) were contested by many of his contemporaries and later sages. But in recent centuries the 13 Principles became standard, and are considered binding and cardinal by Orthodox authorities in a virtually universal manner.See, for example: Marc B. Shapiro. The Limits of Orthodox Theology: Maimonides' Thirteen Principles Reappraised. Littman Library of Jewish Civilization (2011). pp. 1-14.
God
The basic tenets, drawn from ancient sources like the Talmud as well as later sages, include the attributes of God in Judaism: one and indivisible, preceding all creation which He alone brought into being, eternal, omniscient, omnipotent, absolutely incorporeal and beyond human reason. Maimonides delineated this understanding of a monotheistic, personal God in six articles concerning His status as the sole Creator, His oneness, His impalpability, that He is first and last, that God alone may be worshiped and no other, and that He is omniscient.
Eschatology
More specific doctrines refer to the times of Godly salvation and afterlife – in Judaism, Olam haBa, The World to Come. These include belief in divine reward for those who observe the Lord's commandments and likewise, punishment meted unto the transgressors. Maimonides reserved one article for this tenet, oft mentioned in traditional sources, stating merely that God rewards and punishes without specification.
This issue has been subject to much debate and interpretation. For example, while Maimonides stated in his writings (and his explanation was very much controversial) that the Garden of Eden is a location on earth that will be recovered, the term Gehinnom ("Hell") referred to punishment in this world, and that only the soul of the righteous shall survive and delight in bliss. Nahmanides offered a more comprehensive system, with divine remuneration for better or worse both in this world, via natural means, and in a celestial heaven and hell.
One of the most important teachings concerning afterlife in Judaism is the Resurrection of the Dead. The Talmud (Tractate Sanhedrin 11) listed deniers of this faith as heretics who shall have no part in the World to Come. Maimonides set it apart as a separate article. This particular notion is closely linked with Reward and Punishment. Saadia Gaon, and many sages who accepted his position, envisioned two Resurrections: one natural, of this world, paired with the salvation of Israel, in which only the righteous among this people shall be revived. They will live an ordinary, corporeal life but will not die and pass as such into the supernatural World to Come. Then all mankind shall be resuscitated and be given each his just due.
Maimonides himself held to a less popular and extremely controversial interpretation (subjecting him to many accusations of heresy), claiming that Resurrection was distinct from the other eschatological events: the righteous of Israel shall merely be given a second, blessed life in this world and then die naturally. The eternal reward shall be preserved for their soul, as beforehand.
Preceding the miraculous events linked with afterlife is the Advent of the Messiah, also independently listed among Maimonides' Thirteen as a tenet of faith.
Beliefs
Orthodox Judaism maintains the historical understanding of Jewish identity. A Jew is someone who was born to a Jewish mother, or who converts to Judaism in accordance with Jewish law and tradition. Orthodoxy thus rejects patrilineal descent as a means of establishing Jewish identity. Similarly, Orthodoxy strongly condemns interreligious marriage. Intermarriage is seen as a deliberate rejection of Judaism, and an intermarried person is effectively cut off from most of the Orthodox community. However, some Orthodox Jewish organizations do reach out to intermarried Jews.
Orthodox Judaism holds that the words of the Torah, including both the Written Law and those parts of the Oral Law which are halacha leMoshe m'Sinai, were dictated by God to Moses essentially as they exist today. The laws contained in the Written Torah were given along with detailed explanations as how to apply and interpret them, the Oral Law. Although Orthodox Jews believe that many elements of current religious law were decreed or added as "fences" around the law by the rabbis, all Orthodox Jews believe that there is an underlying core of Sinaitic law and that this core of the religious laws Orthodox Jews know today is thus directly derived from Sinai and directly reflects the divine will. As such, Orthodox Jews believe that one must be extremely careful in interpreting Jewish law. Orthodox Judaism holds that, given Jewish law's divine origin, no underlying principle may be compromised in accounting for changing political, social or economic conditions; in this sense, "creativity" and development in Jewish law is limited.
There is significant disagreement within Orthodox Judaism, particularly between Haredi Judaism and Modern Orthodox Judaism, about the extent and circumstances under which the proper application of halakha should be re-examined as a result of changing realities. As a general rule, Haredi Jews believe that when at all possible the law should be maintained as it was understood by their authorities at the haskalah, believing that it had never changed. Modern Orthodox authorities are more willing to assume that under scrupulous examination, identical principles may lead to different applications in the context of modern life. To the Orthodox Jew, halakha is a guide, God's Law, governing the structure of daily life from the moment he or she wakes up to the moment he or she goes to sleep. It includes codes of behaviour applicable to a broad range of circumstances (and many hypothetical ones). There are though a number of halakhic meta-principles that guide the halakhic process and in an instance of opposition between a specific halakha and a meta-principle, the meta-principle often wins out. Examples of halakhic meta-principles are Deracheha Darchei Noam (the ways of Torah are pleasant), Kavod Habriyot (basic respect for human beings), and Pikuach Nefesh (the sanctity of human life).
Orthodox Judaism holds that on biblical Mount Sinai, the Written Law was transmitted along with an Oral Law. The words of the Torah were spoken to Moses by God; the laws contained in this Written Torah, the 613 mitzvot, were given along with detailed explanations in the oral tradition as to how to apply and interpret them. Furthermore, the Oral law includes principles designed to create new rules. The Oral law is held to be transmitted with an extremely high degree of accuracy. Jewish theologians who choose to emphasize the more evolutionary nature of the halacha point to a famous story in the Talmud where Moses is miraculously transported to the House of Study of Rabbi Akiva and is clearly unable to follow the ensuing discussion.
According to Orthodox Judaism, Jewish law today is based on the commandments in the Torah, as viewed through the discussions and debates contained in classical rabbinic literature, especially the Mishnah and the Talmud. Orthodox Judaism thus holds that the halakha represents the "will of God", either directly, or as close to directly as possible. The laws are from the word of God in the Torah, using a set of rules also revealed by God to Moses on Mount Sinai, and have been derived with the utmost accuracy and care, and thus the Oral Law is considered to be no less the word of God. If some of the details of Jewish law may have been lost over the millennia, they were reconstructed in accordance with internally consistent rules.
In this world view, the Mishnaic and Talmudic rabbis are closer to the divine revelation; by corollary, one must be extremely conservative in changing or adapting Jewish law. Orthodox Jews will also study the Talmud for its own sake; this is considered to be the greatest mitzvah of all.
Haredi and Modern Orthodox Judaism vary somewhat in their view of the validity of Halakhic reconsideration. It is held virtually as a principle of belief among many Haredi Jews that halakhah never changes. Haredi Judaism thus views higher criticism of the Talmud as inappropriate, and almost certainly heretical. At the same time, some Modern Orthodox Jews do not have a problem with historical scholarship in this area. See the entry on historical analysis of the Talmud.
History
Roots of Orthodox Judaism
The roots of Orthodox Judaism can be traced to the late 18th and early 19th centuries, when elements within German Jewry sought to reform Jewish belief and practice in response to the Age of Enlightenment, Jewish Emancipation, and the Haskalah. They sought to modernize education in light of contemporary scholarship. They rejected claims of the absolute divine authorship of the Torah, declaring only biblical laws concerning ethics to be binding, and stated that the rest of halakha (Jewish law) need not be viewed as normative for Jews in wider society (see Reform Judaism).
In reaction to the emergence of Reform Judaism, a group of traditionalist German Jews emerged who supported some of the values of the Haskalah but also wanted to defend the classic, traditional interpretation of Jewish law and tradition. This group was led by those who opposed the establishment of a new temple in Hamburg (1819), as reflected in the booklet "Ele Divrei HaBerit". When a group of Reform rabbis convened in Braunschweig, Rabbi Jacob Ettlinger of Altona published a manifesto entitled "Shlomei Emunei Yisrael" in German and Hebrew, signed by 177 rabbis. At this time the first Orthodox Jewish periodical, "Der Treue Zions Waechter", was launched with the Hebrew supplement "Shomer Zion HaNe'eman" (1845–1855). In later years it was Rav Ettlinger's students Rabbi Samson Raphael Hirsch and Rabbi Azriel Hildesheimer of Berlin who deepened the awareness and strength of Orthodox Jewry. Rabbi Samson Raphael Hirsch commented in 1854:

"It was not the 'Orthodox' Jews who introduced the word 'orthodoxy' into Jewish discussion. It was the modern 'progressive' Jews who first applied this name to 'old', 'backward' Jews as a derogatory term. This name was at first resented by 'old' Jews. And rightly so. 'Orthodox' Judaism does not know any varieties of Judaism. It conceives Judaism as one and indivisible. It does not know a Mosaic, prophetic and rabbinic Judaism, nor Orthodox and Liberal Judaism. It only knows Judaism and non-Judaism. It does not know Orthodox and Liberal Jews. It does indeed know conscientious and indifferent Jews, good Jews, bad Jews or baptized Jews; all, nevertheless, Jews with a mission which they cannot cast off. They are only distinguished accordingly as they fulfill or reject their mission." (Samson Raphael Hirsch, Religion Allied to Progress, in JMW, p. 198)

Hirsch held the opinion that Judaism demands an application of Torah thought to the entire realm of human experience, including the secular disciplines. His approach was termed the Torah im Derech Eretz approach, or "neo-Orthodoxy". While insisting on strict adherence to Jewish beliefs and practices, he held that Jews should attempt to engage and influence the modern world, and encouraged those secular studies compatible with Torah thought. This pattern of religious and secular involvement has been evident at many times in Jewish history. Scholars believe it was characteristic of the Jews in Babylon during the Amoraic and Geonic periods, and likewise in early medieval Spain, shown by their engagement with both Muslim and Christian society. It appeared as the traditional response to cultural and scientific innovation.
Some scholars believe that Modern Orthodoxy arose from the religious and social realities of Western European Jewry. While most Jews consider Modern Orthodoxy traditional today, some within the Orthodox community (the Haredi and Hasidic groups) consider some of its elements to be of questionable validity. The neo-Orthodox movement holds that Hirsch's views are not accurately followed by Modern Orthodoxy. (See Torah im Derech Eretz and the section "Relationship with Torah im Derech Eretz" under Torah Umadda for a more extensive discussion.)
Development of Orthodox religious practice
[Image: The Shulchan Aruch, published in 1565, is the authoritative legal code for Orthodox Jews.]
Contemporary Orthodox Jews believe that they adhere to the same basic philosophy and legal framework that has existed throughout Jewish history, whereas the other denominations depart from it. Orthodox Judaism, as it exists today, is an outgrowth that claims to extend from the time of Moses, to the time of the Mishnah and Talmud, through the development of oral law and rabbinic literature, until the present time. For some, Orthodox Judaism has been seen as a continuation of what was the mainstream expression of Judaism prior to the 19th century.
However, the Orthodox claim to absolute fidelity to past tradition has been challenged by modern scholars who contend that the Judaism of the Middle Ages bore little resemblance to that practiced by today's Orthodox. Rather, the Orthodox community, as a counterreaction to the liberalism of the Haskalah movement, began to embrace far more stringent halachic practices than their predecessors, most notably in matters of Kashrut and Passover dietary laws, where the strictest possible interpretation becomes a religious requirement, even where the Talmud explicitly prefers a more lenient position, and even where a more lenient position was practiced by prior generations.
Jewish historians also note that certain customs of today's Orthodox are not continuations of past practice, but instead represent innovations that would have been unknown to prior generations. For example, the now-widespread haredi tradition of cutting a boy's hair for the first time on his third birthday (upshirin or upsheerin, Yiddish for "haircut") "originated as an Arab custom that parents cut a newborn boy's hair and burned it in a fire as a sacrifice," and "Jews in Palestine learned this custom from Arabs and adapted it to a special Jewish context." The tradition of lighting bonfires on Lag B'omer also derives from the same Arab practice of burning the child's cut hair, as it was initially on that day (rather than on the third birthday) that the cutting ceremony was performed. The Ashkenazi prohibition against eating kitniyot (grains and legumes such as rice, corn, beans, and peanuts) during Passover was explicitly rejected in the Talmud, has no known precedent before the 12th century and represented a minority position for hundreds of years thereafter, but nonetheless has remained a mandatory prohibition among Ashkenazi Orthodox Jews due to their historic adherence to the ReMA's rulings in the Shulchan Aruch.Orach Haim 453:1
Growth of Orthodox affiliation
In practice, the emphasis on strictness has resulted in the rise of "homogeneous enclaves" of haredi Jews who are less likely to be threatened by assimilation and intermarriage, or even to interact with other Jews who do not share their doctrines. Nevertheless, this strategy has proved successful, and the number of adherents to Orthodox Judaism, especially in Haredi and Chassidic communities, has grown rapidly.
In 1915, Yeshiva College (later Yeshiva University) and its Rabbi Isaac Elchanan Theological Seminary were established in New York City for training in an Orthodox milieu. A school branch was established in Los Angeles, California.
A number of other influential Orthodox seminaries, mostly Haredi, were established throughout the country, most notably in New York, Baltimore, Maryland; and Chicago, Illinois. Beth Medrash Govoha, the Haredi yeshiva in Lakewood, New Jersey is the largest Talmudic academy in the United States, with a student body of over 5,000 students.
Holocaust
While some assert that the majority of Jews killed during the Holocaust were religiously Orthodox, numbering between 50-70% of those who perished, researchers have shown that Jewish Orthodoxy was waning at the time, consumed by the Jewish Enlightenment, secular Zionism, and the socialist movements of pre-war Europe.
Streams of Orthodoxy
[Image: Rabbi Moshe Feinstein, a leading 20th-century American Orthodox authority.]
Orthodox Judaism is heterogeneous, whereby subgroups maintain significant social differences, and less significant differences in understanding Halakha. What unifies various groups under the "Orthodox" umbrella is the central belief that Torah, including the Oral Law, was given directly from God to Moses at Mount Sinai and applies in all times and places. As a result, all Orthodox Jews are required to live in accordance with the Commandments and Jewish law.
Since there is no one Orthodox body, there is no one canonical statement of principles of faith. Rather, each Orthodox group claims to be a non-exclusive heir to the received tradition of Jewish theology. Some groups have affirmed a literal acceptance of Maimonides' thirteen principles.
Given this (relative) philosophic flexibility, variant viewpoints are possible, particularly in areas not explicitly demarcated by the Halakha. The result is a relatively broad range of hashqafoth (Sing. hashkafa – world view, Weltanschauung) within Orthodoxy. The greatest differences within strains of Orthodoxy involve the following issues:
the degree to which an Orthodox Jew should integrate or disengage from secular society
based, in part, on varying interpretations of the Three Oaths, whether Zionism is part of Judaism or opposed to it, and defining the role of the modern State of Israel in Judaism
their spiritual approach to Torah such as the relative roles of mainstream Talmudic study and mysticism or ethics
the validity of secular knowledge including critical Jewish scholarship of Rabbinic literature and modern philosophical ideas
whether the Talmudic obligation to learn while also practicing a trade/profession applies in our times
the centrality of yeshivas as the place for personal Torah study
the validity of authoritative spiritual guidance in areas outside of Halakhic decision (Da'as Torah)
the importance of maintaining non-Halakhic customs, such as dress, language and music
the role of women in (religious) society
the nature of the relationship with non-Jews
Based on their philosophy and doctrine vis-a-vis these core issues, adherents to Orthodoxy can roughly be divided into the subgroups of Modern Orthodox Judaism and Haredi Judaism, with Hasidic Jewish groups falling into the latter category.
Modern Orthodoxy
Modern Orthodoxy comprises a fairly broad spectrum of movements, each drawing on several distinct though related philosophies, which in some combination have provided the basis for all variations of the movement today. In general, Modern Orthodoxy holds that Jewish law is normative and binding, while simultaneously attaching a positive value to interaction with contemporary society. In this view, Orthodox Judaism can "be enriched" by its intersection with modernity; further, "modern society creates opportunities to be productive citizens engaged in the Divine work of transforming the world to benefit humanity". At the same time, in order to preserve the integrity of halakha, any area of "powerful inconsistency and conflict" between Torah and modern culture must be avoided. Modern Orthodoxy, additionally, assigns a central role to the "People of Israel".
Modern Orthodoxy, as a stream of Orthodox Judaism represented by institutions such as the U.S. National Council for Young Israel, is pro-Zionist and thus places a high national, as well as religious, significance on the State of Israel, and its affiliates are, typically, Zionist in orientation. It also practices involvement with non-Orthodox Jews that extends beyond "outreach (Kiruv)" to continued institutional relations and cooperation; see further under Torah Umadda. Other "core beliefs"William B. Helmreich and Reuel Shinnar: Modern Orthodoxy in America: Possibilities for a Movement under Siege are a recognition of the value and importance of secular studies, a commitment to equality of education for both men and women, and a full acceptance of the importance of being able to financially support oneself and one's family.
Haredi Judaism
Haredi Judaism advocates segregation from non-Jewish culture, although not from non-Jewish society entirely. It is characterised by its focus on community-wide Torah study. Haredi Orthodoxy's differences with Modern Orthodoxy usually lie in interpretation of the nature of traditional halakhic concepts and in acceptable application of these concepts. Thus, engaging in the commercial world is a legitimate means to achieving a livelihood, but individuals should participate in modern society as little as possible. The same outlook is applied with regard to obtaining degrees necessary to enter one's intended profession: where tolerated in the Haredi society, attending secular institutions of higher education is viewed as a necessary but inferior activity. Academic interest is instead to be directed toward the religious education found in the yeshiva. Both boys and girls attend school and may proceed to higher Torah study, starting anywhere between the ages of 13 and 18. A significant proportion of students, especially boys, remain in yeshiva until marriage (which is often arranged through facilitated dating – see shiduch), and many study in a kollel (Torah study institute for married men) for many years after marriage. Most Orthodox men (including many Modern Orthodox), even those not in Kollel, will study Torah daily.
Hasidic Judaism
Hasidic or Chasidic Judaism is a type of Haredi Judaism that originated in Eastern Europe (what is now Belarus and Ukraine) in the 18th century. Founded by Israel ben Eliezer, known as the Baal Shem Tov (1698–1760), it emerged in an age of persecution of the Jewish people, when a schism existed between scholarly and common European Jews. In addition to bridging this class gap, Hasidic teachings sought to reintroduce joy in the performance of the commandments and in prayer through the popularisation of Jewish mysticism (this joy had been suppressed in the intense intellectual study of the Talmud). The Ba'al Shem Tov sought to combine rigorous scholarship with more emotional mitzvah observance. In a practical sense, what distinguishes Hasidic Judaism from other forms of Haredi Judaism is the close-knit organization of Hasidic communities centered on a Rebbe (sometimes translated as "Grand Rabbi"), and various customs and modes of dress particular to each community. In some cases, there are religious ideological distinctions between Hasidic groups, as well. Another phenomenon that sets Hasidic Judaism apart from general Haredi Judaism is the strong emphasis placed on speaking Yiddish; in (many) Hasidic households and communities, Yiddish is spoken exclusively.
In practice
[Image: The Babylonian Talmud]
For guidance in practical application of Jewish law, the majority of Orthodox Jews appeal to the Shulchan Aruch ("Code of Jewish Law" composed in the 16th century by Rabbi Joseph Caro) together with its surrounding commentaries. Thus, at a general level, there is a large degree of uniformity amongst all Orthodox Jews. Concerning the details, however, there is often variance: decisions may be based on various of the standardized codes of Jewish Law that have been developed over the centuries, as well as on the various responsa. These codes and responsa may differ from each other as regards detail (and reflecting the above philosophical differences, as regards the weight assigned to these). By and large, however, the differences result from the historic dispersal of the Jews and the consequent development of differences among regions in their practices (see minhag).
Mizrahi and Sephardic Orthodox Jews base their practice on the Shulchan Aruch. The recent works of Halakha, Kaf HaChaim, Ben Ish Chai and Yalkut Yosef are considered authoritative in many Sephardic communities. Thus Mizrahi and Sephardi Jews may choose to follow the opinion of the Ben Ish Chai when it conflicts with the Shulchan Aruch. Some of these practices are derived from the Kabbalistic school of Isaac Luria.
Ashkenazic Orthodox Jews have traditionally based most of their practices on the Rema, the gloss on the Shulchan Aruch by Rabbi Moses Isserles, reflecting differences between Ashkenazi and Sephardi custom. In the post-World War II period, the Mishnah Berurah has become authoritative. Ashkenazi Jews may choose to follow the Mishna Brurah instead of a particular detail of Jewish law as presented in the Shulchan Aruch.
Chabad Lubavitch Hasidim follow the rulings of Shneur Zalman of Liadi in the Shulchan Aruch HaRav.
Traditional Baladi and Dor Daim (Yemenite Jews) base most of their practices on the Mishneh Torah, the compendium by Maimonides of halakha, written several centuries before the Shulchan Aruch. The Talmidei haRambam also keep Jewish law as codified in the Mishneh Torah.
A smaller number, such as the Romaniote Jews, traditionally rule according to the Jerusalem Talmud over the Babylonian Talmud.
Spanish and Portuguese Jews consider the Shulchan Aruch authoritative but differ from other Sephardim by making less allowance for more recent authorities, in particular for customs based on the Kabbalah. Some of their customs are based on Maimonides or the Arba'ah Turim.
Orthodox Judaism emphasizes practicing rules of Kashrut, Shabbat, Family Purity, and Tefilah (Prayer). Many Orthodox Jews can be identified by their manner of dress and family lifestyle. Orthodox men and women dress modestly by keeping most of their skin covered. Married women cover their hair, most commonly in the form of a scarf, also in the form of hats, snoods, berets, or, sometimes, wigs. Orthodox men wear a skullcap known as a kipa, and often fringes called tzitzit. Many men grow beards, and Haredi men usually wear black hats and suits. Modern Orthodox Jews are commonly indistinguishable in their dress from those around them.
In the United States
[Image: The New York City Metropolitan Area is home to the largest American Orthodox Jewish population.]
Although sizable Orthodox Jewish communities are located throughout the United States, the highest number of American Orthodox Jews live in New York State, particularly in the New York City Metropolitan Area. Two of the main Orthodox communities in the United States are located in New York City and Rockland County. In New York City, the neighborhoods of Borough Park, Midwood, Williamsburg, and Crown Heights, located in the borough of Brooklyn, have particularly large Orthodox communities. The most rapidly growing community of American Orthodox Jews is located in Rockland County and the Hudson Valley of New York, including the communities of Monsey, Monroe, New Square, Kiryas Joel, and Ramapo. There are also sizable and rapidly growing Orthodox communities throughout New Jersey, particularly in Lakewood, Freehold, Teaneck, Englewood, Passaic, and Fair Lawn.
In addition, Maryland has a large number of Orthodox Jews, many of whom live in Baltimore, particularly in the Park Heights, Mount Washington, and Pikesville areas. Two other large Orthodox Jewish centers are southern Florida, particularly Miami Beach, and the Los Angeles area of California.
In contrast to the general American Jewish community, which is dwindling due to low fertility and high intermarriage and assimilation rates, the Orthodox Jewish community of the United States is growing rapidly. Among Orthodox Jews, the fertility rate stands at about 4.1 children per family, as compared to 1.9 children per family among non-Orthodox Jews, and intermarriage among Orthodox Jews is practically non-existent, standing at about 2%, in contrast to a 71% intermarriage rate among non-Orthodox Jews. In addition, Orthodox Judaism has a growing retention rate; while about half of those raised in Orthodox homes previously abandoned Orthodox Judaism, that number is declining. According to The New York Times, the high growth rate of Orthodox Jews will eventually render them the dominant demographic force in New York Jewry.
Politically, Orthodox Jews, given their variety of movements and affiliations, tend not to conform easily to the standard left-right political spectrum, with one of the key differences between the movements stemming from the groups' attitudes to Zionism. Generally speaking, of the three key strands of Orthodox Judaism, Haredi Orthodox and Hasidic Orthodox Jews are at best ambivalent towards the ideology of Zionism and the creation of the State of Israel, and there are many groups and organisations who are outspokenly anti-Zionistic, seeing the ideology of Zionism as diametrically opposed to the teaching of the Torah, and the Zionist administration of the State of Israel, with its emphasis on militarism and nationalism, as destructive of the Judaic way of life.
On the other hand, Orthodox Jews subscribing to Modern Orthodoxy in its American and UK incarnations, tend to follow far more right-wing politics than both non-orthodox and other orthodox Jews. While the majority of non-Orthodox American Jews are on average strongly liberal and supporters of the Democratic Party, the Modern Orthodox subgroup of Orthodox Judaism tends to be far more conservative, with roughly half describing themselves as political conservatives, and are mostly Republican Party supporters. Modern Orthodox Jews, compared to both the non-Orthodox American Jewry and the Haredi and Hasidic Jewry, also tend to have a stronger connection to Israel due to their attachment to Zionism.
Movements, organisations and groups
[Image: Heichal Shlomo, former seat of the Chief Rabbinate of Israel in Jerusalem.]
Agudath Israel of America is the largest and most influential Haredi organization in America. Its roots go back to the original founding of the Agudath Israel movement in 1912 in Katowitz, Prussia (now Katowice, Poland). The American Agudath Israel was founded in 1939. There is an Agudat Israel (Hasidic) in Israel, and also Degel HaTorah (non-Hasidic "Lithuanian"), as well as an Agudath Israel of Europe. These groups are loosely affiliated through the World Agudath Israel, which from time to time holds a major gathering in Israel called a knessia. Agudah unites many rabbinic leaders from the Hasidic Judaism wing with those of the non-Hasidic "yeshiva" world. It is generally non-nationalistic and ambivalent towards the modern State of Israel.
The Union of Orthodox Jewish Congregations of America, known as the Orthodox Union, or "OU", and the Rabbinical Council of America, "RCA", are organizations that represent Modern Orthodox Judaism, a large segment of Orthodoxy in the United States and Canada. These groups should not be confused with the similarly named Union of Orthodox Rabbis (described below).
The National Council of Young Israel (NCYI) and the Council of Young Israel Rabbis (CYIR) are smaller groups that were founded as Modern Orthodox organizations, are Zionistic, and are in the right wing of Modern Orthodox Judaism. Young Israel strongly supports and allies itself with the settlement movement in Israel. While the lay membership of synagogues affiliated with the NCYI are almost exclusively Modern Orthodox in orientation, the rabbinical leadership of the synagogues ranges from Modern Orthodox to Haredi.
The Chief Rabbinate of Israel was founded with the intention of representing all of Judaism within the State of Israel, and has two chief rabbis: One is Ashkenazic (of the East European and Russian Jewish tradition), and one is Sephardic (of the Mediterranean, North African, Central Asian, Middle-Eastern, and of Caucasus Jewish tradition.) The rabbinate has never been accepted by most Israeli Haredi groups. Since the 1960s, the Chief rabbinate of Israel has moved somewhat closer to the positions of Haredi Judaism.
Mizrachi and political parties such as Mafdal and National Union (Israel) all represent certain sectors within the Religious Zionist movement, both in Israel and the diaspora. The now-defunct (Encyclopaedia Judaica: Volume 8, p. 145) Gush Emunim, as well as Meimad, Tzohar, Hazit and other movements, represent competing divisions within the sector. They firmly believe in the principle of the "Land of Israel for the People of Israel according to the Torah of Israel", although Meimad is pragmatic about such a program. Gush Emunim is the settlement wing of National Union (Israel) and also supports widespread kiruv, through such institutions as Machon Meir, Merkaz HaRav and Rabbi Shlomo Aviner. Another sector includes the Hardal faction, which tends to be unallied with the government and quite centrist.
Chabad Lubavitch is a branch of Hasidic Judaism widely known for its emphasis on outreach and education. The organization has been in existence for 200 years, and especially after the Second World War, it began sending out emissaries (shluchim) whose mission is to bring disaffected Jews back to a level of observance consistent with Chabad norms (i.e., Chassidus, Chabad messianism, Tanya) (The Encyclopedia of Hasidism, entry: Habad, Jonathan Sacks, pp. 161–164). They are major players in what is known as the Baal Teshuva movement. Their mandate is to introduce Chabad philosophy to non-observant Jews and to make them more observant as Beinonis. According to sociologists studying contemporary Jewry, the Chabad movement fits into neither the Haredi nor the Modern Orthodox category, the standard categories for Orthodox Jews. This is due in part to the existence of "non-Orthodox Hasidim" (who include former Israeli President Zalman Shazar), the lack of official recognition of political and religious distinctions within Judaism, and the open relationship with non-Orthodox Jews represented by the activism of Chabad emissaries (Liebman, Charles S. "Orthodoxy in American Jewish Life." The American Jewish Year Book (1965): 21–97; Ferziger, Adam S. "Church/sect theory and American orthodoxy reconsidered." Ambivalent Jew—Charles S. Liebman in memoriam, ed. Stuart Cohen and Bernard Susser (2007): 107–124).
The Rohr Jewish Learning Institute is a provider of adult Jewish courses on Jewish history, law, ethics, philosophy and rabbinical literature. It also develops Jewish studies curricula specifically for women, college students, teenagers, and seniors. In 2014, there were 117,500 people enrolled in JLI, making it the largest Jewish education network in the world.
In Israel, the Sephardic Shas political party shares a similar agenda with Agudath Israel, but Shas is more bipartisan when it comes to its own issues, and is non-nationalistic, with a heavy emphasis on Sephardi and Mizrahi Judaism.
The Agudath HaRabbonim, also known as the Union of Orthodox Rabbis of the United States and Canada, is a small Haredi-leaning organization founded in 1902. It should not be confused with "The Union of Orthodox Jewish Congregations of America" (see above) which is a separate organization. While at one time influential within Orthodox Judaism, the Agudath HaRabbonim in the last several decades has progressively moved further to the right; its membership has been dropping and it has been relatively inactive. Some of its members are rabbis from Chabad Lubavitch; some are also members of the RCA (see above). It is currently most famous for its 1997 declaration (citing Israeli Chief Rabbi Yitzhak HaLevi Herzog and Orthodox Rabbi Joseph Soloveitchik) that the Conservative and Reform movements are "not Judaism at all".
The Central Rabbinical Congress of the United States and Canada (CRC) was established in 1952. It is an anti-Zionist, Haredi organization, closely aligned with the Satmar Hasidic group, which has about 100,000 adherents (an unknown number of which are rabbis), and like-minded Haredi groups.
The left-wing Modern Orthodox advocacy group Edah was formed by Modern Orthodox rabbis in the United States. Most of its membership came from synagogues affiliated with the Union of Orthodox Congregations and the RCA (above). Its motto was "The courage to be Modern and Orthodox". Edah ceased operations in 2007 and merged some of its programs into the left-wing Yeshivat Chovevei Torah.
The Beis Yaakov educational movement, begun in 1917, introduced the concept of formal Judaic schooling for Orthodox women.
See also
List of Baalei teshuva
Divine providence (Judaism)
Jewish denominations
Jewish philosophy
Lithuanian Judaism
List of Orthodox rabbis
Rabbinic Judaism
Religious Zionism
Sephardi Judaism
Torah Judaism
References
External links
Benjamin Brown, "Orthodox Judaism", in: The Blackwell Companion to Judaism, 2001.
Your Complete Guide to Brochos
Origins of Orthodox Judaism
The different Orthodox Jewish groups
The State of Orthodox Judaism Today
Orthodox Judaism in Israel
Orthodox Jewish population growth and political changes
Culture and orthodox books
Information on Orthodox Jewish culture
Orthodox Retention and Kiruv: The Bad News and the Good News
Ashkenazi Jews
[Image: The Jews in Central Europe (1881)]
Ashkenazi Jews, also known as Ashkenazic Jews or simply Ashkenazim, are a Jewish diaspora population who coalesced as a distinct community in the Holy Roman Empire around the end of the first millennium. The traditional diaspora language of Ashkenazi Jews is Yiddish (which incorporates several dialects), while until recently Hebrew was used only as a sacred language.

Ashkenaz, based on one traditional explanation of Genesis 10:3, is considered to be the progenitor of the ancient Gauls (the people of Gallia, that is, the people of Austria, France and Belgium) and of the ancient Franks (of both France and Germany). According to Gedaliah ibn Jechia the Spaniard, in the name of Sefer Yuchasin (see Gedaliah ibn Jechia, Shalshelet Ha-Kabbalah, Jerusalem 1962, p. 219; p. 228 in PDF), the descendants of Ashkenaz had also originally settled in what was then called Bohemia, the present-day Czech Republic. These places, according to the Jerusalem Talmud (Megillah 1:9 [10a]), were also called simply by the diocese "Germamia". Germania, Germani and Germanica have all been used to refer to the group of peoples comprising the Germanic tribes, which include the Goths (whether Ostrogoths or Visigoths), Vandals, Franks, Burgundians, Alans, Langobards, Angles, Saxons, Jutes, Suebi and Alamanni. The entire region east of the Rhine River was known to the Romans as "Germania" (Germany).
The Ashkenazim settled and established communities throughout Central and Eastern Europe, which was their primary region of concentration and residence from the Middle Ages until recent times. They subsequently evolved their own distinctive culture and diasporic identities.Jessica Mozersky, Risky Genes: fs, Breast Cancer and Jewish Identity, Routledge 2013 p. 140.: 'this research highlights the complex and multiple ways in which identity can be conceived of by Ashkenazi Jews.' Throughout their time in Europe, the Ashkenazim have made many important contributions to philosophy, scholarship, literature, art, music, and science.Glenda Abramson (ed.), Encyclopedia of Modern Jewish Culture, Routledge 2004 p. 20.T. C. W. Blanning (ed.), The Oxford History of Modern Europe, Oxford University Press, 2000 pp. 147–148
In the late Middle Ages, the center of gravity of the Ashkenazi population shifted steadily eastward,Ben-Sasson, Haim Hillel, et al (2007). "Germany." Encyclopaedia Judaica. 2nd ed. Vol. 7. Detroit: Macmillan Reference USA. p. 518-546; here: p. 524. moving out of the Holy Roman Empire into the Pale of Settlement (comprising parts of present-day Belarus, Latvia, Lithuania, Moldova, Poland, Russia, and Ukraine).Mosk (2013), p. 143. "Encouraged to move out of the Holy Roman Empire as persecution of their communities intensified during the twelfth and thirteenth centuries, the Ashkenazi community increasingly gravitated toward Poland."Harshav, Benjamin (1999). The Meaning of Yiddish. Stanford: Stanford University Press. p. 6. "From the fourteenth and certainly by the sixteenth century, the center of European Jewry had shifted to Poland, then ... comprising the Grand Duchy of Lithuania (including today's Byelorussia), Crown Poland, Galicia, the Ukraine and stretching, at times, from the Baltic to the Black Sea, from the approaches to Berlin to a short distance from Moscow." In the course of the late 18th and 19th centuries, those Jews who remained in or returned to the German lands experienced a cultural reorientation; under the influence of the Haskalah and the struggle for emancipation, as well as the intellectual and cultural ferment in urban centers, they gradually abandoned the use of Yiddish, while developing new forms of Jewish religious life and cultural identity.Ben-Sasson, Haim Hillel, et al (2007). "Germany." Encyclopaedia Judaica. 2nd ed. Vol. 7. Detroit: Macmillan Reference USA. p. 518-546; here: p. 526-528. "The cultural and intellectual reorientation of the Jewish minority was closely linked with its struggle for equal rights and social acceptance. While earlier generations had used solely the Yiddish and Hebrew languages among themselves, ... the use of Yiddish was now gradually abandoned, and Hebrew was by and large reduced to liturgical usage" (p. 527).
The genocidal impact of the Holocaust (the mass murder of approximately six million Jews during World War II) devastated the Ashkenazim and their culture, affecting almost every Jewish family.Yaacov Ro'i, "Soviet Jewry from Identification to Identity", in Eliezer Ben Rafael, Yosef Gorni, Yaacov Ro'i (eds.) Contemporary Jewries: Convergence and Divergence, BRILL 2003 p. 186.Dov Katz, "Languages of the Diaspora", in Mark Avrum Ehrlich (ed.), Encyclopedia of the Jewish Diaspora: Origins, Experiences, and Culture, Volume 1, ABC-CLIO 2008 pp. 193ff., p. 195. It is estimated that in the 11th century Ashkenazi Jews composed only three percent of the world's total Jewish population, while at their peak in 1931 they accounted for 92 percent of the world's Jews. Immediately prior to the Holocaust, the number of Jews in the world stood at approximately 16.7 million. Statistical figures vary for the contemporary demography of Ashkenazi Jews, oscillating between 10 million and 11.2 million. Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazi Jews make up less than 74% of Jews worldwide. DellaPergola does not analyze or mention the Ashkenazi statistics, but the figure is implied by his rough estimate that in 2000, Oriental and Sephardi Jews constituted 26% of the population of world Jewry. Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide.Focus on Genetic Screening Research edited by Sandra R. Pupecki P:58
Genetic studies on Ashkenazim—researching both their paternal and maternal lineages—suggest a significant proportion of Middle Eastern ancestry. Those studies have arrived at diverging conclusions regarding both the degree and the sources of their European ancestry, and have generally focused on the extent of the European genetic origin observed in Ashkenazi maternal lineages. Ashkenazi Jews are popularly contrasted with Sephardi Jews (also called Sephardim), who are descendants of Jews from the Iberian Peninsula (though there are other groups as well). There are some differences in how the two groups pronounce certain Hebrew letters and in points of ritual.
Etymology
The name Ashkenazi derives from the biblical figure of Ashkenaz, the first son of Gomer, son of Japheth, son of Noah, and a Japhetic patriarch in the Table of Nations (Genesis 10).
The name of Gomer has often been linked to the ethnonym Cimmerians.
Biblical Ashkenaz is usually derived from Assyrian Aškūza (cuneiform Aškuzai/Iškuzai), a people who expelled the Cimmerians from the Armenian area of the Upper Euphrates,Russell E. Gmirkin, Berossus and Genesis, Manetho and Exodus: Hellenistic Histories and the Date of the Pentateuch, T & T Clark, Edinburgh, 2006 pp.148, 149 n.57. whose name is usually associated with the name of the Scythians.Sverre Bøe, Gog and Magog: Ezekiel 38–39 as Pre-text for Revelation 19, 17–21 and 20, 7–10, Tübingen: Mohr Siebeck, 2001, p. 48: "An identification of Ashkenaz and the Scythians must not ... be considered as sure, though it is more probable than an identification with Magog."
Nadav Naʼaman, Ancient Israel and Its Neighbors: Interaction and Counteraction, Eisenbrauns, 2005, p. 364 and note 37.
Jits van Straten, The Origin of Ashkenazi Jewry: The Controversy Unraveled. 2011. p. 182.Vladimir Shneider, Traces of the ten. Beer-sheva, Israel 2002. p. 237
The intrusive n in the Biblical name is likely due to a scribal error confusing a waw ו with a nun נ.Sverre Bøe, Gog and Magog: Ezekiel 38–39 as Pre-text for Revelation 19, 17–21 and 20, 7–10, Tübingen: Mohr Siebeck, 2001, p. 48.Paul Kriwaczek, Yiddish Civilisation, Hachette 2011 p. 173 n. 9.
In Jeremiah 51:27, Ashkenaz figures as one of three kingdoms in the far north, the others being Minni and Ararat, perhaps corresponding to Urartu, called on by God to resist Babylon.Otto Michel "Σκύθης", in Gerhard Kittel, Geoffrey William Bromiley, Gerhard Friedrich (eds.) Theological Dictionary of the New Testament, William B. Erdmanns, (1971) 1995 vol. 11, pp. 447–50, p. 448
In the Yoma tractate of the Babylonian Talmud the name Gomer is rendered as Germania, which elsewhere in rabbinical literature was identified with Germanikia in northwestern Syria, but later became associated with Germania.
Ashkenaz is linked to Scandza/Scanzia, viewed as the cradle of Germanic tribes, as early as a 6th-century gloss to the Historia Ecclesiastica of Eusebius."Ashkenaz" in Michael Berenbaum and Fred Skolnik (eds.) Encyclopaedia Judaica, 2nd ed. Vol. 2. Detroit: Macmillan Reference USA, Gale Virtual Reference Library, 2007. 569–571. Yoma 10a
In the 10th-century History of Armenia of Yovhannes Drasxanakertc'i (1.15) Ashkenaz was associated with Armenia,Gmirkin (2006), p. 148. as it was occasionally in Jewish usage, where its denotation extended at times to Adiabene, Khazaria, Crimea and areas to the east.Abraham N. Poliak 0 "Armenia", in Michael Berenbaum and Fred Skolnik (eds), Encyclopaedia Judaica, 2nd.ed. Macmillan Reference USA Detroit, Gale Virtual Reference Library 2007, Vol. 2, pp. 472–74 His contemporary Saadia Gaon identified Ashkenaz with the Saquliba or Slavic territories,David Malkiel, Reconstructing Ashkenaz: The Human Face of Franco-German Jewry, 1000–1250, Stanford University Press, 2008, p. 263 n.1. and such usage covered also the lands of tribes neighboring the Slavs, and Eastern and Central Europe. In modern times, Samuel Krauss identified the Biblical "Ashkenaz" with Khazaria.Malkiel (2008),p. 263, n.1, citing Samuel Krauss, "Hashemot ashkenaz usefarad" in Tarbiz, 1932, 3:423–430. Krauss identified Ashkenaz with the Khazars, a thesis immediately disputed by Jacob Mann the following year.
Sometime in the early medieval period, the Jews of central and eastern Europe came to be called by this term. In conformity with the custom of designating areas of Jewish settlement with biblical names, Spain was denominated Sefarad (Obadiah 20), France was called Tsarefat (1 Kings 17:9), and Bohemia was called the Land of Canaan.Michael Miller, Rabbis and Revolution: The Jews of Moravia in the Age of Emancipation Stanford University Press,2010 p. 15.
By the high medieval period, Talmudic commentators like Rashi began to use Ashkenaz/Eretz Ashkenaz to designate Germany, earlier known as Loter, where, especially in the Rhineland communities of Speyer, Worms and Mainz, the most important Jewish communities arose.Michael Brenner, A Short History of the Jews Princeton University Press 2010 p. 96. Rashi uses leshon Ashkenaz (Ashkenazi language) to describe German speech, and Byzantium and Syrian Jewish letters referred to the Crusaders as Ashkenazim. Given the close links between the Jewish communities of France and Germany following the Carolingian unification, the term Ashkenazi came to refer to both the Jews of medieval Germany and France.Malkiel p. ix
History
History of Jews in Europe before the Ashkenazim
Outside of their origins in ancient Israel, the history of Ashkenazim is shrouded in mystery, and many theories have arisen speculating on their emergence as a distinct community of Jews. The most well-supported theory is the one that details a Jewish migration from Israel through what is now Italy and other parts of southern Europe.Gregory Cochran, Henry Harpending, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, Basic Books, 2009 pp. 195–196. The historical record attests to Jewish communities in southern Europe since pre-Christian times.K. R. Stow, The Jews in Rome: The Roman Jew, BRILL, 1995 pp. 18–19. Many Jews were denied full Roman citizenship until 212 CE, when Emperor Caracalla granted all free peoples this privilege. Jews were required to pay a poll tax until the reign of Emperor Julian in 363. In the late Roman Empire, Jews were free to form networks of cultural and religious ties and enter into various local occupations. But after Christianity became the official religion of Rome and Constantinople in 380, Jews were increasingly marginalized.
The history of Jews in Greece goes back to at least the Archaic Era of Greece, when the classical culture of Greece was undergoing a process of formalization after the Greek Dark Age. The Greek historian Herodotus knew of the Jews, whom he called "Palestinian Syrians", and listed them among the levied naval forces in service of the invading Persians. While Jewish monotheism was not deeply affected by Greek polytheism, the Greek way of living was attractive to many wealthier Jews.A Dictionary of the Ancient Greek World By David Sacks P.126 The Synagogue in the Agora of Athens is dated to the period between 267 and 396 CE. The Stobi Synagogue in Macedonia was built on the ruins of a more ancient synagogue in the 4th century, while later, in the 5th century, the synagogue was transformed into a Christian basilica.Ancient Synagogues: Historical Analysis and Archaeological Discovery edited by Dan Urman, Paul Virgil McCracken Flesher P:113 Hellenistic Judaism thrived in Antioch and Alexandria, and many of these Greek-speaking Jews would convert to Christianity.Jewish Virtual Library: Hellenism
Sporadic epigraphic evidence in grave site excavations, particularly in Brigetio (Szőny), Aquincum (Óbuda), Intercisa (Dunaújváros), Triccinae (Sárvár), Savaria (Szombathely), Sopianae (Pécs) in Hungary, and Osijek in Croatia, attests to the presence of Jews after the 2nd and 3rd centuries where Roman garrisons were established (András Mócsy, Pannonia and Upper Moesia: A History of the Middle Danube Provinces of the Roman Empire, (1974) Routledge 2014, pp. 228–230). According to Michael Toch (The Economic History of European Jews: Late Antiquity and Early Middle Ages, Leiden: Brill, 2013, pp. 156–157), there was a sufficient number of Jews in Pannonia to form communities and build a synagogue; Jewish troops were among the Syrian soldiers transferred there, and replenished from the Middle East, after 175 C.E.; Jews and especially Syrians came from Antioch, Tarsus and Cappadocia, while others came from Italy and the Hellenized parts of the Roman Empire. The excavations suggest they first lived in isolated enclaves attached to Roman legion camps and intermarried with other similar oriental families within the military orders of the region. Raphael Patai states that later Roman writers remarked that they differed little in either customs, manner of writing, or names from the people among whom they dwelt, and that it was especially difficult to differentiate Jews from the Syrians (Sándor Scheiber, Jewish Inscriptions in Hungary: From the 3rd Century to 1686, pp. 14–30, p. 14: "a relatively large number of Jews appeared in Pannonia from the 3rd century ACE onwards."; Jits van Straten, The Origin of Ashkenazi Jewry: The Controversy Unraveled, Walter de Gruyter, 2011, p. 60, citing Patai). After Pannonia was ceded to the Huns in 433, the garrison populations were withdrawn to Italy, and only a few, enigmatic traces remain of a possible Jewish presence in the area some centuries later (Toch (2013), p. 242).
No evidence has yet been found of a Jewish presence in antiquity in Germany beyond its Roman border, nor in Eastern Europe. In Gaul and Germany itself, with the possible exception of Trier and Cologne, the archeological evidence suggests at most a fleeting presence of very few Jews, primarily itinerant traders or artisans.Toch (2013), p. 67, p. 239. A substantial Jewish population emerged in northern Gaul by the Middle Ages,Toch (2013), p. 68. but Jewish communities existed in 465 CE in Brittany, in 524 CE in Valence, and in 533 CE in Orleans.'Some sources have been plainly misinterpreted, others point to "virtual" Jews, yet others to single persons not resident in the region. Thus Tyournai, Paris, Nantes, Tours, and Bourges, all localities claimed to have housed communities, have no place in the list of Jewish habitation in their period. In central Gaul Poitiers should be struck from the list, In Bordeaux it is doubtful as to the presence of a community, and only Clermont is likely to have possessed one. Further important places, like Macon, Chalon sur Saone, Vienne, and Lyon, were to be inhabited by Jews only from the Carolingian period onwards. In the south we have a Jewish population in Auch, possibly in Uzès, and in Arles, Narbonne and Marseilles. In the whole of France altogether, eight places stand scrutiny (including two questionable ones), while eight other towns have been found to lack a Jewish presence formerly claimed on insufficient evidence. Continuity of settlement from Late Antiquity throughout the Early Middle Ages is evident only in the south, in Arles and Narbonne, possibly also in Marseilles.... Between the mid-7th and the mid-8th century no sources mention Jews in Frankish lands, except for an epitaph from Narbonne and an inscription from Auch' Toch, The Economic History of European Jews pp. 68–9 Throughout this period and into the early Middle Ages, some Jews assimilated into the dominant Greek and Latin cultures, mostly through conversion to Christianity.Shaye J. D. Cohen, The Beginnings of Jewishness: Boundaries, Varieties, Uncertainties University of California Press 2001. King Dagobert I of the Franks expelled the Jews from his Merovingian kingdom in 629. Jews in former Roman territories faced new challenges as harsher anti-Jewish Church rulings were enforced.
Charlemagne's expansion of the Frankish empire around 800, including northern Italy and Rome, brought on a brief period of stability and unity in Francia. This created opportunities for Jewish merchants to settle again north of the Alps. Charlemagne granted the Jews freedoms similar to those once enjoyed under the Roman Empire. In addition, Jews from southern Italy, fleeing religious persecution, began to move into central Europe. Returning to Frankish lands, many Jewish merchants took up occupations in finance and commerce, including money lending, or usury. (Church legislation banned Christians from lending money in exchange for interest.) From Charlemagne's time to the present, Jewish life in northern Europe is well documented. By the 11th century, when Rashi of Troyes wrote his commentaries, Jews in what came to be known as "Ashkenaz" were known for their halakhic learning, and Talmudic studies. They were criticized by Sephardim and other Jewish scholars in Islamic lands for their lack of expertise in Jewish jurisprudence (dinim) and general ignorance of Hebrew linguistics and literature.David Malkiel, Reconstructing Ashkenaz: The Human Face of Franco-German Jewry, 1000–1250 Stanford University Press, 2008 pp. 2–5, 16–18. Yiddish emerged as a result of Judeo-Latin language contact with various High German vernaculars in the medieval period.Neil G. Jacobs, Yiddish: A Linguistic Introduction Cambridge University Press, 2005 p. 55. It is a Germanic language written in Hebrew letters, and heavily influenced by Hebrew and Aramaic, with some elements of Romance and later Slavic languages.YIDDISH LANGUAGE
High and Late Middle Ages migrations
Historical records show evidence of Jewish communities north of the Alps and Pyrenees as early as the 8th and 9th century. By the 11th century Jewish settlers, moving from southern European and Middle Eastern centers, appear to have begun to settle in the north, especially along the Rhine, often in response to new economic opportunities and at the invitation of local Christian rulers. Thus Baldwin V, Count of Flanders, invited Jacob ben Yekutiel and his fellow Jews to settle in his lands; and soon after the Norman Conquest of England, William the Conqueror likewise extended a welcome to continental Jews to take up residence there. Bishop Rüdiger Huzmann called on the Jews of Mainz to relocate to Speyer. In all of these decisions, the idea that Jews had the know-how and capacity to jump-start the economy, improve revenues, and enlarge trade seems to have played a prominent role.Nina Rowe, The Jew, the Cathedral and the Medieval City: Synagoga and Ecclesia in the 13th Century Cambridge University Press, 2011 p. 30. Typically Jews relocated close to the markets and churches in town centres, where, though they came under the authority of both royal and ecclesiastical powers, they were accorded administrative autonomy.
In the 11th century, both Rabbinic Judaism and the culture of the Babylonian Talmud that underlies it became established in southern Italy and then spread north to Ashkenaz.Guenter Stemberger, "The Formation of Rabbinic Judaism, 70–640 CE" in Neusner & Avery-Peck (eds.), The Blackwell Companion to Judaism, Blackwell Publishing, 2000, p. 92.
The Jewish communities along the Rhine river from Cologne to Mainz were decimated in the Rhineland massacres of 1096. With the onset of the Crusades in 1095, and the expulsions from England (1290), France (1394), and parts of Germany (15th century), Jewish migration pushed eastward into Poland (10th century), Lithuania (10th century), and Russia (12th century). Over this period of several hundred years, some have suggested, Jewish economic activity was focused on trade, business management, and financial services, due to several presumed factors: Christian European prohibitions restricting certain activities by Jews, preventing certain financial activities (such as "usurious" loans) between Christians, high rates of literacy, near universal male education, and ability of merchants to rely upon and trust family members living in different regions and countries.
[Image: The Polish-Lithuanian Commonwealth at its greatest extent.]
By the 15th century, the Ashkenazi Jewish communities in Poland were the largest Jewish communities of the Diaspora. This area, which eventually fell under the domination of Russia, Austria, and Prussia (Germany), would remain the main center of Ashkenazi Jewry until the Holocaust.
The answer to why there was so little assimilation of Jews in central and eastern Europe for so long would seem to lie in part in the probability that the alien surroundings in central and eastern Europe were not conducive to it, though contempt did not prevent some assimilation. Furthermore, Jews lived almost exclusively in shtetls, maintained a strong system of education for males, heeded rabbinic leadership, and scorned the lifestyle of their neighbors; and all of these tendencies increased with every outbreak of antisemitism.Feldman, Louis H. Jew and Gentile in the Ancient World: Attitudes and Interactions from Alexander to Justinian. Ewing, NJ, USA: Princeton University Press, 1996. p. 43.
Medieval references
[Image: Jews from Worms (Germany) wear the mandatory yellow badge.]
In the first half of the 11th century, Hai Gaon refers to questions that had been addressed to him from Ashkenaz, by which he undoubtedly means Germany. Rashi in the latter half of the 11th century refers to both the language of AshkenazCommentary on Deuteronomy 3:9; idem on Talmud tractate Sukkah 17a and the country of Ashkenaz.Talmud, Hullin 93a During the 12th century, the word appears quite frequently. In the Mahzor Vitry, the kingdom of Ashkenaz is referred to chiefly in regard to the ritual of the synagogue there, but occasionally also with regard to certain other observances.ib. p. 129
In the literature of the 13th century, references to the land and the language of Ashkenaz often occur. Examples include Solomon ben Aderet's Responsa (vol. i., No. 395); the Responsa of Asher ben Jehiel (pp. 4, 6); his Halakot (Berakot i. 12, ed. Wilna, p. 10); the work of his son Jacob ben Asher, Tur Orach Chayim (chapter 59); the Responsa of Isaac ben Sheshet (numbers 193, 268, 270).
In the Midrash compilation, Genesis Rabbah, Rabbi Berechiah mentions Ashkenaz, Riphath, and Togarmah as German tribes or as German lands. It may correspond to a Greek word that may have existed in the Greek dialect of the Jews in Syria Palaestina, or the text is corrupted from "Germanica." This view of Berechiah is based on the Talmud (Yoma 10a; Jerusalem Talmud Megillah 71b), where Gomer, the father of Ashkenaz, is translated by Germamia, which evidently stands for Germany, and which was suggested by the similarity of the sound.
In later times, the word Ashkenaz is used to designate southern and western Germany, the ritual of which sections differs somewhat from that of eastern Germany and Poland. Thus the prayer-book of Isaiah Horowitz, and many others, give the piyyutim according to the Minhag of Ashkenaz and Poland.
According to 16th-century mystic Rabbi Elijah of Chelm, Ashkenazi Jews lived in Jerusalem during the 11th century. The story is told that a German-speaking Jew saved the life of a young German man surnamed Dolberger. So when the knights of the First Crusade came to besiege Jerusalem, one of Dolberger's family members who was among them rescued Jews in Palestine and carried them back to Worms to repay the favor.Seder ha-Dorot, p. 252, 1878 ed. Further evidence of German communities in the holy city comes in the form of halakhic questions sent from Germany to Jerusalem during the second half of the 11th century.Epstein, in "Monatsschrift," xlvii. 344; Jerusalem: Under the Arabs
Modern history
Material relating to the history of German Jews has been preserved in the communal accounts of certain communities on the Rhine, a Memorbuch, and a Liebesbrief, documents that are now part of the Sassoon Collection.David Solomon Sassoon, Ohel Dawid (Descriptive catalogue of the Hebrew and Samaritan Manuscripts in the Sassoon Library, London), vol. 1, Oxford Univ. Press: London 1932, Introduction p. xxxix Heinrich Graetz has also added to the history of German Jewry in modern times in the abstract of his seminal work, History of the Jews, which he entitled "Volksthümliche Geschichte der Juden."
In an essay on Sephardi Jewry, Daniel Elazar at the Jerusalem Center for Public Affairs summarized the demographic history of Ashkenazi Jews in the last thousand years. He noted that at the end of the 11th century, 97% of world Jewry was Sephardic and 3% Ashkenazi; in the mid-17th century, "Sephardim still outnumbered Ashkenazim three to two", but by the end of the 18th century, "Ashkenazim outnumbered Sephardim three to two, the result of improved living conditions in Christian Europe versus the Ottoman Muslim world." (By the end of the 16th century, the 'Treaty on the Redemption of Captives', by Gracian of the God's Mother, a Mercy priest who had been imprisoned by the Turks, mentions a Tunisian Jew named 'Simon Escanasi', made captive on arriving at Gaeta, who aided others with money.) By 1931, Ashkenazi Jews accounted for nearly 92% of world Jewry. These figures reflect sheer demography, showing the migration patterns of Jews from Southern and Western Europe to Central and Eastern Europe.
In 1740 a family from Lithuania became the first Ashkenazi Jews to settle in the Jewish Quarter of Jerusalem.Kurzman, Don (1970) Genesis 1948. The First Arab-Israeli War. An Nal Book, New York. Library of Congress number 77-96925. p. 44
In the generations after emigration from the west, Jewish communities in places like Poland, Russia, and Belarus enjoyed a comparatively stable socio-political environment. A thriving publishing industry and the printing of hundreds of biblical commentaries precipitated the development of the Hasidic movement as well as major Jewish academic centers.Breuer, Edward. "Post-medieval Jewish Interpretation." The Jewish Study Bible. Ed. Adele Berlin and Marc Zvi Brettler. New York: Oxford University Press, 2004. 1900. After two centuries of comparative tolerance in the new nations, massive westward emigration occurred in the 19th and 20th centuries in response to pogroms in the east and the economic opportunities offered in other parts of the world. Ashkenazi Jews have made up the majority of the American Jewish community since 1750.
In the context of the European Enlightenment, Jewish emancipation began in 18th century France and spread throughout Western and Central Europe. Disabilities that had limited the rights of Jews since the Middle Ages were abolished, including the requirements to wear distinctive clothing, pay special taxes, and live in ghettos isolated from non-Jewish communities, and the prohibitions on certain professions. Laws were passed to integrate Jews into their host countries, forcing Ashkenazi Jews to adopt family names (they had formerly used patronymics). Newfound inclusion into public life led to cultural growth in the Haskalah, or Jewish Enlightenment, with its goal of integrating modern European values into Jewish life.Breuer, 1901 As a reaction to increasing antisemitism and assimilation following the emancipation, Zionism was developed in central Europe."Jews", William Bridgwater, ed. The Columbia-Viking Desk Encyclopedia; second ed., New York: Dell Publishing Co., 1964; p. 906. Other Jews, particularly those in the Pale of Settlement, turned to socialism. These tendencies would be united in Labor Zionism, the founding ideology of the State of Israel.
The Holocaust
Of the estimated 8.8 million Jews living in Europe at the beginning of World War II, the majority of whom were Ashkenazi, about 6 million – more than two-thirds – were systematically murdered in the Holocaust. These included 3 million of 3.3 million Polish Jews (91%); 900,000 of 1.5 million in Ukraine (60%); and 50–90% of the Jews of other Slavic nations, Germany, Hungary, and the Baltic states, and over 25% of the Jews in France. Sephardi communities suffered similar depletions in a few countries, including Greece, the Netherlands and the former Yugoslavia.
As the large majority of the victims were Ashkenazi Jews, their percentage dropped from nearly 92% of world Jewry in 1931 to nearly 80% of world Jewry today. The Holocaust also effectively put an end to the dynamic development of the Yiddish language in the previous decades, as the vast majority of the Jewish victims of the Holocaust, around 5 million, were Yiddish speakers.Solomo Birnbaum, Grammatik der jiddischen Sprache (4., erg. Aufl., Hamburg: Buske, 1984), p. 3. Many of the surviving Ashkenazi Jews emigrated to countries such as Israel, Canada, Argentina, Australia, and the United States after the war.
Following the Holocaust, some sources place Ashkenazim today as making up approximately 83–85 percent of Jews worldwide,Gershon Shafir, Yoav Peled, Being Israeli: The Dynamics of Multiple Citizenship Cambridge University Press 2002 p. 324 'The Zionist movement was a European movement in its goals and orientation and its target population was Ashkenazi Jews who constituted, in 1895, 90 percent of the 10.5 million Jews then living in the world (Smooha 1978: 51).'Encyclopædia Britannica, 'Today Ashkenazim constitute more than 80 percent of all the Jews in the world, vastly outnumbering Sephardic Jews.'Asher Arian (1981) in Itamar Rabinovich, Jehuda Reinharz, Israel in the Middle East: Documents and Readings on Society, Politics, and Foreign Relations, pre-1948 to the present UPNE/Brandeis University Press 2008 p. 324 "About 85 percent of the world's Jews are Ashkenazi"David Whitten Smith, Elizabeth Geraldine Burr, Understanding World Religions: A Road Map for Justice and Peace Rowman & Littlefield, 2007 p. 72 'Before the German Holocaust, about 90% of Jews worldwide were Ashkenazim. Since the Holocaust, the percentage has dropped to about 83%.' while Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up a notably lower figure, less than 74%. Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide. Ashkenazi Jews constitute around 35–36% of Israel's total population, or 47.5% of Israel's Jewish population.
Israel
In Israel, the term Ashkenazi is now used in a manner unrelated to its original meaning, often applied to all Jews who settled in Europe and sometimes including those whose ethnic background is actually Sephardic. Jews of any non-Ashkenazi background, including Mizrahi, Yemenite, Kurdish and others who have no connection with the Iberian Peninsula, have similarly come to be lumped together as Sephardic. Jews of mixed background are increasingly common, partly because of intermarriage between Ashkenazi and non-Ashkenazi, and partly because many do not see such historic markers as relevant to their life experiences as Jews.
Religious Ashkenazi Jews living in Israel are obliged to follow the authority of the chief Ashkenazi rabbi in halakhic matters. In this respect, a religiously Ashkenazi Jew is an Israeli who is more likely to support certain religious interests in Israel, including certain political parties. These political parties result from the fact that a portion of the Israeli electorate votes for Jewish religious parties; although the electoral map changes from one election to another, there are generally several small parties associated with the interests of religious Ashkenazi Jews. The role of religious parties, including small religious parties that play important roles as coalition members, results in turn from Israel's composition as a complex society in which competing social, economic, and religious interests stand for election to the Knesset, a unicameral legislature with 120 seats.
People of Ashkenazi descent constitute around 47.5% of Israeli Jews (and therefore 35–36% of Israelis). They have played a prominent role in the economy, media, and politics of Israel since its founding, and every President of Israel since the country's foundation in 1948 has been an Ashkenazi Jew. During the first decades of Israel as a state, strong cultural conflict occurred between Sephardic and Ashkenazi Jews (mainly east European Ashkenazim). The roots of this conflict, which still exists to a much smaller extent in present-day Israeli society, are chiefly attributed to the concept of the "melting pot". That is to say, all Jewish immigrants who arrived in Israel were strongly encouraged to "melt down" their own particular exilic identities within the general social "pot" in order to become Israeli.Yitzhaki, Shlomo and Schechtman, Edna, The "Melting Pot": A Success Story? Journal of Economic Inequality, Vol. 7, No. 2, June 2009, pp. 137–51. Earlier version by Schechtman, Edna and Yitzhaki, Shlomo, Working Paper No. 32, Central Bureau of Statistics, Jerusalem, Nov. 2007, i + 30 pp.
The Ashkenazi Chief Rabbis in the Yishuv and Israel include:
Abraham Isaac Kook: (23 February 1921 – 1 September 1935)
Isaac Halevi Herzog: (1937 – 25 July 1959)
Isser Yehuda Unterman: (1964–1972)
Shlomo Goren: (1972–1983)
Avraham Shapira: (1983–1993)
Israel Meir Lau: (1993 – 3 April 2003)
She'ar Yashuv Cohen (acting): (3 April 2003 – 14 April 2003)
Yona Metzger: (14 April 2003 – 14 August 2013)
David Lau: (14 August 2013 – present)
Definition
By religion
Religious Jews have Minhagim, customs, in addition to Halakha, or religious law, and different interpretations of law. Different groups of religious Jews in different geographic areas historically adopted different customs and interpretations. On certain issues, Orthodox Jews are required to follow the customs of their ancestors, and do not believe they have the option of picking and choosing. For this reason, observant Jews at times find it important for religious reasons to ascertain who their household's religious ancestors are in order to know what customs their household should follow. These times include, for example, when two Jews of different ethnic background marry, when a non-Jew converts to Judaism and determines what customs to follow for the first time, or when a lapsed or less observant Jew returns to traditional Judaism and must determine what was done in his or her family's past. In this sense, "Ashkenazic" refers both to a family ancestry and to a body of customs binding on Jews of that ancestry. Reform Judaism, which does not necessarily follow those minhagim, did nonetheless originate among Ashkenazi Jews."The Origins of Reform Judaism." Jewish Virtual Library. 27 May 2014.
In a religious sense, an Ashkenazi Jew is any Jew whose family tradition and ritual follows Ashkenazi practice. Until the Ashkenazi community first began to develop in the Early Middle Ages, the centers of Jewish religious authority were in the Islamic world, at Baghdad and in Islamic Spain. Ashkenaz (Germany) was so distant geographically that it developed a minhag of its own. Ashkenazi Hebrew came to be pronounced in ways distinct from other forms of Hebrew."Pronunciations of Hebrew." Jewish Virtual Library. 27 May 2014.
In this respect, the counterpart of Ashkenazi is Sephardic, since most non-Ashkenazi Orthodox Jews follow Sephardic rabbinical authorities, whether or not they are ethnically Sephardic. By tradition, a Sephardic or Mizrahi woman who marries into an Orthodox or Haredi Ashkenazi Jewish family raises her children to be Ashkenazi Jews; conversely an Ashkenazi woman who marries a Sephardi or Mizrahi man is expected to take on Sephardic practice and the children inherit a Sephardic identity, though in practice many families compromise. A convert generally follows the practice of the beth din that converted him or her. With the integration of Jews from around the world in Israel, North America, and other places, the religious definition of an Ashkenazi Jew is blurring, especially outside Orthodox Judaism.
New developments in Judaism often transcend differences in religious practice between Ashkenazi and Sephardic Jews. In North American cities, social trends such as the chavurah movement, and the emergence of "post-denominational Judaism" often bring together younger Jews of diverse ethnic backgrounds. In recent years, there has been increased interest in Kabbalah, which many Ashkenazi Jews study outside of the Yeshiva framework. Another trend is the new popularity of ecstatic worship in the Jewish Renewal movement and the Carlebach style minyan, both of which are nominally of Ashkenazi origin.
By culture
Culturally, an Ashkenazi Jew can be identified by the concept of Yiddishkeit, which means "Jewishness" in the Yiddish language. Yiddishkeit is specifically the Jewishness of Ashkenazi Jews. Before the Haskalah and the emancipation of Jews in Europe, this meant the study of Torah and Talmud for men, and a family and communal life governed by the observance of Jewish Law for men and women. From the Rhineland to Riga to Romania, most Jews prayed in liturgical Ashkenazi Hebrew, and spoke Yiddish in their secular lives. But with modernization, Yiddishkeit now encompasses not just Orthodoxy and Hasidism, but a broad range of movements, ideologies, practices, and traditions in which Ashkenazi Jews have participated and somehow retained a sense of Jewishness. Although a far smaller number of Jews still speak Yiddish, Yiddishkeit can be identified in manners of speech, in styles of humor, in patterns of association. Broadly speaking, a Jew is one who associates culturally with Jews, supports Jewish institutions, reads Jewish books and periodicals, attends Jewish movies and theater, travels to Israel, visits historical synagogues, and so forth. It is a definition that applies to Jewish culture in general, and to Ashkenazi Yiddishkeit in particular.
As Ashkenazi Jews moved away from their historic population centers, mostly in the form of aliyah to Israel, or immigration to North America and other English-speaking areas such as South Africa, as well as to western Europe (particularly France) and Latin America, the geographic isolation that gave rise to the Ashkenazim has given way to mixing with other cultures, and with non-Ashkenazi Jews who, similarly, are no longer isolated in distinct geographic locales. Hebrew has replaced Yiddish as the primary Jewish language for many Ashkenazi Jews, although many Hasidic and Hareidi groups continue to use Yiddish in daily life. (There are numerous Ashkenazi Jewish anglophones and Russian-speakers as well, although English and Russian are not originally Jewish languages.)
France's blended Jewish community is typical of the cultural recombination that is going on among Jews throughout the world. Although France expelled its original Jewish population in the Middle Ages, by the time of the French Revolution, there were two distinct Jewish populations. One consisted of Sephardic Jews, originally refugees from the Inquisition and concentrated in the southwest, while the other community was Ashkenazi, concentrated in formerly German Alsace, and mainly speaking a German dialect similar to Yiddish. (A third community of Provençal Jews living in Comtat Venaissin were technically outside France, and were later absorbed into the Sephardim.) The two communities were so separate and different that the National Assembly emancipated them separately in 1790 and 1791."French Revolution." Jewish Virtual Library. 2008. 29 May 2014.
But after emancipation, a sense of a unified French Jewry emerged, especially when France was wracked by the Dreyfus affair in the 1890s. In the 1920s and 1930s, Ashkenazi Jews from Europe arrived in large numbers as refugees from antisemitism, the Russian revolution, and the economic turmoil of the Great Depression. By the 1930s, Paris had a vibrant Yiddish culture, and many Jews were involved in diverse political movements. After the Vichy years and the Holocaust, the French Jewish population was augmented once again, first by Ashkenazi refugees from Central Europe, and later by Sephardi immigrants and refugees from North Africa, many of them francophone.
Then, in the 1990s, yet another Ashkenazi Jewish wave began to arrive from countries of the former Soviet Union and Central Europe. The result is a pluralistic Jewish community that still has some distinct elements of both Ashkenazi and Sephardic culture. But in France, it is becoming much more difficult to sort out the two, and a distinctly French Jewishness has emerged.Wall, Irwin. (2002) "Remaking Jewish Identity in France", in Howard Wettstein, Diaspora's and Exiles. University of California Press, pp. 164–90.
By ethnicity
In an ethnic sense, an Ashkenazi Jew is one whose ancestry can be traced to the Jews who settled in Central Europe. For roughly a thousand years, the Ashkenazim were a reproductively isolated population in Europe, despite living in many countries, with little inflow or outflow from migration, conversion, or intermarriage with other groups, including other Jews. Human geneticists have identified genetic variations that show high frequencies among Ashkenazi Jews but not in the general European population, whether for patrilineal markers (Y-chromosome haplotypes) or for matrilineal markers (mitotypes). Since the middle of the 20th century, many Ashkenazi Jews have intermarried, both with members of other Jewish communities and with people of other nations and faiths.
A 2006 study found Ashkenazi Jews to be a clear, homogeneous genetic subgroup. Strikingly, regardless of the place of origin, Ashkenazi Jews can be grouped in the same genetic cohort – that is, regardless of whether an Ashkenazi Jew's ancestors came from Poland, Russia, Hungary, Lithuania, or any other place with a historical Jewish population, they belong to the same ethnic group. The research demonstrates the endogamy of the Jewish population in Europe and lends further credence to the idea of Ashkenazi Jews as an ethnic group. Moreover, though intermarriage among Jews of Ashkenazi descent has become increasingly common, many Haredi Jews, particularly members of Hasidic sects, continue to marry exclusively fellow Ashkenazi Jews. This trend keeps Ashkenazi genes prevalent and also helps researchers further study the genes of Ashkenazi Jews with relative ease. These Haredi Jews often have extremely large families.
Customs, laws and traditions
The Halakhic practices of (Orthodox) Ashkenazi Jews may differ from those of Sephardi Jews, particularly in matters of custom. Differences are noted in the Shulkhan Arukh itself, in the gloss of Moses Isserles. Well known differences in practice include:
Image: The chevra kadisha (Jewish burial society), Prague, 1772.
Observance of Pesach (Passover): Ashkenazi Jews traditionally refrain from eating legumes, grain, millet, and rice (quinoa, however, has become accepted as permissible in North American communities), whereas Sephardi Jews typically do not prohibit these foods.
Ashkenazi Jews freely mix and eat fish and milk products; some Sephardic Jews refrain from doing so.
Ashkenazim are more permissive toward the usage of wigs as a hair covering for married and widowed women.
In the case of kashrut for meat, conversely, Sephardi Jews have stricter requirements – this level is commonly referred to as Beth Yosef. Meat products that are acceptable to Ashkenazi Jews as kosher may therefore be rejected by Sephardi Jews. Notwithstanding stricter requirements for the actual slaughter, Sephardi Jews permit the rear portions of an animal after proper Halakhic removal of the sciatic nerve, while many Ashkenazi Jews do not. This is not because of different interpretations of the law; rather, slaughterhouses could not find workers with adequate skill for correct removal of the sciatic nerve and found it more economical to separate the hindquarters and sell them as non-kosher meat.
Ashkenazi Jews frequently name newborn children after deceased family members, but not after living relatives. Sephardi Jews, in contrast, often name their children after the children's grandparents, even if those grandparents are still living. A notable exception to this generally reliable rule is among Dutch Jews, where Ashkenazim for centuries used the naming conventions otherwise attributed exclusively to Sephardim such as Chuts.
Ashkenazi tefillin bear some differences from Sephardic tefillin. In the traditional Ashkenazic rite, the tefillin are wound towards the body, not away from it. Ashkenazim traditionally don tefillin while standing, whereas other Jews generally do so while sitting down.
Ashkenazic traditional pronunciations of Hebrew differ from those of other groups. The most prominent consonantal difference from Sephardic and Mizrahic Hebrew dialects is the pronunciation of the Hebrew letter tav in certain Hebrew words (historically, in postvocalic undoubled context) as an /s/ and not a /t/ or /θ/ sound.
The prayer shawl, or tallit (or tallis in Ashkenazi Hebrew), is worn by the majority of Ashkenazi men after marriage, but western European Ashkenazi men wear it from Bar Mitzvah. In Sephardi or Mizrahi Judaism, the prayer shawl is commonly worn from early childhood.
Ashkenazic liturgy
The term Ashkenazi also refers to the nusach Ashkenaz (Hebrew, "liturgical tradition", or rite) used by Ashkenazi Jews in their Siddur (prayer book). A nusach is defined by a liturgical tradition's choice of prayers, the order of prayers, the text of prayers, and melodies used in the singing of prayers. Two other major forms of nusach among Ashkenazic Jews are Nusach Sefard (not to be confused with the Sephardic ritual), which is the general Polish Hasidic nusach, and Nusach Ari, as used by Lubavitch Hasidim.
Ashkenazi as a surname
Several famous people have Ashkenazi as a surname, such as Vladimir Ashkenazy. However, most people with this surname hail from within Sephardic communities, particularly from the Syrian Jewish community. These Sephardic bearers of the surname do have some Ashkenazi ancestors, since the surname was adopted by families who were originally of Ashkenazic origin and who moved to Sephardi countries and joined those communities. Ashkenazi started off as a nickname imposed by their adopted communities and was later formally adopted as the family surname. Some have shortened the name to Ash.
Relations with Sephardim
Relations between Ashkenazim and Sephardim have not always been warm. North African Sephardim and Berber Jews were often looked upon by Ashkenazim as second-class citizens during the first decade after the creation of Israel. This led to protest movements such as the Israeli Black Panthers, led by Saadia Marciano, a Moroccan Jew. Relations have since improved. In some instances, Ashkenazi communities have accepted significant numbers of Sephardi newcomers, sometimes resulting in intermarriage.Shahar, Charles. "A Comprehensive Study of the Ultra Orthodox Community of Greater Montreal (2003)." Federation CJA (Montreal). 2003.
Notable Ashkenazim
Ashkenazi Jews have a noted history of achievement in Western societies in the fields of exact and social sciences, literature, finance, politics, media, and others. In those societies where they have been free to enter any profession, they have a record of high occupational achievement, entering professions and fields of commerce where higher education is required. Ashkenazi Jews have won a large number of Nobel Prizes. While they make up about 2% of the U.S. population,G. Cochran, J. Hardy, H. Harpending. "Natural History of Ashkenazi Intelligence", Journal of Biosocial Science 38 (5), pp. 659–93 (2006), University of Utah 27% of United States Nobel prize winners in the 20th century, a quarter of Fields Medal winners, 25% of ACM Turing Award winners, half the world's chess champions, including 8% of the top 100 world chess players, and a quarter of Westinghouse Science Talent Search winners have Ashkenazi Jewish ancestry.
Time magazine's person of the 20th century, Albert Einstein, was an Ashkenazi Jew. According to a study performed by Cambridge University, 21% of Ivy League students, 25% of Turing Award winners, 23% of the wealthiest Americans, 38% of Oscar-winning film directors, and 29% of Oslo awardees are Ashkenazi Jews.
Genetics
Genetic origins
Efforts to identify the origins of Ashkenazi Jews through DNA analysis began in the 1990s. Currently, there are three types of genetic origin testing: autosomal DNA (atDNA), mitochondrial DNA (mtDNA), and Y-chromosomal DNA (Y-DNA). Autosomal DNA is a mixture from an individual's entire ancestry; Y-DNA shows a male's lineage only along his strict paternal line; mtDNA shows any person's lineage only along the strict maternal line. Genome-wide association studies have also been employed to yield findings relevant to genetic origins.
Like most DNA studies of human migration patterns, the earliest studies on Ashkenazi Jews focused on the Y-DNA and mtDNA segments of the human genome. Both segments are unaffected by recombination (except for the ends of the Y chromosome – the pseudoautosomal regions known as PAR1 and PAR2), thus allowing tracing of direct maternal and paternal lineages.
These studies revealed that Ashkenazi Jews originate from an ancient (2000 BCE - 700 BCE) population of the Middle East who had spread to Europe.
Ashkenazic Jews display the homogeneity of a genetic bottleneck, meaning they descend from a larger population whose numbers were greatly reduced but recovered through a few founding individuals.
Although the Jewish people, in general, were present across a wide geographical area as described, genetic research done by Gil Atzmon of the Longevity Genes Project at Albert Einstein College of Medicine suggests "that Ashkenazim branched off from other Jews around the time of the destruction of the First Temple, 2,500 years ago ... flourished during the Roman Empire but then went through a 'severe bottleneck' as they dispersed, reducing a population of several million to just 400 families who left Northern Italy around the year 1000 for Central and eventually Eastern Europe."
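The effect of such a bottleneck on lineage diversity can be illustrated with a toy simulation. The sketch below (Python, illustrative only) uses a simple Wright-Fisher style model in which each generation is drawn at random from the previous one; the population sizes and all other numbers are invented for illustration and are not taken from Atzmon's analysis.

```python
# Toy illustration of how a population bottleneck reduces lineage diversity.
# Simple haploid Wright-Fisher model: each individual in a generation picks a
# parent uniformly at random from the previous generation. Numbers are invented.
import random

def surviving_lineages(sizes, seed=0):
    random.seed(seed)
    population = list(range(sizes[0]))            # each founder is a distinct lineage
    for size in sizes[1:]:
        population = [random.choice(population) for _ in range(size)]
    return len(set(population))                   # distinct founder lineages remaining

# Large population, a sharp bottleneck, then recovery to a large size.
trajectory = [10_000] * 5 + [400] * 5 + [10_000] * 20
print(surviving_lineages(trajectory), "founder lineages survive out of 10,000")
```

Even after the population recovers in size, the lineages lost during the bottleneck do not return, which is why a recovered population can remain genetically homogeneous.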
Various studies have arrived at diverging conclusions regarding both the degree and the sources of the non-Levantine admixture in Ashkenazim, particularly with respect to the extent of the non-Levantine genetic origin observed in Ashkenazi maternal lineages, which is in contrast to the predominant Levantine genetic origin observed in Ashkenazi paternal lineages. All studies nevertheless agree that genetic overlap with the Fertile Crescent exists in both lineages, albeit at differing rates. Collectively, Ashkenazi Jews are less genetically diverse than other Jewish ethnic divisions, due to their genetic bottleneck.
Male lineages: Y-chromosomal DNA
The majority of genetic findings to date concerning Ashkenazi Jews conclude that the male line was founded by ancestors from the Middle East. Natural History 102:11 (November 1993): 12–19. Others have found a similar genetic line among Greeks and Macedonians.
A study of haplotypes of the Y-chromosome, published in 2000, addressed the paternal origins of Ashkenazi Jews. Hammer et al. found that the Y-chromosome of Ashkenazi and Sephardic Jews contained mutations that are also common among other Middle Eastern peoples, but uncommon in the autochthonous European population. This suggested that the male ancestors of the Ashkenazi Jews could be traced mostly to the Middle East. The proportion of male genetic admixture in Ashkenazi Jews amounts to less than 0.5% per generation over an estimated 80 generations, with "relatively minor contribution of European Y chromosomes to the Ashkenazim," and a total admixture estimate "very similar to Motulsky's average estimate of 12.5%." This supported the finding that "Diaspora Jews from Europe, Northwest Africa, and the Near East resemble each other more closely than they resemble their non-Jewish neighbors." "Past research found that 50–80 percent of DNA from the Ashkenazi Y chromosome, which is used to trace the male lineage, originated in the Near East," Richards said.
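To make the compounding explicit, the sketch below (Python, illustrative only) assumes a constant admixture rate per generation; the constant-rate model and the numbers plugged in are simplifying assumptions for illustration, not the estimation procedure actually used by Hammer et al. or Motulsky.

```python
# Illustrative sketch only: a constant per-generation admixture rate is an
# assumption, not the estimation method used in the cited studies.

def cumulative_admixture(rate_per_generation, generations):
    """Total fraction of ancestry from the admixing population after
    `generations` generations at a constant per-generation rate."""
    return 1.0 - (1.0 - rate_per_generation) ** generations

def rate_for_total(total, generations):
    """Constant per-generation rate that yields `total` cumulative admixture."""
    return 1.0 - (1.0 - total) ** (1.0 / generations)

generations = 80        # estimate cited in the text
total = 0.125           # Motulsky's average estimate of 12.5%
rate = rate_for_total(total, generations)
print(f"{rate:.3%} per generation -> "
      f"{cumulative_admixture(rate, generations):.1%} total over {generations} generations")
# Prints roughly "0.167% per generation -> 12.5% total over 80 generations",
# consistent with the stated bound of less than 0.5% per generation.
```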
The population has subsequently spread out. Based on accounts such as those of Jewish historian Flavius Josephus, by the time of the destruction of the Second Temple in 70 CE, as many as six million Jews were already living in the Roman Empire, but outside of Israel, mainly in Italy and Southern Europe. In contrast, only about 500,000 lived in Judea, said Ostrer, who was not involved in the new study.
A 2001 study by Nebel et al. showed that both Ashkenazi and Sephardic Jewish populations share the same overall paternal Near Eastern ancestries. In comparison with data available from other relevant populations in the region, Jews were found to be more closely related to groups in the north of the Fertile Crescent. The authors also report that Eu 19 (R1a) chromosomes, which are very frequent in Central and Eastern Europeans (54%–60%), occur at elevated frequency (12.7%) in Ashkenazi Jews. They hypothesized that the differences among Ashkenazi Jews could reflect low-level gene flow from surrounding European populations or genetic drift during isolation.Almut Nebel, Dvora Filon, Bernd Brinkmann, Partha P. Majumder, Marina Faerman, Ariella Oppenheim. "The Y Chromosome Pool of Jews as Part of the Genetic Landscape of the Middle East", The American Journal of Human Genetics (2001), Volume 69, number 5. pp. 1095–112 A later 2005 study by Nebel et al. found a similar level of 11.5% of male Ashkenazim belonging to R1a1a (M17+), the dominant Y-chromosome haplogroup in Central and Eastern Europeans.
Female lineages: Mitochondrial DNA
Before 2006, geneticists had largely attributed the ethnogenesis of most of the world's Jewish populations, including Ashkenazi Jews, to Israelite Jewish male migrants from the Middle East and "the women from each local population whom they took as wives and converted to Judaism." Thus, in 2002, in line with this model of origin, David Goldstein, now of Duke University, reported that unlike male Ashkenazi lineages, the female lineages in Ashkenazi Jewish communities "did not seem to be Middle Eastern", and that each community had its own genetic pattern and even that "in some cases the mitochondrial DNA was closely related to that of the host community." In his view, this suggested, "that Jewish men had arrived from the Middle East, taken wives from the host population and converted them to Judaism, after which there was no further intermarriage with non-Jews."
In 2006, a study by Behar et al., based on what was at that time high-resolution analysis of haplogroup K (mtDNA), suggested that about 40% of the current Ashkenazi population is descended matrilineally from just four women, or "founder lineages", that were "likely from a Hebrew/Levantine mtDNA pool" originating in the Middle East in the 1st and 2nd centuries CE. Additionally, Behar et al. suggested that the rest of Ashkenazi mtDNA originated from ~150 women, and that most of those were also likely of Middle Eastern origin. In reference specifically to Haplogroup K, they suggested that although it is common throughout western Eurasia, "the observed global pattern of distribution renders very unlikely the possibility that the four aforementioned founder lineages entered the Ashkenazi mtDNA pool via gene flow from a European host population".
In 2013, however, a study of Ashkenazi mitochondrial DNA by a team led by Martin B. Richards of the University of Huddersfield in England reached different conclusions, corroborating the pre-2006 origin hypothesis. Testing was performed on the full 16,600 DNA units composing mitochondrial DNA (the 2006 Behar study had only tested 1,000 units) in all their subjects, and the study found that the four main female Ashkenazi founders had descent lines that were established in Europe 10,000 to 20,000 years in the past while most of the remaining minor founders also have a deep European ancestry. The study states that the great majority of Ashkenazi maternal lineages were not brought from the Near East (i.e., they were non-Israelite), nor were they recruited in the Caucasus (i.e., they were non-Khazar), but instead they were assimilated within Europe, primarily of Italian and Old French origins. Richards summarized the findings on the female line as such: "[N]one [of the mtDNA] came from the North Caucasus, located along the border between Europe and Asia between the Black and Caspian seas. All of our presently available studies including my own, should thoroughly debunk one of the most questionable, but still tenacious, hypotheses: that most Ashkenazi Jews can trace their roots to the mysterious Khazar Kingdom that flourished during the ninth century in the region between the Byzantine Empire and the Persian Empire." The 2013 study estimated that 80 percent of Ashkenazi maternal ancestry comes from women indigenous to Europe, and only 8 percent from the Near East, while the origin of the remainder is undetermined. According to the study these findings "point to a significant role for the conversion of women in the formation of Ashkenazi communities." Karl Skorecki at Technion criticized the study for perceived flaws in phylogenetic analysis. "While Costa et al have re-opened the question of the maternal origins of Ashkenazi Jewry, the phylogenetic analysis in the manuscript does not 'settle' the question."European link to Jewish maternal ancestry
A 2014 study by Fernández et al. found that Ashkenazi Jews display a frequency of haplogroup K in their maternal DNA that suggests an ancient Near Eastern origin, similar to the results of Behar. The authors stated that this observation clearly contradicts the results of the study led by Richards that suggested a European source for three exclusively Ashkenazi K lineages.
Association and linkage studies
In genetic epidemiology, a genome-wide association study (GWA study, or GWAS) is an examination of all or most of the genes (the genome) of different individuals of a particular species to see how much the genes vary from individual to individual. These techniques were originally designed for epidemiological uses, to identify genetic associations with observable traits.
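As a toy illustration of the idea (not a reproduction of any study cited here), the sketch below tests a single SNP for association with a trait by comparing allele counts between cases and controls; the counts are invented for the example.

```python
# Toy single-SNP case-control association test, the basic building block of a
# GWAS. The allele counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: counts of allele A and allele a at one SNP.
counts = [[130, 70],    # cases
          [90, 110]]    # controls

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")

# A real GWAS repeats such a test (or a regression with covariates such as
# ancestry principal components) across hundreds of thousands of SNPs and
# corrects for multiple testing, commonly using a genome-wide significance
# threshold of about 5e-8.
```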
A 2006 study by Seldin et al. used over five thousand autosomal SNPs to demonstrate European genetic substructure. The results showed "a consistent and reproducible distinction between 'northern' and 'southern' European population groups". Most northern, central, and eastern Europeans (Finns, Swedes, English, Irish, Germans, and Ukrainians) showed >90% in the "northern" population group, while most individual participants with southern European ancestry (Italians, Greeks, Portuguese, Spaniards) showed >85% in the "southern" group. Both Ashkenazi Jews as well as Sephardic Jews showed >85% membership in the "southern" group. Referring to the Jews clustering with southern Europeans, the authors state the results were "consistent with a later Mediterranean origin of these ethnic groups".
A 2007 study by Bauchet et al. found that Ashkenazi Jews were most closely clustered with Arabic North African populations when compared to the global population, and that in the European structure analysis they share similarities only with Greeks and Southern Italians, reflecting their east Mediterranean origins.
A 2010 study on Jewish ancestry by Atzmon-Ostrer et al. stated "Two major groups were identified by principal component, phylogenetic, and identity by descent (IBD) analysis: Middle Eastern Jews and European/Syrian Jews. The IBD segment sharing and the proximity of European Jews to each other and to southern European populations suggested similar origins for European Jewry and refuted large-scale genetic contributions of Central and Eastern European and Slavic populations to the formation of Ashkenazi Jewry", as both groups – the Middle Eastern Jews and European/Syrian Jews – shared common ancestors in the Middle East about 2500 years ago. The study examines genetic markers spread across the entire genome and shows that the Jewish groups (Ashkenazi and non Ashkenazi) share large swaths of DNA, indicating close relationships and that each of the Jewish groups in the study (Iranian, Iraqi, Syrian, Italian, Turkish, Greek and Ashkenazi) has its own genetic signature but is more closely related to the other Jewish groups than to their fellow non-Jewish countrymen. Atzmon's team found that the SNP markers in genetic segments of 3 million DNA letters or longer were 10 times more likely to be identical among Jews than non-Jews. Results of the analysis also tally with biblical accounts of the fate of the Jews. The study also found that with respect to non-Jewish European groups, the population most closely related to Ashkenazi Jews are modern-day Italians. The study speculated that the genetic-similarity between Ashkenazi Jews and Italians may be due to inter-marriage and conversions in the time of the Roman Empire. It was also found that any two Ashkenazi Jewish participants in the study shared about as much DNA as fourth or fifth cousins.
A 2010 study by Bray et al., using SNP microarray techniques and linkage analysis, found that when assuming Druze and Palestinian Arab populations to represent the reference to world Jewry ancestor genome, between 35 and 55 percent of the modern Ashkenazi genome can possibly be of European origin, and that European "admixture is considerably higher than previous estimates by studies that used the Y chromosome" with this reference point. Assuming this reference point, the linkage disequilibrium in the Ashkenazi Jewish population was interpreted as "matches signs of interbreeding or 'admixture' between Middle Eastern and European populations". On the Bray et al. tree, Ashkenazi Jews were found to be a genetically more divergent population than Russians, Orcadians, French, Basques, Italians, Sardinians and Tuscans. The study also observed that Ashkenazim are more diverse than their Middle Eastern relatives, which was counterintuitive because Ashkenazim are supposed to be a subset, not a superset, of their assumed geographical source population. Bray et al. therefore postulate that these results reflect not the population antiquity but a history of mixing between genetically distinct populations in Europe. However, it is possible that it was the relaxation of marriage prescriptions in the ancestors of the Ashkenazim that drove their heterozygosity up, while the maintenance of the FBD (father's brother's daughter) marriage rule among native Middle Easterners kept their heterozygosity values in check. The Ashkenazim's distinctiveness as found in the Bray et al. study, therefore, may come from their ethnic endogamy (ethnic inbreeding), which allowed them to "mine" their ancestral gene pool in the context of relative reproductive isolation from European neighbors, and not from clan endogamy (clan inbreeding). Consequently, their higher diversity compared to Middle Easterners stems from the latter's marriage practices, not necessarily from the former's admixture with Europeans.
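For context on the term, heterozygosity here is essentially the chance that two gene copies drawn at random from a population differ; a minimal sketch of the standard expected-heterozygosity formula, with made-up allele frequencies, is shown below.

```python
# Minimal sketch: expected heterozygosity at one locus, H_e = 1 - sum(p_i^2),
# i.e. the probability that two randomly drawn alleles differ.
# The allele frequencies below are made up for illustration.

def expected_heterozygosity(allele_frequencies):
    return 1.0 - sum(p * p for p in allele_frequencies)

skewed_frequencies = [0.85, 0.10, 0.05]   # one allele dominates -> lower diversity
even_frequencies = [0.40, 0.35, 0.25]     # more even frequencies -> higher diversity

print(expected_heterozygosity(skewed_frequencies))  # ~0.265
print(expected_heterozygosity(even_frequencies))    # ~0.655
```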
The genome-wide genetic study carried out in 2010 by Behar et al. examined the genetic relationships among all major Jewish groups, including Ashkenazim, as well as the genetic relationship between these Jewish groups and non-Jewish ethnic populations. The study found that contemporary Jews (excluding Indian and Ethiopian Jews) have a close genetic relationship with people from the Levant. The authors explained that "the most parsimonious explanation for these observations is a common genetic origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant".
The Khazar hypothesis
In the late 19th century, it was proposed that the core of today's Ashkenazi Jewry are genetically descended from a hypothetical Khazarian Jewish diaspora who had migrated westward from modern Russia and Ukraine into modern France and Germany (as opposed to the currently held theory that Jews from France and Germany migrated into Eastern Europe). The hypothesis is not corroborated by historical sources (The Karaites of Galicia: An Ethnoreligious Minority Among the Ashkenazim) and is unsubstantiated by genetics, but it is still occasionally supported by scholars who have had some success in keeping the theory in the academic consciousness. The theory is associated with antisemitism and anti-Zionism.
A 2013 trans-genome study carried out by 30 geneticists from 13 universities and academies in 9 countries, assembling the largest data set available to date for assessment of Ashkenazi Jewish genetic origins, found no evidence of Khazar origin among Ashkenazi Jews. "Thus, analysis of Ashkenazi Jews together with a large sample from the region of the Khazar Khaganate corroborates the earlier results that Ashkenazi Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution either from within or from north of the Caucasus region", the authors concluded.
Medical genetics
There are many references to Ashkenazi Jews in the literature of medical and population genetics. Indeed, much awareness of "Ashkenazi Jews" as an ethnic group or category stems from the large number of genetic studies of disease, including many that are well reported in the media, that have been conducted among Jews. Jewish populations have been studied more thoroughly than most other human populations, for a variety of reasons:
Jewish populations, and particularly the large Ashkenazi Jewish population, are ideal for such research studies, because they exhibit a high degree of endogamy, yet they are sizable.
Jewish communities are comparatively well informed about genetics research, and have been supportive of community efforts to study and prevent genetic diseases.
The result is a form of ascertainment bias. This has sometimes created an impression that Jews are more susceptible to genetic disease than other populations. Healthcare professionals are often taught to consider those of Ashkenazi descent to be at increased risk for colon cancer.Agency for Healthcare Research and Quality. (2009). The guide to clinical preventive services 2009. AHRQ Publication No. 09-IP006.
Genetic counseling and genetic testing are often undertaken by couples where both partners are of Ashkenazi ancestry. Some organizations, most notably Dor Yeshorim, organize screening programs to prevent homozygosity for the genes that cause related diseases.E. L. Abel's book Jewish Genetic Disorders: A Layman's Guide, McFarland, 2008: ISBN 0-7864-4087-2See Chicago Center for Jewish Genetic Disorders
See also
History of the Jews in Europe
History of the Jews in Germany
History of the Jews in Poland
History of the Jews in Russia (Ukraine, Belarus)
Jewish ethnic divisions
List of Israeli Ashkenazi Jews
Memorbuch, a book dedicated to the memory of martyrs
Nusach Ashkenaz
Oberlander Jews
Sephardi Jews
References
References for "Who is an Ashkenazi Jew?"
Other references
Beider, Alexander (2001): A Dictionary of Ashkenazic Given Names: Their Origins, Structure, Pronunciations, and Migrations. Avotaynu. ISBN 1-886223-12-2.
Biale, David (2002): Cultures of the Jews: A New History. Schoken Books. ISBN 0-8052-4131-0.
Brook, Kevin Alan (2003): "The Origins of East European Jews" in Russian History/Histoire Russe vol. 30, nos. 1–2, pp. 1–22.
Gross, N. (1975): Economic History of the Jews. Schocken Books, New York.
Haumann, Heiko (2001): A History of East European Jews. Central European University Press. ISBN 963-9241-26-1.
Kriwaczek, Paul (2005): Yiddish Civilization: The Rise and Fall of a Forgotten Nation. Alfred A. Knopf, New York. ISBN 1-4000-4087-6
Lewis, Bernard (1984): The Jews of Islam. Princeton University Press. ISBN 0-691-05419-3.
Bukovec, Predrag: East and South-East European Jews in the 19th and 20th Centuries, European History Online, Mainz: Institute of European History, 2010, retrieved: 17 December 2012.
Vital, David (1999): A People Apart: A History of the Jews in Europe. Oxford University Press. ISBN 0-19-821980-6.
External links
The YIVO Encyclopedia of Jews in Eastern Europe
Ashkenazi history at the Jewish Virtual Library
Ashkenazi Jewish mtDNA haplogroup distribution varies among distinct subpopulations: lessons of population substructure in a closed group-European Journal of Human Genetics, 2007
"Analysis of genetic variation in Ashkenazi Jews by high density SNP genotyping"
Nusach Ashkenaz, and Discussion Forum
Ashkenaz Heritage
Category:Ashkenazi Jews topics
Category:Ethnic groups in Israel
Category:Ethnic groups in Russia
Category:Ethnic groups in the United States
Category:Jewish diaspora
Category:Jewish ethnic groups
Category:Middle Eastern people
Category:Semitic-speaking peoples | 150,184 | 2017-01 |
Immunology | Immunology is a branch of biology that covers the study of immune systems in all organisms.Janeway's Immunobiology textbook Searchable free online version at the National Center for Biotechnology Information It was the Russian biologist Ilya Ilyich Mechnikov who advanced the study of immunology and received the Nobel Prize in 1908 for his work. He inserted a rose thorn into a starfish and noted that, 24 hours later, cells were surrounding the tip. It was an active response of the body, trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body, and coined the term. Immunology charts, measures, and contextualizes: the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, virology, bacteriology, parasitology, psychiatry, and dermatology.
The term immunity derives from the Latin root immunis, meaning "exempt". Even before immunity was so designated, early physicians characterized organs that would later be shown to be essential components of the immune system. The important lymphoid organs of the immune system are the thymus and bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes and other lymphatic tissues, can be surgically excised for examination while patients are still alive.
Many components of the immune system are typically cellular in nature and not associated with any specific organ; but rather are embedded or circulating in various tissues located throughout the body.
Classical immunology
Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries before the concept developed into scientific theory.
The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components.
The humoral (antibody) response is defined as the interaction between antibodies and antigens. Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies ("anti"body "gen"erators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both.
Immunological research continues to become more specialized, pursuing non-classical models of immunity and functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010).
Clinical immunology
Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features.
The diseases caused by disorders of the immune system fall into two broad categories:
immunodeficiency, in which parts of the immune system fail to provide an adequate response (examples include chronic granulomatous disease and primary immune diseases);
autoimmunity, in which the immune system attacks its own host's body (examples include systemic lupus erythematosus, rheumatoid arthritis, Hashimoto's disease and myasthenia gravis).
Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds.
The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the Human Immunodeficiency Virus (HIV).
Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection).
Developmental immunology
The body’s capability to react to antigen depends on a person's age, antigen type, maternal factors and the area where the antigen is presented. Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child’s immune system responds favorably to protein antigens while not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low virulence organisms like Staphylococcus and Pseudomonas. In neonates, opsonic activity and the ability to activate the complement cascade is very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production, which also limits the newborn's phagocytic activity. Although the number of total lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active.
Image: Artist's impression of monocytes.
Maternal factors also play a role in the body’s immune response. At birth, most of the immunoglobulin present is maternal IgG. Because IgM, IgD, IgE and IgA don’t cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively-acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child’s immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This can be the reason for distinct time frames found in vaccination schedules.
During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-oestradiol (an oestrogen) and, in males, is testosterone. Oestradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk in developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system.
The female sex hormone 17-β-oestradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life.
Physical changes during puberty such as thymic involution also affect immunological response.
Immunotherapy
The use of immune system components to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used in the context of the treatment of cancers together with chemotherapy (drugs) and radiotherapy (radiation). However, immunotherapy is also often used in the immunosuppressed (such as HIV patients) and people suffering from other immune deficiencies or autoimmune diseases.
This includes regulating factors such as IL-2, IL-10, GM-CSF B, IFN-α.
Diagnostic immunology
The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests, as antibodies may cross-react with antigens that are not exact matches.
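A short worked example (with invented numbers) shows why such cross-reactivity matters in practice: when the target antigen is rare in the tested samples, even a modest loss of specificity sharply reduces the share of positive results that are true positives.

```python
# Worked example with invented numbers: how assay specificity and target
# prevalence determine the positive predictive value (PPV) of a test.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

sensitivity = 0.99   # probability the assay detects the antigen when present
prevalence = 0.01    # antigen present in 1% of tested samples

for specificity in (0.999, 0.98):   # highly specific vs. cross-reacting antibody
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"specificity {specificity:.3f} -> PPV {ppv:.1%}")
# Roughly: specificity 0.999 -> PPV 91%; specificity 0.980 -> PPV 33%.
```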
Cancer immunology
The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer.
Reproductive immunology
This area of the immunology is devoted to the study of immunological aspects of the reproductive process including fetus acceptance. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia.
Theoretical immunology
Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism’s “humors” rather than its cells.
In the mid-1950s, Frank Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential.
More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions.
Immunologist
According to the American Academy of Allergy, Asthma, and Immunology (AAAAI), "an immunologist is a research scientist who investigates the immune system of vertebrates (including the human immune system). Immunologists include research scientists (PhDs) who work in laboratories. Immunologists also include physicians who, for example, treat patients with immune system disorders. Some immunologists are physician-scientists who combine laboratory research with patient care."
Career in immunology
Bioscience is the overall major that undergraduate students who are interested in general well-being take in college. Immunology is a branch of bioscience for undergraduate programs, but the major becomes more specialized as students move on to graduate programs in immunology. The aim of immunology is to study the health of humans and animals through effective yet consistent research (AAAAI, 2013).http://www.aaaai.org/about-the-aaaai/allergist---immunologists--specialized-skills.aspx. Research is the most important part of an immunologist's job, as it makes up the largest portion of the work. North Carolina Association for Biomedical Research, 2013.
Most graduate immunology programs follow the AAI immunology courses, which are offered at numerous schools throughout the United States. American Association of Immunology, n.d. For example, in New York State, several universities offer the AAI immunology courses: Albany Medical College, Cornell University, Icahn School of Medicine at Mount Sinai, New York University Langone Medical Center, University at Albany (SUNY), University at Buffalo (SUNY), University of Rochester Medical Center and Upstate Medical University (SUNY). The AAI immunology courses include an Introductory Course and an Advanced Course.http://www.aai.org/Careers/Graduate_Programs.html. The Introductory Course gives students an overview of the basics of immunology.
In addition, the Introductory Course gives students further information to complement general biology or science training. It has two parts: Part I is an introduction to the basic principles of immunology, and Part II is a clinically oriented lecture series. The Advanced Course, by contrast, is intended for those who wish to expand or update their understanding of immunology, and students who attend it are advised to have a background in the principles of immunology. The American Association of Immunologists, n.d. Most schools require students to take electives in order to complete their degrees. A master's degree requires two years of study following the attainment of a bachelor's degree; a doctoral program requires two additional years of study. Stanford School of Medicine.
Occupational growth in immunology is projected at 36 percent from 2010 to 2020.http://www.bls.gov/bls/confidentiality.htm Bureau of Labor Statistics, 2013, May 02. The median annual wage was $76,700 in May 2010; the lowest 10 percent of immunologists earned less than $41,560, and the top 10 percent earned more than $142,800 (Bureau of Labor Statistics, 2013). Immunology itself is not listed as a separate occupation by the U.S. Department of Labor but falls under the life sciences in general.http://www.bls.gov/bls/confidentiality.htm Bureau of Labor Statistics, 2013.
See also
History of immunology
Immunomics
International Reviews of Immunology
List of immunologists
Osteoimmunology
Outline of immunology
References
External links
The Immunology Link
American Academy of Allergy, Asthma & Immunology
British Society for Immunology
Annual Review of Immunology journal
BMC: Immunology at BioMed Central, an open-access journal publishing original peer-reviewed research articles.
Nature reviews: Immunology
The Immunology Database and Analysis Portal, a NIAID-funded database resource.
Immunology group at researchgate.net.
Federation of Clinical Immunology Societies
Immunology Simplified—from AIDS to ZZZZZZ (PowerPoint file).
| 14,959 | 2017-01 |
Flowering plant | The flowering plants (angiosperms), also known as Angiospermae or Magnoliophyta, are the most diverse group of land plants, with 416 families, approx. 13,164 known genera and a total of c. 295,383 known species. Like gymnosperms, angiosperms are seed-producing plants; they are distinguished from gymnosperms by characteristics including flowers, endosperm within the seeds, and the production of fruits that contain the seeds. Etymologically, angiosperm means a plant that produces seeds within an enclosure, in other words, a fruiting plant. The term "angiosperm" comes from the Greek composite word (angeion, "case" or "casing", and sperma, "seed") meaning "enclosed seeds", after the enclosed condition of the seeds.
The ancestors of flowering plants diverged from gymnosperms in the Triassic Period, during the range 245 to 202 million years ago (mya), and the first flowering plants are known from 160 mya. They diversified extensively during the Lower Cretaceous, became widespread by 120 mya, and replaced conifers as the dominant trees from 100 to 60 mya.
Angiosperm derived characteristics
thumb|Bud of a pink rose
Angiosperms differ from other seed plants in several ways, described in the table. These distinguishing characteristics taken together have made the angiosperms the most diverse and numerous land plants and the most commercially important group to humans.
Distinctive features of Angiosperms:
Flowering organs: Flowers, the reproductive organs of flowering plants, are the most remarkable feature distinguishing them from the other seed plants. Flowers provided angiosperms with the means to have a more species-specific breeding system, and hence a way to evolve more readily into different species without the risk of crossing back with related species. Faster speciation enabled the Angiosperms to adapt to a wider range of ecological niches. This has allowed flowering plants to largely dominate terrestrial ecosystems.
Stamens with two pairs of pollen sacs: Stamens are much lighter than the corresponding organs of gymnosperms and have contributed to the diversification of angiosperms through time with adaptations to specialized pollination syndromes, such as particular pollinators. Stamens have also become modified through time to prevent self-fertilization, which has permitted further diversification, allowing angiosperms eventually to fill more niches.
Reduced male parts, three cells: The male gametophyte in angiosperms is significantly reduced in size compared to those of gymnosperm seed plants. The smaller size of the pollen reduces the amount of time between pollination — the pollen grain reaching the female plant — and fertilization. In gymnosperms, fertilization can occur up to a year after pollination, whereas in angiosperms, fertilization begins very soon after pollination. The shorter amount of time between pollination and fertilization allows angiosperms to produce seeds earlier after pollination than gymnosperms, providing angiosperms a distinct evolutionary advantage.
Closed carpel enclosing the ovules (carpel or carpels and accessory parts may become the fruit): The closed carpel of angiosperms also allows adaptations to specialized pollination syndromes and controls. This helps to prevent self-fertilization, thereby maintaining increased diversity. Once the ovary is fertilized, the carpel and some surrounding tissues develop into a fruit. This fruit often serves as an attractant to seed-dispersing animals. The resulting cooperative relationship presents another advantage to angiosperms in the process of dispersal.
Reduced female gametophyte, seven cells with eight nuclei: The reduced female gametophyte, like the reduced male gametophyte, may be an adaptation allowing for more rapid seed set, eventually leading to such flowering plant adaptations as annual herbaceous life-cycles, allowing the flowering plants to fill even more niches.
Endosperm: In general, endosperm formation begins after fertilization and before the first division of the zygote. Endosperm is a highly nutritive tissue that can provide food for the developing embryo, the cotyledons, and sometimes the seedling when it first appears.
Evolution
Fossilized spores suggest that higher plants (embryophytes) have lived on land for at least 475 million years. Early land plants reproduced sexually with flagellated, swimming sperm, like the green algae from which they evolved. An adaptation to terrestrialization was the development of upright meiosporangia for dispersal by spores to new habitats. This feature is lacking in the descendants of their nearest algal relatives, the Charophycean green algae. A later terrestrial adaptation took place with retention of the delicate, avascular sexual stage, the gametophyte, within the tissues of the vascular sporophyte. This occurred by spore germination within sporangia rather than spore release, as in non-seed plants. A current example of how this might have happened can be seen in the precocious spore germination in Selaginella, the spike-moss. The result for the ancestors of angiosperms was enclosing them in a case, the seed. The first seed bearing plants, like the ginkgo, and conifers (such as pines and firs), did not produce flowers. The pollen grains (male gametophytes) of Ginkgo and cycads produce a pair of flagellated, mobile sperm cells that "swim" down the developing pollen tube to the female and her eggs.
thumb|left|200px|Flowers of Malus sylvestris (crab apple)
thumb|left|200px|Flowers and leaves of Oxalis pes-caprae (Bermuda buttercup)
The apparently sudden appearance of nearly modern flowers in the fossil record initially posed such a problem for the theory of evolution that Charles Darwin called it an "abominable mystery". However, the fossil record has grown considerably since the time of Darwin, and recently discovered angiosperm fossils such as Archaefructus, along with further discoveries of fossil gymnosperms, suggest how angiosperm characteristics may have been acquired in a series of steps. Several groups of extinct gymnosperms, in particular seed ferns, have been proposed as the ancestors of flowering plants, but there is no continuous fossil evidence showing exactly how flowers evolved. Some older fossils, such as the upper Triassic Sanmiguelia, have been suggested as possible early angiosperms. Based on current evidence, some propose that the ancestors of the angiosperms diverged from an unknown group of gymnosperms in the Triassic period (245–202 million years ago). Fossil angiosperm-like pollen from the Middle Triassic (247.2–242.0 Ma) suggests an older date for their origin. A close relationship between angiosperms and gnetophytes, proposed on the basis of morphological evidence, has more recently been disputed on the basis of molecular evidence that suggests gnetophytes are instead more closely related to other gymnosperms.
The evolution of seed plants and later angiosperms appears to be the result of two distinct rounds of whole genome duplication events. These occurred about 319 million years ago and about 192 million years ago. Another possible whole genome duplication event, at roughly 160 million years ago, perhaps created the ancestral line that led to all modern flowering plants. That event was studied by sequencing the genome of an ancient flowering plant, Amborella trichopoda, and directly addresses Darwin's "abominable mystery."
The earliest known macrofossil confidently identified as an angiosperm, Archaefructus liaoningensis, is dated to about 125 million years BP (the Cretaceous period), whereas pollen considered to be of angiosperm origin takes the fossil record back to about 130 million years BP. However, one study has suggested that the early-middle Jurassic plant Schmeissneria, traditionally considered a type of ginkgo, may be the earliest known angiosperm, or at least a close relative. In addition, circumstantial chemical evidence has been found for the existence of angiosperms as early as 250 million years ago. Oleanane, a secondary metabolite produced by many flowering plants, has been found in Permian deposits of that age together with fossils of gigantopterids.Oily Fossils Provide Clues To The Evolution Of Flowers — ScienceDaily (April 5, 2001) Gigantopterids are a group of extinct seed plants that share many morphological traits with flowering plants, although they are not known to have been flowering plants themselves.
In 2013 flowers encased in amber were found and dated 100 million years before present. The amber had frozen the act of sexual reproduction in the process of taking place. Microscopic images showed tubes growing out of pollen and penetrating the flower's stigma. The pollen was sticky, suggesting it was carried by insects.
Recent DNA analysis based on molecular systematicsNOVA — Transcripts — First Flower — PBS Airdate: April 17, 2007 showed that Amborella trichopoda, found on the Pacific island of New Caledonia, belongs to a sister group of the other flowering plants, and morphological studiesSouth Pacific plant may be missing link in evolution of flowering plants — Public release date: 17 May 2006 suggest that it has features that may have been characteristic of the earliest flowering plants.
The orders Amborellales, Nymphaeales, and Austrobaileyales diverged as separate lineages from the remaining angiosperm clade at a very early stage in flowering plant evolution.
The great angiosperm radiation, when a great diversity of angiosperms appears in the fossil record, occurred in the mid-Cretaceous (approximately 100 million years ago). However, a study in 2007 estimated that the division of the five most recent (the genus Ceratophyllum, the family Chloranthaceae, the eudicots, the magnoliids, and the monocots) of the eight main groups occurred around 140 million years ago.
By the late Cretaceous, angiosperms appear to have dominated environments formerly occupied by ferns and cycadophytes, but large canopy-forming trees replaced conifers as the dominant trees only close to the end of the Cretaceous 66 million years ago or even later, at the beginning of the Tertiary. The radiation of herbaceous angiosperms occurred much later. Yet, many fossil plants recognizable as belonging to modern families (including beech, oak, maple, and magnolia) had already appeared by the late Cretaceous.
thumb|upright|Two bees on a flower head of Creeping Thistle, Cirsium arvense
It is generally assumed that the function of flowers, from the start, was to involve mobile animals in their reproduction processes. That is, pollen can be scattered even if the flower is not brightly colored or oddly shaped in a way that attracts animals; however, by expending the energy required to create such traits, angiosperms can enlist the aid of animals and, thus, reproduce more efficiently.
Island genetics provides one proposed explanation for the sudden, fully developed appearance of flowering plants. Island genetics is believed to be a common source of speciation in general, especially when it comes to radical adaptations that seem to have required inferior transitional forms. Flowering plants may have evolved in an isolated setting like an island or island chain, where the plants bearing them were able to develop a highly specialized relationship with some specific animal (a wasp, for example). Such a relationship, with a hypothetical wasp carrying pollen from one plant to another much the way fig wasps do today, could result in the development of a high degree of specialization in both the plant(s) and their partners. Note that the wasp example is not incidental; bees, which, it is postulated, evolved specifically due to mutualistic plant relationships, are descended from wasps.
Animals are also involved in the distribution of seeds. Fruit, which is formed by the enlargement of flower parts, is frequently a seed-dispersal tool that attracts animals to eat or otherwise disturb it, incidentally scattering the seeds it contains (see frugivory). Although many such mutualistic relationships remain too fragile to survive competition and to spread widely, flowering proved to be an unusually effective means of reproduction, spreading (whatever its origin) to become the dominant form of land plant life.
Flower ontogeny uses a combination of genes normally responsible for forming new shoots.Age-Old Question On Evolution Of Flowers Answered — 15-Jun-2001 The most primitive flowers probably had a variable number of flower parts, often separate from (but in contact with) each other. The flowers tended to grow in a spiral pattern, to be bisexual (in plants, this means both male and female parts on the same flower), and to be dominated by the ovary (female part). As flowers evolved, some variations developed parts fused together, with a much more specific number and design, and with either specific sexes per flower or plant or at least "ovary-inferior".
Flower evolution continues to the present day; modern flowers have been so profoundly influenced by humans that some of them cannot be pollinated in nature. Many modern domesticated flower species were formerly simple weeds, which sprouted only when the ground was disturbed. Some of them tended to grow with human crops, perhaps already having symbiotic companion plant relationships with them, and the prettiest did not get plucked because of their beauty, developing a dependence upon and special adaptation to human affection.Human Affection Altered Evolution of Flowers — By Robert Roy Britt, LiveScience Senior Writer (posted: 26 May 2005 06:53 am ET)
A few paleontologists have also proposed that flowering plants, or angiosperms, might have evolved due to interactions with dinosaurs. One of the idea's strongest proponents is Robert T. Bakker. He proposes that herbivorous dinosaurs, with their eating habits, provided a selective pressure on plants, for which adaptations either succeeded in deterring or coping with predation by herbivores.
Classification
The phylogeny of the flowering plants, as of APG III (2009). An alternative phylogeny was published in 2010 (p. 1300).
There are eight groups of living angiosperms:
Amborella, a single species of shrub from New Caledonia;
Nymphaeales, about 80 species, water lilies and Hydatellaceae;
Austrobaileyales, about 100 species of woody plants from various parts of the world;
Chloranthales, several dozen species of aromatic plants with toothed leaves;
Magnoliids, about 9,000 species, characterized by trimerous flowers, pollen with one pore, and usually branching-veined leaves—for example magnolias, bay laurel, and black pepper;
Monocots, about 70,000 species, characterized by trimerous flowers, a single cotyledon, pollen with one pore, and usually parallel-veined leaves—for example grasses, orchids, and palms;
Ceratophyllum, about 6 species of aquatic plants, perhaps most familiar as aquarium plants;
Eudicots, about 175,000 species, characterized by 4- or 5-merous flowers, pollen with three pores, and usually branching-veined leaves—for example sunflowers, petunia, buttercup, apples, and oaks.
The exact relationship between these eight groups is not yet clear, although there is agreement that the first three groups to diverge from the ancestral angiosperm were Amborellales, Nymphaeales, and Austrobaileyales. The term basal angiosperms refers to these three groups. Among the rest, the relationship between the three broadest of these groups (magnoliids, monocots, and eudicots) remains unclear. Some analyses make the magnoliids the first to diverge, others the monocots. Ceratophyllum seems to group with the eudicots rather than with the monocots.
APG IV classification
Based on the 4th version of the Angiosperm Phylogeny Group classification.
Angiosperm classification (cladogram)
History of classification
thumb|upright|From 1736, an illustration of Linnaean classification.
The botanical term "Angiosperm", from the Ancient Greek αγγείον, angeíon (bottle, vessel) and σπέρμα, (seed), was coined in the form Angiospermae by Paul Hermann in 1690, as the name of one of his primary divisions of the plant kingdom. This included flowering plants possessing seeds enclosed in capsules, distinguished from his Gymnospermae, or flowering plants with achenial or schizo-carpic fruits, the whole fruit or each of its pieces being here regarded as a seed and naked. The term and its antonym were maintained by Carl Linnaeus with the same sense, but with restricted application, in the names of the orders of his class Didynamia. Its use with any approach to its modern scope became possible only after 1827, when Robert Brown established the existence of truly naked ovules in the Cycadeae and Coniferae,Brown R., Character and description of Kingia, a new genus of plants found on the southwest coast of New Holland: with observations on the structure of its unimpregnated ovulum; and on the female flower of Cycadeae and Coniferae, in: King P.P.(Ed.) Narrative of a Survey of the Intertropical and western coasts of Australia, performed between years 1818 and 1822. John Murray, London, 1827, vol. 2., pp. 534–565, . and applied to them the name Gymnosperms. From that time onward, as long as these Gymnosperms were, as was usual, reckoned as dicotyledonous flowering plants, the term Angiosperm was used antithetically by botanical writers, with varying scope, as a group-name for other dicotyledonous plants.
left|thumb|Auxanometer: Device for measuring increase or rate of growth in plants
In 1851, Hofmeister discovered the changes occurring in the embryo-sac of flowering plants, and determined the correct relationships of these to the Cryptogamia. This fixed the position of Gymnosperms as a class distinct from Dicotyledons, and the term Angiosperm then gradually came to be accepted as the suitable designation for the whole of the flowering plants other than Gymnosperms, including the classes of Dicotyledons and Monocotyledons. This is the sense in which the term is used today.
In most taxonomies, the flowering plants are treated as a coherent group. The most popular descriptive name has been Angiospermae (Angiosperms), with Anthophyta ("flowering plants") a second choice. These names are not linked to any rank. The Wettstein system and the Engler system use the name Angiospermae, at the assigned rank of subdivision. The Reveal system treated flowering plants as subdivision Magnoliophytina (Frohne & U. Jensen ex Reveal, Phytologia 79: 70 1996), but later split it into Magnoliopsida, Liliopsida, and Rosopsida. The Takhtajan system and Cronquist system treat this group at the rank of division, leading to the name Magnoliophyta (from the family name Magnoliaceae). The Dahlgren system and Thorne system (1992) treat this group at the rank of class, leading to the name Magnoliopsida. The APG system of 1998, and the later 2003 and 2009 revisions, treat the flowering plants as a clade called angiosperms without a formal botanical name. However, a formal classification was published alongside the 2009 revision in which the flowering plants form the Subclass Magnoliidae.
The internal classification of this group has undergone considerable revision. The Cronquist system, proposed by Arthur Cronquist in 1968 and published in its full form in 1981, is still widely used but is no longer believed to accurately reflect phylogeny. A consensus about how the flowering plants should be arranged has recently begun to emerge through the work of the Angiosperm Phylogeny Group (APG), which published an influential reclassification of the angiosperms in 1998. Updates incorporating more recent research were published as APG II in 2003 and as APG III in 2009.
upright|thumb|Monocot (left) and dicot seedlings
Traditionally, the flowering plants are divided into two groups, which in the Cronquist system are called Magnoliopsida (at the rank of class, formed from the family name Magnoliaceae) and Liliopsida (at the rank of class, formed from the family name Liliaceae). Other descriptive names allowed by Article 16 of the ICBN include Dicotyledones or Dicotyledoneae, and Monocotyledones or Monocotyledoneae, which have a long history of use. In English a member of either group may be called a dicotyledon (plural dicotyledons) and monocotyledon (plural monocotyledons), or abbreviated, as dicot (plural dicots) and monocot (plural monocots). These names derive from the observation that the dicots most often have two cotyledons, or embryonic leaves, within each seed. The monocots usually have only one, but the rule is not absolute either way. From a broad diagnostic point of view, the number of cotyledons is neither a particularly handy nor a reliable character.
Recent studies, as by the APG, show that the monocots form a monophyletic group (clade) but that the dicots do not (they are paraphyletic). Nevertheless, the majority of dicot species do form a monophyletic group, called the eudicots or tricolpates. Of the remaining dicot species, most belong to a third major clade known as the magnoliids, containing about 9,000 species. The rest include a paraphyletic grouping of primitive species known collectively as the basal angiosperms, plus the families Ceratophyllaceae and Chloranthaceae.
Flowering plant diversity
thumb|A poster of twelve different species of flowers of the Asteraceae family
thumb|Lupinus pilosus
The number of species of flowering plants is estimated to be in the range of 250,000 to 400,000. This compares to around 12,000 species of moss or 11,000 species of pteridophytes,Raven, Peter H., Ray F. Evert, & Susan E. Eichhorn, 2005. Biology of Plants, 7th edition. (New York: W. H. Freeman and Company). ISBN 0-7167-1007-2. showing that the flowering plants are much more diverse. The number of families in APG (1998) was 462. In APG II (2003) it is not settled; at maximum it is 457, but within this number there are 55 optional segregates, so that the minimum number of families in this system is 402. In APG III (2009) there are 415 families.
The diversity of flowering plants is not evenly distributed. Nearly all species belong to the eudicot (75%), monocot (23%), and magnoliid (2%) clades. The remaining 5 clades contain a little over 250 species in total; i.e. less than 0.1% of flowering plant diversity, divided among 9 families. The 42 most-diverse of 443 families of flowering plants by species, in their APG circumscriptions, are
Asteraceae or Compositae (daisy family): 22,750 species;
Orchidaceae (orchid family): 21,950;
Fabaceae or Leguminosae (bean family): 19,400;
Rubiaceae (madder family): 13,150;
Poaceae or Gramineae (grass family): 10,035;
Lamiaceae or Labiatae (mint family): 7,175;
Euphorbiaceae (spurge family): 5,735;
Melastomataceae or Melastomaceae (melastome family): 5,005;
Myrtaceae (myrtle family): 4,625;
Apocynaceae (dogbane family): 4,555;
Cyperaceae (sedge family): 4,350;
Malvaceae (mallow family): 4,225;
Araceae (arum family): 4,025;
Ericaceae (heath family): 3,995;
Gesneriaceae (gesneriad family): 3,870;
Apiaceae or Umbelliferae (parsley family): 3,780;
Brassicaceae or Cruciferae (cabbage family): 3,710;
Piperaceae (pepper family): 3,600;
Acanthaceae (acanthus family): 3,500;
Rosaceae (rose family): 2,830;
Boraginaceae (borage family): 2,740;
Urticaceae (nettle family): 2,625;
Ranunculaceae (buttercup family): 2,525;
Lauraceae (laurel family): 2,500;
Solanaceae (nightshade family): 2,460;
Campanulaceae (bellflower family): 2,380;
Arecaceae (palm family): 2,361;
Annonaceae (custard apple family): 2,220;
Caryophyllaceae (pink family): 2,200;
Orobanchaceae (broomrape family): 2,060;
Amaranthaceae (amaranth family): 2,050;
Iridaceae (iris family): 2,025;
Aizoaceae or Ficoidaceae (ice plant family): 2,020;
Rutaceae (rue family): 1,815;
Phyllanthaceae (phyllanthus family): 1,745;
Scrophulariaceae (figwort family): 1,700;
Gentianaceae (gentian family): 1,650;
Convolvulaceae (bindweed family): 1,600;
Proteaceae (protea family): 1,600;
Sapindaceae (soapberry family): 1,580;
Cactaceae (cactus family): 1,500;
Araliaceae (Aralia or ivy family): 1,450.
Of these, the Orchidaceae, Poaceae, Cyperaceae, Arecaceae, and Iridaceae are monocot families; Piperaceae, Lauraceae, and Annonaceae are magnoliid dicots; the rest of the families are eudicots.
Vascular anatomy
thumb|left|250px|Cross-section of a stem of the angiosperm flax:
1. Pith,
2. Protoxylem,
3. Xylem I,
4. Phloem I,
5. Sclerenchyma (bast fibre),
6. Cortex,
7. Epidermis
The amount and complexity of tissue-formation in flowering plants exceeds that of gymnosperms. The vascular bundles of the stem are arranged such that the xylem and phloem form concentric rings.
In the dicotyledons, the bundles in the very young stem are arranged in an open ring, separating a central pith from an outer cortex. In each bundle, separating the xylem and phloem, is a layer of meristem or active formative tissue known as cambium. By the formation of a layer of cambium between the bundles (interfascicular cambium), a complete ring is formed, and a regular periodical increase in thickness results from the development of xylem on the inside and phloem on the outside. The soft phloem becomes crushed, but the hard wood persists and forms the bulk of the stem and branches of the woody perennial. Owing to differences in the character of the elements produced at the beginning and end of the season, the wood is marked out in transverse section into concentric rings, one for each season of growth, called annual rings.
Among the monocotyledons, the bundles are more numerous in the young stem and are scattered through the ground tissue. They contain no cambium and once formed the stem increases in diameter only in exceptional cases.
The flower, fruit, and seed
Flowers
105px|thumb|A collection of flowers forming an inflorescence
The characteristic feature of angiosperms is the flower. Flowers show remarkable variation in form and elaboration, and provide the most trustworthy external characteristics for establishing relationships among angiosperm species. The function of the flower is to ensure fertilization of the ovule and development of fruit containing seeds. The floral apparatus may arise terminally on a shoot or from the axil of a leaf (where the petiole attaches to the stem). Occasionally, as in violets, a flower arises singly in the axil of an ordinary foliage-leaf. More typically, the flower-bearing portion of the plant is sharply distinguished from the foliage-bearing or vegetative portion, and forms a more or less elaborate branch-system called an inflorescence.
There are two kinds of reproductive cells produced by flowers. Microspores, which will divide to become pollen grains, are the "male" cells and are borne in the stamens (or microsporophylls). The "female" cells called megaspores, which will divide to become the egg cell (megagametogenesis), are contained in the ovule and enclosed in the carpel (or megasporophyll).
The flower may consist only of these parts, as in willow, where each flower comprises only a few stamens or two carpels. Usually, other structures are present and serve to protect the sporophylls and to form an envelope attractive to pollinators. The individual members of these surrounding structures are known as sepals and petals (or tepals in flowers such as Magnolia where sepals and petals are not distinguishable from each other). The outer series (calyx of sepals) is usually green and leaf-like, and functions to protect the rest of the flower, especially the bud. The inner series (corolla of petals) is, in general, white or brightly colored, and is more delicate in structure. It functions to attract insect or bird pollinators. Attraction is effected by color, scent, and nectar, which may be secreted in some part of the flower. The characteristics that attract pollinators account for the popularity of flowers and flowering plants among humans.
While the majority of flowers are perfect or hermaphrodite (having both pollen and ovule producing parts in the same flower structure), flowering plants have developed numerous morphological and physiological mechanisms to reduce or prevent self-fertilization. Heteromorphic flowers have short carpels and long stamens, or vice versa, so animal pollinators cannot easily transfer pollen to the pistil (receptive part of the carpel). Homomorphic flowers may employ a biochemical (physiological) mechanism called self-incompatibility to discriminate between self and non-self pollen grains. In other species, the male and female parts are morphologically separated, developing on different flowers.
Fertilization and embryogenesis
thumb|Angiosperm life cycle
Double fertilization refers to a process in which two sperm cells fertilize cells in the ovary. This process begins when a pollen grain adheres to the stigma of the pistil (female reproductive structure), germinates, and grows a long pollen tube. While this pollen tube is growing, a haploid generative cell travels down the tube behind the tube nucleus. The generative cell divides by mitosis to produce two haploid (n) sperm cells. As the pollen tube grows, it makes its way from the stigma, down the style and into the ovary. Here the pollen tube reaches the micropyle of the ovule and digests its way into one of the synergids, releasing its contents (which include the sperm cells). The synergid that the cells were released into degenerates and one sperm makes its way to fertilize the egg cell, producing a diploid (2n) zygote. The second sperm cell fuses with both central cell nuclei, producing a triploid (3n) cell. As the zygote develops into an embryo, the triploid cell develops into the endosperm, which serves as the embryo's food supply. The ovary will now develop into a fruit and the ovule will develop into a seed.
Fruit and seed
thumb|left|The fruit of the Aesculus or Horse Chestnut tree
As the development of embryo and endosperm proceeds within the embryo sac, the sac wall enlarges and combines with the nucellus (which is likewise enlarging) and the integument to form the seed coat. The ovary wall develops to form the fruit or pericarp, whose form is closely associated with the manner of distribution of the seed.
Frequently, the influence of fertilization is felt beyond the ovary, and other parts of the flower take part in the formation of the fruit, e.g., the floral receptacle in the apple, strawberry, and others.
The character of the seed coat bears a definite relation to that of the fruit. They protect the embryo and aid in dissemination; they may also directly promote germination. Among plants with indehiscent fruits, in general, the fruit provides protection for the embryo and secures dissemination. In this case, the seed coat is only slightly developed. If the fruit is dehiscent and the seed is exposed, in general, the seed-coat is well developed, and must discharge the functions otherwise executed by the fruit.
Meiosis
Flowering plants generate gametes using a specialized cell division called meiosis. Meiosis takes place in the ovule (a structure within the ovary that is located within the pistil at the center of the flower) (see diagram labeled "Angiosperm life cycle"). A diploid cell (megaspore mother cell) in the ovule undergoes meiosis (involving two successive cell divisions) to produce four cells (megaspores) with haploid nuclei.Snustad DP, Simmons MJ (2008). Principles of Genetics (5th ed.). Wiley. ISBN 978-0-470-38825-9. One of these four cells (megaspore) then undergoes three successive mitotic divisions to produce an immature embryo sac (megagametocyte) with eight haploid nuclei. Next, these nuclei are segregated into separate cells by cytokinesis to produce three antipodal cells, two synergid cells and an egg cell. Two polar nuclei are left in the central cell of the embryo sac.
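The nuclear arithmetic described above can be written out as a check (three mitotic divisions of the single haploid megaspore nucleus, followed by partitioning of the resulting nuclei):
\[
2^{3} = 8 \ \text{haploid nuclei} = \underbrace{3}_{\text{antipodal}} + \underbrace{2}_{\text{synergid}} + \underbrace{1}_{\text{egg}} + \underbrace{2}_{\text{polar}}.
\]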
Pollen is also produced by meiosis in the male anther (microsporangium). During meiosis, a diploid microspore mother cell undergoes two successive meiotic divisions to produce 4 haploid cells (microspores or male gametes). Each of these microspores, after further mitoses, becomes a pollen grain (microgametophyte) containing two haploid generative (sperm) cells and a tube nucleus. When a pollen grain makes contact with the female stigma, the pollen grain forms a pollen tube that grows down the style into the ovary. In the act of fertilization, a male sperm nucleus fuses with the female egg nucleus to form a diploid zygote that can then develop into an embryo within the newly forming seed. Upon germination of the seed, a new plant can grow and mature.
The adaptive function of meiosis is currently a matter of debate. A key event during meiosis in a diploid cell is the pairing of homologous chromosomes and homologous recombination (the exchange of genetic information) between homologous chromosomes. This process promotes the production of increased genetic diversity among progeny and the recombinational repair of damage in the DNA to be passed on to progeny. To explain the adaptive function of meiosis in flowering plants, some authors emphasize diversity and others emphasize DNA repair.
Apomixis
Apomixis (reproduction via asexually formed seeds) is found naturally in about 2.2% of angiosperm genera. One type of apomixis, gametophytic apomixis, found in a dandelion species, involves the formation of an unreduced embryo sac due to incomplete meiosis (apomeiosis) and the development of an embryo from the unreduced egg inside the embryo sac, without fertilization (parthenogenesis).
Economic importance
Agriculture is almost entirely dependent on angiosperms, which provide virtually all plant-based food, and also provide a significant amount of livestock feed. Of all the families of plants, the Poaceae, or grass family (grains), is by far the most important, providing the bulk of all feedstocks (rice, corn — maize, wheat, barley, rye, oats, pearl millet, sugar cane, sorghum). The Fabaceae, or legume family, comes in second place. Also of high importance are the Solanaceae, or nightshade family (potatoes, tomatoes, and peppers, among others), the Cucurbitaceae, or gourd family (also including pumpkins and melons), the Brassicaceae, or mustard plant family (including rapeseed and the innumerable varieties of the cabbage species Brassica oleracea), and the Apiaceae, or parsley family. Many of our fruits come from the Rutaceae, or rue family (including oranges, lemons, grapefruits, etc.), and the Rosaceae, or rose family (including apples, pears, cherries, apricots, plums, etc.).
In some parts of the world, certain single species assume paramount importance because of their variety of uses, for example the coconut (Cocos nucifera) on Pacific atolls, and the olive (Olea europaea) in the Mediterranean region.
Flowering plants also provide economic resources in the form of wood, paper, fiber (cotton, flax, and hemp, among others), medicines (digitalis, camphor), decorative and landscaping plants, and many other uses. The main area in which they are surpassed by other plants — namely, coniferous trees (Pinales), which are non-flowering (gymnosperms) — is timber and paper production.
See also
List of garden plants
List of plant orders
List of plants by common name
List of systems of plant taxonomy
Notes
References
Bibliography
Simpson, M.G. Plant Systematics, 2nd Edition. Elsevier/Academic Press. 2010.
Raven, P.H., R.F. Evert, S.E. Eichhorn. Biology of Plants, 7th Edition. W.H. Freeman. 2004.
Sattler, R. 1973. Organogenesis of Flowers. A Photographic Text-Atlas. University of Toronto Press.
Cole, Theodor C.H.; Hilger, Dr. Harmut H. Angiosperm Phylogeny Poster Flowering Plant Systematics
Cromie, William J. (December 16, 1999). "Oldest Known Flowering Plants Identified By Genes". Harvard University Gazette.
Watson, L. and Dallwitz, M.J. (1992 onwards). The families of flowering plants: descriptions, illustrations, identification, information retrieval.
External links
01
F01
F01
Category:Pollination
Category:Plant sexuality
Category:Extant Early Cretaceous first appearances | 18,967 | 2017-01 |
Capital punishment in the United States | thumb|250px|A map showing the use of the death penalty in the United States by individual states. Note that the death penalty is used throughout the United States for certain federal crimes.
Capital punishment is a legal penalty in the United States, currently used by 31 states, the federal government, and the U.S. military. Its existence can be traced to the beginning of the American colonies.
There were no executions in the entire country between 1967 and 1977. In 1972, the U.S. Supreme Court struck down capital punishment statutes in Furman v. Georgia, reducing all death sentences pending at the time to life imprisonment.Barry Latzer (2010), Death Penalty Cases: Leading U.S. Supreme Court Cases on Capital Punishment, Elsevier, p.37.
Subsequently, a majority of states passed new death penalty statutes, and the court affirmed the legality of capital punishment in the 1976 case Gregg v. Georgia. Since then, more than 1,400 offenders have been executed, including 20 in 2016.
The United States is the only Western country currently applying the death penalty and was the first to develop lethal injection as a method of execution, which has since been adopted by five other countries.
History
Pre-Furman history
thumb|right|350px|Executions in the United States from 1608 to 2009
The first recorded death sentence in the British North American colonies was carried out in 1608 on Captain George Kendall, who was executed by firing squad at the Jamestown colony for allegedly spying for the Spanish government.
The Espy file, compiled by M. Watt Espy and John Ortiz Smykla, lists 15,269 people executed in the United States and its predecessor colonies between 1608 and 1991. From 1930 to 2002, there were 4,661 executions in the U.S., about two-thirds of them in the first 20 years.Department of Justice of the United States of America Additionally, the United States Army executed 135 soldiers between 1916 and 1955 (the most recent).John A. Bennett
The Bill of Rights in 1789 included the Eighth Amendment which prohibited cruel and unusual punishment. The Fifth Amendment was drafted with language implying a possible use of the death penalty, requiring a grand jury indictment for "capital crime" and a due process of law for deprivation of "life" by the government. The Fourteenth Amendment adopted in 1868 also requires a due process of law for deprivation of life by any state.
Three states abolished the death penalty for murder during the 19th century: Michigan in 1846, Wisconsin in 1853 and Maine in 1887. Rhode Island is also a state with a long abolitionist background, having repealed the death penalty in 1852, though it was theoretically available for murder committed by a prisoner between 1872 and 1984.
Other states which abolished the death penalty for murder before Gregg v. Georgia include: Minnesota in 1911, Vermont in 1964, Iowa and West Virginia in 1965 and North Dakota in 1973. Hawaii abolished the death penalty in 1948 and Alaska in 1957, both before their statehood. Puerto Rico repealed it in 1929 and the District of Columbia in 1981.
Arizona and Oregon abolished the death penalty by popular vote in 1916 and 1964 respectively, but both reinstated it, again by popular vote, some years later: Arizona in 1918 and Oregon in 1978.
Puerto Rico and Michigan are the only two U.S. jurisdictions to have explicitly prohibited capital punishment in their constitutions: in 1952 and 1964, respectively.
Nevertheless, capital punishment continued to be used by a majority of states and the federal government for various crimes, especially murder and rape, from the creation of the United States up to the beginning of the 1960s. Until then "save for a few mavericks, no one gave any credence to the possibility of ending the death penalty by judicial interpretation of constitutional law" according to abolitionist Hugo Bedau.The Courts, the Constitution, and Capital Punishment 118 (1977)
The possibility of challenging the constitutionality of the death penalty became progressively more realistic after the Supreme Court of the United States decided Trop v. Dulles in 1958, when the court said explicitly for the first time that the Eighth Amendment's cruel and unusual clause must draw its meaning from the "evolving standards of decency that mark the progress of a maturing society" rather than from its original meaning.
But in the 1932 case Powell v. Alabama, the court had already taken the first step of what would later be called the "death is different" jurisprudence, when it held that any indigent defendant was entitled to a court-appointed attorney in capital cases—a right that was only later extended to non-capital defendants in 1963, with Gideon v. Wainwright.
Capital punishment suspended (1972)
thumb|350px|right|upright=1.2|Executions in the United States since 1960
In Furman v. Georgia, the U.S. Supreme Court considered a group of consolidated cases. The lead case involved an individual convicted under Georgia's death penalty statute, which featured a "unitary trial" procedure in which the jury was asked to return a verdict of guilt or innocence and, simultaneously, determine whether the defendant would be punished by death or life imprisonment. The last pre-Furman execution was that of Luis Monge on June 2, 1967.
In a 5–4 decision, the Supreme Court struck down the impositions of the death penalty in each of the consolidated cases as unconstitutional in violation of the Eighth and Fourteenth Amendments of the United States Constitution. The Supreme Court has never ruled the death penalty to be per se unconstitutional. The five justices in the majority did not produce a common opinion or rationale for their decision, however, and agreed only on a short statement announcing the result. The narrowest opinions, those of Byron White and Potter Stewart, expressed generalized concerns about the inconsistent application of the death penalty across a variety of cases but did not exclude the possibility of a constitutional death penalty law. Stewart and William O. Douglas worried explicitly about racial discrimination in enforcement of the death penalty. Thurgood Marshall and William J. Brennan Jr. expressed the opinion that the death penalty was proscribed absolutely by the Eighth Amendment as cruel and unusual punishment.
The Furman decision caused all death sentences pending at the time to be reduced to life imprisonment, and was described by scholars as a "legal bombshell". The next day, columnist Barry Schweid wrote that it was "unlikely" that the death penalty could exist anymore in the United States.The Free Lance-Star - Jun 30, 1972 : New laws unlikely on death penalty, by Barry Schweid
But instead of abandoning capital punishment, 37 states enacted new death penalty statutes that attempted to address the concerns of White and Stewart. Some states responded by enacting mandatory death penalty statutes which prescribed a sentence of death for anyone convicted of certain forms of murder. White had hinted that such a scheme would meet his constitutional concerns in his Furman opinion.
Other states adopted "bifurcated" trial and sentencing procedures, with various procedural limitations on the jury's ability to pronounce a death sentence designed to limit juror discretion. The Court clarified Furman in Woodson v. North Carolina and Roberts v. Louisiana, which explicitly forbade any state from punishing a specific form of murder (such as that of a police officer) with a mandatory death penalty.
Since Furman, 11 states have organized popular votes dealing with the death penalty through the initiative and referendum process. All resulted in a vote for reinstating it, rejecting its abolition, expanding its application field, specifying in the state constitution that it is not unconstitutional, or expediting the appeal process in capital cases.
Capital punishment reinstated (1976)
right|thumb|299px|U.S. Supreme Court seat in Washington, D.C.
In 1976, contemporaneously with Woodson and Roberts, the Court decided Gregg v. Georgia and upheld 7–2 a procedure in which the trial of capital crimes was bifurcated into guilt-innocence and sentencing phases. At the first proceeding, the jury decides the defendant's guilt; if the defendant is innocent or otherwise not convicted of first-degree murder, the death penalty will not be imposed. At the second hearing, the jury determines whether certain statutory aggravating factors exist, whether any mitigating factors exist, and, in many jurisdictions, weighs the aggravating and mitigating factors in assessing the ultimate penalty – either death or life in prison, either with or without parole.
In 1977, the Supreme Court's Coker v. Georgia decision barred the death penalty for rape of an adult woman. Previously, the death penalty for rape of an adult had been gradually phased out in the United States, and at the time of the decision, Georgia and the U.S. Federal government were the only two jurisdictions to still retain the death penalty for that offense.
Executions resumed on January 17, 1977, when Gary Gilmore went before a firing squad in Utah. But the pace was quite slow due to the use of litigation tactics which involved filing repeated writs of habeas corpus, which succeeded for many in delaying their actual execution for many years. Although hundreds of individuals were sentenced to death in the United States during the 1970s and early 1980s, only ten people besides Gilmore (who had waived all of his appeal rights) were actually executed prior to 1984.
Subsequent history
right|thumb|300px|The lethal injection room in Florida State Prison.
The number of executions rose at a near-continuous pace until 1999, when it peaked at 98. After 1999, the number of executions declined nearly every year, and the 20 executions in 2016 were the fewest since 1991.
The death penalty was a notable issue during the 1988 presidential election. It came up in the October 13, 1988 debate between the two presidential nominees George H. W. Bush and Michael Dukakis when Bernard Shaw, the moderator of the debate, asked Dukakis, "Governor, if Kitty Dukakis [his wife] were raped and murdered, would you favor an irrevocable death penalty for the killer?" Dukakis replied, "No, I don't, and I think you know that I've opposed the death penalty during all of my life. I don't see any evidence that it's a deterrent, and I think there are better and more effective ways to deal with violent crime." Bush was elected and many, including Dukakis himself, cite the statement as the beginning of the end of his campaign. Negative ads were also aired portraying Dukakis as soft on crime, citing explicitly his opposition to the death penalty.
In 1996, Congress passed the Antiterrorism and Effective Death Penalty Act (AEDPA) to streamline the appeal process in capital cases. The bill was signed into law by President Bill Clinton, who had endorsed capital punishment during his 1992 presidential campaign.
The U.S. Supreme Court has placed two major restrictions on the use of the death penalty. First, the case of Atkins v. Virginia, decided on June 20, 2002, held that the execution of intellectually disabled (then termed "mentally retarded") inmates is unconstitutional. Second, in 2005, the court's decision in Roper v. Simmons struck down executions for offenders under the age of 18 at the time of the crime. In the 2008 case Kennedy v. Louisiana, the court also held 5–4 the death penalty unconstitutional when applied to non-homicidal crimes against the person, including child rape.
In 2004, the New York and Kansas capital sentencing schemes were struck down by their respective states' highest courts. Kansas successfully appealed the Kansas Supreme Court decision to the United States Supreme Court, which reinstated the statute in Kansas v. Marsh (2006), holding it did not violate the U.S. Constitution. The decision of the New York Court of Appeals was instead based on the state constitution, making any appeal unavailable, and the state's lower house has since blocked all attempts to reinstate the death penalty by adopting a valid sentencing scheme.
In 2007, New Jersey became the first state to repeal the death penalty by legislative vote since Gregg v. Georgia, followed by New Mexico in 2009,Maria Medina, "Governor OK with Astorga capital case" Illinois in 2011, Connecticut in 2012, and Maryland in 2013. The repeals were not retroactive, but in New Jersey, Illinois and Maryland, governors commuted all death sentences after enacting the new law. In Connecticut, the Connecticut Supreme Court ruled in 2015 that the repeal must be retroactive, making New Mexico the only state with remaining death row inmates though no present death penalty statute.
Nebraska's legislature also passed a repeal in 2015, but a referendum campaign gathered enough signatures to suspend it, and capital punishment was reinstated by popular vote on November 8, 2016. The same day, California's electorate defeated a proposal to repeal the death penalty, and adopted another initiative to expedite its appeal process.
In June 2015, the U.S. Supreme Court reaffirmed the constitutionality of lethal injection in Glossip v. Gross. Justice Breyer, joined by Justice Ginsburg, wrote a dissenting opinion saying it was time for the court to prohibit capital punishment entirely, believing it is "highly likely that the death penalty violates the Eighth Amendment" because of unreliability, arbitrariness, and "unconscionably long delays that undermine the death penalty's penological purpose."
Because Breyer and Ginsburg are not the first justices to change their minds on that issue, the move led to a scathing retort from Justice Scalia, joined by Justice Thomas, who began his concurring opinion by saying "Welcome to Groundhog Day". He expressed the view that whenever a justice asserts it is now time to judicially abolish the death penalty, he only advances the same contentions that have not convinced the court earlier.
Justice Thomas, joined by Justice Scalia, also wrote a concurring opinion in this case, saying that Breyer and Ginsburg are engaging in the "ceaseless quest to end the death penalty through undemocratic means" and that the court should never have prohibited mandatory death sentences, because they are the best way to impose a uniform application of the death penalty. He found it contradictory that some judges were willing to get rid of the death penalty on the grounds of sentencing arbitrariness and delays for which the court itself is to blame.
Capital crimes
Aggravated murder
All inmates executed since the United States reinstated the death penalty in 1976 were convicted of intentional homicide.
In the 1980 case Godfrey v. Georgia, the U.S. Supreme Court ruled that murder can be punished by death only if it involves a narrow and precise aggravating factor.
Such factors, which allow the prosecution to seek capital punishment, vary greatly from one state to another; California, for example, has 22, while New Hampshire has only seven.
But some aggravating circumstances are nearly universal among death penalty states, such as robbery-murder, murder involving rape of the victim, and murder of an on-duty police officer.
Several states have included child murder in their list of aggravating factors, but the victim's age under which the murder is punishable by death varies between them. In 2011, Texas raised this age from six to 10.
The high number of aggravating factors in some states has been criticized as giving local prosecutors too much discretion in picking cases where they believe capital punishment is warranted. In California especially, an official commission proposed in 2008 to reduce them to only five (multiple murders, torture murder, murder of a police officer, murder committed in jail, and murder related to another felony). Columnist Charles Lane went further, and proposed that murder related to a felony other than rape should no longer be a capital crime when there is only one victim killed.Charles Lane (2010), Stay of Execution: Saving the Death Penalty from Itself, Rowman & Littlefield Publishers, p. 110-111
Other crimes against persons
In June 2008, the U.S. Supreme Court held 5–4 in Kennedy v. Louisiana that the death penalty cannot be imposed for non-homicidal crimes against the person. In this case, it struck down a Louisiana statute providing capital punishment for raping a child under the age of 12. Only two death row inmates (both in Louisiana) have been affected by the decision. Nevertheless, the ruling came less than five months before the 2008 presidential election and was criticized by both major party candidates Barack Obama and John McCain.
Numerous states still have on their statute books various provisions allowing the death penalty for child rape or other non-homicidal crimes such as kidnapping.
Crimes against the state
The opinion of the court in Kennedy v. Louisiana says that the ruling does not apply to "treason, espionage, terrorism, and drug kingpin activity, which are offenses against the State".Kennedy v. Louisiana, 554 U.S. 407, 437 (2008).
Since no one is on death row for such offenses, the court has yet to rule on the constitutionality of the death penalty applied for them.
Treason, espionage and large-scale drug trafficking are all capital crimes under federal law. Treason is also punishable by death in six states (Arkansas, California, Georgia, Louisiana, Mississippi and Missouri), and large-scale drug trafficking in two states (Florida and Missouri). Vermont still has a pre-Furman statute providing the death penalty for treason.
Legal process
The legal administration of the death penalty in the United States typically involves five critical steps: (1) prosecutorial decision to seek the death penalty, (2) sentencing, (3) direct review, (4) state collateral review, and (5) federal habeas corpus.
Clemency, through which the Governor or President of the jurisdiction can unilaterally reduce or abrogate a death sentence, is an executive rather than judicial process.See generally Separation of powers.
Decision to seek the death penalty
While judges in criminal cases can usually impose a harsher prison sentence than the one demanded by the prosecution, the death penalty can be handed down only if the prosecution has specifically decided to seek it.
In the decades since Furman, new questions have emerged about whether or not prosecutorial arbitrariness has replaced sentencing arbitrariness. A study by Pepperdine University School of Law published in Temple Law Review, surveyed the decision-making process among prosecutors in various states. The authors found that prosecutors' capital punishment filing decisions remain marked by local "idiosyncrasies," suggesting they are not in keeping with the spirit of the Supreme Court's directive. This means that "the very types of unfairness that the Supreme Court sought to eliminate" may still "infect capital cases." Wide prosecutorial discretion remains because of overly broad criteria. California law, for example, has 22 "special circumstances," making nearly all premeditated murders potential capital cases.
A proposed remedy against prosecutorial arbitrariness is to transfer the prosecution of capital cases to a statewide prosecution office or to the state attorney general.
Sentencing
Of the 31 states using the death penalty, three (Alabama, Montana and Nebraska) provide for the sentence to be decided by one or three judges when the prosecution seeks capital punishment (with nonbinding jury advice in Alabama).
The 28 other states provide for the sentence to be decided by a jury, and 27 of them require a unanimous sentence. However, the states differ on what happens if the penalty phase results in a hung jury:
In 4 states (Arizona, California, Kentucky and Nevada), a retrial of the penalty phase will happen before another jury (the common law rule for mistrial).See United States v. Perez, 1824
In 2 states (Indiana and Missouri), the judge will decide the sentence.
In the 21 other states, a hung jury results in a life sentence, even if only a single juror opposed death. Federal law also provides for that outcome.
In 2016, Florida adopted a statute providing for a death sentence to be imposed by a supermajority of 10 jurors, but the state supreme court struck it down the same year, holding that a jury can impose such a sentence only unanimously (see Capital punishment in Florida).
In Nebraska, the only state in which the sentence is decided by a three-judge panel, a life sentence is handed down even if only one of the three judges opposed death.
In all states in which the jury is involved, only death-qualified veniremen can be selected for such a jury, excluding both people who will always vote for the death sentence and those who are categorically opposed to it.
Direct review
If a defendant is sentenced to death at the trial level, the case then goes into a direct review.See, e.g., 18 U.S.C. § 3595. ("In a case in which a sentence of death is imposed, the sentence shall be subject to review by the court of appeals upon appeal by the defendant.") The direct review process is a typical legal appeal. An appellate court examines the record of evidence presented in the trial court and the law that the lower court applied and decides whether the decision was legally sound or not.See generally Appeal. Direct review of a capital sentencing hearing will result in one of three outcomes. If the appellate court finds that no significant legal errors occurred in the capital sentencing hearing, the appellate court will affirm the judgment, or let the sentence stand. If the appellate court finds that significant legal errors did occur, then it will reverse the judgment, or nullify the sentence and order a new capital sentencing hearing.Poland v. Arizona, 476 U.S. 147, 152–54 (1986). Lastly, if the appellate court finds that no reasonable juror could find the defendant eligible for the death penalty, a rarity, then it will order the defendant acquitted, or not guilty, of the crime for which he or she was given the death penalty, and order him or her sentenced to the next most severe punishment for which the offense is eligible. About 60 percent of death sentences survive the process of direct review intact.Eric M. Freedman, "Giarratano is a Scarecrow: The Right to Counsel in State Postconviction Proceedings," 91 Cornell L. Rev. 1079, 1097 (2006)
State collateral review
When a death sentence is affirmed on direct review, supplemental methods to attack the judgment, though less familiar than a typical appeal, remain. These supplemental remedies are considered collateral review, that is, an avenue for upsetting judgments that have become otherwise final.Teague v. Lane, 489 U.S. 288, 306 (1989). Where the prisoner received his death sentence in a state-level trial, as is usually the case, the first step in collateral review is state collateral review, which is often called state habeas corpus. (If the case is a federal death penalty case, it proceeds immediately from direct review to federal habeas corpus.) Although all states have some type of collateral review, the process varies widely from state to state.LaFave, Israel, & King, 6 Crim. Proc. § 28.11(b) (2d ed. 2007). Generally, the purpose of these collateral proceedings is to permit the prisoner to challenge his sentence on grounds that could not have been raised reasonably at trial or on direct review.LaFave, Israel, & King, 6 Crim. Proc. § 28.11(a) (2d ed. 2007). Most often these are claims, such as ineffective assistance of counsel, which require the court to consider new evidence outside the original trial record, something courts may not do in an ordinary appeal. State collateral review, though an important step in that it helps define the scope of subsequent review through federal habeas corpus, is rarely successful in and of itself. Only around 6 percent of death sentences are overturned on state collateral review.Eric M. Freedman, "Giarratano is a Scarecrow: The Right to Counsel in State Postconviction Proceedings," 91 Cornell L. Rev. 1079, 1097 (2006).
In Virginia, state habeas corpus petitions for condemned prisoners have been heard by the state supreme court under exclusive original jurisdiction since 1995, immediately after direct review by the same court. This avoids any proceeding before the lower courts, and is in part why Virginia has the shortest average time between death sentence and execution (less than 8 years) and has executed 111 offenders since 1976, with only 7 remaining on death row as of March 2016.
Federal habeas corpus
After a death sentence is affirmed in state collateral review, the prisoner may file for federal habeas corpus, which is a unique type of lawsuit that can be brought in federal courts. Federal habeas corpus is a species of collateral review, and it is the only way that state prisoners may attack a death sentence in federal court (other than petitions for certiorari to the United States Supreme Court after both direct review and state collateral review). The scope of federal habeas corpus is governed by the Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA), which significantly restricted its previous scope. The purpose of federal habeas corpus is to ensure that state courts, through the process of direct review and state collateral review, have done at least a reasonable job in protecting the prisoner's federal constitutional rights. Prisoners may also use federal habeas corpus suits to bring forth new evidence that they are innocent of the crime, though to be a valid defense at this late stage in the process, evidence of innocence must be truly compelling.House v. Bell, 126 S. Ct. 2064 (2006) According to Eric Freedman, 21 percent of death penalty cases are reversed through federal habeas corpus.
James Liebman, a professor of law at Columbia Law School, stated in 1996 that his study, which traced habeas corpus petitions in death penalty cases from conviction to completion of the case, found "a 40 percent success rate in all capital cases from 1978 to 1995." Similarly, a study by Ronald Tabak in a law review article puts the success rate in habeas corpus cases involving death row inmates even higher, finding that between "1976 and 1991, approximately 47 percent of the habeas petitions filed by death row inmates were granted." The different numbers are largely definitional, rather than substantive: Freedman's statistics look at the percentage of all death penalty cases reversed, while the others look only at cases not reversed prior to habeas corpus review.
A similar process is available for prisoners sentenced to death by the judgment of a federal court.see 28 U.S.C. § 2255.
The AEDPA also provides an expeditious habeas procedure in capital cases for states meeting several requirements set forth in it concerning counsel appointment for death row inmates (28 USC §§ 2261 – 2266). Under this program, federal habeas corpus for condemned prisoners would be decided in less than three years from affirmance of the sentence on state collateral review. In 2006, Congress conferred the determination of whether a state fulfils the requirements on the U.S. attorney general, with the state able to appeal to the D.C. Circuit Court of Appeals. As of March 2016, the Department of Justice had still not granted any certifications.
Section 1983
If the federal court refuses to issue a writ of habeas corpus, the death sentence becomes final for all purposes. In recent times, however, prisoners have postponed execution through another avenue of federal litigation using the Civil Rights Act of 1871 (codified at 42 U.S.C. § 1983), which allows people to bring lawsuits against state actors to protect their federal constitutional and statutory rights.
While the aforementioned appeals are normally limited to one and automatically stay the execution of the death sentence, Section 1983 lawsuits are unlimited, but the petitioner will be granted a stay of execution only if the court believes he has a likelihood of success on the merits.
Traditionally, Section 1983 was of limited use for a state prisoner under sentence of death because the Supreme Court has held that habeas corpus, not Section 1983, is the only vehicle by which a state prisoner can challenge his judgment of death. In the 2006 Hill v. McDonough case, however, the United States Supreme Court approved the use of Section 1983 as a vehicle for challenging a state's method of execution as cruel and unusual punishment in violation of the Eighth Amendment. The theory is that a prisoner bringing such a challenge is not attacking directly his judgment of death, but rather the means by which that judgment will be carried out. Therefore, the Supreme Court held in the Hill case that a prisoner can use Section 1983 rather than habeas corpus to bring the lawsuit. Yet, as Clarence Hill's own case shows, lower federal courts have often refused to hear suits challenging methods of execution on the ground that the prisoner brought the claim too late and only for the purposes of delay. Further, the Court's decision in Baze v. Rees, upholding a lethal injection method used by many states, has drastically narrowed the opportunity for relief through Section 1983.
Execution warrant
While the execution warrant is issued by the governor in several states, in the vast majority it is a judicial order, issued by a judge or by the state supreme court at the request of the prosecution.
The warrant usually sets an execution date. Some states instead provide a longer period, such as a week or 10 days, in which to carry out the execution. This is designed to avoid issuing a new warrant in case of a last-minute stay of execution that is vacated by a higher court only a few days or hours later.
Distribution of sentences
[Image: Total number of prisoners on death row in the United States from 1953 to 2008]
Within the context of the overall murder rate, the death penalty cannot be said to be widely or routinely used in the United States; in recent years the average has been about one death sentence for every 200 murder convictions.
Alabama has the highest per capita rate of death sentences. This is due to judges overriding life imprisonment sentences and imposing the death penalty.
Among states
The distribution of death sentences among states is loosely proportional to their populations and murder rates. California, the most populous state, also has the largest death row, with over 700 inmates. Wyoming, the least populous state, has only one condemned man.
But executions are more frequent (and happen more quickly after sentencing) in conservative states. Texas, which is the second most populous state of the Union, carried out over 500 executions during the post-Furman era, more than a third of the national total. California has carried out only 13 executions during the same period.
Among races
African Americans made up 41% of death row inmates while making up only 12.6% of the general population. They have made up 34% of those actually executed since 1976. According to a 2003 Amnesty International report, blacks and whites were the victims of murder in almost equal numbers, yet 80% of the people executed since 1977 were convicted of murders involving white victims.
Approximately 13.5% of death row inmates are of Hispanic or Latino descent, while they make up 17.4% of the general population.
Among sexes
As of October 1, 2014, men accounted for 98% of people currently on death row and 99% of executions since 1976.
Methods
[Image: Usage of lethal injection in the US]
[Image: Number of executions each year by the method used in the United States and the earlier colonies from 1608 to 2004. The adoption of electrocution caused a marked drop in the number of hangings, which declined further with the use of gas inhalation. After Gregg v. Georgia, most states changed to lethal injection, leading to its rise.]
All 31 states with the death penalty provide lethal injection as the primary method of execution.
Several states continue to use the historical three-drug protocol: an anesthetic, pancuronium bromide (a paralytic), and potassium chloride to stop the heart. Eight states have used a single-drug protocol, administering only an overdose of a single anesthetic to the prisoner.
While some state statutes specify the drugs required, a majority do not, giving more flexibility to corrections officials.
Pressure from anti-death penalty activists and shareholders has since made it difficult for correctional services to obtain the chemicals, and to counter this most states have made it a criminal offense to reveal the identities of execution team members or suppliers of lethal injection drugs. In 2015, imports of sodium thiopental for Texas and Arkansas from an Indian supplier not approved for the U.S. were seized by federal officials at airports.
Hospira, the only U.S. manufacturer of sodium thiopental, stopped making the drug in 2011. Since then, some states have used other barbiturates, such as pentobarbital or midazolam. In 2016 it was reported that more than 20 U.S. and European drug manufacturers including Pfizer (the owner of Hospira) had taken steps to prevent their drugs from being used for lethal injections.
In November 2015, California adopted regulations allowing the state to use its own public compounding pharmacies to make the chemicals.
Some states allow methods other than lethal injection, but only as secondary methods to be used at the request of the prisoner or if lethal injection is unavailable.
From 1976 to January 1, 2016, there were 1,422 executions, of which 1,247 were by lethal injection, 158 by electrocution, 11 by gas inhalation, 3 by hanging, and 3 by firing squad.
Lethal injection has twice been held constitutional by the U.S. Supreme Court, in Baze v. Rees (2008) and Glossip v. Gross (2015).
Offender-selected methods
In the following states, death row inmates with an execution warrant may choose to be executed by:
Electrocution in Alabama, Arkansas, Florida, Kentucky, South Carolina, Tennessee and Virginia.
Gas inhalation in Arizona and California.
Hanging in Washington.
Firing squad in Utah.
In five states (Arizona, Arkansas, Kentucky, Tennessee and Utah), the alternative method is offered only to inmates sentenced to death for crimes committed prior to a specified date (usually when the state switched from the earlier method to lethal injection).
When an offender chooses to be executed by a means different from the state's default method, which is always lethal injection, he loses the right to challenge its constitutionality in court (Stewart v. LaGrand, 1999).
The last executions by methods other than injection were as follows (in each case the condemned chose the method):
Electrocution: Robert Gleason (Virginia)
Firing squad: Ronnie Lee Gardner (Utah)
Lethal gas: Walter LaGrand (Arizona)
Hanging: William Bailey (Delaware)
Backup methods
Depending on the state, the following alternative methods are statutorily provided in the event that lethal injection is either found unconstitutional by a court or unavailable for practical reasons:South Carolina Code of Laws:
Electrocution in Florida, Oklahoma, South Carolina and Tennessee.
Gas inhalation in California, Missouri, Oklahoma and Wyoming.
Hanging in New Hampshire.
Firing squad in Oklahoma and Utah.
Oklahoma is the only state allowing more than two methods of execution in its statutes, providing for lethal injection, nitrogen hypoxia, electrocution and firing squad to be used in that order in the event that each earlier method is unavailable. The nitrogen option was added by the Oklahoma Legislature in 2015 and has never been used in a judicial execution, though it is routinely used to give a pain-free death in animal euthanasia.The Dawn of a New Form of Capital Punishment, Time, April 17, 2015
Three states (Oklahoma, Tennessee and Utah) added backup methods, or expanded their fields of application, in 2014 or 2015 in reaction to the shortage of lethal injection drugs.
Florida and Tennessee have the broadest provisions dealing with the unavailability of execution methods, requiring their state departments of corrections to use "any constitutional method" if both lethal injection and electrocution are found unconstitutional. This was designed to make further legislative intervention unnecessary in that event, but the provisions apply only to legal (not practical) infeasibility.
In May 2016, an Oklahoma grand jury recommended that the state use nitrogen hypoxia as its primary method of execution rather than as a mere backup, after experts testified that the method would be pain-free, easy and "inexpensive".
Federal executions
The method of execution of federal prisoners for offenses under the Violent Crime Control and Law Enforcement Act of 1994 is that of the state in which the conviction took place. If the state has no death penalty, the judge must choose a state with the death penalty for carrying out the execution.
For offenses under the Drug Kingpin Act of 1988 or the Uniform Code of Military Justice, the method of execution is lethal injection.
Execution attendance
[Image: The roughly 300 witnesses to the execution of Timothy McVeigh were mostly individuals related to the victims of the Oklahoma City bombing.]
The last public execution in the U.S. was that of Rainey Bethea in Owensboro, Kentucky, on August 14, 1936.
It was the last execution in the nation at which the general public was permitted to attend without any legally imposed restrictions. "Public execution" is a legal phrase, defined by the laws of various states, and carried out pursuant to a court order. Similar to "public record" or "public meeting," it means that anyone who wants to attend the execution may do so.
Around 1890, a political movement developed in the United States to mandate private executions. Several states enacted laws which required executions to be conducted within a "wall" or "enclosure" or to "exclude public view." Most state laws currently use such explicit wording to prohibit public executions, while others do so only implicitly by enumerating the only authorized witnesses.Connecticut § 54–100 Kentucky 431.220 Missouri § 546.730 New Mexico § 31-14-12 ch. 279 Massachusetts § 60 North Carolina § 15-188 Oklahoma Title 22 § 1015 Montana § 46-19-103 Ohio § 2949.22 Tennessee § 40-23-116 US Code Title 18 § 3596 Federal regulations 28 CFR 26.4
However, nearly all states allow news reporters to serve as execution witnesses so that the general public can be informed. Several states also allow victims' families and relatives selected by the prisoner to watch executions. An hour or two before the execution, the condemned is offered religious services and a last meal (except in Texas).
The execution of Timothy McVeigh on June 11, 2001 was witnessed by around 300 people, some by closed-circuit television.
Public opinion
Gallup, Inc. has monitored support for the death penalty in the United States since 1937 by asking "Are you in favor of the death penalty for a person convicted of murder?" The most recent poll, in October 2016, found 60% in favor and 37% opposed. A month earlier, a Pew Research poll found that 49% of Americans supported the death penalty for convicted murderers and 42% opposed, down from 80% in 1974 and the lowest support in 40 years.
When given a choice between the death penalty and life without parole, support has traditionally been significantly lower than polling which has only mentioned the death penalty. In 2010, for instance, one poll showed 49% favoring the death penalty and 46% favoring life imprisonment, while in another 61% said they preferred another punishment to the death penalty.
On the other hand, in November 2009, another Gallup poll found that 77% of Americans said that September 11 attacks mastermind Khalid Sheikh Mohammed should get the death penalty if convicted, including 12% of respondents who said they normally opposed the death penalty when asked the standard 1937 question. A similar result was found in 2001 when respondents were polled about the execution of Timothy McVeigh for the Oklahoma City bombing that killed 168 victims.
Debate
Capital punishment is a controversial issue, with many prominent organizations and individuals participating in the debate. Amnesty International and some religions oppose capital punishment on moral grounds, while the Innocence Project works to free wrongly convicted prisoners, including death row inmates, based on newly available DNA tests. Other groups, such as some law enforcement organizations, and some victims' rights groups support capital punishment.
The United States is one of only five industrialized democracies that still practice capital punishment. Of the others, Japan, Singapore, and Taiwan have executed prisoners, while South Korea currently has a moratorium in effect.
Religious groups are widely split on the issue of capital punishment. The Fiqh Council of North America, a group of highly influential Muslim scholars in the United States, has issued a fatwa calling for a moratorium on capital punishment in the United States until various preconditions in the legal system are met.
In October 2009, the American Law Institute voted to disavow the framework for capital punishment that it had created in 1962, as part of the Model Penal Code, "in light of the current intractable institutional and structural obstacles to ensuring a minimally adequate system for administering capital punishment." A study commissioned by the institute had said that experience had proved that the goal of individualized decisions about who should be executed and the goal of systemic fairness for minorities and others could not be reconciled.
In total, 156 prisoners have been either acquitted, or received pardons or commutations on the basis of possible innocence, between 1973 and 2015. Death penalty opponents often argue that this statistic shows how perilously close states have come to undertaking wrongful executions; proponents point out that the statistic refers only to those exonerated in law, and that the truly innocent may be a smaller number. Statistics likely understate the actual problem of wrongful convictions because once an execution has occurred there is often insufficient motivation and finance to keep a case open, and it becomes unlikely at that point that the miscarriage of justice will ever be exposed.
Arguments for and against capital punishment are based on moral, practical, and religious grounds. Advocates of the death penalty argue that it deters crime, is a good tool for prosecutors (in plea bargaining for example), improves the community by eliminating recidivism by executed criminals, provides closure to surviving victims or loved ones, and is a just penalty for the crimes it punishes.
Opponents argue that the death penalty is not an effective means of deterring crime,Experts Agree: Death Penalty Not A Deterrent To Violent Crime, http://news.ufl.edu/1997/01/15/death1/, January 15, 1997, accessed September 27, 2007 risks the execution of the innocent, is unnecessarily barbaric in nature, cheapens human life, and puts a government on the same base moral level as those criminals involved in murder.American Justice Volume 1 Furthermore, some opponents argue that the arbitrariness with which it is administered and the systemic influence of racial, socio-economic, geographic, and gender bias on determinations of desert make the current practice of capital punishment immoral and illegitimate.Londono, O. (2013), A Retributive Critique of Racial Bias and Arbitrariness in Capital Punishment. Journal of Social Philosophy, 44: 95–105.
Another argument (specific to the United States) in the capital punishment debate is cost. A convict is more likely to pursue the whole appeals process after a death sentence than after a sentence of life without parole. Others contest this argument, saying that the greater cost of appeals when the prosecution seeks the death penalty is offset by the savings from avoiding trial altogether in cases where the defendant pleads guilty to avoid the death penalty.
The American public has maintained its support for capital punishment for murder, although, as noted above, that support narrows considerably when life imprisonment without parole is offered as an alternative. The highest level of overall support recorded was 80 percent in 1994 (16 percent opposed), and the lowest was 42 percent in 1966 (47 percent opposed). On the question of the death penalty versus life without parole, the strongest preference for the death penalty was 61 percent in 1997 (29 percent favoring life), and the weakest was 47 percent in 2006 (48 percent favoring life).
After the September 2011 execution of Troy Davis, believed by many to be innocent, Richard Dieter, the director of the Death Penalty Information Center, said this case was a clear wake-up call to politicians across the United States. He said: "They weren't expecting such passion from people in opposition to the death penalty. There's a widely held perception that all Americans are united in favor of executions, but this message came across loud and clear that many people are not happy with it." Brian Evans of Amnesty International, which led the campaign to spare Davis's life, said that there was a groundswell in America of people "who are tired of a justice system that is inhumane and inflexible and allows executions where there are clear doubts about guilt". He predicted the debate would now be conducted with renewed energy.
Clemency and commutations
The largest number of clemencies was granted in January 2003 in Illinois when outgoing Governor George Ryan, who had already imposed a moratorium on executions, pardoned four death-row inmates and commuted the sentences of the remaining 167 to life in prison without the possibility of parole. When Governor Pat Quinn signed legislation abolishing the death penalty in Illinois in March 2011, he commuted the sentences of the fifteen inmates on death row to life imprisonment.
Previous post-Furman mass clemencies took place in 1986 in New Mexico, when Governor Toney Anaya commuted all death sentences because of his personal opposition to the death penalty. In 1991, outgoing Ohio Governor Dick Celeste commuted the sentences of eight prisoners, among them all four women on the state's death row. And during his two terms (1979–1987) as Florida's Governor, Bob Graham, although a strong death penalty supporter who had overseen the first post-Furman involuntary execution as well as 15 others, agreed to commute the sentences of six people on the grounds of "possible innocence" or "disproportionality."
Suicide on death row and volunteering for execution
The suicide rate of death row inmates was found by Lester and Tartaro to be 113 per 100,000 for the period 1976–1999. This is about ten times the rate of suicide in the United States as a whole and about six times the rate of suicide in the general U.S. prison population."Suicide on death row", David Lester and Christine Tartaro, Journal of Forensic Sciences, ISSN 0022-1198, 2002, vol. 47, no5, pp. 1108–1111
Between the reinstatement of the death penalty and January 1, 2016, 143 prisoners waived their appeals and asked that the execution be carried out. Four states (Connecticut, New Mexico, Oregon, and Pennsylvania) have executed only volunteers in the post-Furman era.[deathpenaltyinfo.org]
Execution hiatus
All executions were suspended throughout the country between September 2007 and April 2008, while the U.S. Supreme Court examined the constitutionality of lethal injection in Baze v. Rees, an unprecedented occurrence. It is the longest period without an execution in the United States since 1982. The method was ultimately upheld by a 7–2 margin.
In addition to the states that have no valid death penalty statute, the following states and jurisdictions have an official moratorium, or have carried out no executions for more than five years, as of 2016:
Since the reinstatement of the death penalty, Kansas, New Hampshire, and the United States military have performed no executions. However, in these jurisdictions, and in Wyoming, the lack of recent executions is due to the fact that no condemned prisoner has yet exhausted the appeal process.
Since 1976, four states have executed only condemned prisoners who voluntarily waived further appeals: Pennsylvania has executed three inmates, Oregon two, Connecticut one, and New Mexico one.
In North Carolina, executions are suspended following a decision by the state's medical board that physicians cannot participate in executions, which is a requirement under state law.
In California, United States District Judge Jeremy Fogel suspended all executions in the state on December 15, 2006, ruling that the implementation used in California was unconstitutional but that it could be fixed.Judge says executions unconstitutional
On November 25, 2009, the Kentucky Supreme Court suspended executions until the state adopts regulations for carrying out the penalty by lethal injection.
In November 2011, Oregon Governor John Kitzhaber announced a moratorium on executions in Oregon, canceling a planned execution and ordering a review of the death penalty system in the state.
Pharmaceutical companies whose products are used in the three-drug cocktails for lethal injections are predominantly European, and they have strenuously objected to the use of their drugs for executions and taken steps to prevent their use. For example, Hospira, the sole American manufacturer of sodium thiopental, the critical anesthetic in the three-drug cocktail, announced in 2011 that it would no longer manufacture the drug for the American market, in part for ethical reasons and in part because its transfer of sodium thiopental manufacturing to Italy would subject it to the European Union's Torture Regulation, which forbids the use of any product manufactured within the Union for torture (as execution by lethal injection is considered by the Regulation). Since the drug manufacturers began taking these steps and the EU regulation ended the importation of drugs produced in Europe, the resulting shortage of execution drugs has led to or influenced decisions to suspend executions in Arkansas, California, Kentucky, Louisiana, Mississippi, Montana, Nevada, North Carolina, and Tennessee.
On June 22, 2012, the Arkansas Supreme Court ruled that the state's lethal injection law violates the Arkansas Constitution, primarily on technical separation-of-powers grounds.Arkansas Court Upends Death Penalty, New York Times, Robbie Brown, June 22, 2012
On February 11, 2014, Washington state Governor Jay Inslee announced a capital punishment moratorium. All death penalty cases that come to Inslee will result in him issuing a reprieve, not a pardon or commutation.
In May 2014, Oklahoma Director of Corrections, Robert Patton, recommended an indefinite hold on executions in the state after the botched execution of Clayton Lockett.
On February 13, 2015, Pennsylvania Governor Tom Wolf announced a moratorium on the death penalty. Wolf will issue a reprieve for every execution until a commission on capital punishment that was established in 2011 by the Pennsylvania State Senate produces a recommendation.
See also
Capital punishment debate in the United States
Capital punishment by the United States federal government
List of United States Supreme Court decisions on capital punishment
List of offenders executed in the United States
List of death row inmates in the United States
List of last executions in the United States by crime
References
Further reading
Books
Bakken, Gordon Morris, ed. Invitation to an Execution: A History of the Death Penalty in the United States. University of New Mexico Press; 2010.
Banner, Stuart (2002). The Death Penalty: An American History. Harvard University Press. ISBN 0-674-00751-4.
Bessler, John D. Cruel and Unusual: The American Death Penalty and the Founders' Eighth Amendment. Boston, MA: Northeastern University Press, 2012.
Delfino, Michelangelo and Mary E. Day. (2007). Death Penalty USA 2005 – 2006 MoBeta Publishing, Tampa, Florida. ISBN 978-0-9725141-2-5; and Death Penalty USA 2003 – 2004 (2008). MoBeta Publishing, Tampa, Florida. ISBN 978-0-9725141-3-2.
Dow, David R., Dow, Mark (eds.) (2002). Machinery of Death: The Reality of America's Death Penalty Regime. Routledge, New York. ISBN 0-415-93266-1 (cloth), ISBN 0-415-93267-X (paperback) (Provides critical perspectives on the death penalty)
Garland, David (2010). Peculiar Institution: America's Death Penalty in an Age of Abolition. Harvard University Press.
Hartnett, Stephen John (2010). Executing Democracy, Volume 1: Capital Punishment and the Making of America, 1683–1807. East Lansing, MI: Michigan State University Press.
Hartnett, Stephen John (2012). Executing Democracy, Volume 2: Capital Punishment and the Making of America, 1635–1843. East Lansing, MI: Michigan State University Press.
Megivern, James J. (1997), The Death Penalty: An Historical and Theological Survey. Paulist Press, New York. ISBN 0-8091-0487-3
Osler, Mark William (2009). Jesus on Death Row: The Trial of Jesus and American Capital Punishment. Abingdon Press. ISBN 978-0-687-64756-9
Prejean, Helen (1993). Dead Man Walking. Random House. ISBN 0-679-75131-9 (paperback)(Describes the case of death row convict Elmo Patrick Sonnier, while also giving a general overview of issues connected to the death penalty)
Journal articles
Vidmar, Neil and Phoebe Ellsworth. "Public Opinion and the Death Penalty." Stanford Law Review. June 1974. Volume 26, pp. 1245–1270.
External links
Prisoners Executed Under Civil Authority in the United States, by Year, Region, and Jurisdiction, 1977–2012 Bureau of Justice Statistics
United States of America: Death Penalty Worldwide Academic research database on the laws, practice, and statistics of capital punishment for every death penalty country in the world.
Switzerland
Switzerland, officially the Swiss Confederation, is a federal republic in Europe. It consists of 26 cantons, and the city of Bern is the seat of the federal authorities.Bern is referred to as the "federal city". Swiss law does not designate a capital as such, but the federal parliament and government are located in Bern, while the federal courts are located in other cities.
The country is situated in Western-Central Europe,There are several definitions. See Geography of Switzerland#Western or Central Europe?. and is bordered by Italy to the south, France to the west, Germany to the north, and Austria and Liechtenstein to the east. Switzerland is a landlocked country geographically divided between the Alps, the Swiss Plateau and the Jura, spanning an area of about 41,285 km² (15,940 sq mi). While the Alps occupy the greater part of the territory, the Swiss population of approximately eight million people is concentrated mostly on the plateau, where the largest cities are to be found: among them are the two global cities and economic centres Zürich and Geneva.
The establishment of the Old Swiss Confederacy dates to the late medieval period, resulting from a series of military successes against Austria and Burgundy. Swiss independence from the Holy Roman Empire was formally recognized in the Peace of Westphalia in 1648. The country has a history of armed neutrality going back to the Reformation; it has not been in a state of war internationally since 1815 and did not join the United Nations until 2002. Nevertheless, it pursues an active foreign policy and is frequently involved in peace-building processes around the world. In addition to being the birthplace of the Red Cross, Switzerland is home to numerous international organisations, including the second largest UN office. On the European level, it is a founding member of the European Free Trade Association, but notably not part of the European Union or the European Economic Area. However, it participates in the Schengen Area and the European Single Market through bilateral treaties.
Spanning the intersection of Germanic and Romance Europe, Switzerland comprises four main linguistic and cultural regions: German, French, Italian and Romansh. Although the majority of the population are German speaking, Swiss national identity is rooted in a common historical background, shared values such as federalism and direct democracy, and Alpine symbolism. Due to its linguistic diversity, Switzerland is known by a variety of native names: Schweiz (German);Swiss Standard German spelling and pronunciation. The Swiss German name is sometimes spelled as Schwyz or Schwiiz. Schwyz is also the standard German (and international) name of one of the Swiss cantons. Suisse (French); Svizzera (Italian); and Svizra (Romansh). On coins and stamps, the Latin name Confoederatio Helvetica (frequently shortened to "Helvetia") is used instead of the four living languages.
Switzerland is one of the most developed countries in the world, with the highest nominal wealth per adult and the eighth-highest per capita gross domestic product according to the IMF. Switzerland ranks at or near the top globally in several metrics of national performance, including government transparency, civil liberties, quality of life, economic competitiveness, and human development. Zürich and Geneva have each been ranked among the top cities in the world in terms of quality of life, with the former ranked second globally, according to Mercer.
Etymology
The English name Switzerland is a compound containing Switzer, an obsolete term for the Swiss, which was in use during the 16th to 19th centuries.OED Online Etymology Dictionary etymonline.com. Retrieved on 25 June 2009 The English adjective Swiss is a loan from French Suisse, also in use since the 16th century. The name Switzer is from the Alemannic Schwiizer, in origin an inhabitant of Schwyz and its associated territory, one of the Waldstätten cantons which formed the nucleus of the Old Swiss Confederacy. The name originates as an exonym, applied pars pro toto to the troops of the Confederacy. The Swiss began to adopt the name for themselves after the Swabian War of 1499, used alongside the term for "Confederates", Eidgenossen (literally: comrades by oath), used since the 14th century. The data code for Switzerland, CH, is derived from the Latin Confoederatio Helvetica (Helvetic Confederation).
The toponym Schwyz itself was first attested in 972 in Old High German, ultimately perhaps related to a word meaning "to burn", referring to the area of forest that was burned and cleared to build.Room, Adrian (2003) Placenames of the World. London: MacFarland and Co., ISBN 0-7864-1814-1. The name was extended to the area dominated by the canton, and after the Swabian War of 1499 gradually came to be used for the entire Confederation.Switzerland, the Catholic Encyclopedia newadvent.org. Retrieved on 26 January 2010On Schwyzers, Swiss and Helvetians, Federal Department of Home Affairs, admin.ch.
The Swiss German name of the country is homophonous with that of the canton and the settlement, but is distinguished by the use of the definite article for the Confederation and not for the canton and the town.Züritütsch, Schweizerdeutsch (p. 2) schweizerdeutsch.ch. Retrieved on 26 January 2010; Kanton Schwyz: Kurzer historischer Überblick sz.ch. Retrieved on 26 January 2010
The Latin name Confoederatio Helvetica was neologized and introduced gradually after the formation of the federal state in 1848, harking back to the Napoleonic Helvetic Republic, appearing on coins from 1879, inscribed on the Federal Palace in 1902 and after 1948 used in the official seal.Marco Marcacci, Confederatio helvetica (2002), Historical Lexicon of Switzerland. (The ISO banking code, "CHF" for the Swiss franc, is taken from the state's Latin name). Helvetica is derived from the Helvetii, a Gaulish tribe living on the Swiss plateau before the Roman era.
Helvetia appears as a national personification of the Swiss confederacy in the 17th century with a 1672 play by Johann Caspar Weissenbach.
History
Switzerland has existed as a state in its present form since the adoption of the Swiss Federal Constitution in 1848. The precursors of Switzerland established a protective alliance at the end of the 13th century (1291), forming a loose confederation of states which persisted for centuries.
Early history
The oldest traces of hominid existence in Switzerland date back about 150,000 years.History. swissworld.org. Retrieved on 27 June 2009 The oldest known farming settlements in Switzerland, which were found at Gächlingen, have been dated to around 5300 BC.
[Image: Founded in 44 BC by Lucius Munatius Plancus, Augusta Raurica was the first Roman settlement on the Rhine and is now among the most important archaeological sites in Switzerland.Switzerland's Roman heritage comes to life swissinfo.ch]
The earliest known cultural tribes of the area were members of the Hallstatt and La Tène cultures, named after the archaeological site of La Tène on the north side of Lake Neuchâtel. La Tène culture developed and flourished during the late Iron Age from around 450 BC, possibly under some influence from the Greek and Etruscan civilisations. One of the most important tribal groups in the Swiss region was the Helvetii. Steadily harassed by the Germanic tribes, in 58 BC the Helvetii decided to abandon the Swiss plateau and migrate to western Gallia, but Julius Caesar's armies pursued and defeated them at the Battle of Bibracte, in today's eastern France, forcing the tribe to move back to its original homeland. In 15 BC, Tiberius, who was destined to be the second Roman emperor, and his brother Drusus conquered the Alps, integrating them into the Roman Empire. The area occupied by the Helvetii—the namesakes of the later Confoederatio Helvetica—first became part of Rome's Gallia Belgica province and then of its Germania Superior province, while the eastern portion of modern Switzerland was integrated into the Roman province of Raetia. Sometime around the start of the Common Era, the Romans maintained a large legionary camp called Vindonissa, now a ruin at the confluence of the Aare and Reuss rivers, near the town of Windisch, an outskirt of Brugg.
The first and second centuries AD were an age of prosperity for the population living on the Swiss plateau. Several towns, like Aventicum, Iulia Equestris and Augusta Raurica, reached a remarkable size, while hundreds of agricultural estates (Villae rusticae) were founded in the countryside.
Around 260 AD, the fall of the Agri Decumates territory north of the Rhine transformed today's Switzerland into a frontier land of the Empire. Repeated raids by the Alamanni tribes provoked the ruin of the Roman towns and economy, forcing the population to find shelter near Roman fortresses, like the Castrum Rauracense near Augusta Raurica. The Empire built another line of defence at the north border (the so-called Donau-Iller-Rhine-Limes), but at the end of the fourth century the increased Germanic pressure forced the Romans to abandon the linear defence concept, and the Swiss plateau was finally open to the settlement of Germanic tribes.
In the Early Middle Ages, from the end of the 4th century, the western extent of modern-day Switzerland was part of the territory of the Kings of the Burgundians. The Alemanni settled the Swiss plateau in the 5th century and the valleys of the Alps in the 8th century, forming Alemannia. Modern-day Switzerland was therefore then divided between the kingdoms of Alemannia and Burgundy. The entire region became part of the expanding Frankish Empire in the 6th century, following Clovis I's victory over the Alemanni at Tolbiac in 504 AD, and later Frankish domination of the Burgundians.Switzerland history Nationsencyclopedia.com. Retrieved on 27 November 2009History of Switzerland Nationsonline.org. Retrieved on 27 November 2009
Throughout the rest of the 6th, 7th and 8th centuries the Swiss regions continued under Frankish hegemony (Merovingian and Carolingian dynasties). But after its extension under Charlemagne, the Frankish Empire was divided by the Treaty of Verdun in 843. The territories of present-day Switzerland became divided into Middle Francia and East Francia until they were reunified under the Holy Roman Empire around 1000 AD.
By 1200, the Swiss plateau comprised the dominions of the houses of Savoy, Zähringer, Habsburg, and Kyburg. Some regions (Uri, Schwyz, Unterwalden, later known as Waldstätten) were accorded Imperial immediacy to grant the empire direct control over the mountain passes. With the extinction of its male line in 1263, the Kyburg dynasty fell in 1264; the Habsburgs under King Rudolph I (Holy Roman Emperor in 1273) then laid claim to the Kyburg lands and annexed them, extending their territory to the eastern Swiss plateau.
Old Swiss Confederacy
[Image: The 1291 Bundesbrief (Federal Charter)]
The Old Swiss Confederacy was an alliance among the valley communities of the central Alps. The Confederacy facilitated management of common interests and ensured peace on the important mountain trade routes. The Federal Charter of 1291 agreed between the rural communes of Uri, Schwyz, and Unterwalden is considered the confederacy's founding document, even though similar alliances are likely to have existed decades earlier.Schwabe & Co.: Geschichte der Schweiz und der Schweizer, Schwabe & Co 1986/2004. ISBN 3-7965-2067-7
[Image: The Old Swiss Confederacy from 1291 (dark green) to the sixteenth century (light green) and its associates (blue). The subject territories are shown in the other colours.]
By 1353, the three original cantons had joined with the cantons of Glarus and Zug and the Lucerne, Zürich and Bern city states to form the "Old Confederacy" of eight states that existed until the end of the 15th century. The expansion led to increased power and wealth for the confederation. By 1460, the confederates controlled most of the territory south and west of the Rhine to the Alps and the Jura mountains, particularly after victories against the Habsburgs (Battle of Sempach, Battle of Näfels), over Charles the Bold of Burgundy during the 1470s, and the success of the Swiss mercenaries. The Swiss victory in the Swabian War against the Swabian League of Emperor Maximilian I in 1499 amounted to de facto independence within the Holy Roman Empire.
The Old Swiss Confederacy had acquired a reputation of invincibility during these earlier wars, but expansion of the confederation suffered a setback in 1515 with the Swiss defeat in the Battle of Marignano. This ended the so-called "heroic" epoch of Swiss history. The success of Zwingli's Reformation in some cantons led to inter-cantonal religious conflicts in 1529 and 1531 (Wars of Kappel). It was not until more than one hundred years after these internal wars that, in 1648, under the Peace of Westphalia, European countries recognised Switzerland's independence from the Holy Roman Empire and its neutrality.
During the Early Modern period of Swiss history, the growing authoritarianism of the patriciate families combined with a financial crisis in the wake of the Thirty Years' War led to the Swiss peasant war of 1653. In the background to this struggle, the conflict between Catholic and Protestant cantons persisted, erupting in further violence at the First War of Villmergen, in 1656, and the Toggenburg War (or Second War of Villmergen), in 1712.
Napoleonic era
[Image: The Act of Mediation was Napoleon's attempt at a compromise between the Ancien Régime and a Republic.]
In 1798, the revolutionary French government conquered Switzerland and imposed a new unified constitution. This centralised the government of the country, effectively abolishing the cantons; moreover, Mülhausen joined France and the Valtellina valley joined the Cisalpine Republic, separating from Switzerland. The new regime, known as the Helvetic Republic, was highly unpopular. It had been imposed by a foreign invading army and destroyed centuries of tradition, making Switzerland nothing more than a French satellite state. The fierce French suppression of the Nidwalden Revolt in September 1798 was an example of the oppressive presence of the French Army and the local population's resistance to the occupation.
When war broke out between France and its rivals, Russian and Austrian forces invaded Switzerland. The Swiss refused to fight alongside the French in the name of the Helvetic Republic. In 1803 Napoleon organised a meeting of the leading Swiss politicians from both sides in Paris. The result was the Act of Mediation which largely restored Swiss autonomy and introduced a Confederation of 19 cantons. Henceforth, much of Swiss politics would concern balancing the cantons' tradition of self-rule with the need for a central government.
In 1815 the Congress of Vienna fully re-established Swiss independence and the European powers agreed to permanently recognise Swiss neutrality. Swiss troops still served foreign governments until 1860, when they fought in the Siege of Gaeta. The treaty also allowed Switzerland to increase its territory, with the admission of the cantons of Valais, Neuchâtel and Geneva. Switzerland's borders have not changed since, except for some minor adjustments; in the Valle di Lei adjustment, Italy received a territory of the same area in exchange.
Federal state
[Image: The first Federal Palace in Bern (1857). One of the three cantons presiding over the Tagsatzung (the former legislative and executive council), Bern was chosen as the federal capital in 1848, mainly because of its closeness to the French-speaking area.]
The restoration of power to the patriciate was only temporary. After a period of unrest with repeated violent clashes such as the Züriputsch of 1839, civil war (the Sonderbundskrieg) broke out in 1847 when some Catholic cantons tried to set up a separate alliance (the Sonderbund). The war lasted for less than a month, causing fewer than 100 casualties, most of which were through friendly fire. Yet however minor the Sonderbundskrieg appears compared with other European riots and wars in the 19th century, it nevertheless had a major impact on both the psychology and the society of the Swiss and of Switzerland.
The war convinced most Swiss of the need for unity and strength towards its European neighbours. Swiss people from all strata of society, whether Catholic or Protestant, from the liberal or conservative current, realised that the cantons would profit more if their economic and religious interests were merged.
Thus, while the rest of Europe saw revolutionary uprisings, the Swiss drew up a constitution which provided for a federal layout, much of it inspired by the American example. This constitution provided for a central authority while leaving the cantons the right to self-government on local issues. Giving credit to those who favoured the power of the cantons (the Sonderbund Kantone), the national assembly was divided between an upper house (the Council of States, two representatives per canton) and a lower house (the National Council, with representatives elected from across the country). Referendums were made mandatory for any amendment of this constitution.
[Image: Inauguration in 1882 of the Gotthard Rail Tunnel, connecting the southern canton of Ticino; it was the longest tunnel in the world at the time.Tunnel Vision: Switzerland's AlpTransit Gotthard Tunnel inboundlogistics.com. Retrieved on 24 April 2010]
A system of single weights and measures was introduced and in 1850 the Swiss franc became the Swiss single currency. Article 11 of the constitution forbade sending troops to serve abroad, though the Swiss were still obliged to serve Francis II of the Two Sicilies with Swiss Guards present at the Siege of Gaeta in 1860, marking the end of foreign service.
An important clause of the constitution was that it could be re-written completely if this was deemed necessary, thus enabling it to evolve as a whole rather than being modified one amendment at a time.Histoire de la Suisse, Éditions Fragnière, Fribourg, Switzerland
This need soon proved itself when the rise in population and the Industrial Revolution that followed led to calls to modify the constitution accordingly. An early draft was rejected by the population in 1872 but modifications led to its acceptance in 1874. It introduced the facultative referendum for laws at the federal level. It also established federal responsibility for defence, trade, and legal matters.
In 1891, the constitution was revised with unusually strong elements of direct democracy, which remain unique even today.
Modern history
[Image: General Ulrich Wille, Commander-in-Chief of the Swiss Army during World War I]
Switzerland was not invaded during either of the world wars. During World War I, Switzerland was home to Vladimir Illych Ulyanov (Vladimir Lenin) and he remained there until 1917.Lenin and the Swiss non-revolution swissinfo.ch. Retrieved on 25 January 2010 Swiss neutrality was seriously questioned by the Grimm–Hoffmann Affair in 1917, but it was short-lived. In 1920, Switzerland joined the League of Nations, which was based in Geneva, on condition that it was exempt from any military requirements.
During World War II, detailed invasion plans were drawn up by the Germans,Urner, Klaus (2001) Let's Swallow Switzerland, Lexington Books, pp. 4, 7, ISBN 0739102559. but Switzerland was never attacked. Switzerland was able to remain independent through a combination of military deterrence, concessions to Germany, and good fortune as larger events during the war delayed an invasion.Book review: Target Switzerland: Swiss Armed Neutrality in World War II, Halbrook, Stephen P. stonebooks.com. Retrieved on 2 December 2009 Under the central command of General Henri Guisan, a general mobilisation of the armed forces was ordered. The Swiss military strategy was changed from one of static defence at the borders, protecting the economic heartland, to one of organised long-term attrition and withdrawal to strong, well-stockpiled positions high in the Alps known as the Reduit. Switzerland was an important base for espionage by both sides in the conflict and often mediated communications between the Axis and Allied powers.
Switzerland's trade was blockaded by both the Allies and by the Axis. Economic cooperation and extension of credit to the Third Reich varied according to the perceived likelihood of invasion and the availability of other trading partners. Concessions reached a peak after a crucial rail link through Vichy France was severed in 1942, leaving Switzerland completely surrounded by the Axis. Over the course of the war, Switzerland interned over 300,000 refugees and the International Red Cross, based in Geneva, played an important part during the conflict. Strict immigration and asylum policies as well as the financial relationships with Nazi Germany raised controversy, but not until the end of the 20th century.Switzerland, National Socialism and the Second World War. Final Report of the Independent Commission of Experts Switzerland, Pendo Verlag GmbH, Zürich 2002, ISBN 3-85842-603-2, p. 498.
During the war, the Swiss Air Force engaged aircraft of both sides, shooting down 11 intruding Luftwaffe planes in May and June 1940, then forcing down other intruders after a change of policy following threats from Germany. Over 100 Allied bombers and their crews were interned during the war. During 1944–45, Allied bombers mistakenly bombed a few places in Switzerland, among which were the cities of Schaffhausen, Basel and Zürich.
After the war, the Swiss government exported credits through the charitable fund known as the Schweizerspende and also donated to the Marshall Plan to help Europe's recovery, efforts that ultimately benefited the Swiss economy.Switzerland, National Socialism and the Second World War. Final Report of the Independent Commission of Experts Switzerland, Pendo Verlag GmbH, Zürich 2002, ISBN 3-85842-603-2, p. 521.
During the Cold War, Swiss authorities considered the construction of a Swiss nuclear bomb.7.4 States Formerly Possessing or Pursuing Nuclear Weapons Retrieved 6 March 2014 Leading nuclear physicists at the Federal Institute of Technology Zürich such as Paul Scherrer made this a realistic possibility. However, financial problems with the defence budget prevented the substantial funds from being allocated, and the Nuclear Non-Proliferation Treaty of 1968 was seen as a valid alternative. All remaining plans for building nuclear weapons were dropped by 1988.
Switzerland was the last Western republic to grant women the right to vote. Some Swiss cantons approved this in 1959, while at the federal level it was achieved in 1971Country profile: Switzerland. UK Foreign and Commonwealth Office (29 October 2012). and, after resistance, in the last canton Appenzell Innerrhoden (one of only two remaining Landsgemeinde) in 1990. After obtaining suffrage at the federal level, women quickly rose in political significance, with the first woman on the seven-member Federal Council executive being Elisabeth Kopp, who served from 1984 to 1989, and the first female president being Ruth Dreifuss in 1999.
thumb|In 2003, by granting the Swiss People's Party a second seat in the governing cabinet, the Parliament altered the coalition which had dominated Swiss politics since 1959.
Switzerland joined the Council of Europe in 1963. In 1979 areas from the canton of Bern attained independence from the Bernese, forming the new canton of Jura. On 18 April 1999 the Swiss population and the cantons voted in favour of a completely revised federal constitution.
In 2002 Switzerland became a full member of the United Nations, leaving the Vatican City as the last widely recognised state without full UN membership. Switzerland is a founding member of the EFTA, but is not a member of the European Economic Area. An application for membership in the European Union was sent in May 1992, but it was not advanced after Swiss voters rejected membership in the EEA in a December 1992 referendum, Switzerland being the only country to put the EEA to a popular vote. There have since been several referendums on the EU issue; due to a mixed reaction from the population the membership application has been frozen. Nonetheless, Swiss law is gradually being adjusted to conform with that of the EU, and the government has signed a number of bilateral agreements with the European Union. Switzerland, together with Liechtenstein, has been completely surrounded by the EU since Austria's entry in 1995. On 5 June 2005, Swiss voters agreed by a 55% majority to join the Schengen treaty, a result that was regarded by EU commentators as a sign of support by Switzerland, a country that is traditionally perceived as independent and reluctant to enter supranational bodies.
Geography
thumb|upright=1.1|Physical map of Switzerland
thumb|Köppen climate classification types of Switzerland
Extending across the north and south sides of the Alps in west-central Europe, Switzerland encompasses a great diversity of landscapes and climates within a limited area. The population is about 8 million, resulting in an average population density of around 195 people per square kilometre (500/sq mi). The more mountainous southern half of the country is far more sparsely populated than the northern half. In the canton of Graubünden, the largest in the country and lying entirely in the Alps, population density falls to 27/km² (70/sq mi).
Switzerland lies between latitudes 45° and 48° N, and longitudes 5° and 11° E. It contains three basic topographical areas: the Swiss Alps to the south, the Swiss Plateau or Central Plateau, and the Jura mountains on the west. The Alps are a high mountain range running across the central-south of the country, comprising about 60% of the country's total area. The majority of the Swiss population live in the Swiss Plateau. Among the high valleys of the Swiss Alps many glaciers are found. From these originate the headwaters of several major rivers, such as the Rhine, Inn, Ticino and Rhône, which flow in the four cardinal directions into the whole of Europe. The hydrographic network includes several of the largest bodies of freshwater in Central and Western Europe, among which are included Lake Geneva (also called le Lac Léman in French), Lake Constance (known as Bodensee in German) and Lake Maggiore. Switzerland has more than 1500 lakes, and contains 6% of Europe's stock of fresh water. Lakes and glaciers cover about 6% of the national territory. The largest lake is Lake Geneva, in western Switzerland, shared with France. The Rhône is both the main source and outflow of Lake Geneva. Lake Constance is the second largest Swiss lake and, like Lake Geneva, an intermediate step of a major river, in this case the Rhine, at the border with Austria and Germany. While the Rhône flows into the Mediterranean Sea in the French Camargue region and the Rhine flows into the North Sea at Rotterdam in the Netherlands, their mouths lying far apart, the sources of both rivers are only a short distance from each other in the Swiss Alps.
48 of Switzerland's mountains are 4,000 metres above sea level or higher. At 4,634 m, Monte Rosa is the highest, although the Matterhorn (4,478 m) is often regarded as the most famous. Both are located within the Pennine Alps in the canton of Valais, on the border with Italy. The section of the Bernese Alps above the deep glacial Lauterbrunnen valley, containing 72 waterfalls, is well known for the Jungfrau (4,158 m), Eiger and Mönch, and the many picturesque valleys in the region. In the southeast the long Engadin Valley, encompassing the St. Moritz area in the canton of Graubünden, is also well known; the highest peak in the neighbouring Bernina Alps is Piz Bernina (4,049 m).
The more populous northern part of the country, comprising about 30% of the country's total area, is called the Swiss Plateau. It has greater open and hilly landscapes, partly forested, partly open pastures, usually with grazing herds or with vegetable and fruit fields, but it is still hilly. There are large lakes found here and the biggest Swiss cities are in this area of the country.
Climate
The Swiss climate is generally temperate, but can vary greatly between the localities, from glacial conditions on the mountaintops to the often pleasant near-Mediterranean climate at Switzerland's southern tip. There are some valley areas in the southern part of Switzerland where some cold-hardy palm trees are found. Summers tend to be warm and humid at times with periodic rainfall, so they are ideal for pastures and grazing. The less humid winters in the mountains may see long intervals of stable conditions for weeks, while the lower lands tend to suffer from inversion during these periods, thus seeing no sun for weeks.
A weather phenomenon known as the föhn (with an identical effect to the chinook wind) can occur at all times of the year and is characterised by an unexpectedly warm wind, bringing air of very low relative humidity to the north of the Alps during rainfall periods on the southern face of the Alps. This works both ways across the Alps but is more efficient if blowing from the south due to the steeper step for oncoming wind from the south. Valleys running south to north trigger the best effect.
The driest conditions persist in all inner alpine valleys that receive less rain because arriving clouds lose a lot of their content while crossing the mountains before reaching these areas. Large alpine areas such as Graubünden remain drier than pre-alpine areas, and, as in the main valley of the Valais, wine grapes are grown there.
The wettest conditions persist in the high Alps and in the Ticino canton, which has much sun yet heavy bursts of rain from time to time. Precipitation tends to be spread moderately throughout the year with a peak in summer. Autumn is the driest season, and winter receives less precipitation than summer, yet the weather patterns in Switzerland are not part of a stable climate system and can vary from year to year with no strict and predictable periods.
Environment
Switzerland's ecosystems can be particularly fragile, because the many delicate valleys separated by high mountains often form unique ecologies. The mountainous regions themselves are also vulnerable, with a rich range of plants not found at other altitudes, and experience some pressure from visitors and grazing. The climatic, geological and topographical conditions of the alpine region make for a very fragile ecosystem that is particularly sensitive to climate change. Nevertheless, according to the 2014 Environmental Performance Index, Switzerland ranks first among 132 nations in safeguarding the environment, due to its high scores on environmental public health, its heavy reliance on renewable sources of energy (hydropower and geothermal energy), and its control of greenhouse gas emissions.
Politics
thumb|The Swiss Federal Council in 2016 with President Johann Schneider-Ammann (front, centre)As shown in this image, the current members of the council are (as of January 2016, from left to right): Federal Councillor Alain Berset, Federal Councillor Didier Burkhalter, Vice-President Doris Leuthard, President Johann Schneider-Ammann, Federal Councillor Ueli Maurer, Federal Councillor Simonetta Sommaruga, Federal Councillor Guy Parmelin and Federal Chancellor Corina Casanova
The Federal Constitution adopted in 1848 is the legal foundation of the modern federal state. It is among the oldest constitutions in the world. A new Constitution was adopted in 1999, but did not introduce notable changes to the federal structure. It outlines basic and political rights of individuals and citizen participation in public affairs, divides the powers between the Confederation and the cantons and defines federal jurisdiction and authority. There are three main governing bodies on the federal level: the bicameral parliament (legislative), the Federal Council (executive) and the Federal Court (judicial).
thumb|upright|left|The Federal Palace, seat of the Federal Assembly and the Federal Council.
The Swiss Parliament consists of two houses: the Council of States which has 46 representatives (two from each canton and one from each half-canton) who are elected under a system determined by each canton, and the National Council, which consists of 200 members who are elected under a system of proportional representation, depending on the population of each canton. Members of both houses serve for 4 years and only serve as members of parliament part-time (so-called "Milizsystem" or Citizen legislature). When both houses are in joint session, they are known collectively as the Federal Assembly. Through referendums, citizens may challenge any law passed by parliament and through initiatives, introduce amendments to the federal constitution, thus making Switzerland a direct democracy.
The Federal Council constitutes the federal government, directs the federal administration and serves as collective Head of State. It is a collegial body of seven members, elected for a four-year mandate by the Federal Assembly which also exercises oversight over the Council. The President of the Confederation is elected by the Assembly from among the seven members, traditionally in rotation and for a one-year term; the President chairs the government and assumes representative functions. However, the president is a primus inter pares with no additional powers, and remains the head of a department within the administration.
The Swiss government has been a coalition of the four major political parties since 1959, each party having a number of seats that roughly reflects its share of electorate and representation in the federal parliament.
The classic distribution of 2 CVP/PDC, 2 SPS/PSS, 2 FDP/PRD and 1 SVP/UDC as it stood from 1959 to 2003 was known as the "magic formula". Following the 2015 Federal Council elections, the seven seats in the Federal Council were distributed as follows:
1 seat for the Christian Democratic People's Party (CVP/PDC),
2 seats for the Free Democratic Party (FDP/PRD),
2 seats for the Social Democratic Party (SPS/PSS),
2 seats for the Swiss People's Party (SVP/UDC).
The function of the Federal Supreme Court is to hear appeals against rulings of cantonal or federal courts. The judges are elected by the Federal Assembly for six-year terms.
Direct democracy
thumb|The Landsgemeinde is an old form of direct democracy. It is still practised in two cantons.
Direct democracy and federalism are hallmarks of the Swiss political system. Swiss citizens are subject to three legal jurisdictions: the municipality, canton and federal levels. The 1848/1999 federal constitution defines a system of direct democracy (sometimes called half-direct or representative direct democracy because it is aided by the more commonplace institutions of a representative democracy). The instruments of this system at the federal level, known as popular rights (, , ), include the right to submit a federal initiative and a referendum, both of which may overturn parliamentary decisions.
By calling a federal referendum, a group of citizens may challenge a law passed by parliament, if they gather 50,000 signatures against the law within 100 days. If so, a national vote is scheduled where voters decide by a simple majority whether to accept or reject the law. Any 8 cantons together can also call a constitutional referendum on a federal law.
Similarly, the federal constitutional initiative allows citizens to put a constitutional amendment to a national vote, if 100,000 voters sign the proposed amendment within 18 months.Since 1999, an initiative can also be in the form of a general proposal to be elaborated by Parliament, but because it is considered less attractive for various reasons, this form of initiative has yet to find any use. The Federal Council and the Federal Assembly can supplement the proposed amendment with a counter-proposal, and then voters must indicate a preference on the ballot in case both proposals are accepted. Constitutional amendments, whether introduced by initiative or in parliament, must be accepted by a double majority of the national popular vote and the cantonal popular votes.That is a majority of 23 cantonal votes, because the result of the popular vote in the six traditional half-cantons each counts as half the vote of one of the other cantons.
Administrative divisions
The Swiss Confederation consists of 20 cantons and 6 half cantons:
500 px|Swiss cantons
{| class="wikitable"
! Canton !! ID !! Capital !! Canton !! ID !! Capital
|-
| Aargau || 19 || Aarau || *Nidwalden || 7 || Stans
|-
| *Appenzell Ausserrhoden || 15 || Herisau || *Obwalden || 6 || Sarnen
|-
| *Appenzell Innerrhoden || 16 || Appenzell || Schaffhausen || 14 || Schaffhausen
|-
| *Basel-Landschaft || 13 || Liestal || Schwyz || 5 || Schwyz
|-
| *Basel-Stadt || 12 || Basel || Solothurn || 11 || Solothurn
|-
| Bern || 2 || Bern || St. Gallen || 17 || St. Gallen
|-
| Fribourg || 10 || Fribourg || Thurgau || 20 || Frauenfeld
|-
| Geneva || 25 || Geneva || Ticino || 21 || Bellinzona
|-
| Glarus || 8 || Glarus || Uri || 4 || Altdorf
|-
| Graubünden || 18 || Chur || Valais || 23 || Sion
|-
| Jura || 26 || Delémont || Vaud || 22 || Lausanne
|-
| Lucerne || 3 || Lucerne || Zug || 9 || Zug
|-
| Neuchâtel || 24 || Neuchâtel || Zürich || 1 || Zürich
|}
*These cantons are known as half-cantons and are thus represented by only one councillor (instead of two) in the Council of States.
The cantons have a permanent constitutional status and, in comparison with the situation in other countries, a high degree of independence. Under the Federal Constitution, all 26 cantons are equal in status. Each canton has its own constitution, and its own parliament, government and courts. However, there are considerable differences between the individual cantons, most particularly in terms of population and geographical area. Their populations vary between 15,000 (Appenzell Innerrhoden) and 1,253,500 (Zürich), and their area between 37 km² (Basel-Stadt) and 7,105 km² (Graubünden). The cantons comprise a total of 2,485 municipalities. Within Switzerland there are two enclaves: Büsingen belongs to Germany, Campione d'Italia belongs to Italy.Enclaves of the world enclaves.webs.com. Retrieved on 15 December 2009
Foreign relations and international institutions
Traditionally, Switzerland avoids alliances that might entail military, political, or direct economic action and has been neutral since the end of its expansion in 1515. Its policy of neutrality was internationally recognised at the Congress of Vienna in 1815.Neutrality and isolationism swissworld.org, Retrieved on 23 June 2009 Only in 2002 did Switzerland become a full member of the United Nations and it was the first state to join it by referendum. Switzerland maintains diplomatic relations with almost all countries and historically has served as an intermediary between other states. Switzerland is not a member of the European Union; the Swiss people have consistently rejected membership since the early 1990s. However, Switzerland does participate in the Schengen Area.
thumb|upright|left|The monochromatically reversed Swiss flag became the symbol of the Red Cross Movement, founded in 1863 by Henri Dunant.Henri Dunant, the Nobel Peace Prize 1901 nobelprize.org. Retrieved on 2 December 2009
A large number of international institutions have their seats in Switzerland, in part because of its policy of neutrality. Geneva is the birthplace of the Red Cross and Red Crescent Movement and the Geneva Conventions and, since 2006, hosts the United Nations Human Rights Council. Even though Switzerland is one of the most recent countries to have joined the United Nations, the Palace of Nations in Geneva is the second biggest centre for the United Nations after New York, and Switzerland was a founding member and home to the League of Nations.
Apart from the United Nations headquarters, the Swiss Confederation is host to many UN agencies, like the World Health Organization (WHO), the International Labour Organization (ILO), the International Telecommunication Union (ITU), the United Nations High Commissioner for Refugees (UNHCR) and about 200 other international organisations, including the World Trade Organization and the World Intellectual Property Organization. The annual meetings of the World Economic Forum in Davos bring together top international business and political leaders from Switzerland and foreign countries to discuss important issues facing the world, including health and the environment. Additionally, the headquarters of the Bank for International Settlements (BIS) have been located in Basel since 1930.
Furthermore, many sport federations and organisations are located throughout the country, such as the International Basketball Federation in Geneva, the Union of European Football Associations (UEFA) in Nyon, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation both in Zürich, the International Cycling Union in Aigle, and the International Olympic Committee in Lausanne.Sports directory if-sportsguide.ch. Retrieved on 25 January 2010
Military
thumb|upright|A Swiss Air Force F/A-18 Hornet at Axalp Air Show
The Swiss Armed Forces, including the Land Forces and the Air Force, are composed mostly of conscripts, male citizens aged from 20 to 34 (in special cases up to 50) years. Being a landlocked country, Switzerland has no navy; however, on lakes bordering neighbouring countries, armed military patrol boats are used. Swiss citizens are prohibited from serving in foreign armies, except for the Swiss Guards of the Vatican, or if they are dual citizens of a foreign country and reside there.
The structure of the Swiss militia system stipulates that the soldiers keep their Army-issued equipment, including all personal weapons, at home. Some organisations and political parties find this practice controversialAn initiative to abandon this practice has been launched on 4 September 2007, and supported by GSoA, the Green Party of Switzerland and the Social Democratic Party of Switzerland as well as other organisations which are listed at Tragende und unterstützende Organisationen. schutz-vor-waffengewalt.ch but mainstream Swiss opinion is in favour of the system. Compulsory military service concerns all male Swiss citizens; women can serve voluntarily. Men usually receive military conscription orders for training at the age of 18. About two thirds of the young Swiss are found suited for service; for those found unsuited, various forms of alternative service exist. Annually, approximately 20,000 persons are trained in recruit centres for a duration from 18 to 21 weeks. The reform "Army XXI" was adopted by popular vote in 2003; it replaced the previous model "Army 95", reducing the personnel strength from 400,000 to about 200,000. Of those, 120,000 are active in periodic Army training and 80,000 are non-training reserves.Die Armee in Zahlen – Truppenbestände. www.vbs.admin.ch (in German)
thumb|left|Swiss built Mowag Eagles of the Land Forces
Overall, three general mobilisations have been declared to ensure the integrity and neutrality of Switzerland. The first one was held on the occasion of the Franco-Prussian War of 1870–71. The second was in response to the outbreak of the First World War in August 1914. The third mobilisation of the army took place in September 1939 in response to the German attack on Poland; Henri Guisan was elected as the General-in-Chief.
Because of its neutrality policy, the Swiss army does not currently take part in armed conflicts in other countries, but is part of some peacekeeping missions around the world. Since 2000 the armed force department has also maintained the Onyx intelligence gathering system to monitor satellite communications.As context, according to Edwin Reischauer, "To be neutral you must be ready to be highly militarized, like Switzerland or Sweden." – see Chapin, Emerson. "Edwin Reischauer, Diplomat and Scholar, Dies at 79," New York Times. 2 September 1990.
Following the end of the Cold War there have been a number of attempts to curb military activity or even abolish the armed forces altogether. A notable referendum on the subject, launched by an anti-militarist group, was held on 26 November 1989. It was defeated with about two thirds of the voters against the proposal.Volksabstimmung vom 26. November 1989 admin.ch. Retrieved on 25 January 2010L'évolution de la politique de sécurité de la Suisse ("Evolution of Swiss Security Policies") by Manfred Rôsch, NATO.int A similar referendum, called for before, but held shortly after the 11 September attacks in the US, was defeated by over 78% of voters.Volksinitiative 'für eine glaubwürdige Sicherheitspolitik und eine Schweiz ohne Armee (in German) admin.ch. Retrieved on 7 December 2009
Gun politics in Switzerland are unique in Europe in that a relatively high percentage (29%) of citizens are legally armed. The large majority of firearms kept at home are militia-issued weapons, but ammunition is not issued.
Economy and labour law
thumb|Swiss Bond: 3.5% Obligation, issued 6 July 1889
thumb|The Omega Speedmaster worn on the moon during the Apollo missions. In terms of value, Switzerland is responsible for half of the world's production of watches.
Switzerland has a stable, prosperous and high-tech economy and enjoys great wealth, being ranked as the wealthiest country in the world per capita in multiple rankings. In 2011 it was ranked as the wealthiest country in the world in per capita terms (with "wealth" being defined to include both financial and non-financial assets), while the 2013 Credit Suisse Global Wealth Report showed that Switzerland was the country with the highest average wealth per adult in 2013.Credit Suisse: Global wealth has soared 14% since 2010 to USD 231 trillion with the strongest growth in emerging markets. Credit Suisse.Table 2: Top 10 countries with the highest average wealth per adult in 2011. Credit Suisse. It has the world's nineteenth largest economy by nominal GDP and the thirty-sixth largest by purchasing power parity. It is the twentieth largest exporter, despite its small size. Switzerland has the highest European rating in the Index of Economic Freedom 2010, while also providing large coverage through public services.2012 Index of Economic Freedom: Switzerland heritage.org. Retrieved on 25 January 2011 The nominal per capita GDP is higher than those of the larger Western and Central European economies and Japan. If adjusted for purchasing power parity, Switzerland ranks 8th in the world in terms of GDP per capita, according to the World Bank and IMF (ranked 15th according to the CIA Worldfactbook).
The World Economic Forum's Global Competitiveness Report currently ranks Switzerland's economy as the most competitive in the world, while ranked by the European Union as Europe's most innovative country.The Innovation Union's performance scoreboard for Research and Innovation 2010. Maastricht Economic and social Research and training centre on Innovation and Technology, 1 February 2011. For much of the 20th century, Switzerland was the wealthiest country in Europe by a considerable margin (by GDP – per capita). In 2007 the gross median household income in Switzerland was an estimated 137,094 USD at Purchasing power parity while the median income was 95,824 USD.Comparative Agendas accessed 12 July 2013 Switzerland also has one of the world's largest account balances as a percentage of GDP.
thumb|upright|left|The Greater Zürich Area, home to 1.5 million inhabitants and 150,000 companies, is one of the most important economic centres in the world.The most powerful cities in the world citymayors.com. Retrieved on 27 April 2012
Switzerland is home to several large multinational corporations. The largest Swiss companies by revenue are Glencore, Gunvor, Nestlé, Novartis, Hoffmann-La Roche, ABB, Mercuria Energy Group and Adecco. Also, notable are UBS AG, Zurich Financial Services, Credit Suisse, Barry Callebaut, Swiss Re, Tetra Pak, The Swatch Group and Swiss International Air Lines. Switzerland is ranked as having one of the most powerful economies in the world.
Switzerland's most important economic sector is manufacturing. Manufacturing consists largely of the production of specialist chemicals, health and pharmaceutical goods, scientific and precision measuring instruments and musical instruments. The largest exported goods are chemicals (34% of exported goods), machines/electronics (20.9%), and precision instruments/watches (16.9%). Exported services amount to a third of exports.Swiss Statistical Yearbook 2008 by Swiss Federal Statistical Office The service sector – especially banking and insurance, tourism, and international organisations – is another important industry for Switzerland.
Around 3.8 million people work in Switzerland; about 25% of employees belonged to a trade union in 2004. Switzerland has a more flexible job market than neighbouring countries and the unemployment rate is very low. The unemployment rate increased from a low of 1.7% in June 2000 to a peak of 4.4% in December 2009.Swiss jobless reach 12-year high – a mere 4.4 pct. Associated Press (8 January 2010). The unemployment rate decreased to 3.2% in 2014 without further decrease in 2015 and 2016.The World Factbook Population growth from net immigration is quite high, at 0.52% of population in 2004. The foreign citizen population was 21.8% in 2004, about the same as in Australia. GDP per hour worked is the world's 16th highest, at 49.46 international dollars in 2012.
thumb|The Engadin Valley. Tourism constitutes an important revenue for the less industrialised alpine regions.
Switzerland has an overwhelmingly private sector economy and low tax rates by Western World standards; overall taxation is one of the smallest of developed countries. Switzerland is a relatively easy place to do business, currently ranking 20th of 189 countries in the Ease of Doing Business Index. The slow growth Switzerland experienced in the 1990s and the early 2000s has brought greater support for economic reforms and harmonisation with the European Union.Policy Brief: Economic Survey of Switzerland, 2007 (326 KiB), OECD Economic Policy Reforms: Going for Growth 2008 – Switzerland Country Note. Organisation for Economic Co-operation and Development (OECD), 2008, ISBN 978-92-64-04284-1 According to Credit Suisse, only about 37% of residents own their own homes, one of the lowest rates of home ownership in Europe. Housing and food price levels were 171% and 145% of the EU-25 index in 2007, compared to 113% and 104% in Germany.
The Swiss Federal budget had a size of 62.8 billion Swiss francs in 2010, which is equivalent to 11.35% of the country's GDP in that year; however, the regional (canton) budgets and the budgets of the municipalities are not counted as part of the federal budget, and the total rate of government spending is closer to 33.8% of GDP. The main sources of income for the federal government are the value-added tax (33%) and the direct federal tax (29%), and the main areas of expenditure are social welfare and finance & tax. The expenditures of the Swiss Confederation have been growing from 7% of GDP in 1960 to 9.7% in 1990 and to 10.7% in 2010. While the sectors of social welfare and finance & tax have been growing from 35% in 1990 to 48.2% in 2010, a significant reduction of expenditures has been occurring in the sectors of agriculture and national defence, from 26.5% to 12.4% (estimate for the year 2015).Federal Department of Finance. (2012/1). p. 82.
Agricultural protectionism—a rare exception to Switzerland's free trade policies—has contributed to high food prices. Product market liberalisation is lagging behind many EU countries according to the OECD. Nevertheless, domestic purchasing power is one of the best in the world.Domestic purchasing power of wages (68 KiB) Switzerland tops in buying power. Swiss News (1 May 2005).Want the world's best wages? Move to Switzerland reuters.com. Retrieved on 14 January 2010. Apart from agriculture, economic and trade barriers between the European Union and Switzerland are minimal and Switzerland has free trade agreements worldwide. Switzerland is a member of the European Free Trade Association (EFTA).
Education and science
thumb|upright|Some Swiss scientists who played a key role in their discipline (clockwise):Leonhard Euler (mathematics)Louis Agassiz (glaciology)Auguste Piccard (aeronautics)Albert Einstein (physics)
Education in Switzerland is very diverse because the constitution of Switzerland delegates the authority for the school system to the cantons.The Swiss education system swissworld.org, Retrieved on 23 June 2009 There are both public and private schools, including many private international schools. The minimum age for primary school is about six years in all cantons, but most cantons provide a free "children's school" starting at four or five years old. Primary school continues until grade four, five or six, depending on the school. Traditionally, the first foreign language in school was always one of the other national languages, although recently (2000) English was introduced first in a few cantons.
At the end of primary school (or at the beginning of secondary school), pupils are separated according to their capacities in several (often three) sections. The fastest learners are taught advanced classes to be prepared for further studies and the matura, while students who assimilate a little more slowly receive an education more adapted to their needs.
thumb|left|The campus of the Swiss Federal Institute of Technology Zurich (ETHZ).
There are 12 universities in Switzerland, ten of which are maintained at cantonal level and usually offer a range of non-technical subjects. The first university in Switzerland was founded in 1460 in Basel (with a faculty of medicine), beginning a long tradition of chemical and medical research in Switzerland. The largest university in Switzerland is the University of Zurich, with nearly 25,000 students. The Swiss Federal Institute of Technology Zurich (ETHZ) and the University of Zurich are listed 20th and 54th respectively on the 2015 Academic Ranking of World Universities.Academic Ranking of World Universities 2015 Academic Ranking of World Universities. ShanghaiRanking Consultancy. 2015. Retrieved 25 July 2016Top.Universities Retrieved on 30 April 2010The League of European Research Universities (LERU) Retrieved on 26 July 2016
The two institutes sponsored by the federal government are the Swiss Federal Institute of Technology Zurich (ETHZ) in Zürich, founded 1855 and the EPFL in Lausanne, founded 1969 as such, which was formerly an institute associated with the University of Lausanne.In 2008, the ETH Zürich was ranked 15th in the field Natural Sciences and Mathematics by the Shanghai Academic Ranking of World Universities and the EPFL in Lausanne was ranked 18th in the field Engineering/Technology and Computer Sciences by the same ranking.
In addition, there are various Universities of Applied Sciences. In business and management studies, the University of St. Gallen (HSG) is ranked 329th in the world according to the QS World University RankingsRanking by Top Universities and the International Institute for Management Development (IMD) was ranked first in open programmes worldwide by the Financial Times.Financial Times Executive Education Rankings – Open Programs – 2015 Retrieved 8 July 2015 Switzerland has the second highest rate (almost 18% in 2003) of foreign students in tertiary education, after Australia (slightly over 18%).Education at Glance 2005 by the OECD: Percentage of foreign students in tertiary education.
As might befit a country that is home to innumerable international organisations, the Graduate Institute of International and Development Studies, located in Geneva, is not only continental Europe's oldest graduate school of international and development studies, but is also widely believed to be one of its most prestigious.
Many Nobel Prize laureates have been Swiss scientists. They include the world-famous physicist Albert Einstein, who developed his special theory of relativity while working in Bern. More recently Vladimir Prelog, Heinrich Rohrer, Richard Ernst, Edmond Fischer, Rolf Zinkernagel and Kurt Wüthrich have received Nobel Prizes in the sciences. In total, 113 Nobel Prize winners in all fields have a connection to SwitzerlandNobel prizes in non-science categories included. and the Nobel Peace Prize has been awarded nine times to organisations residing in Switzerland.
thumb|The LHC tunnel. CERN is the world's largest laboratory and also the birthplace of the World Wide Web.info.cern.ch Retrieved on 30 April 2010
Geneva and the nearby French department of Ain co-host the world's largest laboratory, CERN, dedicated to particle physics research. Another important research centre is the Paul Scherrer Institute. Notable inventions include lysergic acid diethylamide (LSD), the scanning tunnelling microscope (Nobel prize) and Velcro. Some technologies enabled the exploration of new worlds such as the pressurised balloon of Auguste Piccard and the Bathyscaphe which permitted Jacques Piccard to reach the deepest point of the world's oceans.
Switzerland's space agency, the Swiss Space Office, has been involved in various space technologies and programmes. In addition it was one of the 10 founders of the European Space Agency in 1975 and is the seventh largest contributor to the ESA budget. In the private sector, several companies are involved in the space industry, such as Oerlikon SpaceOerlikon Space at a Glance. www.oerlikon.com or Maxon Motors, which provide spacecraft structures.
Switzerland and the European Union
Switzerland voted against membership in the European Economic Area in a referendum in December 1992 and has since maintained and developed its relationships with the European Union (EU) and European countries through bilateral agreements. In March 2001, the Swiss people refused in a popular vote to start accession negotiations with the EU. In recent years, the Swiss have brought their economic practices largely into conformity with those of the EU in many ways, in an effort to enhance their international competitiveness. The economy grew at 3% in 2010, 1.9% in 2011, and 1% in 2012. Full EU membership is a long-term objective of some in the Swiss government, but there is considerable popular sentiment against this, supported by the conservative SVP party. The western French-speaking areas and the urban regions of the rest of the country tend to be more pro-EU, though these areas comprise far from a significant share of the population.
The government has established an Integration Office under the Department of Foreign Affairs and the Department of Economic Affairs. To minimise the negative consequences of Switzerland's isolation from the rest of Europe, Bern and Brussels signed seven bilateral agreements to further liberalise trade ties. These agreements were signed in 1999 and took effect in 2001. This first series of bilateral agreements included the free movement of persons. A second series covering nine areas was signed in 2004 and has since been ratified, which includes the Schengen Treaty and the Dublin Convention besides others. They continue to discuss further areas for cooperation.
In 2006, Switzerland approved 1 billion francs of supportive investment in the poorer Southern and Central European countries in support of cooperation and positive ties to the EU as a whole. A further referendum will be needed to approve 300 million francs to support Romania and Bulgaria and their recent admission. The Swiss have also been under EU and sometimes international pressure to reduce banking secrecy and to raise tax rates to parity with the EU. Preparatory discussions are being opened in four new areas: opening up the electricity market, participation in the European GNSS project Galileo, cooperating with the European centre for disease prevention and recognising certificates of origin for food products.Switzerland and the European Union europa.admin.ch. Retrieved on 25 January 2010
On 27 November 2008, the interior and justice ministers of the European Union in Brussels announced Switzerland's accession to the Schengen passport-free zone from 12 December 2008. The land border checkpoints remain in place only for goods movements and do not carry out controls on people, though people entering the country had their passports checked until 29 March 2009 if they originated from a Schengen nation.Switzerland in Schengen: end to passport checks euronews.net. Retrieved on 25 January 2010
On 9 February 2014, Swiss voters narrowly approved by 50.3% a ballot initiative launched by the national conservative Swiss People's Party (SVP/UDC) to restrict immigration, thus reintroducing a quota system on the influx of foreigners. This initiative was mostly backed by rural areas (57.6% approval), suburban areas (51.2% approval), and isolated cities (51.3% approval) of Switzerland as well as by a strong majority (69.2% approval) in the canton of Ticino, while metropolitan centres (58.5% rejection) and the French-speaking part of Switzerland (58.5% rejection) rather rejected it. Some news commentators claim that this proposal de facto contradicts the bilateral agreements on the free movement of persons with the European Union.Swiss voters back limit on immigration Herald-Tribune (The Associated Press). 9 February 2014. Retrieved 10 February 2014.Niklaus Nuspliger (Febr.2014). «Der Ball ist im Feld der Schweiz» (in German). Neue Zürcher Zeitung NZZ.ch. Retrieved 10 February 2014.
Energy, infrastructure and environment
thumb|upright=1.4|Switzerland has the tallest dams in Europe, among which the Mauvoisin Dam, in the Alps. Hydroelectricity is the most important domestic source of energy in the country.
Electricity generated in Switzerland is 56% from hydroelectricity and 39% from nuclear power, resulting in a nearly CO2-free electricity-generating network. On 18 May 2003, two anti-nuclear initiatives were turned down: Moratorium Plus, aimed at forbidding the building of new nuclear power plants (41.6% supported and 58.4% opposed), and Electricity Without Nuclear (33.7% supported and 66.3% opposed) after a previous moratorium expired in 2000. However, as a reaction to the Fukushima nuclear disaster, the Swiss government announced in 2011 that it plans to end its use of nuclear energy in the next 2 or 3 decades. In November 2016, Swiss voters rejected a proposal by the green party to accelerate the phaseout of nuclear power (45.8% supported and 54.2% opposed). The Swiss Federal Office of Energy (SFOE) is the office responsible for all questions relating to energy supply and energy use within the Federal Department of Environment, Transport, Energy and Communications (DETEC). The agency is supporting the 2000-watt society initiative to cut the nation's energy use by more than half by the year 2050.
thumb|left|Entrance of the new Lötschberg Base Tunnel, the third-longest railway tunnel in the world, under the old Lötschberg railway line. It is the first completed tunnel of the greater project AlpTransit.
The densest rail network in Europe carries over 596 million passengers annually (as of 2015). In 2015, Swiss citizens travelled more by rail than the citizens of any other country, which makes them the keenest rail users. Virtually 100% of the network is electrified. The vast majority (60%) of the network is operated by the Swiss Federal Railways (SBB CFF FFS). Besides the second largest standard gauge railway company, BLS AG, two railway companies operating on narrow gauge networks are the Rhaetian Railway (RhB) in the southeastern canton of Graubünden, which includes some World Heritage lines,Rhaetian Railway in the Albula / Bernina Landscapes unesco.org and the Matterhorn Gotthard Bahn (MGB), which operates the Glacier Express between Zermatt and St. Moritz/Davos jointly with the RhB. On 31 May 2016 the world's longest and deepest railway tunnel and the first flat, low-level route through the Alps, the Gotthard Base Tunnel, opened as the largest part of the New Railway Link through the Alps (NRLA) project after 17 years of construction. It starts daily passenger service on 11 December 2016, replacing the old, mountainous, scenic route over and through the St Gotthard Massif.
The Swiss road network, managed under a public-private arrangement, is funded by road tolls and vehicle taxes. The Swiss autobahn/autoroute system requires the purchase of a vignette (toll sticker)—which costs 40 Swiss francs—for one calendar year in order to use its roadways, for both passenger cars and trucks. The Swiss autobahn/autoroute network has, relative to the country's small area, one of the highest motorway densities in the world. Zürich Airport is Switzerland's largest international flight gateway, which handled 22.8 million passengers in 2012.anna.aero European Airport Traffic Trends accessed 12 July 2013 The other international airports are Geneva Airport (13.9 million passengers in 2012),Geneva Airport statistics accessed 12 July 2013 EuroAirport Basel-Mulhouse-Freiburg which is located in France, Bern Airport, Lugano Airport, St. Gallen-Altenrhein Airport and Sion Airport. Swiss International Air Lines is the flag carrier of Switzerland. Its main hub is Zürich.
Switzerland has one of the best environmental records among nations in the developed world;Swiss sit atop ranking of greenest nations msnbc.com. Retrieved on 2 December 2009 it was one of the countries to sign the Kyoto Protocol in 1998 and ratified it in 2003. With Mexico and the Republic of Korea it forms the Environmental Integrity Group (EIG).Party grouping unfccc.int. Retrieved on 2 December 2009 The country is heavily active in recycling and anti-littering regulations and is one of the top recyclers in the world, with 66% to 96% of recyclable materials being recycled, depending on the area of the country. The 2014 Global Green Economy Index ranked Switzerland among the top 10 green economies in the world.
In many places in Switzerland, household rubbish disposal is charged for. Rubbish (except dangerous items, batteries, etc.) is only collected if it is in bags which either have a payment sticker attached or are official bags with the surcharge paid at the time of purchase.Stadtreinigung Basel-Stadt —Pricelist bags and stickers This gives a financial incentive to recycle as much as possible, since recycling is free. Illegal disposal of garbage is not tolerated but usually the enforcement of such laws is limited to violations that involve the unlawful disposal of larger volumes at traffic intersections and public areas. Fines for not paying the disposal fee range from CHF 200–500.Richtig Entsorgen (Kanton Basel-Stadt) (1.6 MiB)—Wilde Deponien sind verboten... Für die Beseitigung widerrechtlich deponierter Abfälle wird zudem eine Umtriebsgebühr von Fr. 200.– oder eine Busse erhoben (page 90)
Switzerland also has what is internationally the most efficient system for recycling old newspapers and cardboard materials. Publicly organised collection by volunteers and economical railway transport logistics started as early as 1865 under the leadership of the notable industrialist Hans Caspar Escher (Escher Wyss AG) when the first modern Swiss paper manufacturing plant was built in Biberist.History of paper manufacturing in German, Retrieved 3 May 2011
Demographics
thumb|Population density in Switzerland (2016)
thumb|Percentage of foreigners in Switzerland (2016)
In 2012, Switzerland's population slightly exceeded eight million. In common with other developed countries, the Swiss population increased rapidly during the industrial era, quadrupling between 1800 and 1990. Growth has since stabilised, and like most of Europe, Switzerland faces an ageing population, albeit with consistent annual growth projected into 2035, due mostly to immigration and a fertility rate close to replacement level.
Resident foreigners made up 23.3% of the population, one of the largest proportions in the developed world. Most of these (64%) were from European Union or EFTA countries. Italians were the largest single group of foreigners, with 15.6% of the total foreign population, followed closely by Germans (15.2%), immigrants from Portugal (12.7%), France (5.6%), Serbia (5.3%), Turkey (3.8%), Spain (3.7%), and Austria (2%). Immigrants from Sri Lanka, most of them former Tamil refugees, were the largest group among people of Asian origin (6.3%).
Additionally, the figures from 2012 show that 34.7% of the permanent resident population aged 15 or over in Switzerland (around 2.33 million), had an immigrant background. A third of this population (853,000) held Swiss citizenship. Four fifths of persons with an immigration background were themselves immigrants (first generation foreigners and native-born and naturalised Swiss citizens), whereas one fifth were born in Switzerland (second generation foreigners and native-born and naturalised Swiss citizens).
In the 2000s, domestic and international institutions expressed concern about what was perceived as an increase in xenophobia, particularly in some political campaigns. In reply to one critical report, the Federal Council noted that "racism unfortunately is present in Switzerland", but stated that "the high proportion of foreign citizens in the country, as well as the generally unproblematic integration of foreigners", underlined Switzerland's openness.Definitive report on racism in Switzerland by UN expert humanrights.ch
Languages
thumb|Official languages in Switzerland (2017)
Switzerland has four official languages: principally German (spoken by 63.3% of the population in 2014); French (22.7%) in the west; and Italian (8.1%) in the south. The fourth official language, Romansh (0.5%), is a Romance language spoken locally in the southeastern trilingual canton of Graubünden, and is designated by Article 4 of the Federal Constitution as a national language along with German, French, and Italian, and in Article 70 as an official language if the authorities communicate with persons who speak Romansh. However, federal laws and other official acts do not need to be decreed in Romansh.
In 2013, the languages most spoken at home among permanent residents aged 15 and older were Swiss German (60.1%), French (23.4%), Standard German (10.1%), and Italian (8.4%). More than two-fifths (42.6%) of the permanent resident population indicated speaking more than one language regularly. Other languages spoken at home included English (4.6%), Portuguese (3.5%), Albanian (2.6%), Serbian and Croatian (2.5%), Spanish (2.2%), and Turkish (1.3%).
The federal government is obliged to communicate in the official languages, and in the federal parliament simultaneous translation is provided from and into German, French and Italian.
Aside from the official forms of their respective languages, the four linguistic regions of Switzerland also have their local dialectal forms. The role played by dialects in each linguistic region varies dramatically: in the German-speaking regions, Swiss German dialects have become ever more prevalent since the second half of the 20th century, especially in the media, such as radio and television, and are used as an everyday language, while the Swiss variety of Standard German is almost always used instead of dialect for written communication (c.f. diglossic usage of a language). Conversely, in the French-speaking regions the local dialects have almost disappeared (only 6.3% of the population of Valais, 3.9% of Fribourg, and 3.1% of Jura still spoke dialects at the end of the 20th century), while in the Italian-speaking regions dialects are mostly limited to family settings and casual conversation.
The principal official languages (German, French, and Italian) have terms, not used outside of Switzerland, known as Helvetisms. German Helvetisms are, roughly speaking, a large group of words typical of Swiss Standard German which appear neither in Standard German nor in other German dialects. These include terms from Switzerland's surrounding language cultures (German Billett from French) and terms formed from a similar term in another language (Italian azione used not only as act but also as discount, from German Aktion). The French spoken in Switzerland has similar terms, which are equally known as Helvetisms. The most frequent characteristics of Helvetisms are in vocabulary, phrases, and pronunciation, but certain Helvetisms are also distinctive in syntax and orthography. Duden, one of the prescriptive sources for Standard German, records about 3000 Helvetisms. Current French dictionaries, such as the Petit Larousse, include several hundred Helvetisms.
Learning one of the other national languages at school is compulsory for all Swiss pupils, so many Swiss are supposed to be at least bilingual, especially those belonging to linguistic minority groups.
Health
Swiss citizens are universally required to buy health insurance from private insurance companies, which in turn are required to accept every applicant. While the cost of the system is among the highest, it compares well with other European countries in terms of health outcomes; patients who are citizens have been reported as being, in general, highly satisfied with it. In 2012, life expectancy at birth was 80.4 years for men and 84.7 years for women — the highest in the world. However, spending on health is particularly high at 11.4% of GDP (2010), on par with Germany and France (11.6%) and other European countries, and notably less than spending in the USA (17.6%). From 1990, a steady increase can be observed, reflecting the high costs of the services provided.OECD and WHO survey of Switzerland's health system oecd.org. Retrieved on 29 June 2009 With an ageing population and new healthcare technologies, health spending will likely continue to rise.
Urbanisation
thumb|Urbanisation in the Rhone Valley (outskirts of Sion)
Between two thirds and three quarters of the population live in urban areas.Where people live swissworld.org. Retrieved on 26 June 2009Städte und Agglomerationen unter der Lupe admin.ch. Retrieved on 26 June 2009 Switzerland has gone from a largely rural country to an urban one in just 70 years. Since 1935 urban development has claimed as much of the Swiss landscape as it did during the previous 2,000 years. This urban sprawl affects not only the plateau but also the Jura and the Alpine foothillsSwiss countryside succumbs to urban sprawl swissinfo.ch. Retrieved on 30 June 2009 and there are growing concerns about land use.Enquête représentative sur l'urbanisation de la Suisse (Pronatura) gfs-zh.ch. Retrieved on 30 June 2009 However, since the beginning of the 21st century population growth in urban areas has been higher than in the countryside.
Switzerland has a dense network of cities, where large, medium and small cities are complementary. The plateau is very densely populated with about 450 people per km² and the landscape continually shows signs of human presence.Swiss plateau swissworld.org. Retrieved on 29 June 2009 The weight of the largest metropolitan areas, which are Zürich, Geneva–Lausanne, Basel and Bern, tends to increase. In international comparison the importance of these urban areas is greater than their number of inhabitants suggests. In addition the two main centres of Zürich and Geneva are recognised for their particularly high quality of life.Quality of living mercer.com. Retrieved on 26 June 2009
Religion
{| class="wikitable sortable floatright" style="font-size: 80%"
|+ style="font-size:100%" | Religion (age 15+) in Switzerland – 2014
! Affiliation
! colspan="2"|% of Swiss population
|-
| Christianity
|align=right| |-
| style="text-align:left; text-indent:15px;"| Roman Catholic
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Swiss Reformed
|align=right| |-
| style="text-align:left; text-indent:15px;"| Eastern Orthodox
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Evangelical
|align=right| |-
| style="text-align:left; text-indent:15px;"| Lutheran
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Anglican
|align=right| |-
| style="text-align:left; text-indent:15px;"| Old Catholic or other Christian
|align=right|
|-
| Non-Christian faiths
|align=right| |-
| style="text-align:left; text-indent:15px;"| Muslim
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Buddhist
|align=right| |-
| style="text-align:left; text-indent:15px;"| Hindu
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Jewish
|align=right| |-
| style="text-align:left; text-indent:15px;"| Other non-Christian faith
|align=right|
|-
| Unaffiliated*
|align=right| |-
| Total || '|-
| *of whom: 42% theistic/ietsistic, 32% atheistic, 25% agnostic|}
Switzerland has no official state religion, though most of the cantons (except Geneva and Neuchâtel) recognise official churches, which are either the Catholic Church or the Swiss Reformed Church. These churches, and in some cantons also the Old Catholic Church and Jewish congregations, are financed by official taxation of adherents.
Christianity is the predominant religion of Switzerland (about 71% of resident population and 75% of Swiss citizens), divided between the Catholic Church (38.21% of the population), the Swiss Reformed Church (26.93%), further Protestant churches (2.89%) and other Christian denominations (2.79%). There has been a recent rise in Evangelicalism. Immigration has established Islam (4.95%) and Eastern Orthodoxy (around 2%) as sizeable minority religions. According to a 2015 poll by Gallup International, 12% of Swiss people self-identified as "convinced atheists."
As of the 2000 census other Christian minority communities included Neo-Pietism (0.44%), Pentecostalism (0.28%, mostly incorporated in the Schweizer Pfingstmission), Methodism (0.13%), the New Apostolic Church (0.45%), Jehovah's Witnesses (0.28%), other Protestant denominations (0.20%), the Old Catholic Church (0.18%), other Christian denominations (0.20%). Non-Christian religions are Hinduism (0.38%), Buddhism (0.29%), Judaism (0.25%) and others (0.11%); 4.3% did not make a statement. 21.4% in 2012 declared themselves as unchurched i.e. not affiliated with any church or other religious body (Agnostic, Atheist, or just not related to any official religion).
The country was historically about evenly balanced between Catholic and Protestant, with a complex patchwork of majorities over most of the country. Geneva converted to Protestantism in 1536, just before John Calvin arrived there. It became known internationally as the Protestant Rome, being the base for such reformers as Theodore Beza or William Farel. Zürich became another stronghold around the same time, with Huldrych Zwingli and Heinrich Bullinger taking the lead there. One canton, Appenzell, was officially divided into Catholic and Protestant sections in 1597. The larger cities and their cantons (Bern, Geneva, Lausanne, Zürich and Basel) used to be predominantly Protestant. Central Switzerland, the Valais, the Ticino, Appenzell Innerrhoden, the Jura, and Fribourg are traditionally Catholic. The Swiss Constitution of 1848, shaped by the recent clashes between Catholic and Protestant cantons that culminated in the Sonderbundskrieg, consciously defines a consociational state, allowing the peaceful co-existence of Catholics and Protestants. A 1980 initiative calling for the complete separation of church and state was rejected by 78.9% of the voters.Volksabstimmung vom 2. März 1980 admin.ch. Retrieved on 2010 Some traditionally Protestant cantons and cities nowadays have a slight Catholic majority, not because the Catholic churches were gaining members, quite the contrary, but only because since about 1970 a steadily growing minority has ceased to be affiliated with any church or other religious body (21.4% in Switzerland, 2012), especially in traditionally Protestant regions, such as Basel-City (42%), the canton of Neuchâtel (38%), the canton of Geneva (35%), the canton of Vaud (26%), or Zürich city (city: >25%; canton: 23%).
Culture
thumb|Alphorn concert in Vals
Three of Europe's major languages are official in Switzerland. Swiss culture is characterised by diversity, which is reflected in a wide range of traditional customs.Swiss culture swissworld.org. Retrieved on 1 December 2009 A region may be in some ways strongly culturally connected to the neighbouring country that shares its language, the country itself being rooted in western European culture.European Year of Intercultural Dialogue Dr Michael Reiterer. Retrieved on 1 December 2009 The linguistically isolated Romansh culture in Graubünden in eastern Switzerland constitutes an exception; it survives only in the upper valleys of the Rhine and the Inn and strives to maintain its rare linguistic tradition.
Switzerland is home to many notable contributors to literature, art, architecture, music and the sciences. In addition, the country has attracted a number of creative people during times of unrest or war in Europe.Switzerland: culture traveldocs.com. Retrieved on 1 December 2009
Some 1,000 museums are distributed through the country; the number has more than tripled since 1950.Museums swissworld.org. Retrieved on 2 December 2009 Among the most important cultural performances held annually are the Paléo Festival, the Lucerne Festival,Lucerne Festival nytimes.com. Retrieved on 15 December 2010 the Montreux Jazz Festival,Montreux Jazz Festival Retrieved on 26 August 2013 the Locarno International Film Festival and Art Basel.Film festivals swissworld.org. Retrieved on 2 December 2009
Alpine symbolism has played an essential role in shaping the history of the country and the Swiss national identity.Mountains and hedgehogs. swissworld.org. Retrieved on 1 December 2009 Nowadays some concentrated mountain areas have a strong, highly energetic ski-resort culture in winter and a hiking (German: das Wandern) or mountain-biking culture in summer. Other areas have a year-round recreational culture that caters to tourism, though the quieter seasons are spring and autumn, when there are fewer visitors. A traditional farmer and herder culture also predominates in many areas, and small farms are omnipresent outside the cities. Folk art is kept alive in organisations all over the country. In Switzerland it is mostly expressed in music, dance, poetry, wood carving and embroidery. The alphorn, a trumpet-like musical instrument made of wood, has become, alongside yodeling and the accordion, an epitome of traditional Swiss music.Folk music swissworld.org. Retrieved on 2 December 2009Culture of Switzerland europe-cities.com. Retrieved on 14 December 2009
Literature
thumb|upright|Jean-Jacques Rousseau was not only a writer but also an influential philosopher of the eighteenth century (his statue in Geneva).Art in literature cp-pc.ca. Retrieved on 14 December 2009
As the Confederation, from its foundation in 1291, was almost exclusively composed of German-speaking regions, the earliest forms of literature are in German. In the 18th century, French became the fashionable language in Bern and elsewhere, while the influence of the French-speaking allies and subject lands was more marked than before.From Encyclopædia Britannica Eleventh Edition, Swiss literature
Among the classics of Swiss German literature are Jeremias Gotthelf (1797–1854) and Gottfried Keller (1819–1890). The undisputed giants of 20th century Swiss literature are Max Frisch (1911–91) and Friedrich Dürrenmatt (1921–90), whose repertoire includes Die Physiker (The Physicists) and Das Versprechen (The Pledge), released in 2001 as a Hollywood film.Literature swissworld.org, Retrieved on 23 June 2009
Prominent French-speaking writers were Jean-Jacques Rousseau (1712–1778) and Germaine de Staël (1766–1817). More recent authors include Charles Ferdinand Ramuz (1878–1947), whose novels describe the lives of peasants and mountain dwellers set in a harsh environment, and Blaise Cendrars (born Frédéric Sauser, 1887–1961). Italian- and Romansh-speaking authors have also contributed, but in a more modest way given their small numbers.
Probably the most famous Swiss literary creation, Heidi, the story of an orphan girl who lives with her grandfather in the Alps, is one of the most popular children's books ever and has come to be a symbol of Switzerland. Her creator, Johanna Spyri (1827–1901), wrote a number of other books on similar themes.
Media
The freedom of the press and the right to free expression are guaranteed in the federal constitution of Switzerland.Press and the media ch.ch. Retrieved on 25 June 2009 The Swiss News Agency (SNA) broadcasts information around-the-clock in three of the four national languages—on politics, economics, society and culture. The SNA supplies almost all Swiss media and a couple of dozen foreign media services with its news.
Switzerland has historically boasted the greatest number of newspaper titles published in proportion to its population and size.Press in Switzerland pressreference.com. Retrieved on 25 June 2009 The most influential newspapers are the German-language Tages-Anzeiger and Neue Zürcher Zeitung NZZ, and the French-language Le Temps, but almost every city has at least one local newspaper. The cultural diversity accounts for a large number of newspapers.
The government exerts greater control over broadcast media than print media, especially due to finance and licensing. The Swiss Broadcasting Corporation, whose name was recently changed to SRG SSR, is charged with the production and broadcast of radio and television programmes. SRG SSR studios are distributed throughout the various language regions. Radio content is produced in six central and four regional studios while the television programmes are produced in Geneva, Zürich and Lugano. An extensive cable network also allows most Swiss to access the programmes from neighbouring countries.
Sports
thumb|Ski area over the glaciers of Saas-Fee
Skiing, snowboarding and mountaineering are among the most popular sports in Switzerland, the nature of the country being particularly suited for such activities.Sport in Switzerland europe-cities.com. Retrieved on 14 December 2009 Winter sports have been practised by natives and tourists since the second half of the 19th century, following the invention of bobsleigh in St. Moritz.A brief history of bobsleigh fibt.com. Retrieved on 2 November 2009 The first world ski championships were held in Mürren (1931) and St. Moritz (1934). The latter town hosted the second Winter Olympic Games in 1928 and the fifth edition in 1948. Among the most successful skiers and world champions are Pirmin Zurbriggen and Didier Cuche.
thumb|left|Spengler Cup in Davos
thumb|left| The Switzerland national football team lining up against Argentina in 2012
The most prominently watched sports events in Switzerland are football, ice hockey, Alpine skiing, "Schwingen", and tennis.
The headquarters of international football's and ice hockey's governing bodies, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation (IIHF), are located in Zürich. Many other international sports federations also have their headquarters in Switzerland; for example, the International Olympic Committee (IOC), the IOC's Olympic Museum and the Court of Arbitration for Sport (CAS) are located in Lausanne.
Switzerland hosted the 1954 FIFA World Cup, and was the joint host, with Austria, of the Euro 2008 tournament. The Swiss Super League is the nation's professional football club league. Europe's highest-altitude football pitch is located in Switzerland and is named the Ottmar Hitzfeld Stadium.
Many Swiss also follow ice hockey and support one of the 12 clubs of League A, which is the most attended league in Europe. In 2009, Switzerland hosted the IIHF World Championship for the 10th time, and the national team finished as runner-up at the 2013 World Championship. The numerous lakes make Switzerland an attractive place for sailing. The largest, Lake Geneva, is the home of the sailing team Alinghi, which was the first European team to win the America's Cup in 2003 and which successfully defended the title in 2007. Tennis has become an increasingly popular sport, and Swiss players such as Martina Hingis, Roger Federer, and most recently, Stanislas Wawrinka have won multiple Grand Slams.
thumb|upright|In a nine-year span, Roger Federer has won a record 17 Grand Slam singles titles, making him the most successful men's tennis player ever.Roger Federer's Grand Slam Titles sportsillustrated.cnn.com. Retrieved on 14 June 2010
Motorsport racecourses and events were banned in Switzerland following the 1955 Le Mans disaster, with the exception of events such as hillclimbing. During this period, the country still produced successful racing drivers such as Clay Regazzoni, Sébastien Buemi, Jo Siffert, Dominique Aegerter, successful World Touring Car Championship driver Alain Menu, 2014 24 Hours of Le Mans winner Marcel Fässler and 2015 24 Hours Nürburgring winner Nico Müller. Switzerland also won the A1GP World Cup of Motorsport in 2007–08 with driver Neel Jani. Swiss motorcycle racer Thomas Lüthi won the 2005 MotoGP World Championship in the 125cc category. In June 2007 the Swiss National Council, one house of the Federal Assembly of Switzerland, voted to overturn the ban; however, the other house, the Swiss Council of States, rejected the change and the ban remains in place.Wikinews:Switzerland lifts ban on motor racing
Traditional sports include Swiss wrestling or "Schwingen". It is an old tradition from the rural central cantons and considered the national sport by some. Hornussen is another indigenous Swiss sport, which is like a cross between baseball and golf.Hornussen swissroots.org. Retrieved on 25 January 2010 Steinstossen is the Swiss variant of stone put, a competition in throwing a heavy stone. Practised only among the alpine population since prehistoric times, it is recorded to have taken place in Basel in the 13th century. It is also central to the Unspunnenfest, first held in 1805, with its symbol the 83.5 kg stone named Unspunnenstein.Tradition and history interlaken.ch. Retrieved on 25 January 2010
Cuisine
thumb|left|200px|Fondue is melted cheese, into which bread is dipped
The cuisine of Switzerland is multifaceted. While some dishes such as fondue, raclette or rösti are omnipresent throughout the country, each region has developed its own gastronomy according to differences in climate and language.Zürcher Geschnetzeltes Zürcher Geschnetzeltes, engl.: sliced meat Zürich style
Flavors of Switzerland theworldwidegourmet.com. Retrieved on 24 June 2009 Traditional Swiss cuisine uses ingredients similar to those in other European countries, as well as unique dairy products and cheeses such as Gruyère or Emmental, produced in the valleys of Gruyères and Emmental. The number of fine-dining establishments is high, particularly in western Switzerland.Michelin Guide Switzerland 2010 attests to the high quality of gourmet cooking with one new 2 star restaurant and 8 new one star Press information, Michelin. Retrieved on 14 December 2009Swiss region serves up food with star power usatoday.com. Retrieved on 14 December 2009
Chocolate has been made in Switzerland since the 18th century, but it gained its reputation at the end of the 19th century with the invention of modern techniques such as conching and tempering, which enabled its production at a high level of quality. The invention of solid milk chocolate in 1875 by Daniel Peter was another breakthrough. The Swiss are the world's largest consumers of chocolate.Chocolate swissworld.org. Retrieved on 24 June 2009Swiss Chocolate germanworldonline.com (4 December 2009). Retrieved on 14 June 2010
Due to the popularisation of processed foods at the end of the 19th century, Swiss health food pioneer Maximilian Bircher-Benner created the first nutrition-based therapy in the form of the well-known rolled-oats cereal dish called Birchermüesli.
The most popular alcoholic drink in Switzerland is wine. Switzerland is notable for the variety of grapes grown because of the large variations in terroirs, with their specific mixes of soil, air, altitude and light. Swiss wine is produced mainly in Valais, Vaud (Lavaux), Geneva and Ticino, with a small majority of white wines. Vineyards have been cultivated in Switzerland since the Roman era, even though certain traces can be found of a more ancient origin. The most widespread varieties are the Chasselas (called Fendant in Valais) and Pinot noir. The Merlot is the main variety produced in Ticino.Wine-producing Switzerland in short swisswine.ch. Retrieved on 24 June 2009Table 38. Top wine consuming nations per capita, 2006 winebiz.com. Retrieved on 14 June 2010
See also
Index of Switzerland-related articles
Outline of Switzerland
Notes
References
External links
Government
The Federal Authorities of the Swiss Confederation
The Federal Council
Switzerland's information portal
Swiss Statistics at the Swiss Federal Statistical Office.
Practical information
Reference
Switzerland at UCB Libraries GovPubs.
Switzerland entry at Encyclopædia Britannica.
Switzerland profile from the BBC News.
Geography
Federal Office of Topography
Searchable interactive map (search.ch)
Travel
Tourism
History
Historical Dictionary of Switzerland
Swiss American Historical Society
Languages
swiss-linguistics.com, a portal on current linguistic research in Switzerland.
News media
Daily newspapers
Tages-Anzeiger
Neue Zürcher Zeitung
Le Temps
Corriere Del Ticino
swissinfo.ch, Swiss News – Worldwide
Education
Universities in Switzerland
The Swiss School System
Science, research, and technology
State Secretariat for Education and Research, SER
The Swiss Portal for Research and Innovation (private source).
Category:Central European countries
Category:Federal republics
Category:French-speaking countries and territories
Category:German-speaking countries and territories
Category:Italian-speaking countries and territories
Category:Landlocked countries
Category:Liberal democracies
Category:Member states of the Council of Europe
Category:Member states of the Organisation internationale de la Francophonie
Category:Member states of the United Nations
Beyoncé
Beyoncé Giselle Knowles-Carter (born September 4, 1981) is an American singer, lyricist and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy".
Following the disbandment of Destiny's Child in 2006, she released her second solo album, B'Day (2006), which contained hits "Déjà Vu", "Irreplaceable", and "Beautiful Liar". Beyoncé also ventured into acting, with Dreamgirls (2006) and starring roles in The Pink Panther (2006) and Obsessed (2009). Her marriage to rapper Jay Z and portrayal of Etta James in Cadillac Records (2008) influenced her third album, I Am... Sasha Fierce (2008), which saw the birth of her alter-ego Sasha Fierce and earned a record-setting six Grammy Awards in 2010, including Song of the Year for "Single Ladies (Put a Ring on It)". Beyoncé took a hiatus from music in 2010 and took over management of her career; her fourth album 4 (2011) was subsequently mellower in tone, exploring 1970s funk, 1980s pop, and 1990s soul. Her critically acclaimed fifth album, Beyoncé (2013), was distinguished from previous releases by its experimental production and exploration of darker themes. With the release of Lemonade (2016), Beyoncé became the first artist to have their first six studio albums debut at number one on the Billboard 200 chart.
Throughout a career spanning 19 years, she has sold over 100 million records as a solo artist, and a further 60 million with Destiny's Child, making her one of the best-selling music artists of all time. She has won 20 Grammy Awards and is the most nominated woman in the award's history. She is the most awarded artist at the MTV Video Music Awards, with 24 wins. The Recording Industry Association of America recognized her as the Top Certified Artist in America of the 2000s decade. In 2009, Billboard named her the Top Radio Songs Artist of the Decade and the Top Female Artist of the 2000s, and in 2011 the magazine presented her with its Millennium Award. In 2014, she became the highest-paid black musician in history and was listed among Time's 100 most influential people in the world for a second year in a row. Forbes listed her as the most powerful female in entertainment in 2015, and in 2016 she placed sixth in Time's Person of the Year ranking.
Early life
Beyoncé Giselle Knowles was born in Houston, Texas, to Celestine "Tina" Knowles (née Beyincé), a hairdresser and salon owner, and Mathew Knowles, a Xerox sales manager. Beyoncé's name is a tribute to her mother's maiden name. Beyoncé's younger sister Solange is also a singer and a former member of Destiny's Child. Solange and Beyoncé are the first sisters to have both had No. 1 albums. Mathew is African American, while Tina is of Louisiana Creole descent (African, Native American, and French). Through her mother, Beyoncé is a descendant of Acadian leader Joseph Broussard.
Beyoncé attended St. Mary's Montessori School in Houston, where she enrolled in dance classes. Her singing talent was discovered when dance instructor Darlette Johnson began humming a song and she finished it, able to hit the high-pitched notes. Beyoncé's interest in music and performing continued after winning a school talent show at age seven, singing John Lennon's "Imagine" to beat 15- and 16-year-olds. In the fall of 1990, Beyoncé enrolled in Parker Elementary School, a music magnet school in Houston, where she performed with the school's choir. She also attended the High School for the Performing and Visual Arts and later Alief Elsik High School. Beyoncé was also a member of the choir at St. John's United Methodist Church as a soloist for two years.
When Beyoncé was eight, she and childhood friend Kelly Rowland met LaTavia Roberson while in an audition for an all-girl entertainment group. They were placed into a group with three other girls as Girl's Tyme, and rapped and danced on the talent show circuit in Houston. After seeing the group, R&B producer Arne Frager brought them to his Northern California studio and placed them in Star Search, the largest talent show on national TV at the time. Girl's Tyme failed to win, and Beyoncé later said the song they performed was not good.
In 1995 Beyoncé's father resigned from his job to manage the group. The move reduced Beyoncé's family's income by half, and her parents were forced to move into separate apartments. Mathew cut the original line-up to four, and the group continued performing as an opening act for other established R&B girl groups. The girls auditioned before record labels and were finally signed to Elektra Records, moving briefly to Atlanta to work on their first recording, only to be cut by the company. This put further strain on the family, and Beyoncé's parents separated. On October 5, 1995, Dwayne Wiggins's Grass Roots Entertainment signed the group. In 1996, the girls began recording their debut album under an agreement with Sony Music, the Knowles family reunited, and shortly after, the group got a contract with Columbia Records.
Career
1997–2002: Destiny's Child
The group changed their name to Destiny's Child in 1996, based upon a passage in the Book of Isaiah. In 1997, Destiny's Child released their major label debut song "Killing Time" on the soundtrack to the 1997 film, Men in Black. The following year, the group released their self-titled debut album, scoring their first major hit "No, No, No". The album established the group as a viable act in the music industry, with moderate sales and winning the group three Soul Train Lady of Soul Awards for Best R&B/Soul Album of the Year, Best R&B/Soul or Rap New Artist, and Best R&B/Soul Single for "No, No, No". The group released their Multi-Platinum second album The Writing's on the Wall in 1999. The record features some of the group's most widely known songs such as "Bills, Bills, Bills", the group's first number-one single, "Jumpin' Jumpin'" and "Say My Name", which became their most successful song at the time, and would remain one of their signature songs. "Say My Name" won the Best R&B Performance by a Duo or Group with Vocals and the Best R&B Song at the 43rd Annual Grammy Awards. The Writing's on the Wall sold more than eight million copies worldwide. During this time, Beyoncé recorded a duet with Marc Nelson, an original member of Boyz II Men, on the song "After All Is Said and Done" for the soundtrack to the 1999 film, The Best Man.
LeToya Luckett and Roberson became unhappy with Mathew's managing of the band and eventually were replaced by Farrah Franklin and Michelle Williams. Beyoncé experienced depression following the split with Luckett and Roberson after being publicly blamed by the media, critics, and blogs for its cause. Her long-standing boyfriend left her at this time. The depression was so severe it lasted for a couple of years, during which she occasionally kept herself in her bedroom for days and refused to eat anything. Beyoncé stated that she struggled to speak about her depression because Destiny's Child had just won their first Grammy Award and she feared no one would take her seriously. Beyoncé would later speak of her mother as the person who helped her fight it. Franklin was dismissed, leaving just Beyoncé, Rowland, and Williams.
The remaining band members recorded "Independent Women Part I", which appeared on the soundtrack to the 2000 film Charlie's Angels. It became their best-charting single, topping the U.S. Billboard Hot 100 chart for eleven consecutive weeks. In early 2001, while Destiny's Child was completing their third album, Beyoncé landed a major role in the MTV made-for-television film Carmen: A Hip Hopera, starring alongside American actor Mekhi Phifer. Set in Philadelphia, the film is a modern interpretation of the 19th-century opera Carmen by French composer Georges Bizet. When the third album Survivor was released in May 2001, Luckett and Roberson filed a lawsuit claiming that the songs were aimed at them. The album debuted at number one on the U.S. Billboard 200, with first-week sales of 663,000 copies. The album spawned other number-one hits, "Bootylicious" and the title track, "Survivor", the latter of which earned the group a Grammy Award for Best R&B Performance by a Duo or Group with Vocals. After releasing their holiday album 8 Days of Christmas in October 2001, the group announced a hiatus to further pursue solo careers.
In July 2002, Beyoncé continued her acting career playing Foxxy Cleopatra alongside Mike Myers in the comedy film Austin Powers in Goldmember, which spent its first weekend atop the US box office and grossed $73 million. Beyoncé released "Work It Out" as the lead single from its soundtrack album; the song entered the top ten in the UK, Norway, and Belgium. In 2003, Beyoncé starred opposite Cuba Gooding, Jr., in the musical comedy The Fighting Temptations as Lilly, a single mother with whom Gooding's character falls in love. The film received mixed reviews from critics but grossed $30 million in the U.S. Beyoncé released "Fighting Temptation", with Missy Elliott, MC Lyte, and Free, as the lead single from the film's soundtrack album; it was also used to promote the film. Another of Beyoncé's contributions to the soundtrack, "Summertime", fared better on the US charts.
2003–2007: Dangerously in Love and B'Day
thumb|Beyoncé performing "Baby Boy", which spent nine consecutive weeks at number one on the Billboard Hot 100 chart| alt=A woman, flanked by two male dancers, holds a microphone in one hand as she dances
Beyoncé's first solo recording was a feature on Jay Z's "'03 Bonnie & Clyde" that was released in October 2002, peaking at number four on the U.S. Billboard Hot 100 chart. Her first solo album Dangerously in Love was released on June 24, 2003, after Michelle Williams and Kelly Rowland had released their solo efforts. The album sold 317,000 copies in its first week, debuted atop the Billboard 200, and has since sold 11 million copies worldwide. The album's lead single, "Crazy in Love", featuring Jay Z, became Beyoncé's first number-one single as a solo artist in the US. The single "Baby Boy" also reached number one, and singles, "Me, Myself and I" and "Naughty Girl", both reached the top-five. The album earned Beyoncé a then record-tying five awards at the 46th Annual Grammy Awards; Best Contemporary R&B Album, Best Female R&B Vocal Performance for "Dangerously in Love 2", Best R&B Song and Best Rap/Sung Collaboration for "Crazy in Love", and Best R&B Performance by a Duo or Group with Vocals for "The Closer I Get to You" with Luther Vandross.
left|thumb|upright|Beyoncé performing "Listen" from the motion picture Dreamgirls during The Beyoncé Experience tour. She received a Golden Globe nomination for her performance as Deena Jones in the film.|alt=A woman stands with a microphone
In November 2003, she embarked on the Dangerously in Love Tour in Europe and later toured alongside Missy Elliott and Alicia Keys for the Verizon Ladies First Tour in North America. On February 1, 2004, Beyoncé performed the American national anthem at Super Bowl XXXVIII, at the Reliant Stadium in Houston, Texas. After the release of Dangerously in Love, Beyoncé had planned to produce a follow-up album using several of the left-over tracks. However, this was put on hold so she could concentrate on recording Destiny Fulfilled, the final studio album by Destiny's Child. Released on November 15, 2004, in the US and peaking at number two on the Billboard 200, Destiny Fulfilled included the singles "Lose My Breath" and "Soldier", which reached the top five on the Billboard Hot 100 chart. Destiny's Child embarked on a worldwide concert tour, Destiny Fulfilled... and Lovin' It and during the last stop of their European tour, in Barcelona on June 11, 2005, Rowland announced that Destiny's Child would disband following the North American leg of the tour. The group released their first compilation album Number 1's on October 25, 2005, in the US and accepted a star on the Hollywood Walk of Fame in March 2006.
Beyoncé's second solo album B'Day was released on September 4, 2006, in the US, to coincide with her twenty-fifth birthday. It sold 541,000 copies in its first week and debuted atop the Billboard 200, becoming Beyoncé's second consecutive number-one album in the United States. The album's lead single "Déjà Vu", featuring Jay Z, reached the top five on the Billboard Hot 100 chart. The second international single "Irreplaceable" was a commercial success worldwide, reaching number one in Australia, Hungary, Ireland, New Zealand and the United States. B'Day also produced three other singles; "Ring the Alarm", "Get Me Bodied", and "Green Light" (released in the United Kingdom only).
Her first acting role of 2006 was in the comedy film The Pink Panther, starring opposite Steve Martin; the film grossed $158.8 million at the box office worldwide. Her second film Dreamgirls, the film version of the 1981 Broadway musical loosely based on The Supremes, received acclaim from critics and grossed $154 million internationally. In it, she starred opposite Jennifer Hudson, Jamie Foxx, and Eddie Murphy, playing a pop singer based on Diana Ross. To promote the film, Beyoncé released "Listen" as the lead single from the soundtrack album. In April 2007, Beyoncé embarked on The Beyoncé Experience, her first worldwide concert tour, visiting 97 venues and grossing over $24 million. Beyoncé conducted pre-concert food donation drives during six major stops in conjunction with her pastor at St. John's and America's Second Harvest. At the same time, B'Day was re-released with five additional songs, including her duet with Shakira, "Beautiful Liar".
2008–2010: Marriage, I Am... Sasha Fierce, and films
thumb|upright|Beyoncé performing "Single Ladies (Put a Ring on It)" during the I Am... World Tour. The song reached number one on the Billboard Hot 100, earned the Grammy Award for Song of the Year and spawned the Internet's first major dance craze.|alt=A woman stands looking out to a crowd
On April 4, 2008, Beyoncé married Jay Z. She publicly revealed their marriage in a video montage at the listening party for her third studio album, I Am... Sasha Fierce, in Manhattan's Sony Club on October 22, 2008. I Am... Sasha Fierce was released on November 18, 2008, in the United States. The album formally introduces Beyoncé's alter ego Sasha Fierce, conceived during the making of her 2003 single "Crazy in Love". It was met with generally mediocre reviews from critics, but sold 482,000 copies in its first week, debuting atop the Billboard 200 and giving Beyoncé her third consecutive number-one album in the US. The album featured the number-one song "Single Ladies (Put a Ring on It)" and the top-five songs "If I Were a Boy" and "Halo". "Halo" became the longest-running Hot 100 single of her career, and its success in the US helped Beyoncé attain more top-ten singles on the list than any other woman during the 2000s. The album also included the successful "Sweet Dreams", and the singles "Diva", "Ego", "Broken-Hearted Girl" and "Video Phone". The music video for "Single Ladies" has been parodied and imitated around the world, spawning the "first major dance craze" of the Internet age according to the Toronto Star. The video has won several awards, including Best Video at the 2009 MTV Europe Music Awards, the 2009 Scottish MOBO Awards, and the 2009 BET Awards. At the 2009 MTV Video Music Awards, the video was nominated for nine awards, ultimately winning three including Video of the Year. Its failure to win the Best Female Video category, which went to American country pop singer Taylor Swift's "You Belong with Me", led to Kanye West interrupting the ceremony and Beyoncé improvising a re-presentation of Swift's award during her own acceptance speech. In March 2009, Beyoncé embarked on the I Am... World Tour, her second headlining worldwide concert tour, consisting of 108 shows and grossing $119.5 million.
Beyoncé further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic Cadillac Records. Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. Beyoncé donated her entire salary from the film to Phoenix House, an organization of rehabilitation centers for heroin addicts around the country. On January 20, 2009, Beyoncé performed James' "At Last" at the First Couple's first inaugural ball. Beyoncé starred opposite Ali Larter and Idris Elba in the thriller, Obsessed. She played Sharon Charles, a mother and wife who learns of a woman's obsessive behavior over her husband. Although the film received negative reviews from critics, the movie did well at the US box office, grossing $68 million—$60 million more than Cadillac Records—on a budget of $20 million. The fight scene finale between Sharon and the character played by Ali Larter also won the 2010 MTV Movie Award for Best Fight.
At the 52nd Annual Grammy Awards, Beyoncé received ten nominations, including Album of the Year for I Am... Sasha Fierce, Record of the Year for "Halo", and Song of the Year for "Single Ladies (Put a Ring on It)", among others. She tied with Lauryn Hill for most Grammy nominations in a single year by a female artist. Knowles went on to win six of those awards, breaking the record for the most Grammy Awards won in a single night by a female artist, a record she had previously tied in 2004. In 2010, Beyoncé was featured on Lady Gaga's single "Telephone" and appeared in its music video. The song topped the US Pop Songs chart, becoming the sixth number-one for both Beyoncé and Gaga, tying them with Mariah Carey for most number-ones since the Nielsen Top 40 airplay chart launched in 1992. "Telephone" received a Grammy Award nomination for Best Pop Collaboration with Vocals.
Beyoncé announced a hiatus from her music career in January 2010, heeding her mother's advice, "to live life, to be inspired by things again". During the break she and her father parted ways as business partners. Beyoncé's musical break lasted nine months and saw her visit multiple European cities, the Great Wall of China, the Egyptian pyramids, Australia, English music festivals and various museums and ballet performances.
2011–2015: 4 and Beyoncé
left|thumb|upright|Beyoncé's sound became mellower with 2011's 4 which focused on traditional R&B styles. She performed the album during her 4 Intimate Nights with Beyoncé residency show in August 2011|alt=The upper body of a woman is shown as she sings into a microphone
On June 26, 2011, she became the first solo female artist to headline the main Pyramid stage at the 2011 Glastonbury Festival in over twenty years. Her fourth studio album 4 was released two days later in the US. 4 sold 310,000 copies in its first week and debuted atop the Billboard 200 chart, giving Beyoncé her fourth consecutive number-one album in the US. The album was preceded by two of its singles, "Run the World (Girls)" and "Best Thing I Never Had". The fourth single "Love on Top" spent seven consecutive weeks at number one on the Hot R&B/Hip-Hop Songs chart, while peaking at number 20 on the Billboard Hot 100, the highest peak from the album. 4 also produced four other singles: "Party", "Countdown", "I Care" and "End of Time". "Eat, Play, Love", a cover story written by Beyoncé for Essence that detailed her 2010 career break, won her a writing award from the New York Association of Black Journalists. In late 2011, she took the stage at New York's Roseland Ballroom for four nights of special performances: the 4 Intimate Nights with Beyoncé concerts saw her perform the 4 album to standing-room-only crowds. On August 1, 2011, the album was certified platinum by the Recording Industry Association of America (RIAA), having shipped 1 million copies to retail stores. By December 2015, it had reached sales of 1.5 million copies in the US.
On January 7, 2012, Beyoncé gave birth to her first child, a daughter, Blue Ivy Carter, at Lenox Hill Hospital in New York. Five months later, she performed for four nights at Revel Atlantic City's Ovation Hall to celebrate the resort's opening, her first performances since giving birth to Blue Ivy.
In January 2013, Destiny's Child released Love Songs, a compilation album of the romance-themed songs from their previous albums and a newly recorded track, "Nuclear". Beyoncé performed the American national anthem singing along with a pre-recorded track at President Obama's second inauguration in Washington, D.C. The following month, Beyoncé performed at the Super Bowl XLVII halftime show, held at the Mercedes-Benz Superdome in New Orleans. The performance stands as the second most tweeted about moment in history at 268,000 tweets per minute. At the 55th Annual Grammy Awards, Beyoncé won for Best Traditional R&B Performance for "Love on Top". Her feature-length documentary film, Life Is But a Dream, first aired on HBO on February 16, 2013.
thumb|right|upright|Beyoncé performing during The Mrs. Carter Show World Tour in 2013. The tour is one of the highest grossing tours of the decade.
Beyoncé embarked on The Mrs. Carter Show World Tour on April 15 in Belgrade, Serbia; the tour included 132 dates that ran through to March 2014. It became the most successful tour of her career and one of the most successful tours of all time. In May, Beyoncé's cover of Amy Winehouse's "Back to Black" with André 3000 on The Great Gatsby soundtrack was released. Beyoncé voiced Queen Tara in the 3D CGI animated film, Epic, released by 20th Century Fox on May 24, and recorded an original song for the film, "Rise Up", co-written with Sia.
On December 13, 2013, Beyoncé unexpectedly released her eponymous fifth studio album on the iTunes Store without any prior announcement or promotion. The album debuted atop the Billboard 200 chart, giving Beyoncé her fifth consecutive number-one album in the US. This made her the first woman in the chart's history to have her first five studio albums debut at number one. Beyoncé received critical acclaim and commercial success, selling one million digital copies worldwide in six days. Musically an electro-R&B album, it concerns darker themes previously unexplored in her work, such as "bulimia, postnatal depression [and] the fears and insecurities of marriage and motherhood". The single "Drunk in Love", featuring Jay Z, peaked at number two on the Billboard Hot 100 chart. In April 2014, after much speculation, Beyoncé and Jay Z officially announced their On the Run Tour. It served as the couple's first co-headlining stadium tour together. On August 24, 2014, she received the Video Vanguard Award at the 2014 MTV Video Music Awards. Knowles also took home three competitive awards: Best Video with a Social Message and Best Cinematography for "Pretty Hurts", as well as Best Collaboration for "Drunk in Love". In November, Forbes reported that Beyoncé was the top-earning woman in music for the second year in a row, earning $115 million in the year, more than double her earnings in 2013. Beyoncé was reissued with new material in three forms: as an extended play, a box set, and a full platinum edition. According to the International Federation of the Phonographic Industry (IFPI), in the last 19 days of 2013, the album sold 2.3 million units worldwide, becoming the tenth best-selling album of 2013. The album also went on to become the twentieth best-selling album of 2014. The album had sold over 5 million copies worldwide as of November 2014 and had generated over 1 billion streams as of March 2015.
At the 57th Annual Grammy Awards in February 2015, Beyoncé was nominated for six awards, ultimately winning three: Best R&B Performance and Best R&B Song for "Drunk in Love", and Best Surround Sound Album for Beyoncé. She was nominated for Album of the Year, but the award went to Beck for his album Morning Phase.
2016–present: Lemonade
thumb|left|250px|upright|Beyoncé performing during The Formation World Tour in 2016
On February 6, 2016, Beyoncé released "Formation" and its accompanying music video exclusively on the music streaming platform Tidal; the song was made available to download for free. She performed "Formation" live for the first time during the NFL Super Bowl 50 halftime show. The appearance was considered controversial as it appeared to reference the 50th anniversary of the Black Panther Party and the NFL forbids political statements in its performances. Immediately following the performance, Beyoncé announced The Formation World Tour, which included stops in North America and Europe. It ended on October 7, with Beyoncé bringing out her husband Jay Z, Kendrick Lamar, and Serena Williams for the last show. The tour went on to win Tour of the Year at the 44th American Music Awards.
On April 16, 2016, Beyoncé released a teaser clip for a project called Lemonade. It turned out to be a one-hour film, which aired on HBO exactly a week later, on April 23 at 10:00 pm EST; a corresponding album with the same title was released on the same day exclusively on the streaming platform Tidal. Lemonade debuted at number one on the US Billboard 200, making Beyoncé the first act in Billboard history to have their first six studio albums debut atop the chart; she broke a record previously tied with DMX in 2013. With all 12 tracks of Lemonade debuting on the Billboard Hot 100 chart, Beyoncé also became the first female act to chart 12 or more songs at the same time. Additionally, Lemonade was streamed 115 million times via Tidal, setting a record for the most-streamed album in a single week by a female artist in history. The album is Beyoncé's most critically acclaimed work to date, receiving universal acclaim according to Metacritic, a website collecting reviews from professional music critics. It is the 23rd album to receive a five-star rating from Rolling Stone. Several music publications included the album among the best of 2016, including Rolling Stone, which listed Lemonade at number one. As of November 2016, it had sold 1.5 million copies in the US. The album's visuals were nominated for 11 MTV Video Music Awards in 2016, the most ever received by Beyoncé in a single year, and went on to win 8 awards, including Video of the Year for "Formation". The eight wins make Beyoncé the most awarded artist in the history of the VMAs (24), surpassing Madonna (20). For the 59th Grammy Awards, Lemonade led the nominations with nine, including Album of the Year, and Record of the Year and Song of the Year for "Formation". Beyoncé occupied sixth place in Time magazine's 2016 Person of the Year ranking. In January 2017, it was announced that Beyoncé would headline the Coachella Music and Arts Festival, making her only the second female headliner of the festival since it was founded in 1999.
Artistry
Voice and song-writing
Jody Rosen highlights her tone and timbre as particularly distinctive, describing her voice as "one of the most compelling instruments in popular music". Her vocal abilities mean she is identified as the centerpiece of Destiny's Child. Jon Pareles of The New York Times commented that her voice is "velvety yet tart, with an insistent flutter and reserves of soul belting". Rosen notes that the hip hop era highly influenced Beyoncé's unique rhythmic vocal style, but also finds her quite traditionalist in her use of balladry, gospel and falsetto. Other critics praise her range and power, with Chris Richards of The Washington Post saying she was "capable of punctuating any beat with goose-bump-inducing whispers or full-bore diva-roars."
Beyoncé's music is generally R&B, but she also incorporates pop, soul and funk into her songs. 4 demonstrated Beyoncé's exploration of 1990s-style R&B, as well as a greater use of soul and hip hop than on previous releases. While she almost exclusively releases English songs, Beyoncé recorded several Spanish songs for Irreemplazable (re-recordings of songs from B'Day for a Spanish-language audience) and the re-release of B'Day. To record these, Beyoncé was coached phonetically by American record producer Rudy Perez.
She has received co-writing credits for most of the songs recorded with Destiny's Child and her solo efforts. Her early songs were personally driven and female-empowerment themed compositions like "Independent Women" and "Survivor", but after the start of her relationship with Jay Z, she transitioned to more man-tending anthems such as "Cater 2 U". Beyoncé has also received co-producing credits for most of the records in which she has been involved, especially during her solo efforts. However, she does not formulate beats herself, but typically comes up with melodies and ideas during production, sharing them with producers.
In 2001, she became the first black woman and second female lyricist to win the Pop Songwriter of the Year award at the American Society of Composers, Authors, and Publishers Pop Music Awards. Beyoncé was the third woman to have writing credits on three number one songs ("Irreplaceable", "Grillz" and "Check on It") in the same year, after Carole King in 1971 and Mariah Carey in 1991. She is tied with American lyricist Diane Warren at third with nine song-writing credits on number-one singles. (The latter wrote her 9/11-motivated song "I Was Here" for 4.) In May 2011, Billboard magazine listed Beyoncé at number 17 on their list of the "Top 20 Hot 100 Songwriters", for having co-written eight singles that hit number one on the Billboard Hot 100 chart. She was one of only three women on that list, along with Alicia Keys and Taylor Swift.
Influences
Beyoncé names Michael Jackson as her major musical influence. Aged five, Beyoncé attended her first ever concert where Jackson performed and she claims to have realized her purpose. When she presented him with a tribute award at the World Music Awards in 2006, Beyoncé said, "if it wasn't for Michael Jackson, I would never ever have performed." She admires Diana Ross as an "all-around entertainer" and Whitney Houston, who she said "inspired me to get up there and do what she did."Caldwell, Rebecca (July 21, 2001). "Destiny's Child". The Globe and Mail. page R1. She credits Mariah Carey's singing and her song "Vision of Love" as influencing her to begin practicing vocal runs as a child. Her other musical influences include Aaliyah, Prince, Lauryn Hill, Sade Adu, Donna Summer, Mary J. Blige, Janet Jackson, Anita Baker and Rachelle Ferrell.
The feminism and female empowerment themes on Beyoncé's second solo album B'Day were inspired by her role in Dreamgirls and by singer Josephine Baker. Beyoncé paid homage to Baker by performing "Déjà Vu" at the 2006 Fashion Rocks concert wearing Baker's trademark mini-hula skirt embellished with fake bananas. Beyoncé's third solo album I Am... Sasha Fierce was inspired by Jay Z and especially by Etta James, whose "boldness" inspired Beyoncé to explore other musical genres and styles. Her fourth solo album, 4, was inspired by Fela Kuti, 1990s R&B, Earth, Wind & Fire, DeBarge, Lionel Richie, Teena Marie, The Jackson 5, New Edition, Adele, Florence and the Machine, and Prince.
Beyoncé has stated that she is personally inspired by US First Lady Michelle Obama, saying "She proves you can do it all", and she has described Oprah Winfrey as "the definition of inspiration and a strong woman". She has also discussed how Jay Z is a continuing inspiration to her, both with what she describes as his lyrical genius and in the obstacles he has overcome in his life. Beyoncé has expressed admiration for the artist Jean-Michel Basquiat, posting in a letter "what I find in the work of Jean-Michel Basquiat, I search for in every day in music... he is lyrical and raw". In February 2013, Beyoncé said that Madonna inspired her to take control of her own career. She commented: "I think about Madonna and how she took all of the great things she achieved and started the label and developed other artists. But there are not enough of those women."
Stage and alter ego
thumb|Beyoncé performing "Run the World (Girls)" on the 2011 Good Morning America Summer Concert Series|alt=A woman in a yellow dress, flanked by three female dancers, salutes to the crowd
In 2006, Beyoncé introduced her all-female tour band Suga Mama (also the name of a song in B'Day) which includes bassists, drummers, guitarists, horn players, keyboardists and percussionists. Her background singers, The Mamas, consist of Montina Cooper-Donnell, Crystal Collins and Tiffany Moniqué Riddick. They made their debut appearance at the 2006 BET Awards and re-appeared in the music videos for "Irreplaceable" and "Green Light". The band have supported Beyoncé in most subsequent live performances, including her 2007 concert tour The Beyoncé Experience, 2009–2010 I Am... World Tour and 2013–2014 The Mrs. Carter Show World Tour.
Beyoncé has received praise for her stage presence and voice during live performances. Jarett Wieselman of the New York Post placed her at number one on her list of the Five Best Singer/Dancers. According to Barbara Ellen of The Guardian, Beyoncé is the most in-charge female artist she has seen onstage, while Alice Jones of The Independent wrote she "takes her role as entertainer so seriously she's almost too good." L.A. Reid, the former president of Def Jam, has described Beyoncé as the greatest entertainer alive. Jim Farber of the Daily News and Stephanie Classen of Star Phoenix both praised her strong voice and her stage presence.
Described as being "sexy, seductive and provocative" when performing on stage, Beyoncé has said that she originally created the alter ego "Sasha Fierce" to keep that stage persona separate from who she really is. She described Sasha as being "too aggressive, too strong, too sassy [and] too sexy", stating, "I'm not like her in real life at all." Sasha was conceived during the making of "Crazy in Love", and Beyoncé introduced her with the release of her 2008 album I Am... Sasha Fierce. In February 2010, she announced in an interview with Allure magazine that she was comfortable enough with herself to no longer need Sasha Fierce. However, Beyoncé announced in May 2012 that she would bring her back for her Revel Presents: Beyoncé Live shows later that month.
Public image
thumb|upright|left|Beyoncé at the premiere for her 2006 film, Dreamgirls|alt=A woman waves to the crowd on a red-carpet
Beyoncé has been described as having a wide-ranging sex appeal, with music journalist Touré writing that since the release of Dangerously in Love, she has "become a crossover sex symbol". Offstage, Beyoncé says that while she likes to dress sexily, her onstage dress "is absolutely for the stage." Due to her curves and the term's catchiness, in the 2000s the media often used the term "Bootylicious" (a portmanteau of the words booty and delicious) to describe Beyoncé; the term was popularized by Destiny's Child's single of the same name. In 2006, it was added to the Oxford English Dictionary.
In September 2010, Beyoncé made her runway modelling debut at Tom Ford's Spring/Summer 2011 fashion show. She was named "World's Most Beautiful Woman" by People and the "Hottest Female Singer of All Time" by Complex in 2012. In January 2013, GQ placed her on its cover, featuring her atop its "100 Sexiest Women of the 21st Century" list. VH1 listed her at number 1 on its 100 Sexiest Artists list. Several wax figures of Beyoncé are found at Madame Tussauds Wax Museums in major cities around the world, including New York, Washington, D.C., Amsterdam, Bangkok, Hollywood and Sydney.
According to Italian fashion designer Roberto Cavalli, Beyoncé uses different fashion styles to work with her music while performing. Her mother co-wrote a book, published in 2002, titled Destiny's Style, an account of how fashion affected the trio's success. The B'Day Anthology Video Album showed many instances of fashion-oriented footage, depicting classic to contemporary wardrobe styles. In 2007, Beyoncé was featured on the cover of the Sports Illustrated Swimsuit Issue, becoming the second African American woman to do so after Tyra Banks, and People magazine recognized Beyoncé as the best-dressed celebrity.
The Bey Hive is the name given to Beyoncé's fan base. Fans were previously titled "The Beyontourage" (a portmanteau of Beyoncé and entourage). The name Bey Hive derives from the word beehive, purposely misspelled to resemble her first name, and was coined by fans following petitions on the online social networking service Twitter and online news reports during competitions.
In 2006, the animal rights organization People for the Ethical Treatment of Animals (PETA), criticized Beyoncé for wearing and using fur in her clothing line House of Deréon. In 2011, she appeared on the cover of French fashion magazine L'Officiel, in blackface and tribal makeup that drew criticism from the media. A statement released from a spokesperson for the magazine said that Beyoncé's look was "far from the glamorous Sasha Fierce" and that it was "a return to her African roots".
Beyoncé's lighter skin color and costuming has drawn criticism from some in the African-American community. Emmett Price, a professor of music at Northeastern University, wrote in 2007, that he thinks race plays a role in many of these criticisms, saying white celebrities who dress similarly do not attract as many comments. In 2008, L'Oréal was accused of whitening her skin in their Feria hair color advertisements, responding that "it is categorically untrue", and in 2013, Beyoncé herself criticized H&M for their proposed "retouching" of promotional images of her, and according to Vogue requested that only "natural pictures be used".
Personal life
thumb|Beyoncé performing on the I Am... Tour with Jay Z, whom she married in 2008|alt=A woman stands next to a man who is performing using a microphone
Beyoncé started a relationship with Shawn "Jay Z" Carter after their collaboration on "'03 Bonnie & Clyde", which appeared on his seventh album The Blueprint 2: The Gift & The Curse (2002). Beyoncé appeared as Jay Z's girlfriend in the music video for the song, fuelling speculation about their relationship. On April 4, 2008, Beyoncé and Jay Z married without publicity. As of April 2014, the couple had sold a combined 300 million records together. They are known for their private relationship, although they have appeared to become more relaxed in recent years.
Beyoncé suffered a miscarriage in 2010 or 2011, describing it as "the saddest thing" she had ever endured. She returned to the studio and wrote music in order to cope with the loss. In April 2011, Beyoncé and Jay Z traveled to Paris in order to shoot the album cover for 4, and she unexpectedly became pregnant there. In August, the couple attended the 2011 MTV Video Music Awards, at which Beyoncé performed "Love on Top" and ended the performance by revealing she was pregnant. Her appearance helped that year's MTV Video Music Awards become the most-watched broadcast in MTV history, pulling in 12.4 million viewers; the announcement was listed in Guinness World Records for "most tweets per second recorded for a single event" on Twitter, receiving 8,868 tweets per second, and "Beyonce pregnant" was the most Googled term the week of August 29, 2011. On January 7, 2012, Beyoncé gave birth to a daughter, Blue Ivy Carter, at Lenox Hill Hospital in New York City.
Beyoncé performed "America the Beautiful" at the 2009 presidential inauguration, as well as "At Last" during the first inaugural dance at the Neighborhood Ball two days later. They held a fundraiser at Jay Z's 40/40 Club in Manhattan for Obama's 2012 presidential campaign which raised $4 million. In the 2012 Presidential election, Beyoncé voted for Obama. She performed the American national anthem at his second inauguration. The Washington Post reported in May 2015, that Beyoncé attended a major celebrity fundraiser for 2016 presidential nominee Hillary Clinton.
In 2013, Beyoncé stated in an interview with Vogue that she considered herself to be "a modern-day feminist". She would later align herself more publicly with the movement, sampling "We should all be feminists", a speech delivered by Nigerian author Chimamanda Ngozi Adichie at a TEDx talk in April 2013, in her song "Flawless", released later that year. She has also contributed to the Ban Bossy campaign, which uses television and social media to encourage leadership in girls. Following Beyoncé's public identification as a feminist, the sexualized nature of her performances and the fact that she championed her marriage were questioned.
Beyoncé publicly endorsed same sex marriage on March 26, 2013, after the Supreme Court debate on California's Proposition 8. The singer has also condemned police brutality against black Americans. Beyoncé and Jay-Z attended a rally in 2013 in response to the acquittal of George Zimmerman for the shooting of Trayvon Martin. The film for her sixth album Lemonade included the mothers of Trayvon Martin, Michael Brown and Eric Garner, holding pictures of their murdered sons in the video for "Freedom". In a 2016 interview with Elle, she responded to the controversy surrounding her song "Formation" which was perceived to be critical of the police. She clarified, "I am against police brutality and injustice. Those are two separate things. If celebrating my roots and culture during Black History Month made anyone uncomfortable, those feelings were there long before a video and long before me".
Wealth
Forbes magazine began reporting on Beyoncé's earnings in 2008, calculating that the $80 million she earned between June 2007 and June 2008 from her music, tour, films and clothing line made her the world's best-paid music personality at the time, above Madonna and Celine Dion. They placed her fourth on the Celebrity 100 list in 2009 and ninth on the "Most Powerful Women in the World" list in 2010. The following year, Forbes placed her eighth on the "Best-Paid Celebrities Under 30" list, having earned $35 million in the past year from her clothing line and endorsement deals. In 2012, Forbes placed Beyoncé at number 16 on the Celebrity 100 list, twelve places lower than three years earlier, yet still having earned $40 million in the past year from her album 4, clothing line and endorsement deals. In the same year, Beyoncé and Jay Z placed at number one on the "World's Highest-Paid Celebrity Couples" list, for collectively earning $78 million. The couple had made it into the previous year's Guinness World Records as the "highest-earning power couple" for collectively earning $122 million in 2009. For the years 2009 to 2011, Beyoncé earned an average of $70 million per year, and earned $40 million in 2012. In 2013, Beyoncé's endorsements of Pepsi and H&M made her and Jay Z the world's first billion-dollar couple in the music industry. That year, Beyoncé was listed as the fourth most powerful celebrity in the Forbes rankings.
MTV estimated that by the end of 2014, Beyoncé would become the highest-paid black musician in history; she achieved this in April 2014. In June 2014, Beyoncé ranked at number one on the Forbes Celebrity 100 list, earning an estimated $115 million between June 2013 and June 2014. This was the first time she had topped the Celebrity 100 list, and it marked her highest yearly earnings to date. In 2016, Beyoncé ranked at number 34 on the Celebrity 100 list with earnings of $54 million. She and Jay Z also topped the highest-paid celebrity couple list, with combined earnings of $107.5 million. As of June 2016, Forbes calculated her net worth to be $265 million.
Legacy
[Image: Beyoncé performing during her I Am... Tour in 2009]
In The New Yorker, music critic Jody Rosen described Beyoncé as "the most important and compelling popular musician of the twenty-first century ... the result, the logical end point, of a century-plus of pop." When The Guardian named her Artist of the Decade, Llewyn-Smith wrote, "Why Beyoncé? [...] Because she made not one but two of the decade's greatest singles, with Crazy in Love and Single Ladies (Put a Ring on It), not to mention her hits with Destiny's Child; and this was the decade when singles – particularly R&B singles – regained their status as pop's favourite medium. [...] [She] and not any superannuated rock star was arguably the greatest live performer of the past 10 years." In 2013, Beyoncé made the Time 100 list, with Baz Luhrmann writing "no one has that voice, no one moves the way she moves, no one can hold an audience the way she does... When Beyoncé does an album, when Beyoncé sings a song, when Beyoncé does anything, it's an event, and it's broadly influential. Right now, she is the heir-apparent diva of the USA — the reigning national voice." In 2014, Beyoncé was listed again on the Time 100 and also featured on the cover of the issue.
Beyoncé's work has influenced numerous artists including Adele, Ariana Grande, Lady Gaga, Ellie Goulding, Rihanna, Kelly Rowland, Sam Smith, Nicole Scherzinger, Jessica Sanchez, Cheryl, JoJo, Meghan Trainor, Grimes, Rita Ora, Zendaya, Alexis Jordan, Bridgit Mendler, and Azealia Banks. American indie rock band White Rabbits also cited her as an inspiration for their third album Milk Famous (2012), and her friend Gwyneth Paltrow studied Beyoncé at her live concerts while learning to become a musical performer for the 2010 film Country Strong.
Her debut single, "Crazy in Love", was named VH1's "Greatest Song of the 2000s" and NME's "Best Track of the 00s" and "Pop Song of the Century"; it is considered by Rolling Stone to be one of the 500 greatest songs of all time, earned two Grammy Awards, and is one of the best-selling singles of all time at around 8 million copies. The music video for "Single Ladies (Put a Ring on It)", which achieved fame for its intricate choreography and its deployment of jazz hands, was credited by the Toronto Star as having started the "first major dance craze of both the new millennium and the Internet", triggering a number of parodies of the dance choreography and a legion of amateur imitators on YouTube. In 2013, Drake released a single titled "Girls Love Beyoncé", which featured an interpolation of Destiny's Child's "Say My Name" and discussed his relationship with women. In January 2012, research scientist Bryan Lessard named Scaptia beyonceae, a species of horse fly found in Northern Queensland, Australia, after Beyoncé due to the fly's unique golden hairs on its abdomen. In July 2014, a Beyoncé exhibit was introduced into the "Legends of Rock" section of the Rock and Roll Hall of Fame. The black leotard from the "Single Ladies" video and her outfit from the Super Bowl half-time performance are among several pieces housed at the museum. Architects credit Beyoncé's look in the "Ghost" music video as the inspiration for the design of the Premier Tower under construction in Australia. http://nypost.com/2016/12/27/curvy-beyonce-tower-on-the-rise-in-australia/
Honors and awards
Beyoncé has received numerous awards. As a solo artist she has sold over 17 million albums in the US, and over 100 million records worldwide (with a further 60 million as part of Destiny's Child), making her one of the best-selling music artists of all time. The Recording Industry Association of America (RIAA) listed Beyoncé as the top certified artist of the 2000s (decade), with a total of 64 certifications. Her songs "Crazy in Love", "Single Ladies (Put a Ring on It)", "Halo", and "Irreplaceable" are some of the best-selling singles of all time worldwide. In 2009, The Observer named her the Artist of the Decade and Billboard named her the Top Female Artist and Top Radio Songs Artist of the Decade. In 2010, Billboard named her in their "Top 50 R&B/Hip-Hop Artists of the Past 25 Years" list at number 15. In 2012, VH1 ranked her third on their list of the "100 Greatest Women in Music". Beyoncé was honored with the International Artist Award at the American Music Awards. She has also received the Legend Award at the 2008 World Music Awards, the Billboard Millennium Award at the 2011 Billboard Music Awards, the Michael Jackson Video Vanguard Award at the MTV Video Music Awards in 2014, and the Fashion Icon Award at the CFDA Awards in 2016.
Beyoncé has won 20 Grammy Awards, both as a solo artist and as a member of Destiny's Child, making her the second most honored female artist by the Grammys, behind Alison Krauss, and the most nominated woman in Grammy Award history with a total of 62 nominations. "Single Ladies (Put a Ring on It)" won Song of the Year in 2010 while "Say My Name" and "Crazy in Love" had previously won Best R&B Song. Dangerously in Love, B'Day and I Am... Sasha Fierce have all won Best Contemporary R&B Album. Beyoncé set the record for the most Grammy awards won by a female artist in one night in 2010 when she won six awards, breaking the tie she previously held with Alicia Keys, Norah Jones, Alison Krauss, and Amy Winehouse, with Adele equaling this in 2012.
Beyoncé has also won 24 MTV Video Music Awards, making her the most-awarded artist in Video Music Award history. "Single Ladies (Put a Ring on It)" and "Formation" won Video of the Year in 2009 and 2016 respectively. Beyoncé tied the record set by Lady Gaga in 2010 for the most VMAs won in one night for a female artist with eight in 2016. She is also the most awarded and nominated artist in BET Award history, winning 24 awards from a total of 54 nominations.
Following her role in Dreamgirls, Beyoncé was nominated for Best Original Song for "Listen" and Best Actress at the Golden Globe Awards, and Outstanding Actress in a Motion Picture at the NAACP Image Awards. Beyoncé won two awards at the Broadcast Film Critics Association Awards 2006: Best Song for "Listen" and Best Original Soundtrack for Dreamgirls: Music from the Motion Picture. According to Fuse in 2014, Beyoncé is the second most award-winning artist of all time, after Michael Jackson.
She was named on the 2016 BBC Radio 4 Woman's Hour Power List as one of seven women judged to have had the biggest impact on women's lives over the past 70 years, alongside Margaret Thatcher, Barbara Castle, Helen Brook, Germaine Greer, Jayaben Desai and Bridget Jones."Margaret Thatcher tops Woman's Hour Power List", BBC News (Arts & Entertainment), 14 December 2016.
In 2016, she was announced by WatsUp TV as the first winner of the Best International Video category with her "Formation" video at the maiden edition of the WatsUp TV Africa Music Video Awards held in Accra, Ghana. http://ameyawdebrah.com/beyonce-diamond-platnumz-mr-eazi-win-watsup-tv-africa-music-video-awards/
Other ventures
Endorsements
Beyoncé has worked with Pepsi since 2002, and in 2004 appeared in a Gladiator-themed commercial with Britney Spears, Pink, and Enrique Iglesias. In 2012, Beyoncé signed a $50 million deal to endorse Pepsi. The Center for Science in the Public Interest (CSPINET) wrote Beyoncé an open letter asking her to reconsider the deal because of the unhealthiness of the product and to donate the proceeds to a medical organisation. Nevertheless, NetBase found that Beyoncé's campaign was the most talked about endorsement in April 2013, with a 70 per cent positive audience response to the commercial and print ads.
Beyoncé has worked with Tommy Hilfiger for the fragrances True Star (singing a cover version of "Wishing on a Star") and True Star Gold; she also promoted Emporio Armani's Diamonds fragrance in 2007. Beyoncé launched her first official fragrance, Heat, in 2010. The commercial, which featured the 1956 song "Fever", was shown after the watershed in the United Kingdom as it begins with an image of Beyoncé appearing to lie naked in a room. In February 2011, Beyoncé launched her second fragrance, Heat Rush. Beyoncé's third fragrance, Pulse, was launched in September 2011. In 2013, The Mrs. Carter Show Limited Edition version of Heat was released. The six editions of Heat are the world's best-selling celebrity fragrance line, with sales of over $400 million.
The release of a video game, Starpower: Beyoncé, was cancelled after Beyoncé pulled out of a $100 million deal with GateFive, who alleged the cancellation meant the sacking of 70 staff and millions of pounds lost in development. The case was settled out of court by her lawyers in June 2013, who said that she had cancelled because GateFive had lost its financial backers. Beyoncé has also had deals with American Express, Nintendo DS and L'Oréal since the age of 18.
In October 2014, Beyoncé partnered with British fashion retailer Topshop in a 50/50 split subsidiary business named Parkwood Topshop Athletic Ltd. The new division was created for Topshop to break into the activewear market. The company and collection were originally set to launch and hit stores in the fall of 2015.
In March 2015, Beyoncé became a co-owner, with other artists, of the music streaming service Tidal. The service specializes in lossless audio and high definition music videos. Beyoncé's husband Jay Z acquired the parent company of Tidal, Aspiro, in the first quarter of 2015. Including Beyoncé and Jay-Z, sixteen artist stakeholders (such as Kanye West, Rihanna, Madonna, Chris Martin, Nicki Minaj and more) co-own Tidal, with the majority owning a 3% equity stake. The idea of an all-artist-owned streaming service was conceived by those involved to adapt to the increased demand for streaming within the current music industry.
Fashion lines
Beyoncé and her mother introduced House of Deréon, a contemporary women's fashion line, in 2005. The concept is inspired by three generations of women in their family, the name paying tribute to Beyoncé's grandmother, Agnèz Deréon, a respected seamstress. According to Tina, the overall style of the line best reflects her and Beyoncé's taste and style. Beyoncé and her mother founded their family's company Beyond Productions, which provides the licensing and brand management for House of Deréon, and its junior collection, Deréon. House of Deréon pieces were exhibited in Destiny's Child's shows and tours, during their Destiny Fulfilled era. The collection features sportswear, denim offerings with fur, outerwear and accessories that include handbags and footwear, and are available at department and specialty stores across the US and Canada.
In 2005, Beyoncé teamed up with House of Brands, a shoe company, to produce a range of footwear for House of Deréon. In January 2008, Starwave Mobile launched Beyoncé Fashion Diva, a "high-style" mobile game with a social networking component, featuring the House of Deréon collection. In July 2009, Beyoncé and her mother launched a new junior apparel label, Sasha Fierce for Deréon, for back-to-school selling. The collection included sportswear, outerwear, handbags, footwear, eyewear, lingerie and jewelry. It was available at department stores including Macy's and Dillard's, and specialty stores Jimmy Jazz and Against All Odds. On May 27, 2010, Beyoncé teamed up with clothing store C&A to launch Deréon by Beyoncé at their stores in Brazil. The collection included tailored blazers with padded shoulders, little black dresses, embroidered tops and shirts and bandage dresses.
In October 2014, Beyoncé signed a deal to launch an activewear line of clothing with British fashion retailer Topshop. The 50–50 venture is called Parkwood Topshop Athletic Ltd and was originally scheduled to launch its first dance, fitness and sports ranges in autumn 2015. The line was eventually launched in April 2016.
Philanthropy
[Image: Beyoncé (center) and her mother, Tina (left), at the opening of the Beyoncé Cosmetology Center on March 5, 2010]
After Hurricane Katrina in 2005, Beyoncé and Rowland founded the Survivor Foundation to provide transitional housing for victims in the Houston area, to which Beyoncé contributed an initial $250,000. The foundation has since expanded to work with other charities in the city, and also provided relief following Hurricane Ike three years later.
Beyoncé participated in George Clooney and Wyclef Jean's Hope for Haiti Now: A Global Benefit for Earthquake Relief telethon and was named the official face of the limited edition CFDA "Fashion For Haiti" T-shirt, made by Theory, which raised a total of $1 million. On March 5, 2010, Beyoncé and her mother Tina opened the Beyoncé Cosmetology Center at the Brooklyn Phoenix House, offering a seven-month cosmetology training course for men and women. In April 2011, Beyoncé joined forces with US First Lady Michelle Obama and the National Association of Broadcasters Education Foundation to help boost the latter's campaign against child obesity by reworking her single "Get Me Bodied". Following the death of Osama bin Laden, Beyoncé released her cover of the Lee Greenwood song "God Bless the USA" as a charity single to help raise funds for the New York Police and Fire Widows' and Children's Benefit Fund.
In December 2012, Beyoncé, along with a variety of other celebrities, teamed up and produced a video campaign for "Demand A Plan", a bipartisan effort by a group of 950 US mayors and others designed to influence the federal government into rethinking its gun control laws, following the Sandy Hook Elementary School shooting. Beyoncé became an ambassador for the 2012 World Humanitarian Day campaign, donating her song "I Was Here" and its music video, shot in the UN, to the campaign. In 2013, it was announced that Beyoncé would work with Salma Hayek and Frida Giannini on a Gucci "Chime for Change" campaign that aims to spread female empowerment. The campaign, which aired on February 28, was set to her new music. A concert for the cause took place on June 1, 2013 in London and included other acts like Ellie Goulding, Florence and the Machine, and Rita Ora. In advance of the concert, she appeared in a campaign video released on May 15, 2013, in which she, along with Cameron Diaz, John Legend and Kylie Minogue, described inspiration from their mothers, while a number of other artists celebrated personal inspiration from other women; this led to a call for viewers to submit photos of women who inspired them, a selection of which was shown at the concert. Beyoncé said of her mother Tina Knowles that her gift was "finding the best qualities in every human being." With the help of the crowdfunding platform Catapult, visitors to the concert could choose between several projects promoting education of women and girls. Beyoncé has also taken part in "Miss a Meal", a food-donation campaign, and supported Goodwill charity through online charity auctions at Charitybuzz that support job creation throughout Europe and the U.S. In December 2016, Beyoncé was named the Most Charitable Celebrity of the year. http://www.usmagazine.com/celebrity-news/news/beyonce-named-most-charitable-celeb-2016-w457809
Discography
Dangerously in Love (2003)
B'Day (2006)
I Am... Sasha Fierce (2008)
4 (2011)
Beyoncé (2013)
Lemonade (2016)
Filmography
Carmen: A Hip Hopera (2001)
Austin Powers in Goldmember (2002)
The Fighting Temptations (2003)
The Pink Panther (2006)
Dreamgirls (2006)
Cadillac Records (2008)
Wow! Wow! Wubbzy!: Wubb Idol (2009)
Obsessed (2009)
Life Is But a Dream (2013)
Epic (2013)
Tours and residency shows
Headlining tours
Dangerously in Love Tour (2003)
The Beyoncé Experience (2007)
I Am... World Tour (2009–2010)
The Mrs. Carter Show World Tour (2013–2014)
The Formation World Tour (2016)
Co-headlining tours
Verizon Ladies First Tour (with Alicia Keys and Missy Elliott) (2004)
On the Run Tour (with Jay Z) (2014)
Residency shows
I Am... Yours (2009)
4 Intimate Nights with Beyoncé (2011)
Revel Presents: Beyoncé Live (2012)
See also
Honorific nicknames in popular music
List of artists who reached number one in the United States
List of Billboard Social 50 number-one artists
List of black Golden Globe Award winners and nominees
List of artists with the most number ones on the U.S. dance chart
Notes
References
External links
Category:Beyoncé
Category:1981 births
Category:20th-century American businesspeople
Category:20th-century American singers
Category:21st-century American actresses
Category:21st-century American businesspeople
Category:21st-century American singers
Category:Actresses from Houston
Category:African-American businesspeople
Category:African-American choreographers
Category:African-American fashion designers
Category:African-American female dancers
Category:African-American female singers
Category:African-American feminists
Category:African-American singers
Category:African-American film producers
Category:African-American Methodists
Category:African-American record producers
Category:African-American women writers
Category:American cosmetics businesspeople
Category:American fashion businesspeople
Category:American female pop singers
Category:American hip hop record producers
Category:American hip hop singers
Category:American music publishers (people)
Category:American music video directors
Category:American people of Acadian descent
Category:American people of Creole descent
Category:American people of Irish descent
Category:American people of Native American descent
Category:American people of Spanish descent
Category:American philanthropists
Category:American retail chief executives
Category:American rhythm and blues singer-songwriters
Category:American singer-songwriters
Category:American soul singers
Category:American television producers
Category:American women business executives
Category:American women philanthropists
Category:Brit Award winners
Category:Businesspeople from Houston
Category:California Democrats
Category:Columbia Records artists
Category:Destiny's Child members
Category:Female music video directors
Category:Feminist musicians
Category:Film directors from Texas
Category:Gold Star Records artists
Category:Grammy Award winners
Category:Ivor Novello Award winners
Category:Jay Z
Category:Living people
Category:Louisiana Creole people
Category:Musicians from Houston
Category:Music video codirectors
Category:Parkwood Entertainment artists
Category:Shoe designers
Category:Singers with a four-octave vocal range
Category:Sony/ATV Music Publishing artists
Category:Songwriters from Texas
Category:Spanish-language singers of the United States
Category:United Methodists
Category:World Music Awards winners | 83,688 | 2017-01 |
Tristan da Cunha | [Image: Gough Island, Tristan da Cunha]
[Image: Edinburgh of the Seven Seas, Tristan da Cunha]
[Image: Housing in Tristan da Cunha]
Tristan da Cunha (), colloquially Tristan, is the name of both a remote group of volcanic islands in the south Atlantic Ocean and the main island of that group. It is the most remote inhabited archipelago in the world, lying far from the nearest inhabited land, Saint Helena, and from the nearest continental land, South Africa; it is even more distant from South America. The territory consists of the main island, named Tristan da Cunha, along with the smaller, uninhabited Nightingale Islands and the wildlife reserves of Inaccessible and Gough islands. As of January 2017, the main island has 262 permanent inhabitants. Meanwhile, the other islands are uninhabited, except for the personnel of a weather station on Gough Island.
Tristan da Cunha is part of the British overseas territory of Saint Helena, Ascension and Tristan da Cunha. This also includes Saint Helena and equatorial Ascension Island, both lying well to the north of Tristan.
History
Discovery
The islands were first recorded as sighted in 1506 by Portuguese explorer Tristão da Cunha; rough seas prevented a landing. He named the main island after himself, Ilha de Tristão da Cunha. The name was later anglicised, from its earliest mention on British Admiralty charts, to Tristan da Cunha Island. Some sources state that the Portuguese made the first landing in 1520, when the Lás Rafael, captained by Ruy Vaz Pereira, called at Tristan for water. (Arnaldo Faustini, The Annals of Tristan da Cunha, p. 9.) The first undisputed landing was made on 7 February 1643 by the crew of the Dutch East India Company ship Heemstede, captained by Claes Gerritsz Bierenbroodspot. The Dutch stopped at the island four more times in the next 25 years, and in 1656 created the first rough charts of the archipelago.
The first full survey of the archipelago was made by the crew of the French corvette Heure du Berger in 1767. The first scientific exploration was conducted by French naturalist Louis-Marie Aubert du Petit-Thouars, who stayed on the island for three days in January 1793, during a French mercantile expedition from Brest, France, to Mauritius. Aubert made botanical collections and reported traces of human habitation, including fireplaces and overgrown gardens, probably left by Dutch explorers in the 17th century.
19th century
The first permanent settler was Jonathan Lambert, from Salem, Massachusetts, United States, who arrived at the islands in December 1810 with two other men, and later a third. Lambert publicly declared the islands his property and named them the Islands of Refreshment. Three of the four men died in 1812; however, the survivor among the original three permanent settlers, Thomas Currie (or Tommaso Corri), remained as a farmer on the island.
In 1816, the United Kingdom annexed the islands, ruling them from the Cape Colony in South Africa. This is reported to have primarily been a measure to ensure that the French would be unable to use the islands as a base for a rescue operation to free Napoleon Bonaparte from his prison on Saint Helena. The occupation also prevented the United States from using Tristan da Cunha as a cruiser base, as it had during the War of 1812.
The islands were occupied by a garrison of British Marines and a civilian population gradually grew. Whalers set up bases on the islands for operations in the Southern Atlantic. However, the opening of the Suez Canal in 1869, together with the gradual transition from sailing ships to coal-fired steam ships, increased the isolation of the islands. They were no longer needed as a stopping port for lengthy sail voyages, or for shelter for journeys from Europe to East Asia.
In 1867, Prince Alfred, Duke of Edinburgh and second son of Queen Victoria, visited the islands. The main settlement, Edinburgh of the Seven Seas, was named in honour of his visit.
On 15 October 1873, the Royal Navy scientific survey vessel HMS Challenger docked at Tristan to conduct geographic and zoological surveys on Tristan, Inaccessible Island and the Nightingale Islands. In his log, Captain George Nares recorded a total of 15 families and 86 individuals living on the island.
20th century
After an especially difficult winter in 1906, and years of hardship since the 1880s, the British government offered to evacuate the island. Those remaining on Tristan held a meeting and decided to refuse, thus deepening the island's isolation. It was reported that no ships visited from 1909 until 1919, when HMS Yarmouth finally stopped to inform the islanders of the outcome of World War I.
The Shackleton–Rowett Expedition stopped at Tristan for five days in May 1922, collecting geological and botanical samples before returning to Cape Town. Among the few ships that visited in the following years were the RMS Asturias, a Royal Mail Steam Packet Company passenger liner, in 1927, and the ocean liners RMS Empress of France in 1928, RMS Duchess of Atholl in 1929, and RMS Empress of Australia in 1935.
In 1936, The Daily Telegraph of London reported the population of the island was 167 individuals, with 185 cattle and 42 horses.
From December 1937 to March 1938, a Norwegian party made a dedicated Scientific Expedition to Tristan da Cunha, and sociologist Peter A. Munch extensively documented island culture (he would later revisit the island in 1964-1965). The island was also visited in 1938 by W. Robert Foran, reporting for the National Geographic Society, whose account Tristan da Cunha, Isles of Contentment was published in November 1938.
On 12 January 1938 by Letters Patent, Britain declared the islands a dependency of Saint Helena, creating the British Overseas Territory of Saint Helena and Dependencies, which also included nearby Ascension Island.
During the Second World War, Britain used the islands as a secret Royal Navy weather and radio station to monitor Nazi U-boats (which were required to maintain radio contact) and shipping movements in the South Atlantic Ocean.
The Duke of Edinburgh, the husband of Queen Elizabeth II, visited the islands in 1957 as part of a world tour on board the royal yacht Britannia.
On 10 October 1961, the eruption of Queen Mary's Peak forced the evacuation of the entire population of 264 individuals. Evacuees took to the water in open boats and sailed to uninhabited Nightingale Island, where they were picked up by a Dutch passenger ship that took them via Cape Town to Britain. The islanders arrived in the UK to a big press reception, and were settled in an old Royal Air Force camp outside of Calshot, Hampshire. The following year a Royal Society expedition went to the islands to assess the damage, and reported that the settlement of Edinburgh of the Seven Seas had been only marginally affected. Most families returned in 1963.
21st century
[Image: Tristan da Cunha on 6 February 2013, as seen from the International Space Station]
On 23 May 2001, the islands were hit by an extratropical cyclone that generated extremely strong winds. A number of structures were severely damaged, and numerous cattle were killed, prompting emergency aid from the British government.
In 2005, the islands were given a United Kingdom post code (TDCU 1ZZ), to make it easier for the residents to order goods online.
On 13 February 2008, fire destroyed the fishing factory and the four generators that supplied power to the island. On 14 March 2008, new generators were installed and uninterrupted power was restored. This fire was devastating to the island because fishing is a mainstay of the economy. While a new factory was being planned and built, M/V Kelso came to the island and acted as a factory ship, with island fishermen based on board for stints normally of one week. The new facility was ready in July 2009, for the start of the 2009–10 fishing season.
The St Helena, Ascension and Tristan da Cunha Constitution Order 2009 ended the "dependency status" of Ascension and Tristan da Cunha.
On 16 March 2011, the freighter MS Oliva ran aground on Nightingale Island, spilling tons of heavy fuel oil into the ocean. The resulting oil slick threatened the island's population of rockhopper penguins. Nightingale Island has no fresh water, so the penguins were transported to Tristan da Cunha for cleaning.
Solar eclipse
A total solar eclipse will pass over the island on 5 December 2048. The island is calculated to be on the centre line of the umbra's path for nearly three and a half minutes of totality.
Environment
Geography
[Image: Map of Tristan da Cunha group (including Gough Island)]
Tristan da Cunha is thought to have been formed by a long-lived centre of upwelling mantle called the Tristan hotspot. Tristan da Cunha is the main island of the Tristan da Cunha archipelago, which consists of the following islands:
Tristan da Cunha, the main and largest island
Inaccessible Island
Nightingale Islands
Nightingale Island
Middle Island
Stoltenhoff Island
Gough Island (Diego Alvarez)
Inaccessible Island and the Nightingale Islands lie SW by W and SSW of the main island respectively, whereas Gough Island lies SSE.
The main island is generally mountainous. The only flat area is on the north-west coast, which is the location of the only settlement, Edinburgh of the Seven Seas. The highest point is a volcano called Queen Mary's Peak, which is covered by snow in winter. The other islands of the group are uninhabited, except for a weather station with a staff of six on Gough Island. This has been operated by South Africa since 1956 (since 1963 at its present location at Transvaal Bay on the south-east coast).
Climate
The archipelago has a wet oceanic climate under the Köppen system with pleasant temperatures, but consistent moderate to heavy rainfall and very limited sunshine, due to the persistent westerly winds. Under the Trewartha classification, Tristan da Cunha has a humid subtropical climate due to the lack of cold temperatures. The number of rainy days is comparable to the Aleutian Islands at a much higher latitude in the northern hemisphere, while sunshine hours are comparable to Juneau, Alaska, 20° farther from the equator. Frost is unknown at lower elevations, and summer temperatures are similarly mild, never becoming hot. Sandy Point on the east coast is reputed to be the warmest and driest place on the island, being in the lee of the prevailing winds.
Flora and fauna
Many of the flora and fauna have a broad circumpolar distribution in the South Atlantic and South Pacific Oceans, so many of the species that occur in Tristan da Cunha also appear as far away as New Zealand. For example, the plant species Nertera depressa was first collected in Tristan da Cunha, but has since been recorded as far away as New Zealand.
Tristan is primarily known for its wildlife. The island has been identified as an Important Bird Area by BirdLife International because there are 13 known species of breeding seabirds on the island and two species of resident land birds. The seabirds include northern rockhopper penguins, Atlantic yellow-nosed albatrosses, sooty albatrosses, Atlantic petrels, great-winged petrels, soft-plumaged petrels, broad-billed prions, grey petrels, great shearwaters, sooty shearwaters, Tristan skuas, Antarctic terns and brown noddies. Tristan and Gough Islands are the only known breeding sites in the world for the Atlantic petrel (Pterodroma incerta; IUCN status EN). Inaccessible Island is also the only known breeding ground of the Spectacled Petrel (Procellaria conspicillata; IUCN Vulnerable). The Tristan albatross (IUCN status CR) is known to breed only on Gough and Inaccessible Islands: all nest on Gough except for one or two pairs who nest on Inaccessible Island.
The endemic Tristan thrush or starchy occurs on all of the northern islands and each has its own subspecies, with Tristan birds being slightly smaller and duller than those on Nightingale and Inaccessible. The endemic Inaccessible Island rail, the smallest extant flightless bird in the world, is found only on Inaccessible Island. In 1956 eight Gough moorhens were released at Sandy Point on Tristan, and have subsequently colonised the island.
Various species of whales and dolphins can be seen around Tristan from time to time, with sightings becoming more frequent. The subantarctic fur seal Arctocephalus tropicalis can also be found in the Tristan archipelago, mostly on Gough Island.
Economy
The island's unique social and economic organisation has evolved over the years, but is based on the principles set out by William Glass in 1817, when he established a settlement based on equality. All Tristan families are farmers, owning their own stock and/or fishing. All land is communally owned. All households have plots of land at The Patches on which they grow potatoes. Livestock numbers are strictly controlled to conserve pasture and to prevent better-off families from accumulating wealth. Unless the community votes for a change in its law, no outsiders are allowed to buy land or settle on Tristan; theoretically the whole island would have to be put up for sale. All people – including children and pensioners – are involved in farming, while adults additionally have salaried jobs, working either for the Government or, in a small number of cases, in domestic service. Many of the men are involved in the fishing industry, going to sea in good weather. The nominal fishing season lasts 90 days; however, during the 2013 fishing season – 1 July to 30 September – there were only 10 days suitable for fishing.
Valuable foreign earnings come from the royalties from the commercial crawfish or Tristan rock lobster (Jasus) industry. Other revenues are derived from the sale of postage stamps and coins, especially to collectors worldwide. Limited revenue from tourism includes providing accommodation, guides and sales of handicrafts and souvenirs to visitors and by mail order. The income from foreign revenue earners enables Tristan to run Government services, especially health and education.
The 1961 volcanic eruption destroyed the Tristan da Cunha canned crawfish factory, which was rebuilt a short time later. The crawfish catchers and processors work for the South African company Ovenstone, which has an exclusive contract to sell crawfish to the United States and Japan. Although Tristan da Cunha is a UK overseas territory, it is not permitted direct access to European Union markets. Recent economic conditions have meant that the islanders have had to draw from their reserves. The islands' financial problems may cause delays in updating communication equipment and improving education on the island. The fire of 13 February 2008 (see History) resulted in major temporary economic disruption.
Although Tristan da Cunha is part of the same overseas territory as Saint Helena, it does not use the local Saint Helena pound. Instead, the island uses the United Kingdom issue of the pound sterling. The Bank of Saint Helena was established on Saint Helena and Ascension Island in 2004. This bank does not have a physical presence on Tristan da Cunha, but residents of Tristan are entitled to its services. There are occasionally commemorative coins minted for the island.
The island is located in the South Atlantic Anomaly, an area of the Earth with an abnormally weak magnetic field. On 14 November 2008 a geomagnetic observatory was inaugurated on the island as part of a joint venture between the Danish Meteorological Institute and DTU Space.
Transport
The remote location of the islands makes transport to the outside world difficult. Lacking an airport, the islands can be reached only by sea. Fishing boats from South Africa service the islands eight or nine times a year. The RMS Saint Helena used to connect the main island to St Helena and South Africa once each year during its January voyage, but has done so only twice in the last few years, in 2006 and 2011. The wider territory has access to air travel, with Ascension Island served by RAF Ascension Island. The Saint Helena Airport was constructed and was expected to open in May 2016, but its opening has been delayed due to wind shear. There is no direct, regular service to Tristan da Cunha itself from either location. The harbour at Edinburgh of the Seven Seas is called Calshot Harbour, named after the place in Hampshire where the islanders temporarily stayed during the volcanic eruption.
Communications
Telecommunication
Although Tristan da Cunha shares the +290 code with St Helena, residents have access to the Foreign and Commonwealth Office Telecommunications Network, provided by Global Crossing. This uses a London 020 numbering range, meaning that numbers are accessed via the UK telephone numbering plan.Tristan Da Cunha Contact Information
From 1998 to 2006, internet access was available in Tristan da Cunha, but its high cost made it almost unaffordable for the local population, who primarily used it only to send email. The connection was also extremely unreliable, running over a 64 kbit/s satellite phone connection provided by Inmarsat. Since 2006, a very-small-aperture terminal has provided 3072 kbit/s of publicly accessible bandwidth via an internet cafe.
There is no mobile phone coverage on the islands.
Amateur radio
DXpeditions are sometimes conducted in the island group by amateur radio operators. One was ZD9ZS in September/October 2014.
Government
Executive authority is vested in the Queen, who is represented in the territory by the Governor of Saint Helena. As the Governor resides permanently in Saint Helena, an Administrator is appointed to represent the Governor in the islands. The Administrator is a career civil servant in the Foreign Office and is selected by London. Since 1998, each Administrator has served a single, three-year term (which begins in September, upon arrival of the supply ship from Cape Town.) The Administrator acts as the local head of government, and takes advice from the Tristan da Cunha Island Council. Sean Burns began a second term as Administrator in November 2016. The Island Council is made up of eight elected and three appointed members, who serve a 3-year term which begins in February (or March).
Chief Islander: From amongst the eight elected councillors, the one receiving the most votes is named "Chief Islander" and serves as Acting Administrator when that official is off the island. Ian Lavarello was elected, unopposed, for a second consecutive 3-year term in February 2013. As "Chief Islander," he lit the island's beacon celebrating the Queen's Diamond Jubilee in 2012.
The Administrator and Island Council work from the Government Building, which is the only two-storey building on the island: the lower floor houses the Saint Helena Police Service office in Tristan da Cunha. It is sometimes referred to as "Whitehall" or the "H'admin Building" and contains the Administrator's Office, Treasury Department, Administration Offices, and the Council Chamber where Island Council meetings are held.
There are no political parties or trade unions on Tristan. Policing in Tristan da Cunha is undertaken by one full-time police officer (Inspector) and three special constables with the Saint Helena Police Service.
Tristan da Cunha has some of its own legislation, but the law of Saint Helena applies generally (to the extent that it is not inconsistent with local law, insofar as it is suitable for local circumstances and subject to such modifications as local circumstances make necessary).
Demographics
Tristan da Cunha recorded a population of 293 in the March 2016 census. The main settlement is Edinburgh of the Seven Seas (known locally as "The Settlement"). The only religion is Christianity, with denominations of Anglican and Roman Catholic. The current population is thought to have descended from 15 ancestors, eight males and seven females, who arrived on the island at various times between 1816 and 1908. The males were European and the women were mixed race and African. Now all of the population has mixed ancestry. In addition, there was an unnamed male contributor of eastern European/Russian descent in the early 1900s. In 1963 when families returned after the evacuation (due to the 1961 volcanic eruption), the 200 settlers included four Tristan da Cunha women who brought with them new English husbands.Richard Cavendish, "The evacuation of Tristan da Cunha", History Today Volume 61 Issue 10, October 2011; accessed 25 May 2016
The women descendants have been traced by genetic study to five female founders, believed to be women of colour (mixed-race, of African, Asian and European descent) from Saint Helena. The historical data recounted that there were two pairs of sisters, but the MtDNA evidence showed only one pair of sisters.
The early male founders originated from Scotland, England, the Netherlands, the United States and Italy, belonged to three Y-haplogroups: I (M170), R-SRY10831.2 and R (M207) (xSRY10831.2) ("Genealogy and genes: tracing the founding fathers of Tristan da Cunha", European Journal of Human Genetics), and share nine surnames: Collins, Glass, Green, Hagan, Lavarello, Repetto, Rogers, Squibb and Swain. In addition, a new haplotype was found that is associated with men of eastern Europe and Russia. It entered the population in the early 1900s, at a time when the island was visited by Russian sailing ships. There is "evidence for the contribution of a hidden ancestor who left his genes but not his name on the island." (Himla Soodyall, Almut Nebel, Bharti Morar and Trefor Jenkins, "Genealogy and genes: tracing the founding fathers of Tristan da Cunha", European Journal of Human Genetics (2003) 11, 705–709. doi:10.1038/sj.ejhg.5201022, accessed 25 May 2016.) Another four instances of non-paternity were found among male descendants, but researchers believed their fathers were probably among the island population.
There are 80 families on the island. Tristan da Cunha's isolation has led to development of an unusual, patois-like dialect of English described by the writer Simon Winchester as "a sonorous amalgam of Home Counties lockjaw and 19th century idiom, Afrikaans slang and Italian." Bill Bryson documents some examples of the island's dialect in his book, The Mother Tongue.
Education
Education is fairly rudimentary; children leave school at age 16, and although they can take GCSEs a year later, few do. The school on the island is St Mary's School, which serves children from ages 4 to 16. It opened in 1975 and has five classrooms, a kitchen, a stage, a computer room, and a craft and science room.
The Tristan Song Project was a collaboration between St Mary's School and amateur composers in Britain, led by music teacher Tony Triggs. It began in 2010 and involved St Mary's pupils writing poems and Tony Triggs providing musical settings by himself and his pupils.Aquila (nom de plume), July/August 2012, "The Rockhopper songbook", Aquila, pp 4-5 A desktop publication entitled Rockhopper Penguins and Other Songs (2010) embraced most of the songs completed that year and funded a consignment of guitars to the school.SARTMA 19 June 2011 In February 2013 the Tristan Post Office issued a set of four Song Project stamps featuring island musical instruments and lyrics from Song Project songs about Tristan's volcano and wildlife. In 2014 the Project broadened its scope and continues as the International Song Project.
Health
There are instances of health problems attributed to endogamy, including glaucoma. In addition, there is a very high (42%) incidence of asthma among the population and research by Dr. Noe Zamel of the University of Toronto has led to discoveries about the genetic nature of the disease. Three of the original settlers of the island were asthma sufferers. Diabetes, obesity and high alcohol intake are common.
In 2012, Dr Gerard Bulger was the only qualified doctor on the island, providing medical services for the 270 inhabitants. A small hospital, the Camogli Hospital, has ultrasound and X-ray facilities and a gastroscope, but no sigmoidoscope. There were no nurses, only care assistants, but Bulger praised the care provided in the community. Bulger reported difficulty keeping pharmacy and consumable stocks in date. Supplies came from South Africa. One of his jobs was to check the water supply.
Healthcare is funded by the government and undertaken by one resident doctor from South Africa and five nurses. Surgery or facilities for complex childbirth are therefore limited, and emergencies can necessitate communicating with passing fishing vessels so the injured person can be ferried to Cape Town. As of late 2007, IBM and Beacon Equity Partners, co-operating with Medweb, the University of Pittsburgh Medical Center and the island's government on "Project Tristan", have supplied the island's doctor with access to long-distance tele-medical help, making it possible to send EKG and X-ray pictures to doctors in other countries for instant consultation. This system has been limited owing to the poor reliability of Internet connections and an absence of qualified technicians on the island to service fibre optic links between the hospital and the Internet centre at the administration buildings.
Culture
Media
Local television began in 1984 using taped programming on Tuesday, Thursday and Sunday evenings. Live television did not arrive on the island until 2001, with the introduction of the British Forces Broadcasting Service, which now provides BBC1, BBC2, ITV and BFBS Extra, relayed to islanders via local analogue transmitters. BFBS Radio 2 is the locally available radio station.
A comprehensive website www.tristandc.com is provided by the island government and the Tristan da Cunha Association which maintains it from the UK. A weekly local printed newsletter, Village Voice, is produced on the island.
Holidays
According to the island's January 2014 newsletter, the summer season gets underway with Sheep Shearing Day held on a Saturday in mid-December. Almost the entire population gathers on the far end of Patches Plain where the sheep pens are sited. Hand-clippers are used in the shearing and the wool is later carded, spun and hand-knitted into garments, some of which are sold under the name "37 Degrees South Knitwear Range".
There is an annual break from government and factory work which begins before Christmas and lasts for 3 weeks. Break-Up Day is usually marked with parties at various work "departments". Break-Up includes the Island Store, which means that families must be organised to have a full larder of provisions during the period. In 2013, the Island Store closed a week earlier than usual to conduct a comprehensive inventory, and all purchases had to be made by Friday 13 December as the shop did not open again until a month later.
The January 2014 New Year Message from Administrator Alex Mitham announced that, in 2013, the Island Council recognised there was no national holiday that specifically celebrates Tristan's heritage and culture: 'So I am pleased to announce that the Council have agreed that a new national holiday called Longboat Day that will be instated in 2015, and the traditional longboats race brought back.' There was no immediate indication of which date would be selected for the new holiday.
In popular culture
Film
In Wim Wenders' Wings of Desire, a dying man recollecting the things that have apparently meant most to him mentions "Tristan da Cunha".
37°4 S is a short film about two teenagers who live on the island.
Literature
Edgar Allan Poe's The Narrative of Arthur Gordon Pym of Nantucket (1838), Chapter 15, has a detailed history and description of the island.
In Jules Verne's novel In Search of the Castaways, one of the chapters is set on Tristan da Cunha, and a brief history of the island is mentioned. The island is also referred to in Verne's novel The Sphinx of the Ice Fields (1897), which he wrote as an unauthorised sequel to Poe's The Narrative of Arthur Gordon Pym of Nantucket. The 1899 English translation by Mrs. Cashel Hoey of Ice Fields was published under the title An Antarctic Mystery.
South African poet Roy Campbell wrote "Tristan de Cunha" (1927), an elegiac poem about the island.
Tristan da Cunha is the site of a top-secret nuclear disarmament conference in Fletcher Knebel's 1968 political thriller Vanished. The book was adapted as a 1971 two-part NBC made-for-TV movie starring Richard Widmark.
Hervé Bazin's novel Les Bienheureux de la Désolation (1970) describes the 1961 forced exile of the population to England after the volcano erupted, and their subsequent return.
In Primo Levi's memoir The Periodic Table (1975), one of the fictional short stories, "Mercurio", is set on Tristan da Cunha, named "Desolation Island".
In Patrick O'Brian's novel The Mauritius Command (1977), Tristan da Cunha is mentioned by a man fond of birds, Captain Fortescue of the schooner Wasp, who spent an extended period on the island studying the Albatross whilst cast ashore. Also in O'Brian's The Thirteen-Gun Salute (1991), the ship Dianne is nearly wrecked on Inaccessible Island, with the cover of the book depicting the scene.
Robert A. Heinlein's book Tramp Royale (1992), about a world trip in 1953–54, devoted a chapter to his near visit to Tristan da Cunha. He talked to islanders but could not go ashore owing to the weather.
Zinnie Harris's play, Further Than the Furthest Thing (2000), is inspired by events on the island, notably the 1961 volcanic eruption and evacuation of the islanders.
Raoul Schrott's novel, Tristan da Cunha oder die Hälfte der Erde (2003), is almost entirely set on Tristan da Cunha and Gough islands, and chronicles the history of the archipelago.
Alice Munro's short story "Deep-Holes" appears in her 2009 short story collection Too Much Happiness. The female protagonist, a mother, confides to her young son her fascination with remote islands like Tristan da Cunha and the Faeroe Islands. Later, when her son goes missing, she fantasises that he has found his way to one of these islands and is living there.
Non-fiction
[Image: Painting by Rose Annie Rogers of Atlantisia rogersi (1927), the world's smallest flightless bird, which is found only on Inaccessible Island]
Frank T. Bullen provides details of visiting the island in the 1870s in his book The Cruise of the Cachalot, first published in 1898.
Raymond Rallier du Baty describes the people and the island circa 1908 in his book 15,000 Miles in a Ketch (1915).
In Shackleton's Last Voyage by Captain Frank Wild (1923), several chapters (with photographs) recount events on the island during the Shackleton–Rowett Expedition in May 1922.
Rose Annie Rogers, part of an American missionary couple, wrote a memoir of her time on Tristan da Cunha, called The Lonely Island (1927).
Katherine Mary Barrow's book Three Years in Tristan Da Cunha (1910) is a "simple and true description of daily life among a very small community cut off from the rest of the world" based on entries to her diaries and letters written during the period to her sister.
Martin Holdgate describes a visit to the island by a scientific expedition heading for Gough Island in 1955 in Mountains in the Sea.
Simon Winchester's Outposts: Journeys to the Surviving Relics of the British Empire (1985, reprinted in 2003) devotes a chapter to the island, which he visited in the mid-1980s. In the foreword to the reprint, the author states that he was banned from Tristan da Cunha because of his writing about the war-time romance of a local woman. He published a longer account of his banishment in Lapham's Quarterly.
In 2005, Rockhopper Copper, the first book about the island written by an Islander, was published. It was written by Conrad Glass, Tristan da Cunha's longtime Police and Conservation officer.
See also
Outline of Tristan da Cunha
Sandy Point, Tristan da Cunha
Notes and references
Notes
References
Further reading
Guides
A Short Guide to Tristan da Cunha by James Glass and Anne Green, Tristan Chief Islanders (2005, Whitby Press, 12 pages).
Field Guides to the Animals and Plants of Tristan da Cunha and Gough Island Edited by Peter Ryan (2007, RSPB Publication, 168 pages).
Gough Island: A Natural History by Christine Hanel, Steven Chown and Kevin Gaston (2005, Sun Press, 169 pages).
Culture
Tristan da Cunha: History, People, Language by Daniel Schreier and Karen Lavarello-Schreier (2003, Battlebridge, 88 pages).
Rockhopper Copper: The life and times of the people of the most remote inhabited island on Earth by Conrad Glass MBE, Tristan Police Officer (2005, Polperro Heritage Press, 176 pages).
Recipes from Tristan da Cunha by Dawn Repetto, Tristan Tourism Co-ordinator (2010, Tristan Books, 32 pages).
Corporal Glass's Island: The Story of Tristan da Cunha by Nancy Hosegood (1966, Farrar, Straus, Giroux, 192 pages, with several pages of photographs).
Three Years in Tristan da Cunha by Katherine Mary Barrow (1910, Skeffington & Son, 200 pages, with 37 photographs).
External links
Tristan da Cunha
Tristan Times
History of Tristan da Cunha (2 books, and other material)
TRISTAN DA CUNHA (Spanish)
Videos of the island
Return to Tristan da Cunha, Global Nomad, National Geographic (2012).
A Day on Tristan da Cunha, Global Nomad, National Geographic (2011).
Tristan da Cunha: The story of Asthma Island, part 1 and part 2, BBC Four (2008).
Tristan da Cunha: Life on the island in 1963 (1963).
Tristan da Cunha: Life of an islander in 1963 (1963).
Category:English-speaking countries and territories
Category:Important Bird Areas of Saint Helena
Category:Seabird colonies
Category:States and territories established in 1938
Category:Former British colonies and protectorates in Africa
Category:Remote islands | 31,361 | 2017-01 |
Diarrhea | Diarrhea, also spelled diarrhoea, is the condition of having at least three loose or liquid bowel movements each day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are breastfed, however, may be normal.
The most common cause is an infection of the intestines due to either a virus, bacteria, or parasite; a condition known as gastroenteritis. These infections are often acquired from food or water that has been contaminated by stool, or directly from another person who is infected. It may be divided into three types: short duration watery diarrhea, short duration bloody diarrhea, and if it lasts for more than two weeks, persistent diarrhea. The short duration watery diarrhea may be due to an infection by cholera, although this is rare in the developed world. If blood is present it is also known as dysentery. A number of non-infectious causes may also result in diarrhea, including hyperthyroidism, lactose intolerance, inflammatory bowel disease, a number of medications, and irritable bowel syndrome. In most cases, stool cultures are not required to confirm the exact cause.
Prevention of infectious diarrhea is by improved sanitation, clean drinking water, and hand washing with soap. Breastfeeding for at least six months is also recommended, as is vaccination against rotavirus. Oral rehydration solution (ORS), which is clean water with modest amounts of salts and sugar, is the treatment of choice. Zinc tablets are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years. When people have diarrhea, it is recommended that they continue to eat healthy food and that babies continue to be breastfed. If commercial ORS are not available, homemade solutions may be used. In those with severe dehydration, intravenous fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely used, may be recommended in a few cases such as those who have bloody diarrhea and a high fever, those with severe diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help decrease the number of bowel movements but is not recommended in those with severe disease.
About 1.7 to 5 billion cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on average three times a year. Total deaths from diarrhea are estimated at 1.26 million in 2013 – down from 2.58 million in 1990. In 2012, it was the second most common cause of deaths in children younger than five (0.76 million or 11%). Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age. Other long term problems that can result include stunted growth and poor intellectual development.
Definition
thumb|upright=1.7|Bristol stool chart
Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or as having more stools than is normal for that person.
The World Gastroenterology Organization defines acute diarrhea as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel, lasting less than 14 days.
Secretory
Secretory diarrhea means that there is an increase in the active secretion, or there is an inhibition of absorption. There is little to no structural damage. The most common cause of this type of diarrhea is a cholera toxin that stimulates the secretion of anions, especially chloride ions. Therefore, to maintain a charge balance in the gastrointestinal tract, sodium is carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even during fasting. It continues even when there is no oral food intake.
Osmotic
Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also be the result of maldigestion (e.g., pancreatic disease or Coeliac disease), in which the nutrients are left in the lumen to pull in water. Or it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium or vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent (e.g. milk, sorbitol) is stopped.
Exudative
Exudative diarrhea occurs with the presence of blood and pus in the stool. This occurs with inflammatory bowel diseases, such as Crohn's disease or ulcerative colitis, and other severe infections such as E. coli or other forms of food poisoning.
Inflammatory
Inflammatory diarrhea occurs when there is damage to the mucosal lining or brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids. Features of all three of the other types of diarrhea can be found in this type of diarrhea. It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis.
Dysentery
If there is blood visible in the stools, it is also known as dysentery. The blood is a sign that bowel tissue has been invaded. Dysentery is a symptom of, among others, Shigella, Entamoeba histolytica, and Salmonella.
Health effects
Diarrheal disease may have a negative impact on both physical fitness and mental development. "Early childhood malnutrition resulting from any cause reduces physical fitness and work productivity in adults," and diarrhea is a primary cause of childhood malnutrition. Further, evidence suggests that diarrheal disease has significant impacts on mental development and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence.
Differential diagnosis
right|thumb|Diagram of the human gastrointestinal tract.
Acute diarrhea is most commonly due to viral gastroenteritis with rotavirus, which accounts for 40% of cases in children under five. In travelers, however, bacterial infections predominate. Various toxins, such as those in mushroom poisoning, and drugs can also cause acute diarrhea.
Chronic diarrhea can be the part of the presentations of a number of chronic medical conditions affecting the intestine. Common causes include ulcerative colitis, Crohn's disease, microscopic colitis, celiac disease, irritable bowel syndrome and bile acid malabsorption.
Infections
There are many causes of infectious diarrhea, which include viruses, bacteria and parasites. Infectious diarrhea is frequently referred to as gastroenteritis. Norovirus is the most common cause of viral diarrhea in adults, but rotavirus is the most common cause in children under five years old. Adenovirus types 40 and 41, and astroviruses cause a significant number of infections.
Campylobacter spp. are a common cause of bacterial diarrhea, but infections by Salmonella spp., Shigella spp. and some strains of Escherichia coli are also a frequent cause.
In the elderly, particularly those who have been treated with antibiotics for unrelated infections, a toxin produced by Clostridium difficile often causes severe diarrhea.
Parasites, particularly protozoa (e.g., Cryptosporidium spp., Giardia spp., Entamoeba histolytica, Blastocystis spp., Cyclospora cayetanensis), are frequently the cause of diarrhea that involves chronic infection. The broad-spectrum antiparasitic agent nitazoxanide has shown efficacy against many diarrhea-causing parasites.
Other infectious agents, such as parasites or bacterial toxins, may exacerbate symptoms. In sanitary living conditions where there is ample food and a supply of clean water, an otherwise healthy person usually recovers from viral infections in a few days. However, for ill or malnourished individuals, diarrhea can lead to severe dehydration and can become life-threatening.
Malabsorption
Malabsorption is the inability to absorb food fully, mostly from disorders in the small bowel, but also due to maldigestion from diseases of the pancreas.
Causes include:
enzyme deficiencies or mucosal abnormality, as in food allergy and food intolerance, e.g. celiac disease (gluten intolerance), lactose intolerance (intolerance to milk sugar, common in non-Europeans), and fructose malabsorption.
pernicious anemia, or impaired bowel function due to the inability to absorb vitamin B12,
loss of pancreatic secretions, which may be due to cystic fibrosis or pancreatitis,
structural defects, like short bowel syndrome (surgically removed bowel) and radiation fibrosis, which usually follows cancer treatment and the use of certain other drugs, including agents used in chemotherapy; and
certain drugs, like orlistat, which inhibits the absorption of fat.
Inflammatory bowel disease
The two overlapping types here are of unknown origin:
Ulcerative colitis is marked by chronic bloody diarrhea, and its inflammation mostly affects the distal colon near the rectum.
Crohn's disease typically affects fairly well demarcated segments of bowel in the colon and often affects the end of the small bowel.
Irritable bowel syndrome
Another possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved by defecation and unusual stool (diarrhea or constipation) for at least 3 days a week over the previous 3 months. Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements, and/or medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid malabsorption diagnosed with an abnormal SeHCAT test.
Other diseases
Diarrhea can be caused by other diseases and conditions, namely:
Chronic ethanol ingestion (Kasper DL, Braunwald E, Fauci AS, Hauser SL, Longo DL, Jameson JL. Harrison's Principles of Internal Medicine. New York: McGraw-Hill, 2005. ISBN 0-07-139140-1.)
Ischemic bowel disease: This usually affects older people and can be due to blocked arteries.
Microscopic colitis, a type of inflammatory bowel disease where changes are only seen on histological examination of colonic biopsies.
Bile salt malabsorption (primary bile acid diarrhea) where excessive bile acids in the colon produce a secretory diarrhea.
Hormone-secreting tumors: some hormones (e.g., serotonin) can cause diarrhea if excreted in excess (usually from a tumor).
Chronic mild diarrhea in infants and toddlers may occur with no obvious cause and with no other ill effects; this condition is called toddler's diarrhea.
Environmental enteropathy
Radiation enteropathy following treatment for pelvic and abdominal cancers.
Causes
Sanitation
thumb|Poverty often leads to unhygienic living conditions, as in this community in the Indian Himalayas. Such conditions promote contraction of diarrheal diseases, as a result of poor sanitation and hygiene.
Open defecation is a leading cause of infectious diarrhea leading to death.
Poverty is a good indicator of the rate of infectious diarrhea in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical care."
Water
One of the most common causes of infectious diarrhea is a lack of clean water. Often, improper fecal disposal leads to contamination of groundwater. This can lead to widespread infection among a population, especially in the absence of water filtration or purification. Human feces contains a variety of potentially harmful human pathogens.
Nutrition
Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea. It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a condition often found in children in developing countries, can, even in mild cases, have a significant impact on the development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever. Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes. However, there is some discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship does not exist between the rate of disease and vitamin A status, others suggest an increase in the rate associated with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this population has the potential for increased risk of disease contraction.
Pathophysiology
Evolution
According to two researchers, Nesse and Williams, diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a delay in recovery. In support of this argument, they cite research published in 1973 which found that treating Shigella with the anti-diarrhea drug co-phenotrope (Lomotil) caused people to stay feverish twice as long as those not so treated. The researchers themselves observed that: "Lomotil may be contraindicated in shigellosis. Diarrhea may represent a defense mechanism".
Diagnostic approach
The following types of diarrhea may indicate further investigation is needed:
In infants
Moderate or severe diarrhea in young children
Associated with blood
Continues for more than two days
Associated non-cramping abdominal pain, fever, weight loss, etc.
In travelers
In food handlers, because of the potential to infect others;
In institutions such as hospitals, child care centers, or geriatric and convalescent homes.
A severity score is used to aid diagnosis in children.
Prevention
Sanitation
Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhea. Such improvements might include, for example, the use of water filters and the provision of high-quality piped water and sewer connections.
In institutions, communities, and households, interventions that promote hand washing with soap lead to significant reductions in the incidence of diarrhea. The same applies to preventing open defecation at a community-wide level and providing access to improved sanitation. This includes use of toilets and implementation of the entire sanitation chain connected to the toilets (collection, transport, disposal or reuse of human excreta).
Hand washing
Basic sanitation techniques can have a profound effect on the transmission of diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally shown to reduce the incidence of disease by approximately 42–48%. Hand washing in developing countries, however, is compromised by poverty as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts of the world; however, access to soap and water is limited in a number of less developed countries. This lack of access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require the implementation of educational programs that encourage sanitary behaviours.
Water
Given that water contamination is a major means of transmitting diarrheal disease, efforts to provide clean water supply and improved sanitation have the potential to dramatically cut the rate of disease incidence. In fact, it has been proposed that we might expect an 88% reduction in child mortality resulting from diarrheal disease as a result of improved water sanitation and hygiene. Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease.
Chlorine treatment of water, for example, has been shown to reduce both the risk of diarrheal disease, and of contamination of stored water with diarrheal pathogens.
Vaccination
Immunization against the pathogens that cause diarrheal disease is a viable prevention strategy, however it does require targeting certain pathogens for vaccination. In the case of Rotavirus, which was responsible for around 6% of diarrheal episodes and 20% of diarrheal disease deaths in the children of developing countries, use of a Rotavirus vaccine in trials in 1985 yielded a slight (2-3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6-10%. Similarly, a Cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination was minimal as Cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective vaccines have been developed that have the potential to save many thousands of lives in developing nations, while reducing the overall cost of treatment, and the costs to society.
A rotavirus vaccine decreases the rates of diarrhea in a population. New vaccines against rotavirus, Shigella, Enterotoxigenic Escherichia coli (ETEC), and cholera are under development, as well as vaccines against other causes of infectious diarrhea.
Nutrition
Dietary deficiencies in developing countries can be combated by promoting better eating practices and by supplementation with vitamin A and/or zinc. Zinc supplementation proved successful, showing a significant decrease in the incidence of diarrheal disease compared to a control group. The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence. Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation was less effective in reducing diarrhea incidence when compared to vitamin A and zinc supplementation, and that the latter strategy was estimated to be significantly more cost effective.
Breastfeeding
Breastfeeding practices have been shown to have a dramatic effect on the incidence of diarrheal disease in poor populations. Studies across a number of developing nations have shown that those who receive exclusive breastfeeding during their first 6 months of life are better protected against infection with diarrheal diseases. Exclusive breastfeeding is currently recommended by the WHO for at least the first six months of an infant's life.
Others
Probiotics decrease the risk of diarrhea in those taking antibiotics.
Management
In many cases of diarrhea, replacing lost fluid and salts is the only treatment needed. This is usually by mouth – oral rehydration therapy – or, in severe cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support the limiting of milk to children as doing so has no effect on duration of diarrhea. To the contrary, WHO recommends that children with diarrhea continue to eat as sufficient nutrients are usually still absorbed to support continued growth and weight gain, and that continuing to eat also speeds up recovery of normal intestinal functioning. CDC recommends that children and adults with cholera also continue to eat.
Medications such as loperamide (Imodium) and bismuth subsalicylate may be beneficial; however they may be contraindicated in certain situations.
Fluids
thumb|A person consuming oral rehydration solution.
Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to a full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter water with one teaspoon salt (3 grams) and two tablespoons sugar (18 grams) added, approximately the "taste of tears" (A Guide on Safe Food for Travellers, Welcome to South Africa, Host to the 2010 FIFA World Cup, bottom left of page 1). The Rehydration Project recommends adding the same amount of sugar but only one-half teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness (Rehydration Project, http://rehydrate.org/, Homemade Oral Rehydration Solution Recipe). Both agree that drinks with too much sugar or salt can make dehydration worse.
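As a purely illustrative sketch, not medical guidance, the scaling arithmetic behind the homemade recipes above can be written out in a few lines of Python. The figures are the ones quoted in the text (about 3 g salt and 18 g sugar per liter for the WHO-style recipe, about 1.5 g salt per liter for the more dilute Rehydration Project variant); the function and recipe names are hypothetical.

# Illustrative sketch only, not medical guidance: scales the homemade ORS
# proportions quoted above (grams per liter of clean water) to a chosen volume.
RECIPES_G_PER_L = {
    "who_handbook": {"salt": 3.0, "sugar": 18.0},        # ~1 tsp salt, ~2 tbsp sugar per liter
    "rehydration_project": {"salt": 1.5, "sugar": 18.0},  # same sugar, half the salt per liter
}

def homemade_ors(volume_l, recipe="who_handbook"):
    """Return grams of salt and sugar for volume_l liters of water."""
    per_liter = RECIPES_G_PER_L[recipe]
    return {ingredient: grams * volume_l for ingredient, grams in per_liter.items()}

# Example: a 0.5-liter batch of the more dilute recipe.
print(homemade_ors(0.5, "rehydration_project"))  # {'salt': 0.75, 'sugar': 9.0}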
Appropriate amounts of supplemental zinc and potassium should be added if available, but the availability of these should not delay rehydration. As WHO points out, the most important thing is to begin preventing dehydration as early as possible. As another example of starting ORS promptly, the CDC recommends that people being treated for cholera continue to be given oral rehydration solution during travel to medical treatment (Community Health Worker Training Materials for Cholera Prevention and Control, CDC, slides at back dated 17 November 2010; page 7 states "... Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently.").
Vomiting often occurs during the first hour or two of treatment with ORS, especially if a child drinks the solution too quickly, but this seldom prevents successful rehydration since most of the fluid is still absorbed. WHO recommends that if a child vomits, to wait five or ten minutes and then start to give the solution again more slowly.
Drinks especially high in simple sugars, such as soft drinks and fruit juices, are not recommended in children under 5 years of age as they may increase dehydration. A too rich solution in the gut draws water from the rest of the body, just as if the person were to drink sea water. Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally, a mix of both plain water and drinks perhaps too rich in sugar and salt can alternatively be given to the same person, with the goal of providing a medium amount of sodium overall. A nasogastric tube can be used in young children to administer fluids if warranted.
Eating
WHO recommends a child with diarrhea continue to be fed. Continued feeding speeds the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer duration and recover intestinal function more slowly. A child should also continue to be breastfed. The WHO states "Food should never be withheld and the child's usual foods should not be diluted. Breastfeeding should always be continued." And in the specific example of cholera, CDC also makes the same recommendation. In young children who are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery.
Medications
While antibiotics are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated diarrhea is the most common adverse effect of treatment with general antibiotics.
While bismuth compounds (Pepto-Bismol) decrease the number of bowel movements in those with travelers' diarrhea, they do not decrease the length of illness. Anti-motility agents like loperamide are also effective at reducing the number of stools but not the duration of disease. These agents should only be used if bloody diarrhea is not present.
Bile acid sequestrants such as cholestyramine can be effective in chronic diarrhea due to bile acid malabsorption. Therapeutic trials of these drugs are indicated in chronic diarrhea if bile acid malabsorption cannot be diagnosed with a specific test, such as SeHCAT retention.
Alternative therapies
Zinc supplementation benefits children with diarrhea in developing countries, but only in infants over six months old. This supports the World Health Organization guidelines for zinc, but not in the very young.
Probiotics reduce the duration of symptoms by one day and reduce the chances of symptoms lasting longer than four days by 60%. The probiotic lactobacillus can help prevent antibiotic-associated diarrhea in adults but possibly not children. For those with lactose intolerance, taking digestive enzymes containing lactase when consuming dairy products often improves symptoms.
Epidemiology
thumb|upright=1.3|Deaths due to diarrhoeal diseases per million persons in 2012
thumb|upright=1.3|Disability-adjusted life year for diarrhea per 100,000 inhabitants in 2004.
Worldwide in 2004, approximately 2.5 billion cases of diarrhea occurred, which resulted in 1.5 million deaths among children under the age of five. Greater than half of these were in Africa and South Asia. This is down from 4.5 million deaths from gastroenteritis in 1980. Diarrhea remains the second leading cause of infant mortality (16%) after pneumonia (17%) in this age group.
The majority of such cases occur in the developing world, with over half of the recorded cases of childhood diarrhea occurring in Africa and Asia, with 696 million and 1.2 billion cases, respectively, compared to only 480 million in the rest of the world.
Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. In the Americas, diarrheal disease accounts for a total of 10% of deaths among children aged 1–59 months while in South East Asia, it accounts for 31.3% of deaths. It is estimated that around 21% of child mortalities in developing countries are due to diarrheal disease.
Etymology
The word diarrhea is from the Ancient Greek διάρροια, from διά "through" and ῥέω "flow".
Diarrhea is the spelling in American English while diarrhoea is the spelling in Commonwealth English.
References
External links
Category:Intestinal infectious diseases
Category:Waterborne diseases
Category:Diseases of intestines
Category:Conditions diagnosed by stool test
Category:Symptoms and signs: Digestive system and abdomen
Category:Feces
Category:RTT
Architecture | thumb|upright=2|alt=View of Florence showing the dome, which dominates everything around it. It is octagonal in plan and ovoid in section. It has wide ribs rising to the apex with red tiles in between and a marble lantern on top.|Brunelleschi, in the building of the dome of Florence Cathedral in the early 15th century, not only transformed the building and the city, but also the role and status of the architect (Museo Galileo, Museum and Institute of History and Science, The Dome of Santa Maria del Fiore, accessed 30 January 2013; Giovanni Fanelli, Brunelleschi, Becocci, Florence (1980), Chapter: The Dome pp. 10-41).
thumb|Section of Beauvais Cathedral, gothic architecture of the 13th century.
Architecture (Latin architectura, from the Greek ἀρχιτέκτων arkhitekton "architect", from ἀρχι- "chief" and τέκτων "builder") is both the process and the product of planning, designing, and constructing buildings and other physical structures. Architectural works, in the material form of buildings, are often perceived as cultural symbols and as works of art. Historical civilizations are often identified with their surviving architectural achievements.
"Architecture" can mean:
A general term to describe buildings and other physical structures.
The art and science of designing buildings and (some) nonbuilding structures.
The style of design and method of construction of buildings and other physical structures.
Knowledge of art, science, technology, and humanity.
The practice of the architect, where architecture means offering or rendering professional services in connection with the design and construction of buildings, or built environments.
The design activity of the architect, from the macro-level (urban design, landscape architecture) to the micro-level (construction details and furniture).
Architecture has to do with planning and designing form, space and ambience to reflect functional, technical, social, environmental, and aesthetic considerations. It requires the creative manipulation and coordination of materials and technology, and of light and shadow. Often, conflicting requirements must be resolved. The practice of architecture also encompasses the pragmatic aspects of realizing buildings and structures, including scheduling, cost estimation and construction administration. Documentation produced by architects, typically drawings, plans and technical specifications, defines the structure and/or behavior of a building or other kind of system that is to be or has been constructed.
The word "architecture" has also been adopted to describe other designed systems, especially in information technology.Shorter Oxford English Dictionary (1993), Oxford, ISBN 0 19 860575 7
Theory of architecture
Historic treatises
thumb|alt=The Parthenon is a rectangular building of white marble with eight columns supporting a pediment at the front, and a long line of columns visible at the side|The Parthenon, Athens, Greece, "the supreme example among architectural sites." (Fletcher).Banister Fletcher, A History of Architecture on the Comparative Method
The earliest surviving written work on the subject of architecture is De architectura, by the Roman architect Vitruvius in the early 1st century AD (D. Rowland – T.N. Howe: Vitruvius. Ten Books on Architecture. Cambridge University Press, Cambridge 1999, ISBN 0-521-00292-3). According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas, commonly known by the original translation – firmness, commodity and delight. An equivalent in modern English would be:
Durability – a building should stand up robustly and remain in good condition.
Utility – it should be suitable for the purposes for which it is used.
Beauty – it should be aesthetically pleasing.
According to Vitruvius, the architect should strive to fulfill each of these three attributes as well as possible.
Leon Battista Alberti, who elaborates on the ideas of Vitruvius in his treatise, De Re Aedificatoria, saw beauty primarily as a matter of proportion, although ornament also played a part. For Alberti, the rules of proportion were those that governed the idealised human figure, the Golden mean.
The most important aspect of beauty was, therefore, an inherent part of an object, rather than something applied superficially, and was based on universal, recognisable truths. The notion of style in the arts was not developed until the 16th century, with the writing of Vasari (Françoise Choay, Alberti and Vitruvius, editor, Joseph Rykwert, Profile 21, Architectural Design, Vol 49 No 5-6); by the 18th century, his Lives of the Most Excellent Painters, Sculptors, and Architects had been translated into Italian, French, Spanish, and English.
thumb|left|alt=The Houses of Parliament in London, seen across the river, are a large Victorian Gothic building with two big towers and many pinnacles|The Houses of Parliament, Westminster, master-planned by Charles Barry, with interiors and details by A.W.N. Pugin
In the early 19th century, Augustus Welby Northmore Pugin wrote Contrasts (1836) that, as the title suggested, contrasted the modern, industrial world, which he disparaged, with an idealized image of the neo-medieval world. Gothic architecture, Pugin believed, was the only "true Christian form of architecture."
The 19th-century English art critic, John Ruskin, in his Seven Lamps of Architecture, published 1849, was much narrower in his view of what constituted architecture. Architecture was the "art which so disposes and adorns the edifices raised by men ... that the sight of them" contributes "to his mental health, power, and pleasure".John Ruskin, The Seven Lamps of Architecture, G. Allen (1880), reprinted Dover, (1989) ISBN 0-486-26145-X
For Ruskin, the aesthetic was of overriding significance. His work goes on to state that a building is not truly a work of architecture unless it is in some way "adorned". For Ruskin, a well-constructed, well-proportioned, functional building needed string courses or rustication, at the very least.
On the difference between the ideals of architecture and mere construction, the renowned 20th-century architect Le Corbusier wrote: "You employ stone, wood, and concrete, and with these materials you build houses and palaces: that is construction. Ingenuity is at work. But suddenly you touch my heart, you do me good. I am happy and I say: This is beautiful. That is Architecture".Le Corbusier, Towards a New Architecture, Dover Publications(1985). ISBN 0-486-25023-7
Le Corbusier's contemporary Ludwig Mies van der Rohe said "Architecture starts when you carefully put two bricks together. There it begins."
thumb|alt= The view shows a 20th-century building with two identical towers very close to each other rising from a low building which has a dome at one end, and an inverted dome, like a saucer, at the other.|The National Congress of Brazil, designed by Oscar Niemeyer
Modern concepts of architecture
The notable 19th-century architect of skyscrapers, Louis Sullivan, promoted an overriding precept to architectural design: "Form follows function".
While the notion that structural and aesthetic considerations should be entirely subject to functionality was met with both popularity and skepticism, it had the effect of introducing the concept of "function" in place of Vitruvius' "utility". "Function" came to be seen as encompassing all criteria of the use, perception and enjoyment of a building, not only practical but also aesthetic, psychological and cultural.
left|thumb|alt=The Sydney Opera House appears to float on the harbour. It has numerous roof-sections which are shaped like huge shining white sails|Sydney Opera House, Australia designed by Jørn Utzon
Nunzia Rondanini stated, "Through its aesthetic dimension architecture goes beyond the functional aspects that it has in common with other human sciences. Through its own particular way of expressing values, architecture can stimulate and influence social life without presuming that, in and of itself, it will promote social development.
To restrict the meaning of (architectural) formalism to art for art's sake is not only reactionary; it can also be a purposeless quest for perfection or originality which degrades form into a mere instrumentality".Rondanini, Nunzia Architecture and Social Change Heresies II, Vol. 3, No. 3, New York, Neresies Collective Inc., 1981.
Among the philosophies that have influenced modern architects and their approach to building design are rationalism, empiricism, structuralism, poststructuralism, and phenomenology.
In the late 20th century a new concept was added to those included in the compass of both structure and function, the consideration of sustainability, hence sustainable architecture. To satisfy the contemporary ethos a building should be constructed in a manner which is environmentally friendly in terms of the production of its materials, its impact upon the natural and built environment of its surrounding area and the demands that it makes upon non-sustainable power sources for heating, cooling, water and waste management and lighting.
History
Origins and vernacular architecture
thumb|alt=A small hut composed entirely of split logs, and raised above the ground on stout upright stumps.|Vernacular architecture in Norway
Building first evolved out of the dynamics between needs (shelter, security, worship, etc.) and means (available building materials and attendant skills). As human cultures developed and knowledge began to be formalized through oral traditions and practices, building became a craft, and "architecture" is the name given to the most highly formalized and respected versions of that craft.
It is widely assumed that architectural success was the product of a process of trial and error, with progressively less trial and more replication as the results of the process proved increasingly satisfactory. What is termed vernacular architecture continues to be produced in many parts of the world. Indeed, vernacular buildings make up most of the built world that people experience every day.
Early human settlements were mostly rural. Due to a surplus in production the economy began to expand resulting in urbanization thus creating urban areas which grew and evolved very rapidly in some cases, such as that of Çatal Höyük in Anatolia and Mohenjo Daro of the Indus Valley Civilization in modern-day Pakistan.
thumb|left|alt=The three main Pyramids at Gizeh shown rising from the desert sands with three smaller pyramids in front of them|The Pyramids at Giza in Egypt
Ancient architecture
In many ancient civilizations, such as those of Egypt and Mesopotamia, architecture and urbanism reflected the constant engagement with the divine and the supernatural, and many ancient cultures resorted to monumentality in architecture to represent symbolically the political power of the ruler, the ruling elite, or the state itself.
The architecture and urbanism of the Classical civilizations such as the Greek and the Roman evolved from civic ideals rather than religious or empirical ones and new building types emerged. Architectural "style" developed in the form of the Classical orders.
Texts on architecture have been written since ancient times. These texts provided both general advice and specific formal prescriptions or canons. Some examples of canons are found in the writings of the 1st-century BCE Roman architect Vitruvius. Some of the most important early examples of canonic architecture are religious.
thumb|alt= The Golden Pavilion is a building of three storeys with encircling balconies and curving roofs, overlooking a tranquil lake and woods|Kinkaku-ji (Golden Pavilion), Kyoto, Japan
Asian architecture
Early Asian writings on architecture include the Kao Gong Ji of China from the 7th–5th centuries BCE; the Shilpa Shastras of ancient India and Manjusri Vasthu Vidya Sastra of Sri Lanka.
The architecture of different parts of Asia developed along different lines from that of Europe; Buddhist, Hindu and Sikh architecture each having different characteristics. Buddhist architecture, in particular, showed great regional diversity. Hindu temple architecture, which developed around the 3rd century BCE, is governed by concepts laid down in the Shastras, and is concerned with expressing the macrocosm and the microcosm. In many Asian countries, pantheistic religion led to architectural forms that were designed specifically to enhance the natural landscape.
thumb|left|alt=The Taj Mahal is a mosque-like structure of white marble with an onion-shaped dome, and a tall marble minaret at each corner|The Taj Mahal (1632–1653), in India
Islamic architecture
Islamic architecture began in the 7th century CE, incorporating architectural forms from the ancient Middle East and Byzantium, but also developing features to suit the religious and social needs of the society. Examples can be found throughout the Middle East, North Africa, Spain and the Indian Sub-continent. The widespread application of the pointed arch was to influence European architecture of the Medieval period.
Middle Ages
thumb|alt=Notre Dame, Paris, is a grand Gothic cathedral with Towers at one end and a small spire rising from the centre of the roof.|Notre Dame de Paris, France
In Europe during the Medieval period, guilds were formed by craftsmen to organise their trades and written contracts have survived, particularly in relation to ecclesiastical buildings. The role of architect was usually one with that of master mason, or Magister lathomorum as they are sometimes described in contemporary documents.
The major architectural undertakings were the buildings of abbeys and cathedrals. From about 900 CE onwards, the movements of both clerics and tradesmen carried architectural knowledge across Europe, resulting in the pan-European styles Romanesque and Gothic.
thumb|left|alt=La Rotunda is a domed domestic building of which two sides can be seen, with identical classical porticos, indicating that it is the same on all sides.|La Rotonda (1567), Italy by Palladio
Renaissance and the architect
In Renaissance Europe, from about 1400 onwards, there was a revival of Classical learning accompanied by the development of Renaissance Humanism which placed greater emphasis on the role of the individual in society than had been the case during the Medieval period. Buildings were ascribed to specific architects – Brunelleschi, Alberti, Michelangelo, Palladio – and the cult of the individual had begun. There was still no dividing line between artist, architect and engineer, or any of the related vocations, and the appellation was often one of regional preference.
A revival of the Classical style in architecture was accompanied by a burgeoning of science and engineering which affected the proportions and structure of buildings. At this stage, it was still possible for an artist to design a bridge as the level of structural calculations involved was within the scope of the generalist.
Early modern and the industrial age
thumb|left|The Maughan Library, the university library of King's College London, was rebuilt between 1851 and the 1890s, but its roots are traceable to 1232. It features the work of British architects Inigo Jones, Sir James Pennethorne and Sir John Taylor
thumb|alt=The Opera House in Paris is an ornate 19th century building decorated with much sculptured detail.|Paris Opera by Charles Garnier (1875), France
With the emerging knowledge in scientific fields and the rise of new materials and technology, architecture and engineering began to separate, and the architect began to concentrate on aesthetics and the humanist aspects, often at the expense of technical aspects of building design. There was also the rise of the "gentleman architect" who usually dealt with wealthy clients and concentrated predominantly on visual qualities derived usually from historical prototypes, typified by the many country houses of Great Britain that were created in the Neo Gothic or Scottish Baronial styles.
Formal architectural training in the 19th century, for example at École des Beaux-Arts in France, gave much emphasis to the production of beautiful drawings and little to context and feasibility. Effective architects generally received their training in the offices of other architects, graduating to the role from draughtsmen or clerks.
Meanwhile, the Industrial Revolution laid open the door for mass production and consumption. Aesthetics became a criterion for the middle class as ornamented products, once within the province of expensive craftsmanship, became cheaper under machine production.
Vernacular architecture became increasingly ornamental. House builders could use current architectural design in their work by combining features found in pattern books and architectural journals.
Modernism
thumb|right|The Bauhaus is a Moderne building of massed rectangular shapes, with the name as a significant decorative element|The Bauhaus Dessau architecture department from 1925 by Walter Gropius
Around the beginning of the 20th century, a general dissatisfaction with the emphasis on revivalist architecture and elaborate decoration gave rise to many new lines of thought that served as precursors to Modern Architecture. Notable among these is the Deutscher Werkbund, formed in 1907 to produce better quality machine made objects. The rise of the profession of industrial design is usually placed here. Following this lead, the Bauhaus school, founded in Weimar, Germany in 1919, redefined the architectural bounds prior set throughout history, viewing the creation of a building as the ultimate synthesis—the apex—of art, craft, and technology.
When modern architecture was first practiced, it was an avant-garde movement with moral, philosophical, and aesthetic underpinnings. Immediately after World War I, pioneering modernist architects sought to develop a completely new style appropriate for a new post-war social and economic order, focused on meeting the needs of the middle and working classes. They rejected the architectural practice of the academic refinement of historical styles which served the rapidly declining aristocratic order. The approach of the Modernist architects was to reduce buildings to pure forms, removing historical references and ornament in favor of functionalist details. Buildings displayed their functional and structural elements, exposing steel beams and concrete surfaces instead of hiding them behind decorative forms.
thumb|left|"Fallingwater" is a house built of horizontal rectangular shapes arranged in a seemingly haphazard way in a natural setting, right over a small waterfall|Fallingwater, organic architecture by Frank Lloyd Wright
Architects such as Frank Lloyd Wright developed Organic architecture, in which the form was defined by its environment and purpose, with an aim to promote harmony between human habitation and the natural world with prime examples being Robie House and Fallingwater.
thumb|alt=The Crystal Cathedral is a built in a modern style with panels of glass set in metal frames making both the walls and roof. A tall tower of the same materials rises beside it|The Crystal Cathedral, California, by Philip Johnson (1980)
Architects such as Mies van der Rohe, Philip Johnson and Marcel Breuer worked to create beauty based on the inherent qualities of building materials and modern construction techniques, trading traditional historic forms for simplified geometric forms, celebrating the new means and methods made possible by the Industrial Revolution, including steel-frame construction, which gave birth to high-rise superstructures. By mid-century, Modernism had morphed into the International Style, an aesthetic epitomized in many ways by the Twin Towers of New York's World Trade Center designed by Minoru Yamasaki.
Postmodernism
Many architects resisted modernism, finding it devoid of the decorative richness of historical styles. As the first generation of modernists began to die after World War II, a second generation of architects including Paul Rudolph, Marcel Breuer, and Eero Saarinen tried to expand the aesthetics of modernism with Brutalism, buildings with expressive sculptural façades made of unfinished concrete. But an even younger postwar generation critiqued modernism and Brutalism for being too austere, standardized, monotone, and not taking into account the richness of human experience offered in historical buildings across time and in different places and cultures.
One such reaction to the cold aesthetic of modernism and Brutalism is the school of metaphoric architecture, which includes such things as biomorphism and zoomorphic architecture, both using nature as the primary source of inspiration and design. While it is considered by some to be merely an aspect of postmodernism, others consider it to be a school in its own right and a later development of expressionist architecture.
Beginning in the late 1950s and 1960s, architectural phenomenology emerged as an important movement in the early reaction against modernism, with architects like Charles Moore in the United States, Christian Norberg-Schulz in Norway, and Ernesto Nathan Rogers and Vittorio Gregotti, Michele Valori, Bruno Zevi in Italy, who collectively popularized an interest in a new contemporary architecture aimed at expanding human experience using historical buildings as models and precedents. Postmodernism produced a style that combined contemporary building technology and cheap materials, with the aesthetics of older pre-modern and non-modern styles, from high classical architecture to popular or vernacular regional building styles. Robert Venturi famously defined postmodern architecture as a "decorated shed" (an ordinary building which is functionally designed inside and embellished on the outside), and upheld it against modernist and brutalist "ducks" (buildings with unnecessarily expressive tectonic forms).
Architecture today
thumb|left|alt=The Railway station in Lisbon has a fibreglass roof supported on piers with radiating arms resembling Gothic columns, arches and vaults|Postmodern design at Gare do Oriente, Lisbon, Portugal, by Santiago Calatrava
Since the 1980s, as the complexity of buildings began to increase (in terms of structural systems, services, energy and technologies), the field of architecture became multi-disciplinary with specializations for each project type, technological expertise or project delivery methods. In addition, there has been an increased separation of the 'design' architect (the one responsible for the design) from the 'project' architect, who ensures that the project meets the required standards and deals with matters of liability (the project architect is responsible for ensuring the design is built correctly and administers building contracts; in non-specialist architectural practices the project architect is also the design architect, and the term refers to the differing roles the architect plays at differing stages of the process). The preparatory processes for the design of any large building have become increasingly complicated, and require preliminary studies of such matters as durability, sustainability, quality, money, and compliance with local laws. A large structure can no longer be the design of one person but must be the work of many.
Modernism and Postmodernism have been criticised by some members of the architectural profession who feel that successful architecture is not a personal, philosophical, or aesthetic pursuit by individualists; rather it has to consider everyday needs of people and use technology to create liveable environments, with the design process being informed by studies of behavioral, environmental, and social sciences.
thumb|alt= A low building has a roof completely covered with soil and grass. It appears to be built into a hillside|Green roof planted with native species at L'Historial de la Vendée, a new museum in western France
Environmental sustainability has become a mainstream issue, with profound effect on the architectural profession. Many developers, those who support the financing of buildings, have become educated to encourage the facilitation of environmentally sustainable design, rather than solutions based primarily on immediate cost. Major examples of this can be found in passive solar building design, greener roof designs, biodegradable materials, and more attention to a structure's energy usage. This major shift in architecture has also changed architecture schools to focus more on the environment. Sustainability in architecture was pioneered by Frank Lloyd Wright, in the 1960s by Buckminster Fuller and in the 1970s by architects such as Ian McHarg and Sim Van der Ryn in the US and Brenda and Robert Vale in the UK and New Zealand. There has been an acceleration in the number of buildings which seek to meet green building sustainable design principles. Sustainable practices that were at the core of vernacular architecture increasingly provide inspiration for environmentally and socially sustainable contemporary techniques. The U.S. Green Building Council's LEED (Leadership in Energy and Environmental Design) rating system has been instrumental in this.Other energy efficiency and green building rating systems include Energy Star, Green Globes, and CHPS (Collaborative for High Performance Schools).
Concurrently, the recent movements of New Urbanism, metaphoric architecture and New Classical Architecture promote a sustainable approach towards construction that appreciates and develops smart growth, architectural tradition and classical design. This is in contrast to modernist and globally uniform architecture, as well as to solitary housing estates and suburban sprawl (Issue Brief: Smart-Growth: Building Livable Communities. American Institute of Architects. Retrieved on 23 March 2014).
See also
Architectural design competition
Architectural drawing
Architectural style
Architectural technology
Architectural theory
Architecture prizes
Building materials
Contemporary architecture
Glossary of architecture
List of human habitation forms
Mathematics and architecture
Organic architecture
Metaphoric Architecture
Zoomorphic architecture
Outline of architecture
Sociology of architecture
Sustainable architecture
Dravidian architecture
Notes
References
External links
World Architecture Community
Architecture.com, published by Royal Institute of British Architects
Architectural centers and museums in the world, list of links from the UIA
Architecture Week
Architecture Arch2O
American Institute of Architects
Glossary of Architecture Terms (with dictionary definitions)
Cities and Buildings Database - Collection of digitized images of buildings and cities drawn from across time and throughout the world from the University of Washington Library
Category:Architectural design
East India Company |
The East India Company (EIC), also known as the Honourable East India Company (HEIC) or the British East India Company and informally as John Company, was an English and later British joint-stock company (the Dutch East India Company was the first to issue public stock), which was formed to pursue trade with the East Indies but ended up trading mainly with the Indian subcontinent and Qing China.
Originally chartered as the "Governor and Company of Merchants of London trading into the East Indies", the company rose to account for half of the world's trade, particularly in basic commodities including cotton, silk, indigo dye, salt, saltpetre, tea and opium. The company also ruled the beginnings of the British Empire in India.
The company received a Royal Charter from Queen Elizabeth I on 31 December 1600, making it the oldest among several similarly formed European East India Companies. (The Register of Letters &c. of the Governor and Company of Merchants of London trading into the East Indies, 1600–1619. On page three, a letter written by Elizabeth I on 23 January 1601 ("Witnes or selfe at Westminster the xxiiijth of Ianuarie in the xliijth yeare of or Reigne.") states, "Haue been pleased to giue lysence vnto or said Subjects to proceed in the said voiadgs, & for the better inabling them to establish a trade into & from the said East Indies Haue by or tres Pattents vnder or great seale of England beareing date at Westminster the last daie of december last past incorporated or said Subjecte by the name of the Gournor & Companie of the merchaunts of London trading into the East Indies, & in the same tres Pattents haue geven them the sole trade of theast Indies for the terme of XVteen yeares ...") Wealthy merchants and aristocrats owned the Company's shares. The government owned no shares and had only indirect control.
The company eventually came to rule large areas of India with its own private armies, exercising military power and assuming administrative functions (this is the argument of Robins, 2006). Company rule in India effectively began in 1757 after the Battle of Plassey and lasted until 1858 when, following the Indian Rebellion of 1857, the Government of India Act 1858 led to the British Crown assuming direct control of India in the form of the new British Raj.
Despite frequent government intervention, the company had recurring problems with its finances. It was dissolved in 1874 as a result of the East India Stock Dividend Redemption Act passed one year earlier, as the Government of India Act had by then rendered it vestigial, powerless, and obsolete. The official government machinery of British India had assumed its governmental functions and absorbed its armies.
Founding
thumb|left|upright|James Lancaster commanded the first East India Company voyage in 1601
Soon after the defeat of the Spanish Armada in 1588, London merchants presented a petition to Queen Elizabeth I for permission to sail to the Indian Ocean. Permission was granted, and despite the defeat of the English Armada in 1589, on 10 April 1591 three ships sailed from Torbay around the Cape of Good Hope to the Arabian Sea on one of the earliest English overseas Indian expeditions. One of them, Edward Bonventure, then sailed around Cape Comorin to the Malay Peninsula and returned to England in 1594.
In 1596, three more ships sailed east; however, these were all lost at sea. Three years later, on 22 September 1599, another group of merchants met and stated their intention "to venture in the pretended voyage to the East Indies (the which it may please the Lord to prosper), and the sums that they will adventure", committing £30,133 (http://www.british-history.ac.uk/report.aspx?compid=68624). Two days later, on 24 September, "the Adventurers" reconvened and resolved to apply to the Queen for support of the project.
Although their first attempt had not been completely successful, they nonetheless sought the Queen's unofficial approval to continue, bought ships for their venture and increased their capital to £68,373. The Adventurers convened again a year later.
This time they succeeded, and on 31 December 1600, the Queen granted a Royal Charter to "George, Earl of Cumberland, and 215 Knights, Aldermen, and Burgesses" under the name, Governor and Company of Merchants of London trading with the East Indies. For a period of fifteen years the charter awarded the newly formed company a monopoly on trade with all countries east of the Cape of Good Hope and west of the Straits of Magellan. Anybody who traded in breach of the charter without a licence from the Company was liable to forfeiture of their ships and cargo (half of which went to the Crown and the other half to the Company), as well as imprisonment at the "royal pleasure".
The governance of the company was in the hands of one governor and 24 directors or "committees", who made up the Court of Directors. They, in turn, reported to the Court of Proprietors, which appointed them. Ten committees reported to the Court of Directors.
Early voyages to the East Indies
Sir James Lancaster commanded the first East India Company voyage in 1601 and returned in 1603.http://thinkingpast.com/seldenmapatlas/eicvoyage1.htm In March 1604 Sir Henry Middleton commanded the second voyage. General William Keeling, a captain during the second voyage, led the third voyage aboard the Red Dragon from 1607 to 1610 along with the Hector under Captain William Hawkins and the Consent under Captain David Middleton.
Early in 1608 Alexander Sharpeigh was appointed captain of the Company's Ascension, and general or commander of the fourth voyage. Thereafter two ships, Ascension and Union (captained by Richard Rowles), sailed from Woolwich on 14 March 1607–8.
Initially, the company struggled in the spice trade because of the competition from the already well-established Dutch East India Company. The company opened a factory in Bantam on the first voyage and imports of pepper from Java were an important part of the company's trade for twenty years. The factory in Bantam was closed in 1683. During this time ships belonging to the company arriving in India docked at Surat, which was established as a trade transit point in 1608.
In the next two years, the company established its first factory in south India in the town of Machilipatnam on the Coromandel Coast of the Bay of Bengal. The high profits reported by the company after landing in India initially prompted King James I to grant subsidiary licences to other trading companies in England. But in 1609 he renewed the charter given to the company for an indefinite period, including a clause that specified that the charter would cease to be in force if the trade turned unprofitable for three consecutive years.
Foothold in India
thumb|Red Dragon fought the Portuguese at the Battle of Swally in 1612, and made several voyages to the East Indies.
English traders frequently engaged in hostilities with their Dutch and Portuguese counterparts in the Indian Ocean. The company achieved a major victory over the Portuguese in the Battle of Swally in 1612, fought at Suvali near Surat. The company decided to explore the feasibility of gaining a territorial foothold in mainland India, with official sanction from both England and the Mughal Empire, and requested that the Crown launch a diplomatic mission.Indian History Sourcebook: England, India, and The East Indies, 1617 A.D
thumb|left|Jahangir investing a courtier with a robe of honour watched by Sir Thomas Roe, English ambassador to the court of Jahangir at Agra from 1615–18, and others
In 1612, James I instructed Sir Thomas Roe to visit the Mughal Emperor Nuruddin Salim Jahangir (r. 1605–1627) to arrange for a commercial treaty that would give the company exclusive rights to reside and establish factories in Surat and other areas. In return, the company offered to provide the Emperor with goods and rarities from the European market. This mission was highly successful, and Jahangir sent a favourable letter to James through Sir Thomas Roe.
Expansion
thumb|East India House, London, painted by Thomas Malton in c.1800
The company, benefiting from imperial patronage, soon expanded its commercial trading operations, eclipsing the Portuguese Estado da Índia, which had established bases in Goa, Chittagong, and Bombay; Portugal later ceded Bombay to England as part of the dowry of Catherine of Braganza. The East India Company also launched a joint attack with the Dutch United East India Company on Portuguese and Spanish ships off the coast of China, which helped secure their ports in China. The company established trading posts in Surat (1619), Madras (1639), Bombay (1668), and Calcutta (1690). By 1647, the company had 23 factories, each under the command of a factor or master merchant and governor if so chosen, and 90 employees in India. The major factories became the walled forts of Fort William in Bengal, Fort St George in Madras, and Bombay Castle.
In 1634, the Mughal emperor extended his hospitality to the English traders in the region of Bengal, and in 1717 completely waived customs duties for the trade. The company's mainstay businesses were by then cotton, silk, indigo dye, saltpetre, and tea. The Dutch were aggressive competitors and had meanwhile expanded their monopoly of the spice trade in the Straits of Malacca by ousting the Portuguese in 1640–41. With reduced Portuguese and Spanish influence in the region, the EIC and Dutch East India Company (VOC) entered a period of intense competition, resulting in the Anglo-Dutch Wars of the 17th and 18th centuries.
Meanwhile, in 1657, Oliver Cromwell renewed the charter of 1609, and brought about minor changes in the holding of the company. The restoration of monarchy in England further enhanced the EIC's status.
In an act aimed at strengthening the power of the EIC, King Charles II granted the EIC (in a series of five acts around 1670) the rights to autonomous territorial acquisitions, to mint money, to command fortresses and troops and form alliances, to make war and peace, and to exercise both civil and criminal jurisdiction over the acquired areas."East India Company" (1911). Encyclopædia Britannica Eleventh Edition, Volume 8, p.835
William Hedges was sent in 1682 to Shaista Khan, the Mughal governor of Bengal, in order to obtain a firman, an imperial directive that would grant England regular trading privileges throughout the Mughal Empire. However, the company's governor in London, Sir Josiah Child, interfered with Hedges's mission, causing Mughal Emperor Aurangzeb to break off the negotiations.
In 1689 a Mughal fleet commanded by Sidi Yaqub attacked Bombay. After a year of resistance the EIC surrendered in 1690, and the company sent envoys to Aurangzeb's camp to plead for a pardon. The company's envoys had to prostrate themselves before the emperor, pay a large indemnity, and promise better behaviour in the future. The emperor withdrew his troops and the company subsequently reestablished itself in Bombay and set up a new base in Calcutta.Europe, 1450 to 1789: Encyclopaedia of the Early Modern World
Japan
thumbnail|right|Document with the original vermilion seal of Tokugawa Ieyasu, granting trade privileges in Japan to the East India Company in 1613
In 1613, during the rule of Tokugawa Hidetada of the Tokugawa Shogunate, the Clove, under the command of Captain John Saris, became the first British ship to call on Japan. Saris was the chief factor of the EIC's trading post in Java, and with the assistance of William Adams, a British sailor who had arrived in Japan in 1600, was able to gain permission from the ruler to establish a commercial house in Hirado on the Japanese island of Kyushu:
We give free license to the subjects of the King of Great Britaine, Sir Thomas Smythe, Governor and Company of the East Indian Merchants and Adventurers forever safely come into any of our ports of our Empire of Japan with their shippes and merchandise, without any hindrance to them or their goods, and to abide, buy, sell and barter according to their own manner with all nations, to tarry here as long as they think good, and to depart at their pleasure.
However, unable to obtain Japanese raw silk for import to China, and with its trading area reduced to Hirado and Nagasaki from 1616 onwards, the Company closed its factory in 1623.
Mughal convoy piracy incident of 1695
In September 1695, Captain Henry Every, an English pirate on board the Fancy, reached the Straits of Bab-el-Mandeb, where he teamed up with five other pirate captains to make an attack on the Indian fleet making the annual voyage to Mocha. The Mughal convoy included the treasure-laden Ganj-i-Sawai, reported to be the greatest in the Mughal fleet and the largest ship operational in the Indian Ocean, and its escort, the Fateh Muhammed. They were spotted passing the straits en route to Surat. The pirates gave chase and caught up with Fateh Muhammed some days later, and meeting little resistance, took some £50,000 to £60,000 worth of treasure.Burgess, Douglas R. (2009). The Pirates' Pact: The Secret Alliances Between History's Most Notorious Buccaneers and Colonial America. New York, NY: McGraw-Hill. ISBN 978-0-07-147476-4
thumb|left|English, Dutch and Danish factories at Mocha
Every continued in pursuit and managed to overhaul Ganj-i-Sawai, which resisted strongly before eventually striking. Ganj-i-Sawai carried enormous wealth and, according to contemporary East India Company sources, was carrying a relative of the Grand Mughal, though there is no evidence to suggest that it was his daughter and her retinue. The loot from the Ganj-i-Sawai had a total value between £325,000 and £600,000, including 500,000 gold and silver pieces, and has become known as the richest ship ever taken by pirates.
In a letter sent to the Privy Council by Sir John Gayer, then governor of Bombay and head of the East India Company, Gayer claims that "it is certain the Pirates ... did do very barbarously by the People of the Ganj-i-Sawai and Abdul Ghaffar's ship, to make them confess where their money was." The pirates set free the survivors who were left aboard their emptied ships, to continue their voyage back to India.
When the news arrived in England it caused an outcry. In response, a combined bounty of £1,000 was offered for Every's capture by the Privy Council and the East India Company, leading to the first worldwide manhunt in recorded history. The plunder of Aurangzeb's treasure ship had serious consequences for the English East India Company. The furious Mughal Emperor Aurangzeb ordered Sidi Yaqub and Nawab Daud Khan to attack and close four of the company's factories in India and to imprison their officers, who were almost lynched by a mob of angry Mughals blaming them for their countrymen's depredations; Aurangzeb also threatened to put an end to all English trading in India. To appease the Emperor, and particularly his Grand Vizier Asad Khan, Parliament exempted Every from all of the Acts of Grace (pardons) and amnesties it would subsequently issue to other pirates.Fox, E. T. (2008). King of the Pirates: The Swashbuckling Life of Henry Every. London: Tempus Publishing. ISBN 978-0-7524-4718-6.
Forming a complete monopoly
Trade monopoly
thumb|Rear view of the East India Company's Factory at Cossimbazar
The prosperity that the officers of the company enjoyed allowed them to return to Britain and establish sprawling estates and businesses, and to obtain political power. The company developed a lobby in the English parliament. Under pressure from ambitious tradesmen and former associates of the company (pejoratively termed Interlopers by the company), who wanted to establish private trading firms in India, a deregulating act was passed in 1694.
This allowed any English firm to trade with India, unless specifically prohibited by act of parliament, thereby annulling the charter that had been in force for almost 100 years. By an act that was passed in 1698, a new "parallel" East India Company (officially titled the English Company Trading to the East Indies) was floated under a state-backed indemnity of £2 million. The powerful stockholders of the old company quickly subscribed a sum of £315,000 in the new concern, and dominated the new body. The two companies wrestled with each other for some time, both in England and in India, for a dominant share of the trade.
It quickly became evident that, in practice, the original company faced scarcely any measurable competition. The companies merged in 1708, by a tripartite indenture involving both companies and the state. Under this arrangement, the merged company lent to the Treasury a sum of £3,200,000, in return for exclusive privileges for the next three years, after which the situation was to be reviewed. The amalgamated company became the United Company of Merchants of England Trading to the East Indies.
thumb|200px|Company painting depicting an official of the East India Company, c. 1760
In the following decades there was a constant battle between the company lobby and the Parliament. The company sought a permanent establishment, while the Parliament would not willingly allow it greater autonomy and so relinquish the opportunity to exploit the company's profits. In 1712, another act renewed the status of the company, though the debts were repaid. By 1720, 15% of British imports were from India, almost all passing through the company, which reasserted the influence of the company lobby. The licence was prolonged until 1766 by yet another act in 1730.
At this time, Britain and France became bitter rivals. Frequent skirmishes between them took place for control of colonial possessions. In 1742, fearing the monetary consequences of a war, the British government agreed to extend the deadline for the licensed exclusive trade by the company in India until 1783, in return for a further loan of £1 million. Between 1756 and 1763, the Seven Years' War diverted the state's attention towards consolidation and defence of its territorial possessions in Europe and its colonies in North America.Thomas, P. D. G. (2008) "Pratt, Charles, first Earl Camden (1714–1794)", Oxford Dictionary of National Biography, Oxford University Press, online edn, accessed 15 February 2008
The war took place on Indian soil, between the company troops and the French forces. In 1757, the Law Officers of the Crown delivered the Pratt-Yorke opinion distinguishing overseas territories acquired by right of conquest from those acquired by private treaty. The opinion asserted that, while the Crown of Great Britain enjoyed sovereignty over both, only the property of the former was vested in the Crown.
With the advent of the Industrial Revolution, Britain surged ahead of its European rivals. Demand for Indian commodities was boosted by the need to sustain the troops and the economy during the war, and by the increased availability of raw materials and efficient methods of production. As home to the revolution, Britain experienced higher standards of living. Its spiralling cycle of prosperity, demand and production had a profound influence on overseas trade. The company became the single largest player in the British global market. William Henry Pyne notes in his book The Microcosm of London (1808) that:
On the 1 March 1801, the debts of the East India Company amounted to £5,393,989, their effects to £15,404,736, and their sales had increased since February 1793 from £4,988,300 to £7,602,041.
Saltpetre trade
thumb|left|Saltpetre used for gunpowder was one of the major trade goods of the company.
Sir John Banks, a businessman from Kent who negotiated an agreement between the king and the company, began his career in a syndicate arranging contracts for victualling the navy, an interest he kept up for most of his life. He knew that Samuel Pepys and John Evelyn had amassed a substantial fortune from the Levant and Indian trades.
He became a Director and later, as Governor of the East India Company in 1672, he arranged a contract which included a loan of £20,000 and £30,000 worth of saltpetre—also known as potassium nitrate, a primary ingredient in gunpowder—for the King "at the price it shall sell by the candle"—that is by auction—where bidding could continue as long as an inch-long candle remained alight.
Outstanding debts were also agreed and the company permitted to export 250 tons of saltpetre. Again in 1673, Banks successfully negotiated another contract for 700 tons of saltpetre at £37,000 between the king and the company. So urgent was the need to supply the armed forces in the United Kingdom, America and elsewhere that the authorities sometimes turned a blind eye to the untaxed sales. One governor of the company was even reported as saying in 1664 that he would rather have the saltpetre made than the tax on salt.SALTPETER the secret salt – Salt made the world go round
Basis for the monopoly
thumb|East India Company silver coin issued during William IV's reign, Indian Museum
thumb|Coins issued by East India Company during reign of Shah Alam II, Indian Museum
Colonial monopoly
thumb|Robert Clive became the first British Governor of Bengal after he had instated Mir Jafar as the Nawab of Bengal.
The Seven Years' War (1756–63) resulted in the defeat of the French forces, limited French imperial ambitions, and stunted the influence of the Industrial Revolution in French territories. Robert Clive, the Governor General, led the company to a victory against Joseph François Dupleix, the commander of the French forces in India, and recaptured Fort St George from the French. The company took this respite to seize Manila in 1762.
By the Treaty of Paris (1763), France regained the five establishments captured by the British during the war (Pondichéry, Mahe, Karikal, Yanam and Chandernagar) but was prevented from erecting fortifications and keeping troops in Bengal (art. XI). Elsewhere in India, the French were to remain a military threat, particularly during the War of American Independence, up to the capture of Pondichéry in 1793 at the outset of the French Revolutionary Wars. Although these small outposts remained French possessions for the next two hundred years, held without any military presence, French ambitions on Indian territories were effectively laid to rest, thus eliminating a major source of economic competition for the company.
East India Company Army and Navy
In its first century and a half, the EIC used a few hundred soldiers as guards. The great expansion came after 1750, when it had 3,000 regular troops. By 1763, it had 26,000; by 1778, it had 67,000. It recruited largely Indian troops, and trained them along European lines.Gerald Bryant, "Officers of the East India Company's army in the days of Clive and Hastings", The Journal of Imperial and Commonwealth History (1978) 6#3 pp 203–27 The military arm of the East India Company quickly developed into a private corporate armed force, used as an instrument of geopolitical power and expansion rather than for its original purpose as a guard force, and it became the most powerful military force in the Indian subcontinent. As it increased in size, the army was divided into the Presidency Armies of Bengal, Madras and Bombay, each recruiting its own integral infantry, cavalry, artillery and horse artillery units. The navy also grew significantly, vastly expanding its fleet; although made up predominantly of heavily armed merchant vessels, called East Indiamen, it also included warships.
Expansion and conquest
The company, fresh from a colossal victory, and with the backing of its own private well-disciplined and experienced army, was able to assert its interests in the Carnatic region from its base at Madras and in Bengal from Calcutta, without facing any further obstacles from other colonial powers.
thumb|left|The Mughal Emperor Shah Alam II, who with his allies fought against the East India Company during his early years (1760–64), only accepting the protection of the British in the year 1803, after he had been blinded by his enemies and deserted by his subjects
It continued to experience resistance from local rulers during its expansion. Robert Clive led company forces against Siraj Ud Daulah, the last independent Nawab of Bengal, Bihar, and Midnapore district in Odisha, to victory at the Battle of Plassey in 1757, resulting in the conquest of Bengal. This victory estranged the British and the Mughals, since Siraj Ud Daulah was a Mughal feudatory ally.
With the gradual weakening of the Marathas in the aftermath of the three Anglo-Maratha wars, the British also secured the Ganges-Jumna Doab, the Delhi-Agra region, parts of Bundelkhand, Broach, some districts of Gujarat, the fort of Ahmadnagar, the province of Cuttack (which included Mughalbandi/the coastal part of Odisha, Garjat/the princely states of Odisha, Balasore Port, and parts of Midnapore district of West Bengal), Bombay (Mumbai) and the surrounding areas, leading to a formal end of the Maratha empire and the firm establishment of the British East India Company in India.
Hyder Ali and Tipu Sultan, the rulers of the Kingdom of Mysore, offered much resistance to the British forces. Having sided with the French during the Revolutionary War, the rulers of Mysore continued their struggle against the company with the four Anglo-Mysore Wars. Mysore finally fell to the company forces in 1799, with the death of Tipu Sultan.
thumb|right|The fall of Tipu Sultan and the Sultanate of Mysore, during the Battle of Seringapatam in 1799
The last vestiges of local administration were restricted to the northern regions of Delhi, Oudh, Rajputana, and Punjab, where the company's presence was ever increasing amidst infighting and offers of protection among the remaining princes. The hundred years from the Battle of Plassey in 1757 to the Indian Rebellion of 1857 were a period of consolidation for the company, which began to function more as an administrator and less as a trading concern.
A cholera pandemic began in Bengal, then spread across India by 1820. About 10,000 British troops and countless Indians died during this pandemic. Between 1760 and 1834 only some 10% of the East India Company's officers survived to take the final voyage home.
In the early 19th century, questions of geopolitical dominance and empire in India remained in the hands of the East India Company.Note: as of 31 December 1600, the official name: Governor and Company of Merchants of London trading with the East Indies The three independent armies of the company's Presidencies, with some locally raised irregular forces, expanded to a total of 280,000 men by 1857. First recruited from mercenaries and low-caste volunteers, the Bengal Army in particular eventually came to be composed largely of high-caste Hindus and landowning Muslims.
Within the Army, British officers, who initially trained at the company's own academy at the Addiscombe Military Seminary, always outranked Indians, no matter how long their service. The highest rank to which an Indian soldier could aspire was Subadar-Major (or Rissaldar-Major in cavalry units), effectively a senior subaltern equivalent. Promotion for both British and Indian soldiers was strictly by seniority, so Indian soldiers rarely reached the commissioned ranks of Jamadar or Subadar before they were middle-aged at best. They received no training in administration or leadership to make them independent of their British officers.
During the wars against the French and their allies in the late eighteenth and early nineteenth centuries, the East India Company's armies were used to seize the colonial possessions of other European nations, including the islands of Réunion and Mauritius.
There was a systemic disrespect in the company for the spreading of Protestantism, although it fostered respect for Hindus and Muslims, castes, and ethnic groups. Tensions between the EIC and local religious and cultural groups grew in the 19th century as the Protestant revival gathered strength in Great Britain. These tensions erupted in the Indian Rebellion of 1857, and the company ceased to exist when it was dissolved under the East India Stock Dividend Redemption Act 1873.
Opium trade
thumb|The Nemesis destroying Chinese war junks during the Second Battle of Chuenpi, 7 January 1841, by Edward Duncan
In the 18th century, Britain had a huge trade deficit with Qing dynasty China, and so in 1773 the Company created a British monopoly on opium buying in Bengal by prohibiting the licensing of opium farmers and private cultivation. The monopoly system established in 1799 continued with minimal changes until 1947.
As the opium trade was illegal in China, Company ships could not carry opium to China. So the opium produced in Bengal was sold in Calcutta on condition that it be sent to China.East India Company Factory Records Sources from the British Library, London Part 1: China and Japan
Despite the Chinese ban on opium imports, reaffirmed in 1799 by the Jiaqing Emperor, the drug was smuggled into China from Bengal by traffickers and agency houses such as Jardine, Matheson & Co and Dent & Co. in amounts averaging 900 tons a year. The proceeds of the drug-smugglers landing their cargoes at Lintin Island were paid into the Company's factory at Canton and by 1825, most of the money needed to buy tea in China was raised by the illegal opium trade.
The Company established a group of trading settlements centred on the Straits of Malacca called the Straits Settlements in 1826 to protect its trade route to China and to combat local piracy. The Settlements were also used as penal settlements for Indian civilian and military prisoners.
In 1838 with the amount of smuggled opium entering China approaching 1,400 tons a year, the Chinese imposed a death penalty for opium smuggling and sent a Special Imperial Commissioner, Lin Zexu, to curb smuggling. This resulted in the First Opium War (1839–42). After the war Hong Kong island was ceded to Britain under the Treaty of Nanking and the Chinese market opened to the opium traders of Britain and other nations. The Jardines and Apcar and Company dominated the trade, although P&O also tried to take a share.
A Second Opium War fought by Britain and France against China lasted from 1856 until 1860 and led to the Treaty of Tientsin, which legalised the importation of opium. Legalisation stimulated domestic Chinese opium production and increased the importation of opium from Turkey and Persia. This increased competition for the Chinese market led to India reducing its opium output and diversifying its exports.
Regulation of the company's affairs
Writers
thumb|The Destruction of Tea at Boston Harbor, 1773|alt=Two ships in a harbour, one in the distance. On board, men stripped to the waist and wearing feathers in their hair are throwing crates overboard. A large crowd, mostly men, is standing on the dock, waving hats and cheering. A few people wave their hats from windows in a nearby building. Monopolistic activity by the company triggered the Boston Tea Party.
The Company employed many junior clerks, known as "writers", to record the details of accounting, managerial decisions, and activities related to the Company, such as minutes of meetings, copies of Company orders and contracts, and filings of reports and copies of ship's logs. Several well-known British scholars and literary men had Company writerships, such as Henry Thomas Colebrooke in India and Charles Lamb in England.
Financial troubles
Though the Company was becoming increasingly bold and ambitious in putting down resisting states, it was growing clearer that it was incapable of governing the vast expanse of the captured territories. The Bengal famine of 1770, in which one-third of the local population died, caused distress in Britain. Military and administrative costs mounted beyond control in British-administered regions in Bengal because of the ensuing drop in labour productivity.
At the same time, there was commercial stagnation and trade depression throughout Europe. The directors of the company attempted to avert bankruptcy by appealing to Parliament for financial help. This led to the passing of the Tea Act in 1773, which gave the Company greater autonomy in running its trade in the American colonies, and allowed it an exemption from tea import duties which its colonial competitors were required to pay.
When the American colonists and tea merchants were told of this Act, they boycotted the Company tea. Although the price of tea had dropped because of the Act, it also validated the Townshend Acts, setting the precedent for the king to impose additional taxes in the future. The arrival of tax-exempt Company tea, undercutting the local merchants, triggered the Boston Tea Party in the Province of Massachusetts Bay, one of the major events leading up to the American Revolution.
Regulating Acts of Parliament
East India Company Act 1773
By the Regulating Act of 1773 (later known as the East India Company Act 1773), the Parliament of Great Britain imposed a series of administrative and economic reforms; this clearly established Parliament's sovereignty and ultimate control over the Company. The Act recognised the Company's political functions and clearly established that the "acquisition of sovereignty by the subjects of the Crown is on behalf of the Crown and not in its own right".
Despite stiff resistance from the East India lobby in parliament and from the Company's shareholders, the Act passed. It introduced substantial governmental control and allowed British India to be formally under the control of the Crown, but leased back to the Company at £40,000 for two years. Under the Act's most important provision, a governing Council composed of five members was created in Calcutta. The three members nominated by Parliament and representing the Government's interest could, and invariably would, outvote the two Company members. The Council was headed by Warren Hastings, the incumbent Governor, who became the first Governor-General of Bengal, with an ill-defined authority over the Bombay and Madras Presidencies.Keay, John (1991). The Honourable Company: A History of the English East India Company. Macmillan Publishing Company, New York p. 385. His nomination, made by the Court of Directors, would in future be subject to the approval of a Council of Four appointed by the Crown. Initially, the Council consisted of Lt. General Sir John Clavering, The Honourable Sir George Monson, Sir Richard Barwell, and Sir Philip Francis.Anthony, Frank. Britain's Betrayal in India: The Story of the Anglo Indian Community. Second Edition. London: The Simon Wallenberg Press, 2007 Pages 18–19, 42, 45.
Hastings was entrusted with the power of peace and war. British judges and magistrates would also be sent to India to administer the legal system. The Governor General and the council would have complete legislative powers. The company was allowed to maintain its virtual monopoly over trade in exchange for the biennial sum and was obligated to export a minimum quantity of goods yearly to Britain. The costs of administration were to be met by the company. The Company initially welcomed these provisions, but the annual burden of the payment contributed to the steady decline of its finances.
East India Company Act 1784 (Pitt's India Act)
The East India Company Act 1784 (Pitt's India Act) had two key aspects:
Relationship to the British government: the bill differentiated the East India Company's political functions from its commercial activities. In political matters the East India Company was subordinated to the British government directly. To accomplish this, the Act created a Board of Commissioners for the Affairs of India, usually referred to as the Board of Control. The members of the Board were the Chancellor of the Exchequer, the Secretary of State, and four Privy Councillors, nominated by the King. The act specified that the Secretary of State "shall preside at, and be President of the said Board".
Internal Administration of British India: the bill laid the foundation for the centralised and bureaucratic British administration of India which would reach its peak at the beginning of the 20th century during the governor-generalship of George Nathaniel Curzon, 1st Baron Curzon.
Pitt's Act was deemed a failure because it quickly became apparent that the boundaries between government control and the company's powers were nebulous and highly subjective. The government felt obliged to respond to humanitarian calls for better treatment of local peoples in British-occupied territories. Edmund Burke, a former East India Company shareholder and diplomat, had been moved to address the situation and had introduced a Regulating Bill in 1783. The bill was defeated amid lobbying by company loyalists and accusations of nepotism in the bill's recommendations for the appointment of councillors.
Act of 1786
thumb|General Lord Cornwallis, receiving two of Tipu Sultan's sons as hostages in the year 1793
The Act of 1786 (26 Geo. 3 c. 16) enacted the demand of Earl Cornwallis that the powers of the Governor-General be enlarged to empower him, in special cases, to override the majority of his Council and act on his own special responsibility. The Act enabled the offices of the Governor-General and the Commander-in-Chief to be jointly held by the same official.
This Act clearly demarcated borders between the Crown and the Company. After this point, the Company functioned as a regularised subsidiary of the Crown, with greater accountability for its actions and reached a stable stage of expansion and consolidation. Having temporarily achieved a state of truce with the Crown, the Company continued to expand its influence to nearby territories through threats and coercive actions. By the middle of the 19th century, the Company's rule extended across most of India, Burma, Malaya, Singapore, and British Hong Kong, and a fifth of the world's population was under its trading influence. In addition, Penang, one of the states in Malaya, became the fourth most important settlement, a presidency, of the Company's Indian territories.Langdon, Marcus; "Penang: The Fourth Presidency of India 1805–1830, Volume One: Ships, Men and Mansions", Areca Books, 2013. ISBN 978-967-5719-07-3
East India Company Act 1793 (Charter Act)
The Company's charter was renewed for a further 20 years by the Charter Act of 1793. In contrast with the legislative proposals of the previous two decades, the 1793 Act was not a particularly controversial measure, and made only minimal changes to the system of government in India and to British oversight of the Company's activities.
thumb|Major-General Wellesley, meeting with Nawab Azim al-Daula, 1805
East India Company Act 1813 (Charter Act)
The aggressive policies of Lord Wellesley and the Marquis of Hastings led to the Company gaining control of all India (except for the Punjab and Sindh), and some part of the then kingdom of Nepal under the Sugauli Treaty. The Indian Princes had become vassals of the Company. But the expense of wars leading to the total control of India strained the Company's finances. The Company was forced to petition Parliament for assistance. This was the background to the Charter Act of 1813 which, among other things:
asserted the sovereignty of the British Crown over the Indian territories held by the Company;
renewed the charter of the company for a further twenty years, but
deprived the company of its Indian trade monopoly except for trade in tea and the trade with China
required the company to keep its commercial and territorial accounts separate and distinct
opened India to missionaries
Government of India Act 1833
The Industrial Revolution in Britain, the consequent search for markets, and the rise of laissez-faire economic ideology form the background to the Government of India Act 1833 (3 & 4 Will. 4 c. 85). The Act:
removed the Company's remaining trade monopolies and divested it of all its commercial functions
renewed for another twenty years the Company's political and administrative authority
invested the Board of Control with full power and authority over the Company. As stated by Professor Sri Ram Sharma, "The President of the Board of Control now became Minister for Indian Affairs."
carried further the ongoing process of administrative centralisation by investing the Governor-General in Council with full power and authority to superintend and control the Presidency Governments in all civil and military matters
initiated a machinery for the codification of laws
provided that no Indian subject of the Company would be debarred from holding any office under the Company by reason of his religion, place of birth, descent or colour
vested the Island of St Helena in the CrownSaint Helena Act 1833
British influence continued to expand; in 1845, Great Britain purchased the Danish colony of Tranquebar. The Company had at various stages extended its influence to China, the Philippines, and Java. It had solved its critical lack of cash needed to buy tea by exporting Indian-grown opium to China. China's efforts to end the trade led to the First Opium War (1839–1842).
English Education Act 1835
The English Education Act, passed by the Council of India in 1835, reallocated funds of the East India Company to be spent on education and literature in India.
Government of India Act 1853
This Act (16 & 17 Vict. c. 95) provided that British India would remain under the administration of the Company in trust for the Crown until Parliament should decide otherwise. It also introduced a system of open competition as the basis of recruitment for civil servants of the company and thus deprived the Directors of their patronage system.M. Laxhimikanth, Public Administration, TMH, Tenth Reprint, 2013
Under the act, for the first time the legislative and executive powers of the governor general's council were separated. It also added six members to the governor general's executive committee.Laxhimikanth, Public Administration, TMH, Tenth Reprint, 2013
Indian Rebellion and disestablishment
thumb|Capture of the last Mughal emperor Bahadur Shah Zafar and his sons by William Hodson in 1857
The Indian Rebellion of 1857 (also known as the Indian Mutiny) resulted in widespread devastation in India: many condemned the East India Company for permitting the events to occur. In the aftermath of the Rebellion, under the provisions of the Government of India Act 1858, the British Government nationalised the Company. The Crown took over its Indian possessions, its administrative powers and machinery, and its armed forces.
The Company remained in existence in vestigial form, continuing to manage the tea trade on behalf of the British Government (and the supply of Saint Helena) until the East India Stock Dividend Redemption Act 1873 came into effect, on 1 January 1874. This Act provided for the formal dissolution of the company on 1 June 1874, after a final dividend payment and the commutation or redemption of its stock.East India Stock Dividend Redemption Act 1873 (36 & 37 Vict. 17) s. 36: "On the First day of June One thousand eight hundred and seventy-four, and on payment by the East India Company of all unclaimed dividends on East India Stock to such accounts as are herein-before mentioned in pursuance of the directions herein-before contained, the powers of the East India Company shall cease, and the said Company shall be dissolved." Where possible, the stock was redeemed through commutation (i.e. exchanging the stock for other securities or money) on terms agreed with the stockholders (ss. 5–8), but stockholders who did not agree to commute their holdings had their stock compulsorily redeemed on 30 April 1874 by payment of £200 for every £100 of stock held (s. 13). The Times commented on the company's dissolution on 8 April 1873.
Establishments in Britain
thumb|The expanded East India House, Leadenhall Street, London, as reconstructed in 1796–1800. A drawing by Thomas Hosmer Shepherd of c.1817.
The Company's headquarters in London, from which much of India was governed, was East India House in Leadenhall Street. After occupying premises in Philpot Lane from 1600 to 1621; in Crosby House, Bishopsgate, from 1621 to 1638; and in Leadenhall Street from 1638 to 1648, the Company moved into Craven House, an Elizabethan mansion in Leadenhall Street. The building had become known as East India House by 1661. It was completely rebuilt and enlarged in 1726–9; and further significantly remodelled and expanded in 1796–1800. It was finally vacated in 1860 and demolished in 1861–62. The site is now occupied by the Lloyd's building.
In 1607, the Company decided to build its own ships and leased a yard on the River Thames at Deptford. By 1614, the yard having become too small, an alternative site was acquired at Blackwall: the new yard was fully operational by 1617. It was sold in 1656, although for some years East India Company ships continued to be built and repaired there under the new owners.
In 1803, an Act of Parliament, promoted by the East India Company, established the East India Dock Company, with the aim of establishing a new set of docks (the East India Docks) primarily for the use of ships trading with India. The existing Brunswick Dock, part of the Blackwall Yard site, became the Export Dock; while a new Import Dock was built to the north. In 1838 the East India Dock Company merged with the West India Dock Company. The docks were taken over by the Port of London Authority in 1909, and closed in 1967.
The East India College was founded in 1806 as a training establishment for "writers" (i.e. clerks) in the Company's service. It was initially located in Hertford Castle, but moved in 1809 to purpose-built premises at Hertford Heath, Hertfordshire. In 1858 the college closed; but in 1862 the buildings reopened as a public school, now Haileybury and Imperial Service College.
thumb|left|Addiscombe Seminary, photographed in c.1859, with cadets in the foreground.
The East India Company Military Seminary was founded in 1809 at Addiscombe, near Croydon, Surrey, to train young officers for service in the Company's armies in India. It was based in Addiscombe Place, an early 18th-century mansion. The government took it over in 1858, and renamed it the Royal Indian Military College. In 1861 it was closed, and the site was subsequently redeveloped.
In 1818, the Company entered into an agreement by which those of its servants who were certified insane in India might be cared for at Pembroke House, Hackney, London, a private lunatic asylum run by Dr George Rees until 1838, and thereafter by Dr William Williams. The arrangement outlasted the Company itself, continuing until 1870, when the India Office opened its own asylum, the Royal India Asylum, at Hanwell, Middlesex.Farrington 1976, pp. 125–32.
The East India Club in London was formed in 1849 for officers of the Company. The Club still exists today as a private gentlemen's club with its club house situated at 16 St. James's Square, London.
Legacy and criticisms
The East India Company had a long-lasting impact on the Indian subcontinent, with both positive and harmful effects. Although dissolved following the rebellion of 1857, it stimulated the growth of the British Empire. Its armies were to become the armies of British India after 1857, and it played a key role in introducing English as an official language in India.
The East India Company was the first company to record the Chinese usage of orange-flavoured tea, which led to the development of Earl Grey tea.
The East India Company introduced a system of merit-based appointments that provided a model for the British and Indian civil service."The Company that ruled the waves", in The Economist, 17–30 December 2011, p. 111.
Widespread corruption and looting of Bengal's resources and treasures during its rule resulted in poverty. Famines, such as the Great Bengal famine of 1770 and subsequent famines during the 18th and 19th centuries, became more widespread, chiefly because of exploitative agriculture promulgated by the policies of the East India Company and the forced cultivation of opium in place of grain.
Flags
The English East India Company flag changed over time, with a canton based on the flag of the contemporary kingdom and a field of 9 to 13 alternating red and white stripes.
From 1600, the canton consisted of a St George's Cross representing the Kingdom of England. With the Acts of Union 1707, the canton was updated to the new Union Flag (an English St George's Cross combined with a Scottish St Andrew's Cross) representing the Kingdom of Great Britain. After the Acts of Union 1800 that joined Ireland with Great Britain to form the United Kingdom, the canton of the East India Company flag was altered accordingly to include a Saint Patrick's Saltire, replicating the updated Union Flag of the United Kingdom of Great Britain and Ireland.
Regarding the field of the flag, there has been much debate and discussion regarding the number and order of the stripes. Historical documents and paintings show many variations from 9 to 13 stripes, with some images showing the top stripe being red and others showing the top stripe being white.
At the time of the American Revolution the East India Company flag was nearly identical to the Grand Union Flag. Historian Charles Fawcett argued that the East India Company Flag inspired the Stars and Stripes.
Coat of arms
thumb|The later coat of arms of the East India Company
The East India Company's original coat of arms was granted in 1600. The arms were as follows:
"Azure, three ships with three masts, rigged and under full sail, the sails, pennants and ensigns Argent, each charged with a cross Gules; on a chief of the second a pale quarterly Azure and Gules, on the 1st and 4th a fleur-de-lis or, on the 2nd and 3rd a leopard or, between two roses Gules seeded Or barbed Vert." The shield had as a crest: "A sphere without a frame, bounded with the Zodiac in bend Or, between two pennants flottant Argent, each charged with a cross Gules, over the sphere the words DEUS INDICAT" (Latin: God Indicates). The supporters were two sea lions (lions with fishes' tails) and the motto was DEO DUCENTE NIL NOCET (Latin: Where God Leads, Nothing Hurts).
The East India Company's arms, granted in 1698, were: "Argent a cross Gules; in the dexter chief quarter an escutcheon of the arms of France and England quarterly, the shield ornamentally and regally crowned Or." The crest was: "A lion rampant guardant Or holding between the forepaws a regal crown proper." The supporters were: "Two lions rampant guardant Or, each supporting a banner erect Argent, charged with a cross Gules." The motto was AUSPICIO REGIS ET SENATUS ANGLIÆ (Latin: By right of the King and the Senate of England).
Ships
thumb|right|Ships in Bombay Harbour, c. 1731
Ships of the East India Company were called East Indiamen or simply "Indiamen".Sutton, Jean (1981) Lords of the East: The East India Company and Its Ships. London: Conway Maritime Some examples include:
Red Dragon (1595)
Doddington (East Indiaman), lost in 1755
Royal Captain, lost on her maiden voyage in 1773
Grosvenor, lost in 1782
General Goddard (1782)
Earl of Abergavenny (1796)
Earl of Mornington (1799), a packet ship
Lord Nelson (1799)
David Clark (1816)
Kent (1820), lost on her third voyage
Nemesis (1839), the first British-built ocean-going iron warship
Agamemnon (1855)
thumb|right|The East Indiaman Royal George, 1779. Royal George was one of the five East Indiamen the Spanish fleet captured in 1780.
During the period of the Napoleonic Wars, the East India Company arranged for letters of marque for its vessels such as the Lord Nelson. This was not so that they could carry cannon to fend off warships, privateers and pirates on their voyages to India and China (that they could do without permission) but so that, should they have the opportunity to take a prize, they could do so without being guilty of piracy. Similarly, the Earl of Mornington, an East India Company packet ship of only six guns, also sailed under a letter of marque.
In addition, the company had its own navy, the Bombay Marine, equipped with warships such as Grappler. These vessels often accompanied vessels of the Royal Navy on expeditions, such as the Invasion of Java (1811).
At the Battle of Pulo Aura, which was probably the company's most notable naval victory, Nathaniel Dance, Commodore of a convoy of Indiamen and sailing aboard the Warley, led several Indiamen in a skirmish with a French squadron, driving them off. Some six years earlier, on 28 January 1797, five Indiamen (the Woodford under Captain Charles Lennox, the Taunton-Castle under Captain Edward Studd, the Canton under Captain Abel Vyvyan, the Boddam under Captain George Palmer, and the Ocean under Captain John Christian Lochner) had encountered Admiral de Sercey and his squadron of frigates. On this occasion the Indiamen also succeeded in bluffing their way to safety, without any shots being fired. Lastly, on 15 June 1795, the General Goddard played a large role in the capture of seven Dutch East Indiamen off St Helena.
The East India Company's (EIC's) ships were well built; as a result, the Royal Navy bought several Company ships to convert to warships and transports. The Earl of Mornington, for example, became HMS Drake.
The company had many ports of call, some of which have seen their names changed over time.
Records
Unlike all other British Government records, the records from the East India Company (and its successor the India Office) are not in The National Archives at Kew, London, but are held by the British Library in London as part of the Asia, Pacific and Africa Collections. The catalogue is searchable online in the Access to Archives catalogues.A2A – Access to Archives Home Many of the East India Company records are freely available online under an agreement that the Families in British India Society has with the British Library. Published catalogues exist of East India Company ships' journals and logs, 1600–1834; and of some of the Company's daughter institutions, including the East India Company College, Haileybury, and Addiscombe Military Seminary.Farrington 1976.
See also
East India Company:
Governor-General of India
Chief Justice of Bengal
Advocate-General of Bengal
Chief Justice of Madras
List of trading companies
East India Company Cemetery in Macau
:Category:Honourable East India Company regiments
General:
British Imperial Lifeline
Carnatic Wars
Commercial Revolution
Political warfare in British colonial India
Trade between Western Europe and the Mughal Empire in the 17th century
Whampoa anchorage
Notes and references
Bibliography
Dalrymple, William (March 2015). The East India Company: The original corporate raiders. "For a century, the East India Company conquered, subjugated and plundered vast tracts of south Asia. The lessons of its brutal reign have never been more relevant." The Guardian
Furber, Holden. John Company at Work: A study of European Expansion in India in the late Eighteenth century (Harvard University Press, 1948)
Misra, B. B. The Central Administration of the East India Company, 1773–1834 (1959), online
Philips, C. H. The East India Company 1784–1834 (2nd ed. 1961), on its internal workings
Riddick, John F. The history of British India: a chronology (2006) excerpt and text search, covers 1599–1947
Riddick, John F. Who Was Who in British India (1998), covers 1599–1947
Robins, Nick (December 2004). The world's first multinational, in the New Statesman
Stern, Philip J. The Company-State: Corporate Sovereignty and the Early Modern Foundations of the British Empire in India (2011) online
External links
Charter of 1600
Seals and Insignias of East India Company
The Secret Trade The basis of the monopoly.
Trading Places – a learning resource from the British Library
Port Cities: History of the East India Company
Ships of the East India Company
Plant Cultures: East India Company in India
History and Politics: East India Company
Nick Robins, "The world's first multinational", 13 December 2004, New Statesman
East India Company: Its History and Results article by Karl Marx, MECW Volume 12, p. 148 in Marxists Internet Archive
Text of East India Company Act 1773
Text of East India Company Act 1784
"The East India Company – a corporate route to Europe" on BBC Radio 4's In Our Time featuring Huw Bowen, Linda Colley and Maria Misra
HistoryMole Timeline: The British East India Company
William Howard Hooker Collection: East Indiaman Thetis Logbook (#472-003), East Carolina Manuscript Collection, J. Y. Joyner Library, East Carolina University
Category:British colonisation of Asia
Category:British Ceylon
Category:Colonial Indian companies
Category:Chartered companies
Category:Defunct companies of England
Category:Former monopolies
Category:Trading companies
Category:Trade monopolies
Category:History of Kolkata
Category:History of Bengal
Category:History of West Bengal
Category:History of India
Category:History of Pakistan
Category:History of foreign trade in China
Category:Mysore invasion of Kerala
Category:Defunct companies of the United Kingdom
Category:Companies established in 1600
Category:Companies disestablished in 1857
Category:1600 establishments in England
Category:1600s establishments in British India
Category:1600s establishments in India
Category:1600 establishments in Asia
Category:1874 disestablishments in the British Empire
Category:1874 disestablishments in British India
Category:1870s disestablishments in India
Category:1874 disestablishments in Asia
Category:Age of Sail | 43,281 | 2017-01 |
Aspirated consonant | In phonetics, aspiration is the strong burst of breath that accompanies either the release or, in the case of preaspiration, the closure of some obstruents. In English, aspirated consonants are allophones in complementary distribution with their unaspirated counterparts, but in some other languages, notably most Indian and East Asian languages, the difference is contrastive, while in Arabic and Persian, all stops are aspirated.
To feel or see the difference between aspirated and unaspirated sounds, one can put a hand or a lit candle in front of one's mouth, and say spin [spɪn] and then pin [pʰɪn]. One should either feel a puff of air or see a flicker of the candle flame with pin that one does not get with spin.
Transcription
In the International Phonetic Alphabet (IPA), aspirated consonants are written using the symbols for voiceless consonants followed by the aspiration modifier letter ⟨ʰ⟩, a superscript form of the symbol for the voiceless glottal fricative ⟨h⟩. For instance, ⟨p⟩ represents the voiceless bilabial stop, and ⟨pʰ⟩ represents the aspirated bilabial stop.
Voiced consonants are seldom actually aspirated. Symbols for voiced consonants followed by ⟨ʰ⟩, such as ⟨bʰ⟩, typically represent consonants with murmured (breathy-voiced) release (see below). In the grammatical tradition of Sanskrit, aspirated consonants are called voiceless aspirated, and breathy-voiced consonants are called voiced aspirated.
There are no dedicated IPA symbols for degrees of aspiration and typically only two degrees are marked: unaspirated ⟨p⟩ and aspirated ⟨pʰ⟩. An old symbol for light aspiration was ⟨ʻ⟩, but this is now obsolete. The aspiration modifier letter may be doubled to indicate especially strong or long aspiration. Hence, the two degrees of aspiration in Korean stops are sometimes transcribed with single and doubled aspiration marks (⟨pʰ⟩, ⟨pʰʰ⟩), but they are usually transcribed ⟨p⟩ and ⟨pʰ⟩, with the details of voice-onset time given numerically (word lists from 1977, 1966, 1975).
Preaspirated consonants are marked by placing the aspiration modifier letter before the consonant symbol: ⟨ʰp⟩ represents the preaspirated bilabial stop.
Unaspirated or tenuis consonants are occasionally marked with the modifier letter for unaspiration ⟨˭⟩, a superscript equals sign: ⟨p˭⟩. Usually, however, unaspirated consonants are left unmarked: ⟨p⟩.
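Because these modifier letters are ordinary Unicode code points, the transcriptions above can be assembled programmatically. The following short Python sketch is an illustration added here rather than part of the original article; the function names are arbitrary, and it simply composes aspirated, preaspirated, and explicitly unaspirated symbols from a base consonant letter:

# Illustrative sketch: building IPA aspiration notation as Unicode strings.
ASPIRATION = "\u02B0"    # 'ʰ' MODIFIER LETTER SMALL H (superscript h)
UNASPIRATED = "\u02ED"   # '˭' MODIFIER LETTER UNASPIRATED (superscript equals sign)

def aspirated(base: str) -> str:
    """Mark aspiration on the release: 'p' -> 'pʰ'."""
    return base + ASPIRATION

def preaspirated(base: str) -> str:
    """Mark preaspiration, before the closure: 'p' -> 'ʰp'."""
    return ASPIRATION + base

def tenuis(base: str) -> str:
    """Explicitly mark a consonant as unaspirated: 'p' -> 'p˭'."""
    return base + UNASPIRATED

if __name__ == "__main__":
    print(aspirated("p"), preaspirated("p"), tenuis("p"))  # pʰ ʰp p˭

Because the marks are combining-style modifier letters rather than markup, the same strings display correctly in any Unicode-aware text, which is why plain-text IPA transcription needs no special formatting.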
Phonetics
Voiceless consonants are produced with the vocal folds open (spread) and not vibrating, and voiced consonants are produced when the vocal folds are fractionally closed and vibrating (modal voice). Voiceless aspiration occurs when the vocal folds remain open after a consonant is released. An easy way to measure this is by noting the consonant's voice-onset time, as the voicing of a following vowel cannot begin until the vocal folds close.
Phonetically in some languages, such as Navajo, aspiration of stops tends to be realised as voiceless velar airflow; aspiration of affricates is realised as an extended length of the frication.
Aspirated consonants are not always followed by vowels or other voiced sounds. For example, in Eastern Armenian, aspiration is contrastive even word-finally, and aspirated consonants occur in consonant clusters. In Wahgi, consonants are aspirated only in final position.
Degree
The degree of aspiration varies: the voice-onset time of aspirated stops is longer or shorter depending on the language or the place of articulation.
Armenian and Cantonese have aspiration that lasts about as long as English aspirated stops, in addition to unaspirated stops. Korean has lightly aspirated stops that fall between the Armenian and Cantonese unaspirated and aspirated stops as well as strongly aspirated stops whose aspiration lasts longer than that of Armenian or Cantonese. (See voice-onset time.)
Aspiration varies with place of articulation. The Spanish voiceless stops /p t k/ have voice-onset times (VOTs) of about 5, 10, and 30 milliseconds, whereas English aspirated /p t k/ have VOTs of about 60, 70, and 80 ms. Voice-onset time in Korean has been measured at 20, 25, and 50 ms for /p t k/ and 90, 95, and 125 ms for /pʰ tʰ kʰ/.
Doubling
When aspirated consonants are doubled or geminated, the stop is held longer and then has an aspirated release. An aspirated affricate consists of a stop, fricative, and aspirated release. A doubled aspirated affricate has a longer hold in the stop portion and then has a release consisting of the fricative and aspiration.
Preaspiration
Icelandic and Faroese have consonants with preaspiration [ʰp ʰt ʰk], and some scholars interpret them as consonant clusters as well. In Icelandic, preaspirated stops contrast with double stops and single stops:
or "zeal"
"hoax"
"opening"
Preaspirated stops also occur in most Sami languages. For example, in Northern Sami, the unvoiced stop and affricate phonemes , , , , are pronounced preaspirated (, , , ) in medial or final position.
Fricative
Although most aspirated obstruents in the world's languages are stops and affricates, aspirated fricatives such as , or have been documented in Korean, in a few Tibeto-Burman languages, in some Oto-Manguean languages, and in the Siouan language Ofo. Some languages, such as Choni Tibetan, have up to four contrastive aspirated fricatives , and .Guillaume Jacques 2011. A panchronic study of aspirated fricatives, with new evidence from Pumi, Lingua 121.9:1518-1538
Voiced consonants with voiceless aspiration
True aspirated voiced consonants, as opposed to murmured (breathy-voiced) consonants such as the [bʱ] that are common in the languages of India, are extremely rare. They have been documented in Kelabit (Robert Blust, 2006, "The Origin of the Kelabit Voiced Aspirates: A Historical Hypothesis Revisited", Oceanic Linguistics 45:311), Taa, and the Kx'a languages. Reported aspirated voiced stops, affricates and clicks are .
Phonology
Aspiration has varying significance in different languages. It is either allophonic or phonemic, and may be analyzed as an underlying consonant cluster.
Allophonic
In some languages, such as English, aspiration is allophonic. Stops are distinguished primarily by voicing, and voiceless stops are sometimes aspirated, while voiced stops are usually unaspirated.
English voiceless stops are aspirated for most native speakers when they are word-initial or begin a stressed syllable, as in pill, till, kill.
They are unaspirated for almost all speakers when immediately following word-initial s, as in spill, still, skill. After an s elsewhere in a word they are normally unaspirated as well, except sometimes in compound words. When the consonants in a cluster like st are analyzed as belonging to different morphemes (heteromorphemic) the stop is aspirated, but when they are analyzed as belonging to one morpheme the stop is unaspirated. For instance, distend has unaspirated [t] since it is not analyzed as two morphemes, but distaste has an aspirated middle [tʰ] because it is analyzed as dis- + taste and the word taste has an aspirated initial t.
Word-final voiceless stops are sometimes aspirated.
Voiceless stops in Pashto are slightly aspirated prevocalically in a stressed syllable.
Phonemic
In many languages, such as Armenian, Korean, Thai, Indo-Aryan languages, Dravidian languages, Icelandic, Ancient Greek, and the varieties of Chinese, tenuis and aspirated consonants are phonemic. Unaspirated consonants like [p] and aspirated consonants like [pʰ] are separate phonemes, and words are distinguished by whether they have one or the other.
Consonant cluster
Alemannic German dialects have unaspirated [p t k] as well as aspirated [pʰ tʰ kʰ]; the latter series are usually viewed as consonant clusters.
Tenseness
In Danish and most southern varieties of German, the "lenis" consonants transcribed for historical reasons as ⟨b d g⟩ are distinguished from their fortis counterparts ⟨p t k⟩ mainly in their lack of aspiration.
Absence
French, Standard Dutch (Frans Hinskens and Johan Taeldeman, Language and Space: Dutch, Walter de Gruyter, 2014, ISBN 3110261332 / 9783110261332, p. 66), Tamil, Italian, Russian, Spanish, Modern Greek, and Latvian are languages that do not have aspirated consonants.
Examples
Chinese
Standard Chinese (Mandarin) has stops and affricates distinguished by aspiration: for instance, /t tʰ/ and /ts tsʰ/. In pinyin, tenuis stops are written with letters that represent voiced consonants in English, and aspirated stops with letters that represent voiceless consonants. Thus d represents /t/, and t represents /tʰ/.
Wu Chinese and Southern Min have a three-way distinction in stops and affricates: /p pʰ b/. In addition to aspirated and unaspirated consonants, there is a series of muddy consonants, like /b/. These are pronounced with slack or breathy voice: that is, they are weakly voiced. Muddy consonants as initials cause a syllable to be pronounced with low pitch or light (陽 yáng) tone.
Indian languages
Many Indo-Aryan languages have aspirated stops. Sanskrit, Hindi, Bengali, Marathi, and Gujarati have a four-way distinction in stops: voiceless, aspirated, voiced, and breathy-voiced or voiced aspirated, such as /p pʰ b bʱ/. Punjabi has lost breathy-voiced consonants, which resulted in a tone system, and therefore has a distinction between voiceless, aspirated, and voiced: /p pʰ b/.
Some of the Dravidian languages, such as Telugu, Tamil, Malayalam, and Kannada, have a distinction between voiced and voiceless, aspirated and unaspirated only in loanwords from Indo-Aryan languages. In native Dravidian words, there is no distinction between these categories and stops are underspecified for voicing and aspiration.
Armenian
Most dialects of Armenian have aspirated stops, and some have breathy-voiced stops.
Classical and Eastern Armenian have a three-way distinction between voiceless, aspirated, and voiced, such as /t tʰ d/.
Western Armenian has a two-way distinction between aspirated and voiced: /tʰ d/. Western Armenian aspirated /tʰ/ corresponds to Eastern Armenian aspirated /tʰ/ and voiced /d/, and Western voiced /d/ corresponds to Eastern voiceless /t/.
Greek
Some forms of Greek before the Koine Greek period are reconstructed as having aspirated stops. The Classical Attic dialect of Ancient Greek had a three-way distinction in stops like Eastern Armenian: /t tʰ d/. These series were called ψιλά, δασέα, μέσα (psilá, daséa, mésa) "smooth, rough, intermediate", respectively, by Koine Greek grammarians.
There were aspirated stops at three places of articulation: labial /pʰ/, coronal /tʰ/, and velar /kʰ/. Earlier Greek, represented by Mycenaean Greek, likely had a labialized velar aspirated stop /kʷʰ/, which later became labial, coronal, or velar depending on dialect and phonetic environment.
The other Ancient Greek dialects, Ionic, Doric, Aeolic, and Arcadocypriot, likely had the same three-way distinction at one point, but Doric seems to have had a fricative in place of in the Classical period, and the Ionic and Aeolic dialects sometimes lost aspiration (psilosis).
Later, during the Koine Greek period, the aspirated and voiced stops of Attic Greek lenited to voiceless and voiced fricatives, yielding /f θ x/ and /v ð ɣ/ in Medieval and Modern Greek.
Other uses
Debuccalization
The term aspiration sometimes refers to the sound change of debuccalization, in which a consonant is lenited (weakened) to become a glottal stop [ʔ] or the glottal fricative [h].
Breathy-voiced release
So-called voiced aspirated consonants are nearly always pronounced instead with breathy voice, a type of phonation or vibration of the vocal folds. The modifier letter ⟨ʰ⟩ after a voiced consonant actually represents a breathy-voiced or murmured stop, as with the "voiced aspirated" bilabial stop ⟨bʰ⟩ in the Indo-Aryan languages. This consonant is therefore more accurately transcribed as ⟨b̤⟩, with the diacritic for breathy voice, or with the modifier letter ⟨ʱ⟩, a superscript form of the symbol for the voiced glottal fricative ⟨ɦ⟩.
Some linguists restrict the double-dot subscript ⟨◌̤⟩ to murmured sonorants, such as vowels and nasals, which are murmured throughout their duration, and use the superscript hook-aitch ⟨ʱ⟩ for the breathy-voiced release of obstruents.
See also
Aspirated h
Voice-onset time
List of phonetic topics
Phonation
Preaspiration
Rough breathing
Smooth breathing
Breathy voice
Tenuis consonant (Unaspirated consonant)
Notes
References
Cho, T., & Ladefoged, P., "Variations and universals in VOT". In Fieldwork Studies of Targeted Languages V: UCLA Working Papers in Phonetics vol. 95. 1997.
Category:Phonetics | 3,134 | 2017-01 |
Valencia | Valencia (; ), officially València (), is the capital of the autonomous community of Valencia and the third largest city in Spain after Madrid and Barcelona, with around 800,000 inhabitants in the administrative centre. Its urban area extends beyond the administrative city limits with a population of around 1.5–1.6 million people.World Urban Areas – Demographia, 2016 Valencia is Spain's third largest metropolitan area, with a population ranging from 1.7 to 2.5 million. The Port of Valencia is the 5th busiest container port in Europe and the busiest container port on the Mediterranean Sea. The city is ranked at Gamma in the Globalization and World Cities Research Network.
Valencia was founded as a Roman colony by the consul Decimus Junius Brutus Callaicus in 138 BC, and called Valentia Edetanorum. In 711 the Muslims occupied the city, introducing their language, religion and customs; they implemented improved irrigation systems and the cultivation of new crops, and the city became the capital of the Taifa of Valencia. In 1238 the Christian king James I of Aragon reconquered the city and divided the land among the nobles who helped him conquer it, as witnessed in the Llibre del Repartiment. He also created a new law for the city, the Furs of Valencia, which were extended to the rest of the Kingdom of Valencia. In the 18th century Philip V of Spain abolished these privileges as punishment to the Kingdom of Valencia for aligning with the Habsburg side in the War of the Spanish Succession. Valencia was the capital of Spain when Joseph I moved the Court there in the summer of 1812, and it was again the capital between 1936 and 1937, during the Second Spanish Republic.
The city is situated on the banks of the Turia, on the east coast of the Iberian Peninsula, fronting the Gulf of Valencia on the Mediterranean Sea. Its historic centre is one of the largest in Spain, with approximately 169 hectares; this heritage of ancient monuments, views and cultural attractions makes Valencia one of the country's most popular tourist destinations.
Valencia is integrated into an industrial area on the Costa del Azahar (Orange Blossom Coast). Valencia's main festival is the Falles. The traditional Spanish dish, paella, originated in Valencia.
Name
Roman cornucopia, symbol of Valentia, found on the floor of a Roman building excavated in the Plaza de la Virgen.
The original Latin name of the city was Valentia (), meaning "strength", or "valour", the city being named according to the Roman practice of recognising the valour of former Roman soldiers after a war. The Roman historian Livy explains that the founding of Valentia in the 2nd century BC was due to the settling of the Roman soldiers who fought against an Iberian rebel, Viriatus.
During the rule of the Muslim kingdoms in Spain, it had the nickname Medina bu-Tarab ('City of Joy') according to a transliteration, or Medina at-Turab (, 'City of Sands') according to another, since it was located on the banks of the River Turia. It is not clear if the term Balansiyya () was reserved for the entire Taifa of Valencia or also designated the city.
By gradual sound changes, Valentia has become Valencia (i.e. before a pausa or nasal sound) or (after a continuant) in Castilian and València in Valencian. In Valencian, the grave accent <è> contrasts with the acute accent <é> —but the word València is an exception to this rule. It is spelled according to Catalan etymology, though its pronunciation is closer to Vulgar Latin.
Geography
Location
Valencia stands on the banks of the Turia River, located on the eastern coast of the Iberian Peninsula and the western part of the Mediterranean Sea, fronting the Gulf of Valencia. At its founding by the Romans, it stood on a river island in the Turia, from the sea. The Albufera, a freshwater lagoon and estuary about south of the city, is one of the largest lakes in Spain. The City Council bought the lake from the Crown of Spain for 1,072,980 pesetas in 1911, and today it forms the main portion of the Parc Natural de l'Albufera (Albufera Nature Reserve), with a surface area of . In 1986, because of its cultural, historical, and ecological value, the Generalitat Valenciana declared it a natural park.
Climate
Valencia has a Mediterranean climate (Köppen Csa) with short, very mild winters and long, hot and dry summers.
Its average annual temperature is . during the day and at night.
In the coldest month – January, the maximum temperature typically during the day ranges from , the minimum temperature typically at night ranges from . In the warmest month – August, the maximum temperature during the day typically ranges from , about at night. Generally, similar temperatures to those experienced in the northern part of Europe in summer last about 8 months, from April to November. March is transitional, the temperature often exceeds , with an average temperature of during the day and at night. December, January and February are the coldest months, with average temperatures around during the day and at night. Valencia has one of the mildest winters in Europe, owing to its southern location on the Mediterranean Sea and the Foehn phenomenon. The January average is comparable to temperatures expected for May and September in the major cities of northern Europe.
Sunshine duration hours are 2,696 per year, from 155 (average nearly 5 hours of sunshine duration at day) in December to 315 (average above 10 hours of sunshine duration at day) in July. The average temperature of the sea is during winters and during summers. Average relative humidity is 60% in April to 68% in August.
Economy
Valencia enjoyed strong economic growth over the last decade, much of it spurred by tourism and the construction industry, with concurrent development and expansion of telecommunications and transport. The city's economy is service-oriented, as nearly 84% of the working population is employed in service sector occupations. However, the city still maintains an important industrial base, with 5.5% of the population employed in this sector. Agricultural activities are still carried on in the municipality, even though of relatively minor importance with only 1.9% of the working population and 3973 hectares planted mostly in orchards and citrus groves.
Since the onset of the 2008 financial crisis, Valencia has been among the Spanish regions most affected, and it has not been able to slow a growing unemployment rate and growing government debt. Severe spending cuts have been introduced by the city authorities.
In 2009, Valencia was the 29th fastest improving European city. Its influence in commerce, education, entertainment, media, fashion, science and the arts contributes to its status as one of the world's "Gamma"-rank global cities.
The large factory of the Ford Motor Company lies in a suburb of the city, Almussafes (Global Operations – Spain: Valencia Body and Assembly – Corporate.ford.com).
The Valencia metropolitan area had a GDP amounting to $52.7 billion, and $28,141 per capita.
Port
Valencia's port is the biggest on the western coast of the Mediterranean, the busiest in Spain for container traffic and the second busiest in Spain for total traffic, handling 20% of Spain's exports. The main exports are foodstuffs and beverages. Other exports include oranges, furniture, ceramic tiles, fans, textiles and iron products. Valencia's manufacturing sector focuses on metallurgy, chemicals, textiles, shipbuilding and brewing. Small and medium-sized industries are an important part of the local economy, and before the current crisis unemployment was lower than the Spanish average.
Valencia's port underwent radical changes to accommodate the 32nd America's Cup in 2007. It was divided into two parts—one was unchanged while the other section was modified for the America's Cup festivities. The two sections remain divided by a wall that projects far into the water to maintain clean water for the America's Cup side.
Transport
Public transport is provided by the Ferrocarrils de la Generalitat Valenciana (FGV), which operates the Metrovalencia and other rail and bus services. The Estació del Nord (North Station) is the main railway terminus in Valencia. A new temporary station, Estació de València-Joaquín Sorolla, has been built on land adjacent to this terminus to accommodate high speed AVE trains to and from Madrid, Barcelona, Seville and Alicante. Valencia Airport is situated west of Valencia city centre. Alicante Airport is situated about south of Valencia.
The City of Valencia also makes available a bicycle sharing system named Valenbisi to both visitors and residents. As of 13 October 2012, the system has 2750 bikes distributed over 250 stations all throughout the city.
Tourism
Starting in the mid-1990s, Valencia, formerly an industrial centre, saw rapid development that expanded its cultural and touristic possibilities, and transformed it into a newly vibrant city. Many local landmarks were restored, including the ancient towers of the medieval city (Serrans Towers and Quart Towers) and the Sant Miquel dels Reis monastery, which now holds a conservation library. Whole sections of the old city, for example the Carmen Quarter, have been extensively renovated. The Passeig Marítim, a long palm-tree-lined promenade, was constructed along the beaches of the north side of the port (Platja de Les Arenes, Platja del Cabanyal and Platja de la Malva-rosa).
The city has numerous convention centres and venues for trade events, among them the Feria Valencia Convention and Exhibition Centre (Institución Ferial de Valencia) and the Palau de congres (Conference Palace), and several 5-star hotels to accommodate business travelers.
In its long history, Valencia has acquired many local traditions and festivals, among them the Falles, which were declared Celebrations of International Touristic Interest (Festes de Interés Turístic Internacional) on 25 January 1965, and the Water Tribunal of Valencia (Tribunal de les Aigües de València), which was declared an intangible cultural heritage of humanity (Patrimoni Cultural Inmaterial de la Humanitat) in 2009. In addition to these Valencia has hosted world-class events that helped shape the city's reputation and put it in the international spotlight, e.g., the Regional Exhibition of 1909, the 32nd and the 33rd America's Cup competitions, the European Grand Prix of Formula One auto racing, the Valencia Open 500 tennis tournament, and the Global Champions Tour of equestrian sports. The final round of the MotoGP Championship is held annually at the Circuito de la Communitat Valenciana.
The 2007 America's Cup yachting races were held at Valencia in June and July 2007 and attracted huge crowds. The Louis Vuitton stage drew 1,044,373 visitors and the America's Cup match drew 466,010 visitors to the event.
Demographics
The third largest city in Spain and the 24th most populous municipality in the European Union, Valencia has a population of 809,267 within its administrative limits on a land area of . The urban area of Valencia extending beyond the administrative city limits has a population of between 1,561,000 and 1,564,145 (Eurostat – Larger Urban Zones: Urban Audit). According to different estimates, 1,705,742 (The Principal Agglomerations of the World – citypopulation.de; AUDES5 urban area and conurbation data, 2006), 2,300,000 (Organization for Economic Cooperation and Development, Competitive Cities in the Global Economy, OECD Territorial Reviews, OECD Publishing, 2006, Table 1.1) or 2,516,818 ("Population by sex and age groups" – Eurostat, 2012) people live in the Valencia metropolitan area. Between 2007 and 2008 there was a 14% increase in the foreign born population with the largest numeric increases by country being from Bolivia, Romania and Italy.
One notable demographic change in Valencia in the last decade has been the growth in the foreign born population, which rose from 1.5% in the year 2000 to 9.1% in 2009, a trend that has also occurred in the two larger cities of Madrid and Barcelona. The main countries of origin were Ecuador, Bolivia, Colombia, Morocco and Romania.
Culture
Valencia is known internationally for the Falles (Les Falles), a local festival held in March, and for paella valenciana, traditional Valencian ceramics, intricate traditional dress, and the architecture of the City of Arts and Sciences designed by Santiago Calatrava and Félix Candela.
La Tomatina, an annual tomato fight, draws crowds to the nearby town of Buñol in August. There are also a number of well-preserved traditional Catholic festivities throughout the year. Holy Week celebrations in Valencia are considered some of the most colourful in Spain.
Valencia was once a venue for the Formula One European Grand Prix, first hosting the event on 24 August 2008, but the race was dropped from the calendar at the beginning of the 2013 season.
The University of Valencia (officially Universitat de València Estudi General) was founded in 1499, being one of the oldest surviving universities in Spain, and the oldest university in the Valencian Community. It was listed as one of the four leading Spanish universities in the 2011 Shanghai Academic Ranking of World Universities.
In 2012, Berklee College of Music opened a new campus at the Palau de les Arts Reina Sofia providing focus on the music of the region through its Mediterranean Music Institute. Since 2003, Valencia also hosts the music courses of Musikeon, the leading musical institution in the Spanish-speaking world.
Languages
Valencia is a bilingual city: Valencian and Spanish are the two official languages. Spanish is official in all of Spain, whereas Valencian is official in the Valencian Country, as well as in Catalonia and the Balearic Islands, where it receives the name of Catalan. Despite the differentiated denomination, the distinct dialectal traits and political tension between Catalonia and the Valencian Country, Catalan and Valencian are mutually intelligible and are considered two varieties of the same language.
Valencian has been historically repressed in favour of Spanish. The effects have been more noticeable in the city proper, whereas the language has remained active in the rural and metropolitan areas. After the Castile-Aragon unification, a Spanish-speaking elite established itself in the city. In more recent history, the establishment of Franco's military and administrative apparatus in Valencia further excluded Valencian from public life. Valencian recovered its official status, prestige and use in education after the transition to democracy in 1978. However, due to industrialisation in recent decades, Valencia has attracted immigration from other regions in Spain, and hence there is also a demographic factor in its declining social use. Due to a combination of these reasons, Valencia has become the bastion of anti-Catalan blaverism, which celebrates Valencian as merely folkloric but rejects the existing standard, which was adapted from Catalan orthography.
Spanish is currently the predominant language in the city proper but, thanks to the education system, most Valencians have basic knowledge of both Spanish and Valencian, and either can be used in the city. Valencia is therefore the second biggest Catalan-speaking city after Barcelona. Institutional buildings and streets are named in Valencian. The city is also home to many pro-Valencian political and civil organisations. Furthermore, education entirely in Valencian is offered in more than 70 state-owned schools in the city, as well as by the University of Valencia across all disciplines.
Food
Valencia is famous for its gastronomic culture. Typical dishes include paella, a simmered rice dish with seafood or meat (chicken and rabbit), fartons, bunyols, the Spanish omelette, pinchos, rosquilletes and squid (calamars).
Valencia is the birthplace of the cold xufa beverage known as orxata, popular in many parts of the world including the Americas.
Festivals
Falles of Valencia
Every year, the five days and nights from March 15 to March 19, called Falles, are a continual festival in Valencia; beginning on March 1, the popular pyrotechnic events called mascletàes start every day at 2:00 pm. The Falles (Fallas in Spanish) is an enduring tradition in Valencia and other towns in the Valencian Community, where it has become an important tourist attraction. The festival began in the 18th century, and came to be celebrated on the night of the feast day of Saint Joseph, the patron saint of carpenters, with the burning of waste planks of wood from their workshops, as well as worn-out wooden objects brought by people in the neighborhood.
This tradition continued to evolve, and eventually the parots were dressed with clothing to look like people—these were the first ninots, with features identifiable as being those of a well-known person from the neighborhood often added as well. In 1901 the city inaugurated the awarding of prizes for the best Falles monuments, and neighborhood groups still vie with each other to make the most impressive and outrageous creations. Their intricate assemblages, placed on top of pedestals for better visibility, depict famous personalities and topical subjects of the past year, presenting humorous and often satirical commentary on them.
History
Roman colony
Valencia is one of the oldest cities in Spain, founded in the Roman period, c. 138 BC, under the name "Valentia Edetanorum". A few centuries later, with the power vacuum left by the demise of the Roman imperial administration, the church assumed the reins of power in the city, coinciding with the first waves of the invading Germanic peoples (Suevi, Vandals and Alans, and later the Visigoths).
Muslim rule
Towers of Serrans, one of the twelve gates that guarded the Christian city walls of Valencia. Built in Valencian Gothic style between 1392 and 1398, this was the gate used by kings to enter the city.
The city surrendered to the invading Moors (Berbers and Arabs) about 714 AD, and the cathedral of Saint Vincent was turned into a mosque. The Castilian nobleman Rodrigo Diaz de Vivar, known as El Cid, in command of a combined Christian and Moorish army, besieged the city beginning in 1092. After the siege ended in May 1094, he ruled the city and its surrounding territory as his own fiefdom for five years from 15 June 1094 to July 1099.
The city remained in the hands of Christian troops until 1102, when the Almoravids retook the city and restored the Muslim religion. Alfonso VI of León and Castile drove them from the city but was unable to hold it. The Almoravid Masdali took possession on 5 May 1109, and the Almohads seized control of it in 1171.
Christian reconquest
In 1238, King James I of Aragon, with an army composed of Aragonese, Catalans, Navarrese and crusaders from the Order of Calatrava, laid siege to Valencia and on 28 September obtained a surrender. Fifty thousand Moors were forced to leave.
The city endured serious troubles in the mid-14th century, including the decimation of the population by the Black Death of 1348 and subsequent years of epidemics — as well as a series of wars and riots that followed.
The 15th century was a time of economic expansion, known as the Valencian Golden Age, in which culture and the arts flourished. Concurrent population growth made Valencia the most populous city in the Crown of Aragon.
Some of the most emblematic buildings of the city were built during this period, including the Serrans Towers (1392), the Silk Exchange (1482), the Micalet and the Chapel of the Kings of the Convent of Sant Domènec. In painting and sculpture, Flemish and Italian trends had an influence on Valencian artists.
Valencia rose to become one of the most influential cities on the Mediterranean in the 15th and 16th centuries, but following the discovery of the Americas, the Valencians, like the Catalans, Aragonese and Majorcans, were prohibited participation in the cross-Atlantic commerce, and with this loss of trade, Valencia eventually suffered an economic crisis.
17th century
Expulsion of the Moriscos at the port of Valencia (el Grau), by Pere Oromig
The crisis deepened during the 17th century with the expulsion in 1609 of the Moriscos, descendants of the Muslim population that had converted to Christianity. The Spanish government systematically forced Moriscos to leave the kingdom for Muslim North Africa. They were concentrated in the former Kingdom of Aragon, and in the Valencia area specifically they made up roughly a third of the total population. The expulsion caused the financial ruin of some of the nobility and the bankruptcy of the Taula de Canvi financial institution in 1613.
18th century
The decline of the city reached its nadir with the War of Spanish Succession (1702–1709), marking the end of the political and legal independence of the Kingdom of Valencia. During the War of the Spanish Succession, Valencia sided with the Habsburg ruler of the Holy Roman Empire, Charles of Austria. On 24 January 1706, Charles Mordaunt, 3rd Earl of Peterborough, 1st Earl of Monmouth, led a handful of English cavalrymen into the city after riding south from Barcelona, captured the nearby fortress at Sagunt, and bluffed the Spanish Bourbon army into withdrawal.
The English held the city for 16 months and defeated several attempts to expel them. After the victory of the Bourbons at the Battle of Almansa on 25 April 1707, the English army evacuated Valencia and Philip V ordered the repeal of the privileges of Valencia as punishment for the kingdom's support of Charles of Austria. By the Nueva Planta decrees (Decretos de Nueva Planta) the ancient Charters of Valencia were abolished and the city was governed by the Castilian Charter.
The Valencian economy recovered during the 18th century with the rising manufacture of woven silk and ceramic tiles. The Palau de Justícia is an example of the affluence manifested in the most prosperous times of Bourbon rule (1758–1802) during the rule of Charles III. The 18th century was the age of the Enlightenment in Europe, and its humanistic ideals influenced such men as Gregory Maians and Perez Bayer in Valencia, who maintained correspondence with the leading French and German thinkers of the time.
19th century
Triumphal welcome of Ferdinand VII of Spain at Valencia, 1814, by Miquel Parra
The 19th century began with Spain embroiled in wars with France, Portugal, and England—but the War of Independence most affected the Valencian territories and the capital city. The repercussions of the French Revolution were still felt when Napoleon's armies invaded the Iberian Peninsula. The Valencian people rose in arms against them on 23 May 1808, aroused by men such as Vicent Doménech el Palleter.
The mutineers seized the Citadel, a Supreme Junta government took over, and on 26–28 June, Napoleon's Marshal Moncey attacked the city with a column of 9,000 French imperial troops in the First Battle of Valencia. He failed to take the city in two assaults and retreated to Madrid. Marshal Suchet began a long siege of the city in October 1811, and after intense bombardment forced it to surrender on 8 January 1812. After the capitulation, the French instituted reforms in Valencia, which became the capital of Spain when the Bonapartist pretender to the throne, José I (Joseph Bonaparte, Napoleon's elder brother), moved the Court there in the middle of 1812. The disaster of the Battle of Vitoria on 21 June 1813 obliged Suchet to quit Valencia, and the French troops withdrew in July.
Ferdinand VII became king after the victorious end of the Peninsular War, which freed Spain from Napoleonic domination. When he returned on 24 March 1814 from exile in France, the Cortes requested that he respect the liberal Constitution of 1812, which seriously limited royal powers. Ferdinand refused and went to Valencia instead of Madrid. Here, on 17 April, General Elio invited the King to reclaim his absolute rights and put his troops at the King's disposition. The king abolished the Constitution of 1812 and dissolved the two chambers of the Spanish Parliament on 10 May. Thus began six years (1814–1820) of absolutist rule, but the constitution was reinstated during the Trienio Liberal, a period of three years of liberal government in Spain from 1820–1823.
On the death of King Ferdinand VII in 1833, Baldomero Espartero became one of the most ardent defenders of the hereditary rights of the king's daughter, the future Isabella II. During the regency of Maria Cristina, Espartero ruled Spain for two years as its 18th Prime Minister from 16 September 1840 to 21 May 1841. City life in Valencia carried on in a revolutionary climate, with frequent clashes between liberals and republicans.
The reign of Isabella II as an adult (1843–1868) was a period of relative stability and growth for Valencia. During the second half of the 19th century the bourgeoisie encouraged the development of the city and its environs; land-owners were enriched by the introduction of the orange crop and the expansion of vineyards and other crops. This economic boom corresponded with a revival of local traditions and of the Valencian language, which had been ruthlessly suppressed from the time of Philip V. Around 1870, the Valencian Renaissance, a movement committed to the revival of the Valencian language and traditions, began to gain ascendancy.
20th century
Palau de l'Exposició (Palacio de la Exposición), site of the Regional Exhibition of 1909
In the early 20th century Valencia was an industrialised city. The silk industry had disappeared, but there was a large production of hides and skins, wood, metals and foodstuffs, this last with substantial exports, particularly of wine and citrus. Small businesses predominated, but with the rapid mechanisation of industry larger companies were being formed. The best expression of this dynamic was in the regional exhibitions, including that of 1909 held next to the pedestrian avenue L'Albereda (Paseo de la Alameda), which depicted the progress of agriculture and industry. Among the most architecturally successful buildings of the era were those designed in the Art Nouveau style, such as the North Station (Estació del Nord) and the Central and Columbus markets.
World War I (1914–1918) greatly affected the Valencian economy, causing the collapse of its citrus exports. The Second Spanish Republic (1931–1939) opened the way for democratic participation and the increased politicisation of citizens, especially in response to the rise of Conservative Front power in 1933. The inevitable march to civil war and the combat in Madrid resulted in the removal of the capital of the Republic to Valencia.
On 6 November 1936, the city became the capital of Republican Spain. It was heavily bombarded by air and sea, and by the end of the war it had survived 442 bombardments, leaving 2,831 dead and 847 wounded, although it is estimated that the death toll was higher. The Republican government moved to Barcelona on 31 October 1937. On 30 March 1939, Valencia surrendered and the Nationalist troops entered the city. The postwar years were a time of hardship for Valencians. During Franco's regime speaking or teaching Valencian was prohibited; in a significant reversal it is now compulsory for every schoolchild in Valencia.
The dictatorship of Franco forbade political parties and began a harsh ideological and cultural repression countenanced and sometimes even led by the Church.
The economy began to recover in the early 1960s, and the city experienced explosive population growth through immigration spurred by the jobs created with the implementation of major urban projects and infrastructure improvements. With the advent of democracy in Spain, the ancient kingdom of Valencia was established as a new autonomous entity, the Valencian Community, the Statute of Autonomy of 1982 designating Valencia as its capital.
Valencia has since then experienced a surge in its cultural development, exemplified by exhibitions and performances at such iconic institutions as the Palau de la Música, the Palacio de Congresos, the Metro, the City of Arts and Sciences (Ciutat de les Arts i les Ciències), the Valencian Museum of Enlightenment and Modernity (Museo Valenciano de la Ilustracion y la Modernidad), and the Institute of Modern Art (Institut Valencià d'Art Modern). The various productions of Santiago Calatrava, a renowned structural engineer, architect, and sculptor and of the architect Félix Candela have contributed to Valencia's international reputation. These public works and the ongoing rehabilitation of the Old City (Ciutat Vella) have helped improve the city's livability and tourism is continually increasing.
21st century
On 9 July 2006, the World Day of Families, during Mass at Valencia's Cathedral and the Our Lady of the Forsaken Basilica, Pope Benedict XVI used the Sant Calze, a 1st-century Middle Eastern artifact that some Catholics believe is the Holy Grail. It was supposedly brought to that church by Emperor Valerian in the 3rd century, after having been brought by St. Peter to Rome from Jerusalem. The Sant Calze (Holy Chalice) is a simple, small stone cup. Its base was added in medieval times and consists of fine gold, alabaster and gem stones.
Valencia was selected in 2003 to host the historic America's Cup yacht race, the first European city ever to do so. The America's Cup matches took place from April to July 2007. On 3 July 2007, Alinghi defeated Team New Zealand to retain the America's Cup. Twenty-two days later, on 25 July 2007, the leaders of the Alinghi syndicate, holder of the America's Cup, officially announced that Valencia would be the host city for the 33rd America's Cup, held in June 2009.Announcement of the election as host city for 33rd America's Cup
From 1991 to 2015 the City Council was governed by the People's Party (Partido Popular, PP) under Mayor Rita Barberá Nolla, who first became mayor through a pact with the Valencian Union.
Main sights
Major monuments include Valencia Cathedral, the Torres de Serrans, the Torres de Quart (:es:Torres de Quart), the Llotja de la Seda (declared a World Heritage Site by UNESCO in 1996), and the Ciutat de les Arts i les Ciències (City of Arts and Sciences), an entertainment-based cultural and architectural complex designed by Santiago Calatrava and Félix Candela. The Museu de Belles Arts de València houses a large collection of paintings from the 14th to the 18th centuries, including works by Velázquez, El Greco, and Goya, as well as an important series of engravings by Piranesi. The Institut Valencià d'Art Modern (Valencian Institute of Modern Art) houses both permanent collections and temporary exhibitions of contemporary art and photography.
Architecture
The ancient winding streets of the Barrio del Carmen contain buildings dating to Roman and Arabic times. The Cathedral, built between the 13th and 15th centuries, is primarily of Gothic style but contains elements of Baroque and Romanesque architecture. Beside the Cathedral is the Gothic Basilica of the Virgin (Basílica De La Mare de Déu dels Desamparats). The 15th-century Serrans and Quart towers are part of what was once the wall surrounding the city.
The modernist Mercado Central market, built in 1914.
UNESCO has recognised the Silk Exchange market (La Llotja de la Seda), erected in early Valencian Gothic style, as a World Heritage Site. The modernist Central Market (Mercat Central) is one of the largest in Europe. The main railway station Estació Del Nord is built in modernisme (the Spanish version of Art Nouveau) style.
World-renowned (and city-born) architect Santiago Calatrava produced the futuristic City of Arts and Sciences (Ciutat de les Arts i les Ciències), which contains an opera house/performing arts centre, a science museum, an IMAX cinema/planetarium, an oceanographic park and other structures such as a long covered walkway and restaurants. Calatrava is also responsible for the bridge named after him in the centre of the city. The Music Palace (Palau De La Música) (:es:Palacio de la Música de Valencia) is another noteworthy example of modern architecture in Valencia.
The cathedral
Northern view of the cathedral: dome, apse, and the Basilica of Our Lady
The Valencia Cathedral was called Iglesia Major in the early days of the Reconquista, then Iglesia de la Seu (Seu is from the Latin sedes, i.e., (archiepiscopal) See), and by virtue of the papal concession of 16 October 1866, it was called the Basilica Metropolitana. It is situated in the centre of the ancient Roman city where some believe the temple of Diana stood. In Gothic times, it seems to have been dedicated to the Holy Saviour; the Cid dedicated it to the Blessed Virgin; King James I of Aragon did likewise, leaving in the main chapel the image of the Blessed Virgin, which he carried with him and is reputed to be the one now preserved in the sacristy. The Moorish mosque, which had been converted into a Christian Church by the conqueror, was deemed unworthy of the title of the cathedral of Valencia, and in 1262 Bishop Andrés de Albalat laid the cornerstone of the new Gothic building, with three naves; these reach only to the choir of the present building. Bishop Vidal de Blanes built the chapter hall, and James I added the tower, called El Micalet because it was blessed on St. Michael's day in 1418. The tower is about high and is topped with a belfry (1660–1736).
In the 15th century the dome was added and the naves extended back of the choir, uniting the building to the tower and forming a main entrance. Archbishop Luis Alfonso de los Cameros began the building of the main chapel in 1674; the walls were decorated with marbles and bronzes in the Baroque style of that period. At the beginning of the 18th century the German Conrad Rudolphus built the façade of the main entrance. The other two doors lead into the transept; one, that of the Apostles in pure pointed Gothic, dates from the 14th century, the other is that of the Palau. The additions made to the back of the cathedral detract from its height. The 18th-century restoration rounded the pointed arches, covered the Gothic columns with Corinthian pillars, and redecorated the walls.
Sitting of the Tribunal de las Aguas outside the Portal of the Apostles of the Valencia Cathedral
The dome has no lantern, its plain ceiling being pierced by two large side windows. There are four chapels on either side, besides that at the end and those that open into the choir, the transept, and the sanctuary. It contains many paintings by eminent artists. A silver reredos, which was behind the altar, was carried away in the war of 1808, and converted into coin to meet the expenses of the campaign. There are two paintings by Francisco de Goya in the San Francesco chapel. Behind the Chapel of the Blessed Sacrament is a small Renaissance chapel built by Calixtus III. Beside the cathedral is the chapel dedicated to the Our Lady of the Forsaken (Mare de Déu dels desamparats).
The Tribunal de les Aigües (Water Court), a court dating from Moorish times that hears and mediates in matters relating to irrigation water, sits at noon every Thursday outside the Porta dels Apostols (Portal of the Apostles).
Hospital
In 1409, a hospital was founded and placed under the patronage of Santa Maria dels Innocents; to this was attached a confraternity devoted to recovering the bodies of the unfriended dead in the city and within a radius of around it. At the end of the 15th century this confraternity separated from the hospital, and continued its work under the name of "Cofradia para el ámparo de los desamparados". King Philip IV of Spain and the Duke of Arcos suggested the building of the new chapel, and in 1647 the Viceroy, Conde de Oropesa, who had been preserved from the bubonic plague, insisted on carrying out their project. The Blessed Virgin was proclaimed patroness of the city under the title of Virgen de los desamparados (Virgin of the Forsaken), and Archbishop Pedro de Urbina, on 31 June 1652, laid the cornerstone of the new chapel of this name. The archiepiscopal palace, a grain market in the time of the Moors, is simple in design, with an inside cloister and a handsome chapel. In 1357, the arch that connects it with the cathedral was built. Inside the council chamber are preserved the portraits of all the prelates of Valencia.
Medieval churches
Sant Joan del Mercat – a Gothic parish church dedicated to John the Baptist and John the Evangelist, rebuilt in Baroque style after a 1598 fire. The interior ceilings were frescoed by Palomino.
Sant Nicolau
Santa Caterina
San Esteban
El Temple (the Temple), the ancient church of the Knights Templar, which passed into the hands of the Order of Montesa and was rebuilt in the reigns of Ferdinand VI and Charles III; the former convent of the Dominicans, at one time the headquarters of the Capitan General, the cloister of which has a beautiful Gothic wing and a chapter room with large columns imitating palm trees; the Colegio del Corpus Christi, which is devoted to the Blessed Sacrament and in which perpetual adoration is carried on; the Jesuit college, which was destroyed in 1868 by the revolutionary Committee of the Popular Front, but later rebuilt; and the Colegio de San Juan (also of the Society), the former college of the nobles, now a provincial institute for secondary instruction.
Squares and gardens
The largest plaza in Valencia is the Plaça del Ajuntament; it is home to the City Hall (Ajuntament) on its western side and the central post office (Edifici de Correus) on its eastern side, a cinema that shows classic movies, and many restaurants and bars. The plaza is triangular in shape, with a large cement lot at the southern end, normally surrounded by flower vendors. It serves as ground zero during the Les Falles when the fireworks of the Mascletà can be heard every afternoon. There is a large fountain at the northern end.
The Plaça de la Mare de Déu contains the Basilica of the Virgin and the Turia fountain, and is a popular spot for locals and tourists. Around the corner is the Plaça de la Reina, with the Cathedral, orange trees, and many bars and restaurants.
The Turia River was diverted in the 1960s, after severe flooding, and the old riverbed is now the Turia gardens, which contain a children's playground, a fountain, and sports fields. The Palau de la Música is adjacent to the Turia gardens and the City of Arts and Sciences lies at one end. The Valencia Bioparc is a zoo, also located in the Turia riverbed.
Other gardens in Valencia include:
The Jardíns de Monfort (:es:Jardines de Monforte).
The Jardí Botànic (Botanical Gardens).
The Jardíns del Real or Jardíns de Vivers (Del Real Gardens), located in the Pla del Real district on the former site of the Del Real Palace.
The Ciutat de les Arts i les Ciències complex, designed by the Valencian architect Santiago Calatrava and the Madrid-born architect Félix Candela.
Museums
L'Oceanogràfic, located within the Ciutat de les Arts i les Ciències complex, is currently the largest aquarium in Europe; it houses 45,000 animals of 500 different species.
Ciutat de les Arts i les Ciències (City of Arts and Sciences). Designed by the Valencian architect Santiago Calatrava, it is situated in the former Túria river-bed and comprises the following monuments:
Palau de les Arts Reina Sofía, a flamboyant opera and music palace with four halls and a total area of .
L'Oceanogràfic, the largest aquarium in Europe, with a variety of marine life from different environments: Mediterranean species, ocean fish and reef inhabitants, sharks, swarms of mackerel, a dolphinarium, inhabitants of the polar regions (belugas, walruses, penguins) and of the coast (sea lions). L'Oceanogràfic also exhibits smaller animals such as coral, jellyfish and sea anemones.
El Museu de les Ciències Príncipe Felipe, an interactive museum of science resembling the skeleton of a whale. It has an area of around over three floors.
L'Hemisfèric, an IMAX cinema.
Museu de Prehistòria de València (Prehistory Museum of Valencia)
Museu Valencià d'Etnologia (Valencian Museum of Ethnology)
House Museum Blasco Ibáñez
IVAM – Institut Valencià d'Art Modern – Centre Julio González (Julio González Centre – Valencian Institute of Modern Art)
Museu de Belles Arts de València (Museum of Fine Arts)
Museu Faller (Falles Museum)
Museu d'Història de València (Valencia History Museum)
Museu Taurí de València (Bullfighting Museum)
MuVIM – Museu Valencià de la Il·lustració i la Modernitat (Valencian Museum of Enlightenment and Modernity)
González Martí National Museum of Ceramics and Decorative Arts
Computer Museum – located within the Technical School of Computer Engineering (Polytechnic University of Valencia)
Sport
Club | League | Sport | Venue | Established | Capacity
Valencia C.F. | La Liga | Football | Mestalla | 1919 | 55,000
Levante UD | Segunda División | Football | Estadi Ciutat de València | 1909 | 25,354
Huracán Valencia | Segunda División B | Football | Municipal de Manises | 2011 | 1,000
Valencia CF Mestalla | Segunda División B | Football | Ciudad Deportiva de Paterna | 1944 | 4,000
Valencia Basket Club | ACB | Basketball | Pabellón Fuente San Luis | 1986 | 9,000
Valencia Giants | LNFA | American football | Instalaciones polideportivas del Saler | 2003 |
Valencia Firebats | LNFA | American football | Estadio Municipal Jardín del Turia | 1993 |
Valencia FS | Tercera División | Futsal | San Isidro | 1983 | 500
Les Abelles | División de Honor B | Rugby Union | Polideportivo Quatre carreres | 1971 | 500
CAU Rugby Valencia | División de Honor B | Rugby Union | Campo del Río Turia | 1973 | 750
Rugby Club Valencia | División de Honor B | Rugby Union | Polideportivo Quatre carreres | 1966 | 500
Football
Valencia is also internationally famous for its football club, Valencia C.F., which won the Spanish league in 2002 and 2004 (the year it also won the UEFA Cup), for a total of six league titles, and was a UEFA Champions League runner-up in 2000 and 2001. The club is currently owned by Peter Lim, a Singaporean businessman who bought it in 2014. The team's stadium is the Mestalla; its city rival Levante UD plays in the Segunda División after being relegated in 2016, and its stadium is the Estadi Ciutat de València.
American Football
Valencia is the only city in Spain with two American football teams in LNFA Serie A, the national first division: Valencia Firebats and Valencia Giants. The Firebats have been national champions three times and have represented Valencia and Spain in the European playoffs since 2005. Both teams share the Jardín del Turia stadium.
Motor sports
Once a year between 2008 and 2012, the European Formula One Grand Prix took place on the Valencia Street Circuit. Valencia is, along with Barcelona, Porto and Monte Carlo, one of the few European cities ever to have hosted Formula One World Championship Grands Prix on public roads in the middle of the city. The final race, the 2012 European Grand Prix, saw an extremely popular winner, as home driver Fernando Alonso won for Ferrari in spite of starting halfway down the field. The Valencian Community motorcycle Grand Prix (Gran Premi de la Comunitat Valenciana de motociclisme), part of the Grand Prix motorcycle racing season, is held in November at the Circuit Ricardo Tormo (also known as the Circuit de Valencia). Periodically the Spanish round of the Deutsche Tourenwagen Masters (DTM) touring car championship is held in Valencia.
Rugby League
Valencia is also the home of the Asociación Española de Rugby League, the governing body for rugby league in Spain. The city hosts a number of clubs playing the sport and to date has hosted all the country's home international matches. In 2015 Valencia hosted its first match in the Rugby League European Federation C competition, a qualifier for the 2017 Rugby League World Cup; Spain won the fixture 40–30.
People born in Valencia and Valencia province
Ibn al-Abbar (1199–1260), poet and diplomat
Concepción Aleixandre, educator and gynecologist
Pope Alexander VI, Pope from 1492 to 1503
Alfonso III, King of Aragon and Count of Barcelona (as Alfons II)
Juan Bautista Bayuco, 17th-century painter
Josep Maria Bayarri, linguist, poet and writer
José Benlliure y Gil, painter
Vicente Blasco Ibáñez (1867–1928), Spanish realist novelist writing in Spanish, a screenwriter and occasional film director
Nino Bravo (birth name, Luis Manuel Ferri Llopis) (1944–1973), popular singer
Santiago Calatrava, internationally recognised and award-winning architect
Pope Callixtus III, Pope from 1455 to 1458
Guillén de Castro (1569–1631), famous Spanish writer of the Spanish Golden Age
Antonio José Cavanilles, taxonomic botanist
Victor Claver, basketball player
María Teresa Fernández de la Vega, Spanish Socialist Workers' Party politician and the first female First Deputy Prime Minister of Spain
Saint Vincent Ferrer, Dominican missionary and logician
Joan Fuster, philologist, historian and writer
Vicente Gandia (1935–2009), painter, artist
Luis García Berlanga, film director and screenwriter
Rafael Guastavino, architect and builder, creator of the Guastavino tile
José Iturbi, conductor and pianist
King James II of Aragon
Salvador Larroca, comic book artist
Joaquín Lloréns Fernández de Cordoba, Carlist soldier and politician
Joaquín Manglano y Cucaló, city mayor (1939–1943) and Carlist politician
Ausiàs March, poet
Joanot Martorell (1413–1468), knight and writer, author of the novel Tirant lo Blanch
Fernando Miranda y Casellas, Spanish-American sculptor and illustrator (1842–1925)
Manuel Palau, music composer
Antonio Peris Carbonell, Spanish expressionist painter and sculptor
King Peter III of Aragon (Peter the Great)
Raimon, composer and singer
Joaquín Rodrigo, music composer
Joan Roís de Corella, poet and writer
Ricardo Samper (1881–1938), politician
Manuel Sanchis i Guarner, philologist, historian and writer
Luis de Santángel (d. 1498), finance minister to the Catholic Monarchs
Enrique Simonet, painter
Josu De Solaun Soto, classical music pianist
Joaquin Sorolla, painter, who excelled in the painting of portraits, landscapes, and monumental works of social and historical themes
Francisco Tárrega, influential Spanish composer and guitarist
Ramón Tebar, conductor and pianist
Enric Valor i Vives, grammarian and writer
Joan Lluís Vives, scholar and humanist
Districts
Towers of Quart, a city gate built by Francesc Baldomar and Pere Compte between 1441 and 1460.
Ciutat Vella: La Seu, La Xerea, El Carmen, El Pilar, El Mercado, San Francisco.
Eixample: Russafa, El Pla del Remei, Gran Via.
Extramurs: El Botànic, La Roqueta, La Pechina, Arrancapins.
Campanar: Campanar, Les Tendetes, El Calvari, Sant Pau.
La Saïdia: Marxalenes, Morvedre, Trinitat, Tormos, Sant Antoni.
Pla del Real: Exposició, Mestalla, Jaume Roig, Ciutat Universitària
Olivereta: Nou Moles, Soternes, Tres Forques, La Fontsanta, La Luz.
Patraix: Patraix, Sant Isidre, Vara de Quart, Safranar, Favara.
Jesús: La Raiosa, L'Hort de Senabre, La Creu Coberta, Sant Marcel·lí, Camí Real.
Quatre Carreres: Montolivet, En Corts, Malilla, La Font de Sant Lluís, Na Rovella, La Punta, Ciutat de les Arts i les Ciències.
Poblats Marítims: El Grau, El Cabanyal, El Canyamelar, La Malva-rosa, Beteró, Nazaret.
Camins del Grau: Aiora, Albors, Creu del Grau, Camí Fondo, Penya-Roja.
Algiròs: Illa Perduda, Ciutat Jardí, Amistat, Vega Baixa, la Carrasca.
Benimaclet: Benimaclet, Camí de Vera.
Rascanya: Orriols, Torrefiel, Sant Llorenç.
Benicalap: Benicalap, Ciutat Fallera.
Other towns within the municipality of Valencia
These towns are administratively part of the districts of Valencia.
Towns to the north: Benifaraig, Poble Nou, Carpesa, Cases de Bàrcena, Mauella, Massarrojos, Borbotó.
Towns to the west: Benimàmet, Beniferri.
Towns to the south: Forn d'Alcedo, Castellar-l'Oliveral, Pinedo, el Saler, el Palmar, El Perellonet, la Torre.
Twin towns and sister cities
Valencia is twinned with:
Mainz, Germany, since 4 August 1978
Bologna, Italy, since 29 June 1979
Veracruz, Mexico, since 26 September 1984
Sacramento, USA, since 29 June 1989
Valencia, Venezuela, since 20 March 1982
Odessa, Ukraine, since 13 May 1982
See also
Archdiocese of Valencia
List of tallest buildings in Valencia
Nou Mestalla
Valencia City Council elections
References
Bibliography
Attribution: This article incorporates information from the equivalent article on the Catalan Wikipedia and from the equivalent article on the Spanish Wikipedia.
Notes
Further reading
External links
Official website of the city of Valencia (Valencian)
Official tourism website of the city of Valencia (Valencian)
Official website of the Community Valenciana tourism
Valencia-La Ciudad de las Artes y de las Ciencias
Category:Comarques of the Valencian Community
Category:Former national capitals
Category:Mediterranean port cities and towns in Spain
Category:Municipalities in the Province of Valencia
Category:Populated coastal places in Spain
Category:Populated places established in the 2nd century BC
Category:Roman sites in Spain
Category:138 BC
Category:130s BC establishments
Category:Populated places in the Province of Valencia
Category:Route of the Borgias
Category:University towns in Spain
Category:130s BC establishments in Europe
Category:Coloniae (Roman)
Gene
A gene is a locus (or region) of DNA which is made up of nucleotides and is the molecular unit of heredity.Slack, J.M.W. Genes-A Very Short Introduction. Oxford University Press 2014 The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye colour or number of limbs, and some are not, such as blood type, risk for specific diseases, or the thousands of basic biochemical processes that comprise life.
Genes can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a protein, which cause different phenotypic traits. Colloquial usage of the term "having a gene" (e.g., "good genes," "hair colour gene") typically refers to having a different allele of the gene. Genes evolve due to natural selection or survival of the fittest of the alleles.
The concept of a gene continues to be refined as new phenomena are discovered. For example, regulatory regions of a gene can be far removed from its coding regions, and coding regions can be split into several exons. Some viruses store their genome in RNA instead of DNA, and some gene products are functional non-coding RNAs. Therefore, a broad, modern working definition of a gene is any discrete locus of heritable, genomic sequence which affects an organism's traits by being expressed as a functional product or by regulation of gene expression.
History
thumb|200px|Gregor Mendel|alt=Photograph of Gregor Mendel
Discovery of discrete inherited units
The existence of discrete inheritable units was first suggested by Gregor Mendel (1822–1884). From 1857 to 1864, he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured the distinction between genotype (the genetic material of an organism) and phenotype (the visible traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
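The 2^n figure is simple combinatorics: each of the n differing characteristics occurs in one of two parental forms. A minimal Python sketch (purely illustrative; the trait names are examples, not taken from Mendel's notes) enumerates the combinations for n = 3:

```python
from itertools import product

# Each of the n differing characteristics occurs in one of two parental
# forms, so n characteristics give 2**n possible combinations.
traits = {
    "seed shape": ("round", "wrinkled"),
    "seed colour": ("yellow", "green"),
    "stem length": ("tall", "short"),
}

combinations = list(product(*traits.values()))
print(len(combinations))  # 2**3 = 8

for combo in combinations:
    print(dict(zip(traits, combo)))
```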
Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilisation process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin"). Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.
Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research. Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis,Vries, H. de, Intracellulare Pangenese, Verlag von Gustav Fischer, Jena, 1889. Translated in 1908 from German to English by C. Stuart Gager as Intracellular Pangenesis, Open Court Publishing Co., Chicago, 1910 in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.
Sixteen years later, in 1905, the word genetics was first used by William Bateson, while Eduard Strasburger, amongst others, still used the term pangene for the fundamental physical and functional unit of heredity.Gager, C.S., Translator's preface to Intracellular Pangenesis, page viii. In 1909 the Danish botanist Wilhelm Johannsen shortened the name to "gene".
Discovery of DNA
Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.
In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities, indivisible by recombination and arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955-1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.
Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.
In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for Bacteriophage MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method was used in early phases of the Human Genome Project.
Modern evolutionary synthesis
The theories developed in the 1930s and 1940s to integrate molecular genetics with Darwinian evolution are called the modern evolutionary synthesis, a term introduced by Julian Huxley. Evolutionary biologists subsequently refined this concept, such as George C. Williams' gene-centric view of evolution. He proposed an evolutionary concept of the gene as a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency." In this view, the molecular gene transcribes as a unit, and the evolutionary gene inherits as a unit. Related ideas emphasizing the centrality of genes in evolution were popularized by Richard Dawkins.
Molecular basis
thumb|upright=1.6|The chemical structure of a four base pair fragment of a DNA double helix. The sugar-phosphate backbone chains run in opposite directions with the bases pointing inwards, base-pairing A to T and C to G with hydrogen bonds.
|alt=DNA chemical structure diagram showing how the double helix consists of two chains of sugar-phosphate backbone with bases pointing inwards and specifically base pairing A to T and C to G with hydrogen bonds.
DNA
The vast majority of living organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2'-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.
Two chains of DNA twist around each other to form a DNA double helix with the phosphate-sugar backbone spiralling around the outside, and the bases pointing inwards with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must therefore be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.
Due to the chemical composition of the pentose residues of the bases, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double-helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription occurs in the 5'→3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.
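Because the two strands are complementary and antiparallel, either strand fully determines its partner. The following sketch (a simplified illustration, not drawn from the sources above) derives the partner strand and reports it 5'→3', as is conventional:

```python
# Watson-Crick pairing: A with T, G with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand_5to3: str) -> str:
    """Return the partner strand, also written in the 5'->3' direction.

    The partner strand is antiparallel, so after complementing each base
    the sequence is reversed to read it 5'->3'.
    """
    return "".join(COMPLEMENT[base] for base in reversed(strand_5to3))

print(reverse_complement("ATGCCGT"))  # ACGGCAT
```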
The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.
Chromosomes
thumb|upright=1.6|alt=A microscopy image of 46 chromosomes striped with red and green bands|Fluorescent microscopy image of a human female karyotype, showing 23 pairs of chromosomes. The DNA is stained red, with regions rich in housekeeping genes further stained in green. The largest chromosomes are around 10 times the size of the smallest.
The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded. The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.
The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin. The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres and the centromere. Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequence that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.
Prokaryotes (bacteria and archaea) typically store their genomes on a single large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes. Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.
Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.
Structure and function
The structure of a gene consists of many elements of which the actual protein coding sequence is often only a small part. These include DNA regions that are not transcribed as well as untranslated regions of the RNA.
Firstly, flanking the open reading frame, all genes contain a regulatory sequence that is required for their expression. In order to be expressed, genes require a promoter sequence. The promoter is recognized and bound by transcription factors and RNA polymerase to initiate transcription. A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend at the 5' end. Promoter regions have a consensus sequence; however, highly transcribed genes have "strong" promoter sequences that bind the transcription machinery well, whereas others have "weak" promoters that bind poorly and initiate transcription less frequently. Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.
Additionally, genes can have regulatory regions many kilobases upstream or downstream of the open reading frame. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the DNA less available for RNA polymerase.
The transcribed pre-mRNA contains untranslated regions at both ends, which contain a ribosome binding site, terminator and start and stop codons. In addition, most eukaryotic open reading frames contain untranslated introns which are removed before the exons are translated. The sequences at the ends of the introns dictate the splice sites to generate the final mature mRNA which encodes the protein or RNA product.
Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit. The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon’s mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of certain specific metabolites. When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive transcription of the operon can occur (see e.g. Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.
Functional definitions
Defining exactly what section of a DNA sequence comprises a gene is difficult. Regulatory regions of a gene such as enhancers do not necessarily have to be close to the coding sequence on the linear molecule because the intervening DNA can be looped out to bring the gene and its regulatory region into proximity. Similarly, a gene's introns can be much larger than its exons. Regulatory regions can even be on entirely different chromosomes and operate in trans to allow regulatory regions on one chromosome to come in contact with target genes on another chromosome.
Early work in molecular genetics suggested the concept that one gene makes one protein. This concept (originally called the one gene-one enzyme hypothesis) emerged from an influential 1941 paper by George Beadle and Edward Tatum on experiments with mutants of the fungus Neurospora crassa. Norman Horowitz, an early colleague on the Neurospora research, reminisced in 2004 that “these experiments founded the science of what Beadle and Tatum called biochemical genetics. In actuality they proved to be the opening gun in what became molecular genetics and all the developments that have followed from that.” The one gene-one protein concept has been refined since the discovery of genes that can encode multiple proteins by alternative splicing and coding sequences split into short sections across the genome whose mRNAs are concatenated by trans-splicing.
A broad operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.
Gene expression
In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA). Second, that mRNA is translated to protein. RNA-coding genes must still go through the first step, but are not translated into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.
Genetic code
thumb|upright=1.6|Schematic of a single-stranded RNA molecule illustrating a series of three-base codons. Each three-nucleotide codon corresponds to an amino acid when translated to protein|alt=An RNA molecule consisting of nucleotides. Groups of three nucleotides are indicated as codons, with each corresponding to a specific amino acid.
The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid. The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4 (see Crick, Brenner et al. experiment).
Additionally, a "start codon", and three "stop codons" indicate the beginning and end of the protein coding region. There are 64 possible codons (four possible nucleotides at each of three positions, hence 43 possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.
Transcription
Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed. The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter region, either by tight binding by repressor molecules that physically block the polymerase, or by organizing the DNA so that the promoter region is not accessible.
In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns which are sequences in the transcribed region that do not encode protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.
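In purely sequence terms, the mRNA produced by transcription is the gene's coding strand with uracil in place of thymine, because the polymerase builds it as the complement of the template strand. A minimal sketch (illustrative only; it ignores promoters, termination and RNA processing):

```python
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}  # template base -> RNA base

def transcribe(coding_strand_5to3: str) -> str:
    """Return the mRNA made from a gene, given its coding strand 5'->3'."""
    # The template strand is the complement of the coding strand; RNA
    # polymerase reads it 3'->5' and synthesises the mRNA 5'->3' against it.
    template = [DNA_COMPLEMENT[b] for b in coding_strand_5to3]
    mrna = "".join(RNA_PAIRING[b] for b in template)
    return mrna  # equals the coding strand with every T replaced by U

print(transcribe("ATGGCT"))  # AUGGCU
```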
Translation
thumb|upright=1.6|Protein coding genes are transcribed to an mRNA intermediate, then translated to a functional protein. RNA-coding genes are transcribed to a functional non-coding RNA.|alt=A protein-coding gene in DNA being transcribed and translated to a functional protein or a non-protein-coding gene being transcribed to a functional RNA
Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein. Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.
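A toy translation loop makes the codon-by-codon reading concrete. The sketch below is illustrative only: its codon table is a small excerpt of the standard genetic code, and the example mRNA is chosen to use only those codons:

```python
# Small excerpt of the standard genetic code (mRNA codons); '*' marks stop.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala", "AAA": "Lys",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna: str) -> list:
    """Read codons in steps of three from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []  # no start codon found
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "*":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGCUAAAUAA"))  # ['Met', 'Phe', 'Ala', 'Lys']
```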
Regulation
Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources. A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described in 1961.
RNA genes
A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product. In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, and microRNA has a regulatory role. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.
Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay in waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized. RNA-mediated epigenetic inheritance has also been observed in plants and very rarely in animals.
Inheritance
thumb|Inheritance of a gene that has two different alleles (blue and white). The gene is located on an autosomal chromosome. The blue allele is recessive to the white allele. The probability of each outcome in the children's generation is one quarter, or 25 percent.|alt=Illustration of autosomal recessive inheritance. Each parent has one blue allele and one white allele. Each of their 4 children inherit one allele from each parent such that one child ends up with two blue alleles, one child has two white alleles and two children have one of each allele. Only the child with both blue alleles shows the trait because the trait is recessive.
Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.
Mendelian inheritance
According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait, with different sequences of the gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent.
Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders) it does not include the physical processes of DNA replication and cell division.
DNA replication and cell division
The growth, development, and reproduction of organisms relies on cell division, or the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication. The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA.
The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid. During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.
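At that rate, elongation alone would copy the phage chromosome in a few minutes; a back-of-envelope check, assuming a T4 genome of roughly 169 kilobases (a figure not stated in the text above), is:

```python
elongation_rate = 749          # nucleotides per second at 37 °C (from the text)
t4_genome_length = 169_000     # approximate T4 genome size in nucleotides (assumed)

seconds = t4_genome_length / elongation_rate
print(f"{seconds:.0f} s, about {seconds / 60:.1f} minutes per replication fork")
# roughly 226 s, i.e. under four minutes
```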
After DNA replication is complete, the cell must physically separate the two copies of the genome and divide into two distinct membrane-bound cells. In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.
Molecular inheritance
The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance, and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene. The gametes produced by females are called eggs or ova, and those produced by males are called sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father.
During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles. The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome, or are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together; genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them. This is known as genetic linkage.
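The effect of linkage on gamete frequencies can be stated directly: for two loci separated by a recombination fraction r, each parental gamete is expected with frequency (1 − r)/2 and each recombinant gamete with frequency r/2, with r = 0.5 reproducing independent assortment. A small illustrative sketch:

```python
def gamete_frequencies(r: float) -> dict:
    """Expected gamete frequencies from a doubly heterozygous parent (AB/ab).

    r is the recombination fraction between the two loci; r = 0.5
    corresponds to unlinked loci, i.e. independent assortment.
    """
    return {
        "AB": (1 - r) / 2, "ab": (1 - r) / 2,  # parental gametes
        "Ab": r / 2, "aB": r / 2,              # recombinant gametes
    }

print(gamete_frequencies(0.5))   # unlinked: each gamete type at 0.25
print(gamete_frequencies(0.05))  # tightly linked: parental types dominate
```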
Molecular evolution
Mutation
DNA replication is for the most part extremely accurate; however, errors (mutations) do occur. The error rate in eukaryotic cells can be as low as 10^−8 per nucleotide per replication, whereas for some RNA viruses it can be as high as 10^−3. This means that each generation, each human genome accumulates 1–2 new mutations. Small mutations can be caused by DNA replication and the aftermath of DNA damage and include point mutations in which a single base is altered and frameshift mutations in which a single base is inserted or deleted. Either of these mutations can change the gene by missense (change a codon to encode a different amino acid) or nonsense (a premature stop codon). Larger mutations can be caused by errors in recombination to cause chromosomal abnormalities including the duplication, deletion, rearrangement or inversion of large sections of a chromosome. Additionally, DNA repair mechanisms can introduce mutational errors when repairing physical damage to the molecule. The repair, even with mutation, is more important to survival than restoring an exact copy, for example when repairing double-strand breaks.
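The consequence of a single-base substitution for a codon can be classified mechanically against the genetic code. The sketch below covers substitutions only (not frameshifts) and uses a small excerpt of the standard code, with example codons chosen to fit it:

```python
# Small excerpt of the standard genetic code; '*' marks a stop codon.
CODE = {"AAA": "Lys", "AAG": "Lys", "GAA": "Glu", "UAA": "*", "UUU": "Phe"}

def classify_substitution(ref_codon: str, alt_codon: str) -> str:
    """Classify a single-codon substitution as synonymous, missense or nonsense."""
    ref_aa, alt_aa = CODE[ref_codon], CODE[alt_codon]
    if alt_aa == "*":
        return "nonsense (premature stop codon)"
    if alt_aa == ref_aa:
        return "synonymous (silent)"
    return f"missense ({ref_aa} -> {alt_aa})"

print(classify_substitution("AAA", "AAG"))  # synonymous (silent)
print(classify_substitution("AAA", "GAA"))  # missense (Lys -> Glu)
print(classify_substitution("AAA", "UAA"))  # nonsense (premature stop codon)
```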
When multiple different alleles for a gene are present in a species's population, the gene is said to be polymorphic. Most different alleles are functionally equivalent; however, some alleles can give rise to different phenotypic traits. A gene's most common allele is called the wild type, and rare alleles are called mutants. The genetic variation in relative frequencies of different alleles in a population is due to both natural selection and genetic drift. The wild-type allele is not necessarily the ancestor of less common alleles, nor is it necessarily fitter.
Most mutations within genes are neutral, having no effect on the organism's phenotype (silent mutations). Some mutations do not change the amino acid sequence because multiple codons encode the same amino acid (synonymous mutations). Other mutations can be neutral if they lead to amino acid sequence changes, but the protein still functions similarly with the new amino acid (e.g. conservative mutations). Many mutations, however, are deleterious or even lethal, and are removed from populations by natural selection. Genetic disorders are the result of deleterious mutations and can be due to spontaneous mutation in the affected individual, or can be inherited. Finally, a small fraction of mutations are beneficial, improving the organism's fitness and are extremely important for evolution, since their directional selection leads to adaptive evolution.
Sequence homology
thumb|right|399px|A sequence alignment, produced by ClustalO, of mammalian histone proteins
Genes with a most recent common ancestor, and thus a shared evolutionary ancestry, are known as homologs. These genes appear either from gene duplication within an organism's genome, where they are known as paralogous genes, or are the result of divergence of the genes after a speciation event, where they are known as orthologous genes, and often perform the same or similar functions in related organisms. It is often assumed that the functions of orthologous genes are more similar than those of paralogous genes, although the difference is minimal.
The relationship between genes can be measured by comparing the sequence alignment of their DNA. The degree of sequence similarity between homologous genes is called conserved sequence. Most changes to a gene's sequence do not affect its function and so genes accumulate mutations over time by neutral molecular evolution. Additionally, any selection on a gene will cause its sequence to diverge at a different rate. Genes under stabilizing selection are constrained and so change more slowly whereas genes under directional selection change sequence more rapidly. The sequence differences between genes can be used for phylogenetic analyses to study how those genes have evolved and how the organisms they come from are related.
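Once two homologous sequences have been aligned, conservation is often summarised as percent identity over aligned positions. The simplified sketch below assumes the sequences are already aligned to equal length, with '-' marking gaps; real comparisons use alignment tools such as the Clustal programs mentioned in the figure caption, and the example strings here are arbitrary:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percentage of aligned positions carrying identical residues.

    Assumes the two strings are already aligned to the same length;
    positions where either sequence has a gap ('-') never count as matches.
    """
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b and a != "-" for a, b in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

print(percent_identity("MSGRGKQGGKARA", "MSGRGKGGGKARA"))  # ~92.3
```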
Origins of new genes
thumb|right|400px|Evolutionary fate of duplicate genes
The most common source of new genes in eukaryotic lineages is gene duplication, which creates copy number variation of an existing gene in the genome. The resulting genes (paralogs) may then diverge in sequence and in function. Sets of genes formed in this way comprise a gene family. Gene duplications and losses within a family are common and represent a major source of evolutionary biodiversity. Sometimes, gene duplication may result in a nonfunctional copy of a gene, or a functional copy may be subject to mutations that result in loss of function; such nonfunctional genes are called pseudogenes.
"Orphan" genes, whose sequence shows no similarity to existing genes, are less common than gene duplicates. Estimates of the number of genes with no homologs outside humans range from 18 to 60. Two primary sources of orphan protein-coding genes are gene duplication followed by extremely rapid sequence change, such that the original relationship is undetectable by sequence comparisons, and de novo conversion of a previously non-coding sequence into a protein-coding gene. De novo genes are typically shorter and simpler in structure than most eukaryotic genes, with few if any introns. Over long evolutionary time periods, de novo gene birth may be responsible for a significant fraction of taxonomically-restricted gene families.
Horizontal gene transfer refers to the transfer of genetic material through a mechanism other than reproduction. This mechanism is a common source of new genes in prokaryotes, sometimes thought to contribute more to genetic variation than gene duplication. It is a common means of spreading antibiotic resistance, virulence, and adaptive metabolic functions. Although horizontal gene transfer is rare in eukaryotes, likely examples have been identified of protist and alga genomes containing genes of bacterial origin.
Genome
The genome is the total genetic material of an organism and includes both the genes and non-coding sequences.Ridley, M. (2006). Genome. New York, NY: Harper Perennial. ISBN 0-06-019497-9
Number of genes
thumb|600px|Representative genome sizes for plants (green), vertebrates (blue), invertebrates (red), fungi (yellow), bacteria (purple), and viruses (grey). An inset on the right shows the smaller genomes expanded 100-fold.Watson, JD, Baker TA, Bell SP, Gann A, Levine M, Losick R. (2004). "Ch9-10", Molecular Biology of the Gene, 5th ed., Pearson Benjamin Cummings; CSHL Press.
The genome size and the number of genes it encodes vary widely between organisms. The smallest genomes occur in viruses (which can have as few as 2 protein-coding genes) and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences.
Although the number of base-pairs of DNA in the human genome has been known since the 1960s, the estimated number of genes has changed over time as definitions of genes and methods of detecting them have been refined. Initial theoretical predictions of the number of human genes were as high as 2,000,000. Early experimental measures indicated there to be 50,000–100,000 transcribed genes (expressed sequence tags). Subsequently, the sequencing in the Human Genome Project indicated that many of these transcripts were alternative variants of the same genes, and the total number of protein-coding genes was revised down to ~20,000 with 13 genes encoded on the mitochondrial genome. Of the human genome, only 1–2% consists of protein-coding genes, with the remainder being 'noncoding' DNA such as introns, retrotransposons, and noncoding RNAs. Every multicellular organism has all its genes in each cell of its body but not every gene functions in every cell.
Essential genes
thumb|280px|Gene functions in the minimal genome of the synthetic organism, Syn 3.
Essential genes are the set of genes thought to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their genes). Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes). The synthetic organism, Syn 3, has a minimal genome of 473 essential genes and quasi-essential genes (necessary for fast growth), although 149 have unknown function.
Essential genes include housekeeping genes (critical for basic cell functions) as well as genes that are expressed at different times in the organism's development or life cycle. Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level.
Genetic and genomic nomenclature
Gene nomenclature has been established by the HUGO Gene Nomenclature Committee (HGNC) for each known human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism.
Genetic engineering
thumb|right|300px|Comparison of conventional plant breeding with transgenic and cisgenic genetic modification.
Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired. The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism.
Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function. Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.
For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism. However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.
See also
References
Main textbook
– A molecular biology textbook available free online through NCBI Bookshelf.
Glossary
Ch 1: Cells and genomes
1.1: The Universal Features of Cells on Earth
Ch 2: Cell Chemistry and Biosynthesis
2.1: The Chemical Components of a Cell
Ch 3: Proteins
Ch 4: DNA and Chromosomes
4.1: The Structure and Function of DNA
4.2: Chromosomal DNA and Its Packaging in the Chromatin Fiber
Ch 5: DNA Replication, Repair, and Recombination
5.2: DNA Replication Mechanisms
5.4: DNA Repair
5.5: General Recombination
Ch 6: How Cells Read the Genome: From DNA to Protein
6.1: DNA to RNA
6.2: RNA to Protein
Ch 7: Control of Gene Expression
7.1: An Overview of Gene Control
7.2: DNA-Binding Motifs in Gene Regulatory Proteins
7.3: How Genetic Switches Work
7.5: Posttranscriptional Controls
7.6: How Genomes Evolve
Ch 14: Energy Conversion: Mitochondria and Chloroplasts
14.4: The Genetic Systems of Mitochondria and Plastids
Ch 18: The Mechanics of Cell Division
18.1: An Overview of M Phase
18.2: Mitosis
Ch 20: Germ Cells and Fertilization
20.2: Meiosis
References
Further reading
Google Book Search; first published 1976.
External links
Comparative Toxicogenomics Database
DNA From The Beginning – a primer on genes and DNA
Genes And DNA – Introduction to genes and DNA aimed at non-biologist
Entrez Gene – a searchable database of genes
IDconverter – converts gene IDs between public databases
iHOP – Information Hyperlinked over Proteins
TranscriptomeBrowser – Gene expression profile analysis
The Protein Naming Utility, a database to identify and correct deficient gene names
Genes – an Open Access journal
IMPC (International Mouse Phenotyping Consortium) – Encyclopedia of mammalian gene function
Global Genes Project – Leading non-profit organization supporting people living with genetic diseases
ENCODE threads Explorer Characterization of intergenic regions and gene definition. Nature
Category:Cloning
Category:Molecular biology
Crucifixion of Jesus
thumb|right|Christ Crucified (c. 1632) by Diego Velázquez. Museo del Prado, Madrid
The crucifixion of Jesus occurred in 1st century Judea, most probably between the years 30 and 33 AD. Jesus' crucifixion is described in the four canonical gospels, referred to in the New Testament epistles, attested to by other ancient sources, and is established as a historical event confirmed by non-Christian sources, although, among historians, there is no consensus on the precise details of what exactly occurred.Christopher M. Tuckett in The Cambridge companion to Jesus edited by Markus N. A. Bockmuehl 2001 Cambridge Univ Press ISBN 978-0-521-79678-1 pages 123–124
According to the canonical gospels, Jesus, the Christ, was arrested, tried, and sentenced by Pontius Pilate to be scourged, and finally crucified by the Romans.Studying the Historical Jesus: Evaluations of the State of Current Research edited by Bruce Chilton, Craig A. Evans 1998 ISBN 90-04-11142-5 pages 455–457The Cradle, the Cross, and the Crown: An Introduction to the New Testament by Andreas J. Köstenberger, L. Scott Kellum 2009 ISBN 978-0-8054-4365-3 page 104–108Evans, Craig A. (2001). Jesus and His Contemporaries: Comparative Studies ISBN 0-391-04118-5 page 316Wansbrough, Henry (2004). Jesus and the oral Gospel tradition ISBN 0-567-04090-9 page 185 Jesus was stripped of his clothing and offered wine mixed with gall to drink, before being crucified. He was then hung between two convicted thieves and according to Mark's Gospel, died some six hours later. During this time, the soldiers affixed a sign to the top of the cross stating "Jesus of Nazareth, King of the Jews" in three languages. They then divided his garments among them, but cast lots for his seamless robe. After Jesus' death they pierced his side with a spear to be certain that he had died. The Bible describes seven statements that Jesus made while he was on the cross, as well as several supernatural events that occurred.
Collectively referred to as the Passion, Jesus' suffering and redemptive death by crucifixion are the central aspects of Christian theology concerning the doctrines of salvation and atonement.
Historicity
thumb|Crucifixion of Jesus of Nazareth, medieval illustration from the Hortus deliciarum of Herrad of Landsberg (12th century)
thumb|right|upright|Descent from the Cross, depicted by Rubens
The baptism of Jesus and his crucifixion are considered to be two historically certain facts about Jesus.Jesus of Nazareth by Paul Verhoeven (Apr 6, 2010) ISBN 1-58322-905-1 page 39 James Dunn states that these "two facts in the life of Jesus command almost universal assent" and "rank so high on the 'almost impossible to doubt or deny' scale of historical facts" that they are often the starting points for the study of the historical Jesus.Jesus Remembered by James D. G. Dunn 2003 ISBN 0-8028-3931-2 page 339 Bart Ehrman states that the crucifixion of Jesus on the orders of Pontius Pilate is the most certain element about him.A Brief Introduction to the New Testament by Bart D. Ehrman 2008 ISBN 0-19-536934-3 page 136 John Dominic Crossan states that the crucifixion of Jesus is as certain as any historical fact can be. Eddy and Boyd state that it is now "firmly established" that there is non-Christian confirmation of the crucifixion of Jesus. Craig Blomberg states that most scholars in the third quest for the historical Jesus consider the crucifixion indisputable.Jesus and the Gospels: An Introduction and Survey by Craig L. Blomberg 2009 ISBN 0-8054-4482-3 pages 211–214 Christopher M. Tuckett states that, although the exact reasons for the death of Jesus are hard to determine, one of the indisputable facts about him is that he was crucified.The Cambridge Companion to Jesus by Markus N. A. Bockmuehl 2001 ISBN 0-521-79678-4 page 136
While scholars agree on the historicity of the crucifixion, they differ on the reason and context for it. For example, both E. P. Sanders and Paula Fredriksen support the historicity of the crucifixion but contend that Jesus did not foretell his own crucifixion and that his prediction of the crucifixion is a "church creation" (p. 126). Geza Vermes also views the crucifixion as a historical event but provides his own explanation and background for it.A Century of Theological and Religious Studies in Britain, 1902–2007 by Ernest Nicholson 2004 ISBN 0-19-726305-4 pages 125–126 Link 126
John P. Meier views the crucifixion of Jesus as historical fact and states that, based on the criterion of embarrassment, Christians would not have invented the painful death of their leader.John P. Meier "How do we decide what comes from Jesus" in The Historical Jesus in Recent Research by James D. G. Dunn and Scot McKnight 2006 ISBN 1-57506-100-7 pages 126–128 Meier states that a number of other criteria, e.g., the criterion of multiple attestation (i.e., confirmation by more than one source) and the criterion of coherence (i.e., that it fits with other historical elements) help establish the crucifixion of Jesus as a historical event.John P. Meier "How do we decide what comes from Jesus" in The Historical Jesus in Recent Research by James D. G. Dunn and Scot McKnight 2006 ISBN 1-57506-100-7 pages 132–136
Although almost all ancient sources relating to crucifixion are literary, the 1968 archeological discovery just northeast of Jerusalem of the body of a crucified man dated to the 1st century provided good confirmatory evidence that crucifixions occurred during the Roman period roughly according to the manner in which the crucifixion of Jesus is described in the gospels.David Freedman, 2000, Eerdmans Dictionary of the Bible, ISBN 978-0-8028-2400-4, page 299. The crucified man was identified as Yehohanan ben Hagkol and probably died about 70 AD, around the time of the Jewish revolt against Rome. The analyses at the Hadassah Medical School estimated that he died in his late 20s. Another relevant archaeological find, which also dates to the 1st century AD, is an unidentified heel bone with a spike discovered in a Jerusalem gravesite, now held by the Israel Antiquities Authority and displayed in the Israel Museum.Article on the Crucifixion of Jesus
New Testament narrative
The earliest detailed accounts of the death of Jesus are contained in the four canonical gospels. There are other, more implicit references in the New Testament epistles. In the synoptic gospels, Jesus predicts his death in three separate episodes.St Mark's Gospel and the Christian faith by Michael Keene 2002 ISBN 0-7487-6775-4 pages 24–25 All four Gospels conclude with an extended narrative of Jesus' arrest, trial, crucifixion, burial, and accounts of resurrection. In each Gospel these five events in the life of Jesus are treated with more intense detail than any other portion of that Gospel's narrative. Scholars note that the reader receives an almost hour-by-hour account of what is happening.Powell, Mark A. Introducing the New Testament. Baker Academic, 2009. ISBN 978-0-8010-2868-7
Combining statements in the canonical Gospels produces the following account: Jesus was arrested in Gethsemane following the Last Supper with the Twelve Apostles, and then stood trial before the Sanhedrin (a Jewish judicial body), Pontius Pilate (a Roman authority in Judaea), and Herod Antipas (tetrarch of Galilee, appointed by Rome), before being handed over for crucifixion by the chief priests of the Jews.; ; ; After being flogged, Jesus was mocked by Roman soldiers as the "King of the Jews", clothed in a purple robe, crowned with thorns, beaten and spat on. Jesus then had to make his way to the place of his crucifixion.
Once at Golgotha, Jesus was offered wine mixed with gall to drink. Matthew's and Mark's Gospels record that he refused this. He was then crucified and hung between two convicted thieves. According to some translations from the original Greek, the thieves may have been bandits or Jewish rebels.Reza Aslan, Zealot: The Life and Times of Jesus of Nazareth, Random House, 2014. ISBN 0812981480. According to Mark's Gospel, he endured the torment of crucifixion for some six hours from the third hour, at approximately 9 am, until his death at the ninth hour, corresponding to about 3 pm. The soldiers affixed a sign above his head stating "Jesus of Nazareth, King of the Jews" in three languages, divided his garments and cast lots for his seamless robe. The Roman soldiers did not break Jesus' legs, as they did to the other two men crucified (breaking the legs hastened the crucifixion process), as Jesus was dead already. Each gospel has its own account of Jesus' last words, seven statements altogether.Ehrman, Bart D.. Jesus, Interrupted, HarperCollins, 2009. ISBN 0-06-117393-2 In the Synoptic Gospels, various supernatural events accompany the crucifixion, including darkness, an earthquake, and (in Matthew) the resurrection of saints. Following Jesus' death, his body was removed from the cross by Joseph of Arimathea and buried in a rock-hewn tomb, with Nicodemus assisting.
thumb|The Crucifixion. Christ on the Cross between two thieves. Illumination from the Vaux Passional, 16th Century
thumb|180px|left|Bronzino's depiction of the Crucifixion with 3 nails, no ropes, and a hypopodium standing support, c. 1545.
According to all four gospels, Jesus was brought to the "Place of a Skull" - "place called Golgotha (which means Place of a Skull)"; (same as Matthew); - "place that is called The Skull"; - "place called The Place of a Skull, which in Aramaic is called Golgotha" and crucified with two thieves,; ; ; with the charge of claiming to be "King of the Jews", - "This is Jesus, the King of the Jews."; - "The King of the Jews."; - "This is the King of the Jews." Some manuscripts add in letters of Greek and Latin and Hebrew; - "Jesus of Nazareth, the King of the Jews." "... it was written in Aramaic, in Latin, and in Greek." and the soldiers divided his clothes; ; ; before he bowed his head and died.; ; ; Following his death, Joseph of Arimathea requested the body from Pilate,; ; ; which Joseph then placed in a new garden tomb.; ; ;
The three Synoptic gospels also describe Simon of Cyrene bearing the cross,; ; the multitude mocking Jesus; ; along with the thieves/robbers/rebels,; ; darkness from the 6th to the 9th hour,; ; and the temple veil being torn from top to bottom.; ; The Synoptics also mention several witnesses, including a centurion,; ; and several women who watched from a distance; ; two of whom were present during the burial.; ;
Luke is the only gospel writer to omit the detail of sour wine mix that was offered to Jesus on a reed,; ; ; ; while only Mark and John describe Joseph actually taking the body down off the cross.;
There are several details that are only found in one of the gospel accounts. For instance, only Matthew's gospel mentions an earthquake, resurrected saints who went to the city and that Roman soldiers were assigned to guard the tomb,; while Mark is the only one to state the actual time of the crucifixion (the third hour, or 9 am) and the centurion's report of Jesus' death.; The Gospel of Luke's unique contributions to the narrative include Jesus' words to the women who were mourning, one criminal's rebuke of the other, the reaction of the multitudes who left "beating their breasts", and the women preparing spices and ointments before resting on the Sabbath.; ; ; John is also the only one to refer to the request that the legs be broken and the soldier's subsequent piercing of Jesus' side (as fulfillment of Old Testament prophecy), as well as that Nicodemus assisted Joseph with burial.;
According to the First Epistle to the Corinthians (1 Cor. 15:4), Jesus was raised from the dead ("on the third day" counting the day of crucifixion as the first) and according to the canonical Gospels, appeared to his disciples on different occasions before ascending to heaven.; ; The account given in Acts of the Apostles, which says Jesus remained with the apostles for forty days, appears to differ from the account in the Gospel of Luke, which makes no clear distinction between the events of Easter Sunday and the Ascension.Geza Vermes, The Resurrection, (Penguin, 2008) page 148.E. P. Sanders, The Historical Figure of Jesus, (Penguin, 1993), page 276. However, most biblical scholars agree that St. Luke also wrote the Acts of the Apostles as a follow-up volume to his Gospel account, and the two works must be considered as a whole.Donald Guthrie, New Testament Introduction, (Intervarsity, 1990) pages 125, 366.
In Mark, Jesus is crucified along with two rebels, and the day goes dark for three hours.Funk, Robert W. and the Jesus Seminar. The acts of Jesus: the search for the authentic deeds of Jesus. HarperSanFrancisco. 1998. "Mark," p. 51–161 ISBN 978-0060629786 Jesus calls out to God, then gives a shout and dies. The curtain of the Temple is torn in two. Matthew follows Mark, adding an earthquake and the resurrection of saints.Funk, Robert W. and the Jesus Seminar. The acts of Jesus: the search for the authentic deeds of Jesus. HarperSanFrancisco. 1998. "Matthew," p. 129–270 ISBN 978-0060629786 Luke also follows Mark, though he describes the rebels as common criminals, one of whom defends Jesus, who in turn promises that he (Jesus) and the criminal will be together in paradise.Funk, Robert W. and the Jesus Seminar. The acts of Jesus: the search for the authentic deeds of Jesus. HarperSanFrancisco. 1998. "Luke," p. 267–364 ISBN 978-0060629786 Luke portrays Jesus as impassive in the face of his crucifixion.Ehrman, Bart D.. Misquoting Jesus: The Story Behind Who Changed the Bible and Why. HarperCollins, 2005. ISBN 978-0-06-073817-4 John includes several of the same elements as those found in Mark, though they are treated differently.Funk, Robert W. and the Jesus Seminar. The acts of Jesus: the search for the authentic deeds of Jesus. HarperSanFrancisco. 1998. "John" pp. 365–440 ISBN 978-0060629786
Other accounts and references
Image: Crucifixion, from the Buhl Altarpiece, a particularly large Gothic oil-on-panel painting from the 1490s.
An early non-Christian reference to the crucifixion of Jesus is likely to be Mara Bar-Serapion's letter to his son, written sometime after AD 73 but before the 3rd century AD.Evidence of Greek Philosophical Concepts in the Writings of Ephrem the Syrian by Ute Possekel 1999 ISBN 90-429-0759-2 pages 29–30The Cradle, the Cross, and the Crown: An Introduction to the New Testament by Andreas J. Köstenberger, L. Scott Kellum 2009 ISBN 978-0-8054-4365-3 page 110 The letter includes no Christian themes and the author is presumed to be a pagan.Jesus outside the New Testament: an introduction to the ancient evidence by Robert E. Van Voorst 2000 ISBN 0-8028-4368-9 pages 53–55 The letter refers to the retributions that followed the unjust treatment of three wise men: Socrates, Pythagoras, and "the wise king" of the Jews. Some scholars see little doubt that the reference to the execution of the "king of the Jews" is about the crucifixion of Jesus, while others place less value in the letter, given the possible ambiguity in the reference.Jesus and His Contemporaries: Comparative Studies by Craig A. Evans 2001 ISBN 978-0-391-04118-9 page 41
In the Antiquities of the Jews (written about 93 AD), the Jewish historian Josephus stated (Ant 18.3) that Jesus was crucified by Pilate, writing that:
Now there was about this time Jesus, a wise man, ... He drew over to him both many of the Jews and many of the Gentiles ... And when Pilate, at the suggestion of the principal men amongst us, had condemned him to the cross ...
Most modern scholars agree that while this Josephus passage (called the Testimonium Flavianum) includes some later interpolations, it originally consisted of an authentic nucleus with a reference to the execution of Jesus by Pilate. James Dunn states that there is "broad consensus" among scholars regarding the nature of an authentic reference to the crucifixion of Jesus in the Testimonium.Dunn, James (2003). Jesus remembered ISBN 0-8028-3931-2 page 141
Early in the second century another reference to the crucifixion of Jesus was made by Tacitus, generally considered one of the greatest Roman historians.Van Voorst, Robert E (2000). Jesus Outside the New Testament: An Introduction to the Ancient Evidence Eerdmans Publishing ISBN 0-8028-4368-9 pages 39–42Backgrounds of early Christianity by Everett Ferguson 2003 ISBN 0-8028-2221-5 page 116 Writing in The Annals (c. 116 AD), Tacitus described the persecution of Christians by Nero and stated (Annals 15.44) that Pilate ordered the execution of Jesus:Theissen 1998, pp. 81–83
Nero fastened the guilt and inflicted the most exquisite tortures on a class hated for their abominations, called Christians by the populace. Christus, from whom the name had its origin, suffered the extreme penalty during the reign of Tiberius at the hands of one of our procurators, Pontius Pilatus.
Scholars generally consider the Tacitus reference to the execution of Jesus by Pilate to be genuine, and of historical value as an independent Roman source.Jesus as a figure in history: how modern historians view the man from Galilee by Mark Allan Powell 1998 ISBN 0-664-25703-8 page 33Jesus and His Contemporaries: Comparative Studies by Craig A. Evans 2001 ISBN 0-391-04118-5 page 42Ancient Rome by William E. Dunstan 2010 ISBN 0-7425-6833-4 page 293Tacitus' characterization of "Christian abominations" may have been based on the rumors in Rome that during the Eucharist rituals Christians ate the body and drank the blood of their God, interpreting the symbolic ritual as cannibalism by Christians. References: Ancient Rome by William E. Dunstan 2010 ISBN 0-7425-6833-4 page 293 and An introduction to the New Testament and the origins of Christianity by Delbert Royce Burkett 2002 ISBN 0-521-00720-8 page 485Pontius Pilate in History and Interpretation by Helen K. Bond 2004 ISBN 0-521-61620-4 page xi Eddy and Boyd state that it is now "firmly established" that Tacitus provides a non-Christian confirmation of the crucifixion of Jesus.Eddy, Paul; Boyd, Gregory (2007). The Jesus Legend: A Case for the Historical Reliability of the Synoptic Jesus Tradition Baker Academic, ISBN 0-8010-3114-1 page 127
Another possible reference to the crucifixion ("hanging") is found in the Babylonian Talmud, in tractate Sanhedrin 43a.
Although the question of the equivalence of the identities of Yeshu and Jesus has at times been debated, many historians agree that this 2nd-century passage is likely to be about Jesus, with Peter Schäfer stating that there can be no doubt that this narrative of the execution in the Talmud refers to Jesus of Nazareth.Jesus in the Talmud by Peter Schäfer (Aug 24, 2009) ISBN 0-691-14318-8 page 141 and 9 Robert Van Voorst states that the Sanhedrin 43a reference to Jesus can be confirmed not only from the reference itself, but from the context that surrounds it.Van Voorst, Robert E. (2000). Jesus Outside the New Testament: An Introduction to the Ancient Evidence Wm. B. Eerdmans Publishing Co.. ISBN 0-8028-4368-9 pages 177–118
Muslims maintain that Jesus was not crucified and that those who thought they had killed him had mistakenly killed Judas Iscariot, Simon of Cyrene, or someone else in his place.George W. Braswell Jr., What You Need to Know about Islam and Muslims, page 127 (B & H Publishing Group, 2000). ISBN 978-0-8054-1829-3 They hold this belief based on various interpretations of Quran 4:157, which states: "they killed him not, nor crucified him, but so it was made to appear to them [or it appeared so unto them], ... Nay, Allah raised him up unto Himself".
Some early Christian Gnostic sects, believing Jesus did not have a physical substance, denied that he was crucified. In response, Ignatius of Antioch insisted that Jesus was truly born and was truly crucified and wrote that those who held that Jesus only seemed to suffer only seemed to be Christians.William Barclay, Great Themes of the New Testament. Westminster John Knox Press. 2001. ISBN 978-0-664-22385-4. p. 41.
The Crucifixion
Chronology
There is no consensus regarding the exact date of the crucifixion of Jesus, although it is generally agreed by biblical scholars that it was on a Friday on or near Passover (Nisan 15), during the governorship of Pontius Pilate (who ruled AD 26–36). Scholars have provided estimates for the year of crucifixion in the range 30–33 AD,Paul L. Maier "The Date of the Nativity and Chronology of Jesus" in Chronos, kairos, Christos: nativity and chronological studies by Jerry Vardaman, Edwin M. Yamauchi 1989 ISBN 0-931464-50-1 pages 113–129The Cradle, the Cross, and the Crown: An Introduction to the New Testament by Andreas J. Köstenberger, L. Scott Kellum 2009 ISBN 978-0-8054-4365-3 page 114Jesus & the Rise of Early Christianity: A History of New Testament Times by Paul Barnett 2002 ISBN 0-8308-2699-8 pages 19–21 with the majority of modern scholars favouring the date April 7, 30 AD.Rainer Riesner, Paul's Early Period: Chronology, Mission Strategy, Theology (Wm. B. Eerdmans Publishing, 1998), page 58.Josef Blinzler, Der Prozess Jesu (Pustet, 1960) cited in Colin J. Humphreys, The Mystery of the Last Supper: Reconstructing the Final Days of Jesus (Cambridge University Press, 2011) page 14. Another popular date is Friday, April 3, 33 AD.
Since an observational calendar was used during the time of Jesus, including ascertainment of the new moon and of the ripening barley harvest, the exact day or even month for Passover in a given year is subject to speculation. Various approaches have been used to estimate the year of the crucifixion, including the canonical Gospels, the chronology of the life of Paul, and different astronomical models.
The consensus of modern scholarship is that the New Testament accounts represent a crucifixion occurring on a Friday, but a Thursday or Wednesday crucifixion has also been proposed.The Cradle, the Cross, and the Crown: An Introduction to the New Testament by Andreas J. Köstenberger, L. Scott Kellum 2009 ISBN 978-0-8054-4365-3 pages 142–143 Some scholars explain a Thursday crucifixion based on a "double sabbath" caused by an extra Passover sabbath falling on Thursday dusk to Friday afternoon, ahead of the normal weekly Sabbath.Cyclopaedia of Biblical, theological, and ecclesiastical literature: Volume 7 John McClintock, James Strong - 1894 "... he lay in the grave on the 15th (which was a 'high day' or double Sabbath, because the weekly Sabbath coincided ..." Some have argued that Jesus was crucified on Wednesday, not Friday, on the grounds of the mention of "three days and three nights" in Matthew 12:40 before his resurrection, celebrated on Sunday. Others have countered by saying that this ignores the Jewish idiom by which a "day and night" may refer to any part of a 24-hour period, that the expression in Matthew is idiomatic, not a statement that Jesus was 72 hours in the tomb, and that the many references to a resurrection on the third day do not require three literal nights.
In Mark 15:25 crucifixion takes place at the third hour (9 a.m.) and Jesus' death at the ninth hour (3 p.m.).The Gospel of Mark, Volume 2 by John R. Donahue, Daniel J. Harrington 2002 ISBN 0-8146-5965-9 page 442 However, in John 19:14 Jesus is still before Pilate at the sixth hour. Scholars have presented a number of arguments to deal with the issue, some suggesting a reconciliation based, for example, on the use of Roman timekeeping in John but not in Mark, while others have rejected these arguments.Death of the Messiah, Volume 2 by Raymond E. Brown 1999 ISBN 0-385-49449-1 pages 959–960Colin Humphreys, The Mystery of the Last Supper Cambridge University Press 2011 ISBN 978-0-521-73200-0, pages 188–190 Several notable scholars have argued that the modern precision of marking the time of day should not be read back into the gospel accounts, written at a time when no standardization of timepieces, or exact recording of hours and minutes, was available, and when time was often approximated to the closest three-hour period.Steven L. Cox, Kendell H Easley, 2007 Harmony of the Gospels ISBN 0-8054-9444-8 pages 323–323New Testament History by Richard L. Niswonger 1992 ISBN 0-310-31201-9 pages 173–174The Cradle, the Cross, and the Crown: An Introduction to the New Testament by Andreas J. Köstenberger, L. Scott Kellum 2009 ISBN 978-0-8054-4365-3 page 538
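To make the arithmetic behind these hour reckonings explicit, the following minimal sketch (not drawn from the sources cited above) assumes the common reading that Jewish hours were counted from sunrise, taken here as 6 a.m., while the proposed Roman civil reckoning in John counts from midnight; the function name and the rounding to whole hours are illustrative only.

```python
def to_clock_hour(ordinal_hour, reckoning="jewish"):
    """Convert an ordinal daytime hour ("the third hour") to a 24-hour clock value.

    Jewish reckoning counts hours from sunrise (assumed here to be 6 a.m.);
    the proposed Roman civil reckoning counts from midnight.
    """
    start = 6 if reckoning == "jewish" else 0
    return start + ordinal_hour

print(to_clock_hour(3))            # 9  -> 9 a.m., Mark's hour of the crucifixion
print(to_clock_hour(9))            # 15 -> 3 p.m., Mark's hour of Jesus' death
print(to_clock_hour(6, "roman"))   # 6  -> 6 a.m., one reading of John's "sixth hour"
```

On that assumption, John's "sixth hour" before Pilate would fall around 6 a.m. rather than noon, which is the reconciliation some scholars propose and others reject.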
Path to the crucifixion
Image: Andrea di Bartolo, Way to Calvary, c. 1400. The cluster of halos at the left are the Virgin Mary in front, with the Three Marys.
The three Synoptic Gospels refer to a man called Simon of Cyrene who is made to carry the cross, while in the Gospel of John, Jesus is said to "bear" his own cross.
Luke's gospel also describes an interaction between Jesus and the women among the crowd of mourners following him, quoting Jesus as saying "Daughters of Jerusalem, do not weep for me, but weep for yourselves and for your children. For behold, the days are coming when they will say, 'Blessed are the barren and the wombs that never bore and the breasts that never nursed!' Then they will begin to say to the mountains, 'Fall on us,' and to the hills, 'Cover us.' For if they do these things when the wood is green, what will happen when it is dry?"
The Gospel of Luke has Jesus address these women as "daughters of Jerusalem", thus distinguishing them from the women whom the same gospel describes as "the women who had followed him from Galilee" and who were present at his crucifixion.
Traditionally, the path that Jesus took is called Via Dolorosa (Latin for "Way of Grief" or "Way of Suffering") and is a street in the Old City of Jerusalem. It is marked by nine of the fourteen Stations of the Cross. It passes the Ecce Homo Church and the last five stations are inside the Church of the Holy Sepulchre.
There is no reference to the legendary Veronica in the Gospels,Lavinia Cohn-Sherbok, Who's who in Christianity, (Routledge 1998), page 303. but sources such as Acta Sanctorum describe her as a pious woman of Jerusalem who, moved with pity as Jesus carried his cross to Golgotha, gave him her veil that he might wipe his forehead.Notes and Queries, Volume 6 July–December 1852, London, page 252The Archaeological journal (UK), Volume 7, 1850 page 413Alban Butler, 2000 Lives of the Saints ISBN 0-86012-256-5 page 84
Location
Image: A diagram of the Church of the Holy Sepulchre and the historical site.
The precise location of the crucifixion remains a matter of conjecture, but the biblical accounts indicate that it was outside the city walls of Jerusalem, accessible to passers-by and observable from some distance away. Eusebius identified its location only as being north of Mount Zion, which is consistent with the two most popularly suggested sites of modern times.
Calvary as an English name for the place is derived from the Latin word for skull (calvaria), which is used in the Vulgate translation of "place of a skull", the explanation given in all four Gospels of the Aramaic word Gûlgaltâ, which was the name of the place where Jesus was crucified. The text does not indicate why it was so designated, but several theories have been put forward. One is that as a place of public execution, Calvary may have been strewn with the skulls of abandoned victims (which would be contrary to Jewish burial traditions, but not Roman). Another is that Calvary is named after a nearby cemetery (which is consistent with both of the proposed modern sites). A third is that the name was derived from the physical contour, which would be more consistent with the singular use of the word, i.e., the place of "a skull". While often referred to as "Mount Calvary", it was more likely a small hill or rocky knoll.
The traditional site, inside what is now occupied by the Church of the Holy Sepulchre in the Christian Quarter of the Old City, has been attested since the 4th century. A second site (commonly referred to as Gordon's Calvary), located further north of the Old City near a place popularly called the Garden Tomb, has been promoted since the 19th century.
People present
Image: The dead Christ with the Virgin, John the Evangelist and Mary Magdalene, by an unknown painter of the 18th century.
The Gospel of Matthew describes many women at the crucifixion, some of whom are named in the Gospels. Apart from these women, the three Synoptic Gospels speak of the presence of others: "the chief priests, with the scribes and elders"; two robbers crucified, one on Jesus' right and one on his left, whom the Gospel of Luke presents as the penitent thief and the impenitent thief; "the soldiers"; "the centurion and those who were with him, keeping watch over Jesus"; passers-by; "bystanders"; "the crowds that had assembled for this spectacle"; and "his acquaintances".
The Gospel of John also speaks of women present, but only mentions the soldiers, and "the disciple whom Jesus loved".
The Gospels also tell of the arrival, after the death of Jesus, of Joseph of Arimathea and of Nicodemus.
Method and manner
Image: Crucifixion of Jesus on a two-beamed cross, from the Sainte Bible (1866).
Image: A simple wooden torture stake, as depicted by Justus Lipsius.
Whereas most Christians believe the gibbet on which Jesus was executed was the traditional two-beamed cross, the Jehovah's Witnesses hold the view that a single upright stake was used. The Greek and Latin words used in the earliest Christian writings are ambiguous. The Koine Greek terms used in the New Testament are stauros (σταυρός) and xylon (ξύλον). The latter means wood (a live tree, timber or an object constructed of wood); in earlier forms of Greek, the former term meant an upright stake or pole, but in Koine Greek it was used also to mean a cross. The Latin word crux was also applied to objects other than a cross.Charlton T. Lewis, Charles Short, A Latin Dictionary
However, early Christian writers who speak of the shape of the particular gibbet on which Jesus died invariably describe it as having a cross-beam. For instance, the Epistle of Barnabas, which was certainly earlier than 135,For a discussion of the date of the work, see Information on Epistle of Barnabas and Andrew C. Clark, "Apostleship: Evidence from the New Testament and Early Christian Literature," Evangelical Review of Theology, 1989, Vol. 13, p. 380 and may have been of the 1st century AD,John Dominic Crossan, The Cross that Spoke (ISBN 978-0-06-254843-6), p. 121 the time when the gospel accounts of the death of Jesus were written, likened it to the letter T (the Greek letter tau, which had the numeric value of 300),Epistle of Barnabas, 9:7-8 and to the position assumed by Moses in Exodus 17:11–12."The Spirit saith to the heart of Moses, that he should make a type of the cross and of Him that was to suffer, that unless, saith He, they shall set their hope on Him, war shall be waged against them for ever. Moses therefore pileth arms one upon another in the midst of the encounter, and standing on higher ground than any he stretched out his hands, and so Israel was again victorious" (Epistle of Barnabas, 12:2-3). Justin Martyr (100–165) explicitly says the cross of Christ was of two-beam shape: "That lamb which was commanded to be wholly roasted was a symbol of the suffering of the cross which Christ would undergo. For the lamb, which is roasted, is roasted and dressed up in the form of the cross. For one spit is transfixed right through from the lower parts up to the head, and one across the back, to which are attached the legs of the lamb." Irenaeus, who died around the end of the 2nd century, speaks of the cross as having "five extremities, two in length, two in breadth, and one in the middle, on which [last] the person rests who is fixed by the nails."Irenaeus, Adversus Haereses, II, xxiv, 4
The assumption of the use of a two-beamed cross does not determine the number of nails used in the crucifixion, and some theories suggest three nails while others suggest four.The International Standard Bible Encyclopedia by Geoffrey W. Bromiley 1988 ISBN 0-8028-3785-9 page 826 However, throughout history larger numbers of nails have been hypothesized, at times as high as 14 nails.Encyclopedia of Biblical Literature, Part 2 by John Kitto 2003 ISBN 0-7661-5980-9 page 591 These variations are also present in artistic depictions of the crucifixion.Renaissance art: a topical dictionary by Irene Earls 1987 ISBN 0-313-24658-0 page 64 In the Western Church, before the Renaissance four nails were usually depicted, with the feet side by side. After the Renaissance most depictions use three nails, with one foot placed on the other. Nails are almost always depicted in art, although the Romans sometimes just tied the victims to the cross. The tradition also carries over to Christian emblems; for example, the Jesuits use three nails under the IHS monogram and a cross to symbolize the crucifixion.The visual arts: a history by Hugh Honour, John Fleming 1995 ISBN 0-8109-3928-2 page 526
The placing of the nails in the hands or the wrists is also uncertain. Some theories suggest that the Greek word cheir (χειρ) for hand includes the wrist and that the Romans were generally trained to place nails through Destot's space (between the capitate and lunate bones) without fracturing any bones.The Crucifixion and Death of a Man Called Jesus by David A Ball 2010 ISBN 1-61507-128-8 pages 82–84 Another theory suggests that the Greek word for hand also includes the forearm and that the nails were placed near the radius and ulna of the forearm.The Chronological Life of Christ by Mark E. Moore 2007 ISBN 0-89900-955-7 page 639–643 Ropes may have also been used to fasten the hands in addition to the use of nails.Holman Concise Bible Dictionary Holman, 2011 ISBN 0-8054-9548-7 page 148
Another issue has been the use of a hypopodium as a standing platform to support the feet, given that the hands may not have been able to support the weight. In the 17th century Rasmus Bartholin considered a number of analytical scenarios on the topic. In the 20th century, forensic pathologist Frederick Zugibe performed a number of crucifixion experiments by using ropes to hang human subjects at various angles and hand positions. His experiments support an angled suspension on a two-beamed cross, and perhaps some form of foot support, given that in an Aufbinden form of suspension from a straight stake (as used by the Nazis in the Dachau concentration camp during World War II), death comes rather quickly.Crucifixion and the Death Cry of Jesus Christ by Geoffrey L Phelan MD, 2009 ISBN pages 106–111
Words of Jesus spoken from the cross
Image: The Crucifixion, seen from the Cross, by James Tissot, 19th century.
The New Testament gives three different accounts of the words of Jesus on the cross. In Mark and Matthew Jesus utters only one saying on the cross, while Luke and John each describe three statements unique to them.Thomas W. Walker, Luke, (Westminster John Knox Press, 2013) page 84.
Mark / Matthew
"E′li, E′li, la′ma sa‧bach‧tha′ni?" (Aramaic for "My God, My God, why have you forsaken me?").
The only words of Jesus on the cross in the Mark and Matthew accounts, this is a quotation of Psalm 22. Since other verses of the same Psalm are cited in the crucifixion accounts, it is often considered a literary and theological creation. Geza Vermes, however, points out that the verse is cited in Aramaic rather than the Hebrew in which it usually would have been recited, and suggests that by the time of Jesus, this phrase had become a proverbial saying in common usage.Geza Vermes, The Passion (Penguin, 2005) page 75. Compared to the accounts in the other Gospels, which he describes as 'theologically correct and reassuring', he considers this phrase 'unexpected, disquieting and in consequence more probable'.Geza Vermes, The Passion (Penguin, 2005) page 114. He describes it as bearing 'all the appearances of a genuine cry'.Geza Vermes, The Passion (Penguin, 2005) page 122. Raymond Brown likewise comments that he finds 'no persuasive argument against attributing to the Jesus of Mark/Matt the literal sentiment of feeling forsaken expressed in the Psalm quote'.Raymond Brown, The Death of the Messiah Volume II (Doubleday, 1994) page 1051
Luke
"Father, forgive them, for they know not what they do." [Some early manuscripts do not have this]
"Truly, I say to you, today you will be with me in Paradise."
"Father, into your hands I commit my spirit!"
The Gospel of Luke does not have the cry of Jesus found within Matthew and Mark, possibly playing down the suffering of Jesus and replacing a cry of desperation with one of hope and confidence, in keeping with the message of the Gospel, which presents Jesus as dying confident that he would be vindicated as God's righteous prophet.John Haralson Hayes, Biblical Exegesis: A Beginner's Handbook (Westminster John Knox Press, 1987) page 104-5.
John
"Woman, behold, your son!"
"I thirst."
"It is finished."
The words of Jesus on the cross, especially his last words, have been the subject of a wide range of Christian teachings and sermons, and a number of authors have written books specifically devoted to the last sayings of Christ.David Anderson-Berry, 1871 The Seven Sayings of Christ on the Cross, Glasgow: Pickering & Inglis PublishersRev. John Edmunds, 1855 The seven sayings of Christ on the cross Thomas Hatchford Publishers, London, page 26Arthur Pink, 2005 The Seven Sayings of the Saviour on the Cross Baker Books ISBN 0-8010-6573-9Simon Peter Long, 1966 The wounded Word: A brief meditation on the seven sayings of Christ on the cross Baker BooksJohn Ross Macduff, 1857 The Words of Jesus New York: Thomas Stanford Publishers, page 76Alexander Watson, 1847 The seven sayings on the Cross John Masters Publishers, London, page 5 The difference between the accounts is cited by James Dunn as a reason to doubt their historicity.James G. D. Dunn, Jesus Remembered, (Eerdmans, 2003) page 779–781.
Reported extraordinary occurrences
The synoptics report various miraculous events during the crucifixion.Scott's Monthly Magazine. J.J. Toon; 1868. The Miracles Coincident With The Crucifixion, by H.P.B. p. 86–89.Richard Watson. An Apology for the Bible: In a Series of Letters Addressed to Thomas Paine. Cambridge University Press; 29 March 2012. ISBN 978-1-107-60004-1. p. 81–. Mark mentions darkness in the daytime during Jesus' crucifixion, and the Temple veil being torn in two when Jesus dies. Luke follows Mark; as does Matthew, adding an earthquake and the resurrection of dead saints. No mention of any of these appears in John.Harris, Stephen L., Understanding the Bible. Palo Alto: Mayfield. 1985. "John" p. 302–310
Darkness
Image: Christ on the Cross, by Carl Heinrich Bloch, showing the skies darkened.
In the synoptic narrative, while Jesus is hanging on the cross, the sky over Judea (or the whole world) is "darkened for three hours," from the sixth to the ninth hour (noon to mid-afternoon). There is no reference to darkness in the Gospel of John account, in which the crucifixion does not take place until after noon.Edwin Keith Broadhead Prophet, Son, Messiah: Narrative Form and Function in Mark (Continuum, 1994) page 196.
Some Christian writers considered the possibility that pagan commentators may have mentioned this event, mistaking it for a solar eclipse - although this would have been impossible during the Passover, which takes place at the full moon. Christian traveller and historian Sextus Julius Africanus and Christian theologian Origen refer to Greek historian Phlegon, who lived in the 2nd century AD, as having written "with regard to the eclipse in the time of Tiberius Caesar, in whose reign Jesus appears to have been crucified, and the great earthquakes which then took place"
Sextus Julius Africanus further refers to the writings of historian Thallus: "This darkness Thallus, in the third book of his History, calls, as appears to me without reason, an eclipse of the sun. For the Hebrews celebrate the passover on the 14th day according to the moon, and the passion of our Saviour falls on the day before the passover; but an eclipse of the sun takes place only when the moon comes under the sun." Christian apologist Tertullian believed the event was documented in the Roman archives."In the same hour, too, the light of day was withdrawn, when the sun at the very time was in his meridian blaze. Those who were not aware that this had been predicted about Christ, no doubt thought it an eclipse. You yourselves have the account of the world-portent still in your archives."
Colin Humphreys and W. G. Waddington of Oxford University considered the possibility that a lunar, rather than solar, eclipse might have taken place.Colin J. Humphreys and W. G. Waddington, The Date of the Crucifixion Journal of the American Scientific Affiliation 37 (March 1985)Colin Humphreys, The Mystery of the Last Supper Cambridge University Press 2011 ISBN 978-0-521-73200-0, p. 193 (However note that Humphreys places the Last Supper on a Wednesday) They concluded that such an eclipse would have been visible, for thirty minutes, from Jerusalem and suggested the gospel reference to a solar eclipse was the result of a scribe wrongly amending a text. Historian David Henige dismisses this explanation as 'indefensible' and astronomer Bradley Schaefer points out that the lunar eclipse would not have been visible during daylight hours.Schaefer, B. E. (1990, March). Lunar visibility and the crucifixion. Royal Astronomical Society Quarterly Journal, 31(1), 53-67Schaefer, B. E. (1991, July). Glare and celestial visibility. Publications of the Astronomical Society of the Pacific, 103, 645-660.
Modern biblical scholarship treats the account in the synoptic gospels as a literary creation by the author of the Mark Gospel, amended in the Luke and Matthew accounts, intended to heighten the importance of what they saw as a theologically significant event, and not intended to be taken literally.Burton L. Mack, A Myth of Innocence: Mark and Christian Origins (Fortress Press, 1988) page 296; George Bradford Caird, The language and imagery of the Bible (Westminster Press, 1980), page 186; Joseph Fitzmyer, The Gospel According to Luke, X-XXIV (Doubleday, 1985) page 1513; William David Davies, Dale Allison, Matthew: Volume 3 (Continuum, 1997) page 623. This image of darkness over the land would have been understood by ancient readers, a typical element in the description of the death of kings and other major figures by writers such as Philo, Dio Cassius, Virgil, Plutarch and Josephus.David E. Garland, Reading Matthew: A Literary and Theological Commentary on the First Gospel (Smyth & Helwys Publishing, 1999) page 264. Géza Vermes describes the darkness account as typical of "Jewish eschatological imagery of the day of the Lord", and says that those interpreting it as a datable eclipse are "barking up the wrong tree".Géza Vermes, The Passion (Penguin, 2005) pages 108–109.
Temple veil, earthquake and resurrection of dead saints
The synoptic gospels state that the veil of the temple was torn from top to bottom.
The Gospel of Matthew adds an account of earthquakes, splitting rocks, and the opening of the graves of dead saints - stock motifs from Jewish apocalyptic literature - and describes how these resurrected saints went into the holy city and appeared to many people.John Yueh-Han Yieh, One Teacher: Jesus' Teaching Role in Matthew's Gospel Report (Walter de Gruyter, 2005) page 65; Robert Walter Funk, The acts of Jesus: the search for the authentic deeds of Jesus (Harper San Francisco, 1998) pages 129-270.
In the Mark and Matthew accounts, the centurion in charge comments on the events: "Truly this man was the Son of God!" or "Truly this was the Son of God!". In the Gospel of Luke this becomes, "Certainly this man was innocent!"
Medical aspects
A number of theories to explain the circumstances of the death of Jesus on the cross have been proposed by physicians and Biblical scholars. In 2006, Matthew W Maslen and Piers D Mitchell reviewed over 40 publications on the subject with theories ranging from cardiac rupture to pulmonary embolism.Medical theories on the cause of death in Crucifixion J R Soc Med April 2006 vol. 99 no. 4 185-188.
Image: Bronzino's Deposition of Christ.
In 1847, based on the reference in the Gospel of John (19:34) to blood and water coming out when Jesus' side was pierced with a spear, physician William Stroud proposed the ruptured heart theory of the cause of Christ's death, which influenced a number of other people.William Stroud, 1847, Treatise on the Physical Death of Jesus Christ London: Hamilton and Adams.William Seymour, 2003, The Cross in Tradition, History and Art ISBN 0-7661-4527-1
The cardiovascular collapse theory is a prevalent modern explanation and suggests that Jesus died of profound shock. According to this theory, the scourging, the beatings, and the fixing to the cross would have left Jesus dehydrated, weak, and critically ill, and this would have led to cardiovascular collapse.The Search for the Physical Cause of Christ's Death, BYU Studies: http://www.meridianmagazine.com/byustudies/050325cause.htmlThe Physical Death Of Jesus Christ, Study by The Mayo Clinic citing studies by Bucklin R (The legal and medical aspects of the trial and death of Christ. Sci Law 1970; 10:14–26), Mikulicz-Radeeki FV (The chest wound in the crucified Christ. Med News 1966; 14:30–40), Davis CT (The Crucifixion of Jesus: The passion of Christ from a medical point of view. Ariz Med 1965; 22:183-187), and Barbet P (A Doctor at Calvary: The Passion of Our Lord Jesus Christ as Described by a Surgeon, Earl of Wicklow (trans) Garden City, NY, Doubleday Image Books 1953, pp 12–18, 37–147, 159–175, 187–208).
Writing in the Journal of the American Medical Association, physician William Edwards and his colleagues supported the combined cardiovascular collapse (via hypovolemic shock) and exhaustion asphyxia theories, assuming that the flow of water from the side of Jesus described in the Gospel of John was pericardial fluid.Edwards, William D.; Gabel, Wesley J.; Hosmer, Floyd E; On the Physical Death of Jesus, JAMA March 21, 1986, Vol 255, No. 11, pp 1455–1463
In his book The Crucifixion of Jesus, physician and forensic pathologist Frederick Zugibe studied the likely circumstances of the death of Jesus in great detail.Frederick Zugibe, 2005, The Crucifixion of Jesus: A Forensic Inquiry Evans Publishing, ISBN 1-59077-070-6JW Hewitt, The Use of Nails in the Crucifixion Harvard Theological Review, 1932 Zugibe carried out a number of experiments over several years to test his theories while he was a medical examiner. These studies included experiments in which volunteers of specific weights were suspended at specific angles while the amount of pull on each hand was measured, with the feet either secured or left unsecured. In these cases the amount of pull, and the corresponding pain, was found to be significant.
Pierre Barbet, a French physician and chief surgeon at Saint Joseph's Hospital in Paris,New Scientist Oct 12, 1978, page 96 hypothesized that Jesus would have had to relax his muscles to obtain enough air to utter his last words, in the face of exhaustion asphyxia.Barbet, Pierre. Doctor at Calvary, New York: Image Books, 1963. Some of Barbet's theories, e.g., the location of the nails, are disputed by Zugibe.
Orthopedic surgeon Keith Maxwell not only analyzed the medical aspects of the crucifixion, but also looked back at how Jesus could have carried the cross all the way along the Via Dolorosa.Keith Maxwell MD, Jesus' Suffering and Crucifixion from a Medical Point of View
In an article for the Catholic Medical Association, Phillip Bishop and physiologist Brian Church suggested a new theory based on suspension trauma.Catholic Medical Association, Linacre Quarterly, August 2006
In 2003, historians FP Retief and L Cilliers reviewed the history and pathology of crucifixion as performed by the Romans and suggested that the cause of death was often a combination of factors. They also state that Roman guards were prohibited from leaving the scene until death had occurred.FP Retief and L Cilliers The history and pathology of Crucifixion South African medical journal, 2003.
Theological significance
Christians believe that Jesus' death was instrumental in restoring humankind to a relationship with God. Online: https://books.google.com/books?id=tVJXcOVY2UgC Online: https://books.google.com/books?id=l3rDtUQRdKAC Christians believe that through faith in Jesus' substitutionary death and triumphant resurrection people are reunited with God and receive new joy and power in this life as well as eternal life in heaven after the body's death. Online: https://books.google.com/books?id=UU9Ygc_c5woC Thus the crucifixion of Jesus along with his resurrection restores access to a vibrant experience of God's presence, love and grace as well as the confidence of eternal life. Online: https://books.google.com/books?id=13QRjJjhEqkC As Thomas à Kempis wrote in The Imitation of Christ: "In the Cross is salvation; in the Cross is life; in the Cross is protection against our enemies; in the Cross is infusion of heavenly sweetness; in the Cross is strength of mind; in the Cross is joy of spirit; in the Cross is excellence of virtue; in the Cross is perfection of holiness. There is no salvation of soul, nor hope of eternal life, save in the Cross."
Christology of the crucifixion
The accounts of the crucifixion and subsequent resurrection of Jesus provide a rich background for Christological analysis, from the canonical Gospels to the Pauline epistles.Who do you say that I am? Essays on Christology by Jack Dean Kingsbury, Mark Allan Powell, David R. Bauer 1999 ISBN 0-664-25752-6 page 106 Christians believe Jesus' suffering was foretold in the Hebrew Bible, such as in Psalm 22, and Isaiah's songs of the suffering servant.
In Johannine "agent Christology" the submission of Jesus to crucifixion is a sacrifice made as an agent of God or servant of God, for the sake of eventual victory.The Christology of the New Testament by Oscar Cullmann 1959 ISBN 0-664-24351-7 page 79The Johannine exegesis of God by Daniel Rathnakara Sadananda 2005 ISBN 3-11-018248-3 page 281 This builds on the salvific theme of the Gospel of John which begins in John 1:29 with John the Baptist's proclamation: "The Lamb of God who takes away the sins of the world".Johannine Christology and the Early Church by T. E. Pollard 2005 ISBN 0-521-01868-4 page 21Studies in Early Christology by Martin Hengel 2004 ISBN 0-567-04280-4 page 371 Further reinforcement of the concept is provided in Revelation 21:14 where the "lamb slain but standing" is the only one worthy of handling the scroll (i.e. the book) containing the names of those who are to be saved.Studies in Revelation by M. R. DeHaan, Martin Ralph DeHaan 1998 ISBN 0-8254-2485-2 page 103
A central element in the Christology presented in the Acts of the Apostles is the affirmation of the belief that the death of Jesus by crucifixion happened "with the foreknowledge of God, according to a definite plan".New Testament christology by Frank J. Matera 1999 ISBN 0-664-25694-5 page 67 In this view, as in Acts 2:23, the cross is not viewed as a scandal, for the crucifixion of Jesus "at the hands of the lawless" is viewed as the fulfilment of the plan of God.The speeches in Acts: their content, context, and concerns by Marion L. Soards 1994 ISBN 0-664-25221-4 page 34
Paul's Christology has a specific focus on the death and resurrection of Jesus. For Paul, the crucifixion of Jesus is directly related to his resurrection and the term "the cross of Christ" used in Galatians 6:12 may be viewed as his abbreviation of the message of the gospels.Christology by Hans Schwarz 1998 ISBN 0-8028-4463-4 pages 132–134 For Paul, the crucifixion of Jesus was not an isolated event in history, but a cosmic event with significant eschatological consequences, as in 1 Corinthians 2:8. In the Pauline view, Jesus, obedient to the point of death (Philippians 2:8) died "at the right time" (Romans 4:25) based on the plan of God. For Paul the "power of the cross" is not separable from the Resurrection of Jesus.
However, the belief in the redemptive nature of Jesus' death predates the Pauline letters and goes back to the earliest days of Christianity and the Jerusalem church.Lord Jesus Christ: Devotion to Jesus in Earliest Christianity by Larry W. Hurtado (Sep 14, 2005) ISBN 0-8028-3167-2 pages 130–133 The Nicene Creed's statement that "for our sake he was crucified" is a reflection of this core belief's formalization in the fourth century.Christian Theology by J. Glyndwr Harris (Mar 2002) ISBN 1-902210-22-0 pages 12–15
John Calvin supported the "agent of God" Christology and argued that in his trial in Pilate's Court Jesus could have successfully argued for his innocence, but instead submitted to crucifixion in obedience to the Father.Calvin's Christology by Stephen Edmondson 2004 ISBN 0-521-54154-9 page 91The Reading and Preaching of the Scriptures by Hughes Oliphant Old 2002 ISBN 0-8028-4775-7 page 125 This Christological theme continued into the 20th century, both in the Eastern and Western Churches. In the Eastern Church Sergei Bulgakov argued that the crucifixion of Jesus was "pre-eternally" determined by the Father before the creation of the world, to redeem humanity from the disgrace caused by the fall of Adam.The Lamb of God by Sergei Bulgakov 2008 ISBN 0-8028-2779-9 page 129 In the Western Church, Karl Rahner elaborated on the analogy that the blood of the Lamb of God (and the water from the side of Jesus) shed at the crucifixion had a cleansing nature, similar to baptismal water.Encyclopedia of theology: a concise Sacramentum mundi by Karl Rahner 2004 ISBN 0-86012-006-6 page 74
Atonement
Jesus' death and resurrection underpin a variety of theological interpretations as to how salvation is granted to humanity. These interpretations vary widely in how much emphasis they place on the death of Jesus as compared to his words.For example, see the Sermon on the Mount. According to the substitutionary atonement view, Jesus' death is of central importance, and Jesus willingly sacrificed himself as an act of perfect obedience as a sacrifice of love which pleased God. By contrast, the moral influence theory of atonement focuses much more on the moral content of Jesus' teaching, and sees Jesus' death as a martyrdom.A. J. Wallace, R. D. Rusk Moral Transformation: The Original Christian Paradigm of Salvation, (New Zealand: Bridgehead, 2011) ISBN 978-1-4563-8980-2 Since the Middle Ages there has been conflict between these two views within Western Christianity. Evangelical Protestants typically hold a substitutionary view and in particular hold to the theory of penal substitution. Liberal Protestants typically reject substitutionary atonement and hold to the moral influence theory of atonement. Both views are popular within the Roman Catholic church, with the satisfaction doctrine incorporated into the idea of penance.
In the Roman Catholic tradition this view of atonement is balanced by the duty of Roman Catholics to perform Acts of Reparation to Jesus Christ, which in the encyclical Miserentissimus Redemptor of Pope Pius XI were defined as "some sort of compensation to be rendered for the injury" with respect to the sufferings of Jesus. Pope John Paul II referred to these Acts of Reparation as the "unceasing effort to stand beside the endless crosses on which the Son of God continues to be crucified."
Among Eastern Orthodox Christians, another common view is Christus Victor.See Development of the Christus Victor view after Aulén This holds that Jesus was sent by God to defeat death and Satan. Because of his perfection, voluntary death, and resurrection, Jesus defeated Satan and death, and arose victorious. Therefore, humanity was no longer bound in sin, but was free to rejoin God through faith in Jesus.
Islam
Most Islamic traditions, save for a few, categorically deny that Jesus physically died, either on a cross or in another manner. The contention is found within the Islamic traditions themselves, with the earliest Hadith reports quoting the companions of Muhammad as stating that Jesus had died, while the majority of subsequent Hadith and Tafsir have elaborated an argument in favor of the denial through exegesis and apologetics, which became the popular (orthodox) view.
Professor and scholar Mahmoud M. Ayoub has summarized what the Quran states on the matter despite interpretative arguments; the Quranic verse in question, Quran 4:157 (quoted above), has been interpreted in various ways.
Contrary to Christian teachings, some Islamic traditions teach that Jesus ascended to Heaven without being put on the cross, but that God transformed another person to appear exactly like him and to be crucified instead of him. This thought is supported by a misreading of Irenaeus' account of the teaching of the 2nd-century Alexandrian Gnostic Basilides, given when Irenaeus was refuting a heresy that denied the death."Wherefore he did not himself suffer death, but Simon, a certain man of Cyrene, being compelled, bore the cross in his stead; so that this latter being transfigured by him, that he might be thought to be Jesus, was crucified, through ignorance and error, while Jesus himself received the form of Simon, and, standing by, laughed at them. For since he was an incorporeal power, and the Nous (mind) of the unborn father, he transfigured himself as he pleased, and thus ascended to him who had sent him, deriding them, inasmuch as he could not be laid hold of, and was invisible to all" (Irenaeus, Against Heresies, book I, ch. 24, 4). Islamic tradition then converges with the Christian testimony in holding that Isa ascended bodily to Heaven, there to remain until his Second Coming in the End Days.
In art, symbolism and devotions
Image: Detail of the countenance of Christ just dead (1793), by José Luján Pérez, Canary Islands Cathedral, Las Palmas de Gran Canaria.
Since the crucifixion of Jesus, the cross has become a key element of Christian symbolism, and the crucifixion scene has been a key element of Christian art, giving rise to specific artistic themes such as Ecce Homo, The Raising of the Cross, Descent from the Cross and Entombment of Christ.
The Crucifixion, seen from the Cross by Tissot presented a novel approach at the end of the 19th century, in which the crucifixion scene was portrayed from the perspective of Jesus.James Tissot: the Life of Christ by Judith F. Dolkart 2009 ISBN 1-85894-496-1 page 201
The symbolism of the cross, which is today one of the most widely recognized Christian symbols, was used from the earliest Christian times, and Justin Martyr, who died in 165, describes it in a way that already implies its use as a symbol, although the crucifix appeared later. Masters such as Caravaggio, Rubens and Titian have all depicted the Crucifixion scene in their works.
Devotions based on the process of crucifixion and the sufferings of Jesus are followed by various Christians. The Stations of the Cross follows a number of stages based on the stages involved in the crucifixion of Jesus, while the Rosary of the Holy Wounds is used to meditate on the wounds of Jesus as part of the crucifixion.
The presence of the Virgin Mary under the cross has in itself been the subject of Marian art, and of well-known Catholic symbolism such as the Miraculous Medal and Pope John Paul II's Coat of Arms bearing a Marian Cross. A number of Marian devotions also involve the presence of the Virgin Mary at Calvary; for example, Pope John Paul II stated that "Mary was united to Jesus on the Cross".EWTN: Mary was United to Jesus on the CrossVatican website on Behold Your Mother! Well-known works of Christian art by masters such as Raphael (e.g., the Mond Crucifixion) and Caravaggio (e.g., his Entombment) depict the Virgin Mary as part of the crucifixion scene.
See also
Dismas and Gestas, the two thieves crucified alongside Jesus
Empty tomb
Feast of the Cross
Feast of the Sacred Heart
Life of Jesus in the New Testament
Seven Sorrows of Mary
Swoon hypothesis
References
Further reading
Category:30s
Category:Christology
Category:Gospel episodes
Category:Jesus and history
Category:Public executions
Category:Sorrowful Mysteries
Category:Stations of the Cross
Category:1st century in Jerusalem
Asthma

Asthma is a common long-term inflammatory disease of the airways of the lungs. It is characterized by variable and recurring symptoms, reversible airflow obstruction, and bronchospasm. Symptoms include episodes of wheezing, coughing, chest tightness, and shortness of breath. These episodes may occur a few times a day or a few times per week. Depending on the person, they may become worse at night or with exercise.
Asthma is thought to be caused by a combination of genetic and environmental factors. Environmental factors include exposure to air pollution and allergens. Other potential triggers include medications such as aspirin and beta blockers. Diagnosis is usually based on the pattern of symptoms, response to therapy over time, and spirometry. Asthma is classified according to the frequency of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate. It may also be classified as atopic or non-atopic where atopy refers to a predisposition toward developing a type 1 hypersensitivity reaction.
There is no cure for asthma. Symptoms can be prevented by avoiding triggers, such as allergens and irritants, and by the use of inhaled corticosteroids. Long-acting beta agonists (LABA) or antileukotriene agents may be used in addition to inhaled corticosteroids if asthma symptoms remain uncontrolled. Treatment of rapidly worsening symptoms is usually with an inhaled short-acting beta-2 agonist such as salbutamol and corticosteroids taken by mouth. In very severe cases, intravenous corticosteroids, magnesium sulfate, and hospitalization may be required.
In 2013, 242 million people globally had asthma, up from 183 million in 1990. It caused about 489,000 deaths in 2013, most of which occurred in the developing world. It often begins in childhood. The rates of asthma have increased significantly since the 1960s. Asthma was recognized as early as Ancient Egypt. The word asthma is from the Greek ἅσθμα, ásthma, which means "panting".
Signs and symptoms
Asthma is characterized by recurrent episodes of wheezing, shortness of breath, chest tightness, and coughing. Sputum may be produced from the lung by coughing but is often hard to bring up. During recovery from an attack, it may appear pus-like due to high levels of white blood cells called eosinophils. Symptoms are usually worse at night and in the early morning or in response to exercise or cold air. Some people with asthma rarely experience symptoms, usually in response to triggers, whereas others may have marked and persistent symptoms.
Associated conditions
A number of other health conditions occur more frequently in those with asthma, including gastro-esophageal reflux disease (GERD), rhinosinusitis, and obstructive sleep apnea. Psychological disorders are also more common, with anxiety disorders occurring in between 16–52% and mood disorders in 14–41%. However, it is not known if asthma causes psychological problems or if psychological problems lead to asthma. Those with asthma, especially if it is poorly controlled, are at high risk for radiocontrast reactions.
Causes
Asthma is caused by a combination of complex and incompletely understood environmental and genetic interactions. These factors influence both its severity and its responsiveness to treatment. It is believed that the recent increased rates of asthma are due to changing epigenetics (heritable factors other than those related to the DNA sequence) and a changing living environment. Onset before age 12 is more likely due to genetic influence, while onset after 12 is more likely due to environmental influence.
Environmental
Many environmental factors have been associated with asthma's development and exacerbation including allergens, air pollution, and other environmental chemicals. Smoking during pregnancy and after delivery is associated with a greater risk of asthma-like symptoms. Low air quality from factors such as traffic pollution or high ozone levels has been associated with both asthma development and increased asthma severity. Over half of cases in children in the United States occur in areas with air quality below EPA standards. Exposure to indoor volatile organic compounds may be a trigger for asthma; formaldehyde exposure, for example, has a positive association. Also, phthalates in certain types of PVC are associated with asthma in children and adults.
There is an association between acetaminophen (paracetamol) use and asthma. The majority of the evidence does not, however, support a causal role. A 2014 review found that the association disappeared when respiratory infections were taken into account. Use by a mother during pregnancy is also associated with an increased risk as is psychological stress during pregnancy.
Asthma is associated with exposure to indoor allergens. Common indoor allergens include dust mites, cockroaches, animal dander (fragments of fur or feathers), and mold. Efforts to decrease dust mites have been found to be ineffective on symptoms in sensitized subjects. Certain viral respiratory infections, such as respiratory syncytial virus and rhinovirus, may increase the risk of developing asthma when acquired as young children. Certain other infections, however, may decrease the risk.
Hygiene hypothesis
The hygiene hypothesis attempts to explain the increased rates of asthma worldwide as a direct and unintended result of reduced exposure, during childhood, to non-pathogenic bacteria and viruses. It has been proposed that the reduced exposure to bacteria and viruses is due, in part, to increased cleanliness and decreased family size in modern societies. Exposure to bacterial endotoxin in early childhood may prevent the development of asthma, but exposure at an older age may provoke bronchoconstriction. Evidence supporting the hygiene hypothesis includes lower rates of asthma on farms and in households with pets.
Use of antibiotics in early life has been linked to the development of asthma. Also, delivery via caesarean section is associated with an increased risk (estimated at 20–80%) of asthma—this increased risk is attributed to the lack of healthy bacterial colonization that the newborn would have acquired from passage through the birth canal. There is a link between asthma and the degree of affluence which may be related to the hygiene hypothesis as less affluent individuals often have more exposure to bacteria and viruses.
Genetic
CD14-endotoxin interaction based on CD14 SNP C-159T
Endotoxin levels | CC genotype | TT genotype
High exposure | Low risk | High risk
Low exposure | High risk | Low risk
Family history is a risk factor for asthma, with many different genes being implicated. If one identical twin is affected, the probability of the other having the disease is approximately 25%. By the end of 2005, 25 genes had been associated with asthma in six or more separate populations, including GSTM1, IL10, CTLA-4, SPINK5, LTC4S, IL4R and ADAM33, among others. Many of these genes are related to the immune system or modulating inflammation. Even among this list of genes supported by highly replicated studies, results have not been consistent among all populations tested. In 2006 over 100 genes were associated with asthma in one genetic association study alone; more continue to be found.
Some genetic variants may only cause asthma when they are combined with specific environmental exposures. An example is a specific single nucleotide polymorphism in the CD14 region and exposure to endotoxin (a bacterial product). Endotoxin exposure can come from several environmental sources including tobacco smoke, dogs, and farms. Risk for asthma, then, is determined by both a person's genetics and the level of endotoxin exposure.
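The gene-environment interaction described above can be illustrated with a small lookup keyed on genotype and exposure. This is only a sketch using the qualitative labels from the CD14 table in this section; the names and the two-level exposure categories are assumptions for illustration, not a risk model.

```python
# Qualitative risk labels from the CD14 C-159T / endotoxin table above.
RISK_BY_GENOTYPE_AND_EXPOSURE = {
    ("CC", "high"): "low risk",
    ("CC", "low"): "high risk",
    ("TT", "high"): "high risk",
    ("TT", "low"): "low risk",
}

def asthma_risk(genotype, endotoxin_exposure):
    """Look up the qualitative risk label for a CD14 genotype under a given exposure."""
    return RISK_BY_GENOTYPE_AND_EXPOSURE[(genotype, endotoxin_exposure)]

# The same genotype flips between low and high risk as exposure changes,
# which is the point of the interaction.
print(asthma_risk("CC", "high"))  # low risk
print(asthma_risk("CC", "low"))   # high risk
```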
Medical conditions
A triad of atopic eczema, allergic rhinitis and asthma is called atopy. The strongest risk factor for developing asthma is a history of atopic disease; with asthma occurring at a much greater rate in those who have either eczema or hay fever. Asthma has been associated with eosinophilic granulomatosis with polyangiitis (formerly known as Churg–Strauss syndrome), an autoimmune disease and vasculitis. Individuals with certain types of urticaria may also experience symptoms of asthma.
There is a correlation between obesity and the risk of asthma, with both having increased in recent years. Several factors may be at play, including decreased respiratory function due to a buildup of fat and the fact that adipose tissue leads to a pro-inflammatory state.
Beta blocker medications such as propranolol can trigger asthma in those who are susceptible. Cardioselective beta-blockers, however, appear safe in those with mild or moderate disease. Other medications that can cause problems in asthmatics are angiotensin-converting enzyme inhibitors, aspirin, and NSAIDs.
Exacerbation
Some individuals will have stable asthma for weeks or months and then suddenly develop an episode of acute asthma. Different individuals react to various factors in different ways. Most individuals can develop severe exacerbation from a number of triggering agents.
Home factors that can lead to exacerbation of asthma include dust, animal dander (especially cat and dog hair), cockroach allergens and mold. Perfumes are a common cause of acute attacks in women and children. Both viral and bacterial infections of the upper respiratory tract can worsen the disease. Psychological stress may worsen symptoms—it is thought that stress alters the immune system and thus increases the airway inflammatory response to allergens and irritants.
Pathophysiology
Asthma is the result of chronic inflammation of the conducting zone of the airways (most especially the bronchi and bronchioles), which subsequently results in increased contractability of the surrounding smooth muscles. This among other factors leads to bouts of narrowing of the airway and the classic symptoms of wheezing. The narrowing is typically reversible with or without treatment. Occasionally the airways themselves change. Typical changes in the airways include an increase in eosinophils and thickening of the lamina reticularis. Chronically the airways' smooth muscle may increase in size along with an increase in the numbers of mucous glands. Other cell types involved include: T lymphocytes, macrophages, and neutrophils. There may also be involvement of other components of the immune system including: cytokines, chemokines, histamine, and leukotrienes among others.
Diagnosis
While asthma is a well-recognized condition, there is not one universal agreed upon definition. It is defined by the Global Initiative for Asthma as "a chronic inflammatory disorder of the airways in which many cells and cellular elements play a role. The chronic inflammation is associated with airway hyper-responsiveness that leads to recurrent episodes of wheezing, breathlessness, chest tightness and coughing particularly at night or in the early morning. These episodes are usually associated with widespread but variable airflow obstruction within the lung that is often reversible either spontaneously or with treatment".
There is currently no precise test for the diagnosis, which is typically based on the pattern of symptoms and response to therapy over time. A diagnosis of asthma should be suspected if there is a history of recurrent wheezing, coughing or difficulty breathing and these symptoms occur or worsen due to exercise, viral infections, allergens or air pollution. Spirometry is then used to confirm the diagnosis. In children under the age of six the diagnosis is more difficult as they are too young for spirometry.
Spirometry
Spirometry is recommended to aid in diagnosis and management. It is the single best test for asthma. If the FEV1 measured by this technique improves more than 12% following administration of a bronchodilator such as salbutamol, this is supportive of the diagnosis. It may, however, be normal in those with a history of mild asthma that is not currently active. As caffeine is a bronchodilator in people with asthma, the use of caffeine before a lung function test may interfere with the results. Single-breath diffusing capacity can help differentiate asthma from COPD. It is reasonable to perform spirometry every one or two years to follow how well a person's asthma is controlled.
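The reversibility criterion above is simple arithmetic: percentage improvement equals (post-bronchodilator FEV1 minus pre-bronchodilator FEV1) divided by the pre-bronchodilator value, times 100. As an illustration only (the 12% threshold comes from the text above; the function name and sample FEV1 values are hypothetical, not clinical guidance), a minimal Python sketch:

    def bronchodilator_reversibility(fev1_pre_litres, fev1_post_litres):
        """Percentage improvement in FEV1 after a bronchodilator such as salbutamol."""
        return (fev1_post_litres - fev1_pre_litres) / fev1_pre_litres * 100

    # Hypothetical example: 2.0 L before and 2.3 L after gives a 15% improvement,
    # which is above the 12% level described as supportive of the diagnosis.
    improvement = bronchodilator_reversibility(2.0, 2.3)
    print(f"FEV1 improvement: {improvement:.1f}%")
    print("supportive of asthma" if improvement > 12 else "not supportive on its own")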
Others
The methacholine challenge involves the inhalation of increasing concentrations of a substance that causes airway narrowing in those predisposed. If negative it means that a person does not have asthma; if positive, however, it is not specific for the disease.
Other supportive evidence includes: a ≥20% difference in peak expiratory flow rate on at least three days in a week for at least two weeks, a ≥20% improvement of peak flow following treatment with either salbutamol, inhaled corticosteroids or prednisone, or a ≥20% decrease in peak flow following exposure to a trigger. Testing peak expiratory flow is more variable than spirometry, however, and thus not recommended for routine diagnosis. It may be useful for daily self-monitoring in those with moderate to severe disease and for checking the effectiveness of new medications. It may also be helpful in guiding treatment in those with acute exacerbations.
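For illustration, the peak-flow variability figure in the criterion above is commonly computed for a single day as (highest reading minus lowest reading) divided by the highest reading, times 100. The short Python sketch below applies the ≥20% cut-off from the text to hypothetical morning and evening readings; the formula choice, function name and numbers are assumptions for demonstration, not a diagnostic tool:

    def daily_peak_flow_variability(readings_l_per_min):
        """Within-day variability of peak expiratory flow, as a percentage of the highest reading."""
        highest, lowest = max(readings_l_per_min), min(readings_l_per_min)
        return (highest - lowest) / highest * 100

    # Hypothetical morning/evening readings (L/min) over one week.
    week = [[320, 410], [300, 420], [390, 400], [310, 415], [400, 405], [305, 400], [395, 410]]
    days_over_threshold = sum(1 for day in week if daily_peak_flow_variability(day) >= 20)
    print(f"Days with >=20% variability: {days_over_threshold} of {len(week)}")  # 4 of 7 here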
Classification
Clinical classification (≥ 12 years old)
Severity            | Symptom frequency | Night-time symptoms | %FEV1 of predicted | FEV1 variability | SABA use
Intermittent        | ≤2/week           | ≤2/month            | ≥80%               | <20%             | ≤2 days/week
Mild persistent     | >2/week           | 3–4/month           | ≥80%               | 20–30%           | >2 days/week
Moderate persistent | Daily             | >1/week             | 60–80%             | >30%             | daily
Severe persistent   | Continuously      | Frequent (7×/week)  | <60%               | >30%             | ≥twice/day
Asthma is clinically classified according to the frequency of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate. Asthma may also be classified as atopic (extrinsic) or non-atopic (intrinsic), based on whether symptoms are precipitated by allergens (atopic) or not (non-atopic). While asthma is classified based on severity, at the moment there is no clear method for classifying different subgroups of asthma beyond this system. Finding ways to identify subgroups that respond well to different types of treatments is a current critical goal of asthma research.
Although asthma is a chronic obstructive condition, it is not considered as a part of chronic obstructive pulmonary disease as this term refers specifically to combinations of disease that are irreversible such as bronchiectasis, chronic bronchitis, and emphysema. Unlike these diseases, the airway obstruction in asthma is usually reversible; however, if left untreated, the chronic inflammation from asthma can lead the lungs to become irreversibly obstructed due to airway remodeling. In contrast to emphysema, asthma affects the bronchi, not the alveoli.
Asthma exacerbation
Severity of an acute exacerbation
Near-fatal: high PaCO2, or requiring mechanical ventilation, or both
Life-threatening (any one of):
  Clinical signs: altered level of consciousness, exhaustion, arrhythmia, low blood pressure, cyanosis, silent chest, poor respiratory effort
  Measurements: peak flow < 33%, oxygen saturation < 92%, PaO2 < 8 kPa, "normal" PaCO2
Acute severe (any one of): peak flow 33–50%, respiratory rate ≥ 25 breaths per minute, heart rate ≥ 110 beats per minute, unable to complete sentences in one breath
Moderate: worsening symptoms, peak flow 50–80% best or predicted, no features of acute severe asthma
An acute asthma exacerbation is commonly referred to as an asthma attack. The classic symptoms are shortness of breath, wheezing, and chest tightness. The wheezing is most often when breathing out. While these are the primary symptoms of asthma, some people present primarily with coughing, and in severe cases, air motion may be significantly impaired such that no wheezing is heard. In children, chest pain is often present.
Signs which occur during an asthma attack include the use of accessory muscles of respiration (the sternocleidomastoid and scalene muscles of the neck), a paradoxical pulse (a pulse that is weaker during inhalation and stronger during exhalation), and over-inflation of the chest. A blue color of the skin and nails may occur from lack of oxygen.
In a mild exacerbation the peak expiratory flow rate (PEFR) is ≥200 L/min or ≥50% of the predicted best. Moderate is defined as between 80 and 200 L/min or 25% and 50% of the predicted best while severe is defined as ≤ 80 L/min or ≤25% of the predicted best.
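Read as a rule, the thresholds in the preceding sentence translate directly into a classification by absolute peak flow or by percentage of the person's predicted best. A minimal sketch, assuming the thresholds exactly as stated above (the function name and example values are hypothetical):

    def classify_exacerbation(pefr_l_per_min, predicted_best_l_per_min):
        """Classify an exacerbation as mild, moderate or severe from peak expiratory flow."""
        percent_of_best = pefr_l_per_min / predicted_best_l_per_min * 100
        if pefr_l_per_min >= 200 or percent_of_best >= 50:
            return "mild"
        if pefr_l_per_min > 80 or percent_of_best > 25:
            return "moderate"
        return "severe"

    # Hypothetical example: 150 L/min with a predicted best of 500 L/min (30%) classifies as moderate.
    print(classify_exacerbation(150, 500))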
Acute severe asthma, previously known as status asthmaticus, is an acute exacerbation of asthma that does not respond to standard treatments of bronchodilators and corticosteroids. Half of cases are due to infections, with others caused by allergens, air pollution, or insufficient or inappropriate medication use.
Brittle asthma is a kind of asthma distinguishable by recurrent, severe attacks. Type 1 brittle asthma is a disease with wide peak flow variability, despite intense medication. Type 2 brittle asthma is background well-controlled asthma with sudden severe exacerbations.
Exercise-induced
Exercise can trigger bronchoconstriction in people both with and without asthma. It occurs in most people with asthma and up to 20% of people without asthma. Exercise-induced bronchoconstriction is common in professional athletes. The highest rates are among cyclists (up to 45%), swimmers, and cross-country skiers. While it may occur in any weather conditions, it is more common when it is dry and cold. Inhaled beta2-agonists do not appear to improve athletic performance among those without asthma; however, oral doses may improve endurance and strength.
Occupational
Asthma as a result of (or worsened by) workplace exposures is a commonly reported occupational disease. Many cases, however, are not reported or recognized as such. It is estimated that 5–25% of asthma cases in adults are work-related. A few hundred different agents have been implicated, with the most common being: isocyanates, grain and wood dust, colophony, soldering flux, latex, animals, and aldehydes. The occupations associated with the highest risk of problems include: those who spray paint, bakers and those who process food, nurses, chemical workers, those who work with animals, welders, hairdressers and timber workers.
Aspirin-induced asthma
Aspirin-exacerbated respiratory disease, also known as aspirin-induced asthma, affects up to 9% of asthmatics. Reactions may also occur to other NSAIDs. People affected often also have trouble with nasal polyps. In people who are affected, low doses of paracetamol or COX-2 inhibitors are generally safe.
Alcohol-induced asthma
Alcohol may worsen asthmatic symptoms in up to a third of people. This may be even more common in some ethnic groups such as the Japanese and those with aspirin-induced asthma. Other studies have found improvement in asthmatic symptoms from alcohol.
Nonallergic asthma
Nonallergic asthma, also known as intrinsic or nonatopic asthma, makes up between 10 and 33% of cases. Skin tests to common inhalant allergens are negative and serum concentrations of IgE are normal. It often starts later in life, and women are more commonly affected than men. Usual treatments may not work as well.
Differential diagnosis
Many other conditions can cause symptoms similar to those of asthma. In children, other upper airway diseases such as allergic rhinitis and sinusitis should be considered as well as other causes of airway obstruction including: foreign body aspiration, tracheal stenosis or laryngotracheomalacia, vascular rings, enlarged lymph nodes or neck masses. Bronchiolitis and other viral infections may also produce wheezing. In adults, COPD, congestive heart failure, airway masses, as well as drug-induced coughing due to ACE inhibitors should be considered. In both populations vocal cord dysfunction may present similarly.
Chronic obstructive pulmonary disease can coexist with asthma and can occur as a complication of chronic asthma. After the age of 65, most people with obstructive airway disease will have asthma and COPD. In this setting, COPD can be differentiated by increased airway neutrophils, abnormally increased wall thickness, and increased smooth muscle in the bronchi. However, this level of investigation is not performed because COPD and asthma share similar principles of management: corticosteroids, long-acting beta-agonists, and smoking cessation. COPD closely resembles asthma in symptoms but is correlated with more exposure to cigarette smoke, an older age, less symptom reversibility after bronchodilator administration, and a decreased likelihood of a family history of atopy.
Prevention
The evidence for the effectiveness of measures to prevent the development of asthma is weak. Some show promise, including limiting smoke exposure both in utero and after delivery, breastfeeding, and increased exposure to daycare or large families, but none are well supported enough to be recommended for this indication. Early pet exposure may be useful. Results from exposure to pets at other times are inconclusive, and it is only recommended that pets be removed from the home if a person has allergic symptoms to said pet. Dietary restrictions during pregnancy or when breastfeeding have not been found to be effective and thus are not recommended. Reducing or eliminating compounds to which sensitive people react from the workplace may be effective. It is not clear if annual influenza vaccination affects the risk of exacerbations; immunization, however, is recommended by the World Health Organization. Smoking bans are effective in decreasing exacerbations of asthma.
Management
While there is no cure for asthma, symptoms can typically be improved. A specific, customized plan for proactively monitoring and managing symptoms should be created. This plan should include the reduction of exposure to allergens, testing to assess the severity of symptoms, and the usage of medications. The treatment plan should be written down and advise adjustments to treatment according to changes in symptoms.
The most effective treatment for asthma is identifying triggers, such as cigarette smoke, pets, or aspirin, and eliminating exposure to them. If trigger avoidance is insufficient, the use of medication is recommended. Pharmaceutical drugs are selected based on, among other things, the severity of illness and the frequency of symptoms. Specific medications for asthma are broadly classified into fast-acting and long-acting categories.
Bronchodilators are recommended for short-term relief of symptoms. In those with occasional attacks, no other medication is needed. If mild persistent disease is present (more than two attacks a week), low-dose inhaled corticosteroids or alternatively, an oral leukotriene antagonist or a mast cell stabilizer is recommended. For those who have daily attacks, a higher dose of inhaled corticosteroids is used. In a moderate or severe exacerbation, oral corticosteroids are added to these treatments.
Lifestyle modification
Avoidance of triggers is a key component of improving control and preventing attacks. The most common triggers include allergens, smoke (tobacco and other), air pollution, non-selective beta-blockers, and sulfite-containing foods. Cigarette smoking and second-hand smoke (passive smoke) may reduce the effectiveness of medications such as corticosteroids. Laws that limit smoking decrease the number of people hospitalized for asthma. Dust mite control measures, including air filtration, chemicals to kill mites, vacuuming, mattress covers and other methods, had no effect on asthma symptoms. Overall, exercise is beneficial in people with stable asthma. Yoga could provide small improvements in quality of life and symptoms in people with asthma.
Medications
Medications used to treat asthma are divided into two general classes: quick-relief medications used to treat acute symptoms; and long-term control medications used to prevent further exacerbation. Antibiotics are generally not needed for sudden worsening of symptoms.
Fast–acting
Image: Salbutamol metered-dose inhaler, commonly used to treat asthma attacks.
Short-acting beta2-adrenoceptor agonists (SABA), such as salbutamol (albuterol USAN), are the first-line treatment for asthma symptoms. They are recommended before exercise in those with exercise-induced symptoms.
Anticholinergic medications, such as ipratropium bromide, provide additional benefit when used in combination with SABA in those with moderate or severe symptoms. Anticholinergic bronchodilators can also be used if a person cannot tolerate a SABA. If a child requires admission to hospital, additional ipratropium does not appear to help beyond a SABA.
Older, less selective adrenergic agonists, such as inhaled epinephrine, have similar efficacy to SABAs. They are however not recommended due to concerns regarding excessive cardiac stimulation.
Long–term control
Image: Fluticasone propionate metered-dose inhaler, commonly used for long-term control.
Corticosteroids are generally considered the most effective treatment available for long-term control. Inhaled forms such as beclomethasone are usually used except in the case of severe persistent disease, in which oral corticosteroids may be needed. It is usually recommended that inhaled formulations be used once or twice daily, depending on the severity of symptoms.
Long-acting beta-adrenoceptor agonists (LABA) such as salmeterol and formoterol can improve asthma control, at least in adults, when given in combination with inhaled corticosteroids. In children this benefit is uncertain. When used without steroids they increase the risk of severe side-effects and even with corticosteroids they may slightly increase the risk.
Leukotriene receptor antagonists (such as montelukast and zafirlukast) may be used in addition to inhaled corticosteroids, typically also in conjunction with a LABA. Evidence is insufficient to support use in acute exacerbations. In children they appear to be of little benefit when added to inhaled steroids, and the same applies in adolescents and adults. They are useful by themselves. In those under five years of age, they were the preferred add-on therapy after inhaled corticosteroids by the British Thoracic Society in 2009. A similar class of drugs, 5-LOX inhibitors, may be used as an alternative in the chronic treatment of mild to moderate asthma among older children and adults. As of 2013 there is one medication in this family known as zileuton.
Mast cell stabilizers (such as cromolyn sodium) are another non-preferred alternative to corticosteroids.
Delivery methods
Medications are typically provided as metered-dose inhalers (MDIs) in combination with an asthma spacer or as a dry powder inhaler. The spacer is a plastic cylinder that mixes the medication with air, making it easier to receive a full dose of the drug. A nebulizer may also be used. Nebulizers and spacers are equally effective in those with mild to moderate symptoms. However, insufficient evidence is available to determine whether a difference exists in those with severe disease.
Adverse effects
Long-term use of inhaled corticosteroids at conventional doses carries a minor risk of adverse effects. Risks include the development of cataracts and a mild regression in stature.
Others
When asthma is unresponsive to usual medications, other options are available for both emergency management and prevention of flareups. For emergency management other options include:
Oxygen to alleviate hypoxia if saturations fall below 92%.
Corticosteroids by mouth are recommended, with five days of prednisone being the same as two days of dexamethasone.
Intravenous magnesium sulfate treatment increases bronchodilation when used in addition to other treatment in moderate to severe acute asthma attacks. In adults it results in a reduction of hospital admissions.
Heliox, a mixture of helium and oxygen, may also be considered in severe unresponsive cases.
Intravenous salbutamol is not supported by available evidence and is thus used only in extreme cases.
Methylxanthines (such as theophylline) were once widely used, but do not add significantly to the effects of inhaled beta-agonists. Their use in acute exacerbations is controversial.
The dissociative anesthetic ketamine is theoretically useful if intubation and mechanical ventilation is needed in people who are approaching respiratory arrest; however, there is no evidence from clinical trials to support this. It is unclear if non-invasive positive pressure ventilation in children is of use as it has not been sufficiently studied.
For those with severe persistent asthma not controlled by inhaled corticosteroids and LABAs, bronchial thermoplasty may be an option. It involves the delivery of controlled thermal energy to the airway wall during a series of bronchoscopies. While it may increase exacerbation frequency in the first few months, it appears to decrease the subsequent rate. Effects beyond one year are unknown. Evidence suggests that sublingual immunotherapy in those with both allergic rhinitis and asthma improves outcomes.
Alternative medicine
Many people with asthma, like those with other chronic disorders, use alternative treatments; surveys show that roughly 50% use some form of unconventional therapy. There is little data to support the effectiveness of most of these therapies. Evidence is insufficient to support the use of vitamin C, although there is tentative support for its use in exercise-induced bronchospasm. In people with mild to moderate asthma, treatment with vitamin D supplementation is likely to reduce the risk of asthma exacerbations.
Acupuncture is not recommended for the treatment of asthma as there is insufficient evidence to support its use. Air ionisers show no evidence of improving asthma symptoms or benefiting lung function; this applies equally to positive and negative ion generators.
Manual therapies, including osteopathic, chiropractic, physiotherapeutic and respiratory therapeutic maneuvers, have insufficient evidence to support their use in treating asthma. The Buteyko breathing technique for controlling hyperventilation may result in a reduction in medication use; however, the technique does not have any effect on lung function. Thus an expert panel felt that evidence was insufficient to support its use.
Prognosis
Image: Asthma deaths per million persons in 2012.
Image: Disability-adjusted life years for asthma per 100,000 inhabitants in 2004.
The prognosis for asthma is generally good, especially for children with mild disease. Mortality has decreased over the last few decades due to better recognition and improvement in care. In 2010 the death rate was 170 per million for males and 90 per million for females. Rates vary between countries by as much as 100-fold.
Globally it caused moderate or severe disability in 19.4 million people as of 2004 (16 million of whom were in low- and middle-income countries). Of asthma diagnosed during childhood, half of cases will no longer carry the diagnosis after a decade. Airway remodeling is observed, but it is unknown whether these changes are harmful or beneficial. Early treatment with corticosteroids seems to prevent or ameliorate a decline in lung function. Asthma in children also has negative effects on the quality of life of their parents.
Epidemiology
Image: Rates of asthma in different countries of the world as of 2004.
As of 2011, 235–330 million people worldwide are affected by asthma, and approximately 250,000–345,000 people die per year from the disease. Rates vary between countries with prevalences between 1 and 18%. It is more common in developed than developing countries. One thus sees lower rates in Asia, Eastern Europe and Africa. Within developed countries it is more common in those who are economically disadvantaged while in contrast in developing countries it is more common in the affluent. The reason for these differences is not well known. Low and middle income countries make up more than 80% of the mortality.
While asthma is twice as common in boys as girls, severe asthma occurs at equal rates. In contrast adult women have a higher rate of asthma than men and it is more common in the young than the old. In children, asthma was the most common reason for admission to the hospital following an emergency department visit in the US in 2011.
Global rates of asthma have increased significantly between the 1960s and 2008 with it being recognized as a major public health problem since the 1970s. Rates of asthma have plateaued in the developed world since the mid-1990s with recent increases primarily in the developing world. Asthma affects approximately 7% of the population of the United States and 5% of people in the United Kingdom. Canada, Australia and New Zealand have rates of about 14–15%.
Economics
From 2000 to 2010, the average cost per asthma-related hospital stay in the United States for children remained relatively stable at about $3,600, whereas the average cost per asthma-related hospital stay for adults increased from $5,200 to $6,600. In 2010, Medicaid was the most frequent primary payer among children and adults aged 18–44 years in the United States; private insurance was the second most frequent payer. In 2010, both children and adults in the lowest-income communities in the United States had a higher rate of hospital stays for asthma than those in the highest-income communities.
History
Image: Ebers Papyrus detailing treatment of asthma.
Asthma was recognized in Ancient Egypt and was treated by drinking an incense mixture known as kyphi. It was officially named as a specific respiratory problem by Hippocrates circa 450 BC, with the Greek word for "panting" forming the basis of our modern name. In 200 BC it was believed to be at least partly related to the emotions.
In 1873, one of the first papers in modern medicine on the subject tried to explain the pathophysiology of the disease, while one in 1872 concluded that asthma can be cured by rubbing the chest with chloroform liniment. Medical treatment in 1880 included the use of intravenous doses of a drug called pilocarpin. In 1886, F.H. Bosworth theorized a connection between asthma and hay fever. Epinephrine was first referred to in the treatment of asthma in 1905. Oral corticosteroids began to be used for this condition in the 1950s, while inhaled corticosteroids and selective short-acting beta agonists came into wide use in the 1960s.
Image: 1907 advertisement for Grimault's Indian Cigarettes, emphasising their alleged efficacy for the relief of asthma.
A notable and well-documented case in the 19th century was that of young Theodore Roosevelt (1858–1919). At that time there was no effective treatment. Roosevelt's youth was in large part shaped by his poor health, partly related to his asthma. He experienced recurring nighttime asthma attacks that felt like being smothered to death, terrifying the boy and his parents.
During the 1930s to 1950s, asthma was known as one of the "holy seven" psychosomatic illnesses. Its cause was considered to be psychological, with treatment often based on psychoanalysis and other talking cures. As these psychoanalysts interpreted the asthmatic wheeze as the suppressed cry of the child for its mother, they considered the treatment of depression to be especially important for individuals with asthma.
Central African Republic
The Central African Republic (CAR; Sango: Ködörösêse tî Bêafrîka; French: République centrafricaine) is a landlocked country in Central Africa. It is bordered by Chad to the north, Sudan to the northeast, South Sudan to the east, the Democratic Republic of the Congo and the Republic of the Congo to the southwest and Cameroon to the west. The CAR covers a land area of about and had an estimated population of around 4.7 million.
Most of the CAR consists of Sudano-Guinean savannas, but the country also includes a Sahelo-Sudanian zone in the north and an equatorial forest zone in the south. Two thirds of the country is within the Ubangi River basin (which flows into the Congo), while the remaining third lies in the basin of the Chari, which flows into Lake Chad.
What is today the Central African Republic has been inhabited for millennia; however, the country's current borders were established by France, which ruled the country as a colony starting in the late 19th century. After gaining independence from France in 1960, the Central African Republic was ruled by a series of autocratic leaders; by the 1990s, calls for democracy led to the first multi-party democratic elections in 1993. Ange-Félix Patassé became president, but was later removed by General François Bozizé in the 2003 coup. The Central African Republic Bush War began in 2004 and, despite a peace treaty in 2007 and another in 2011, fighting broke out between various factions in December 2012, leading to ethnic and religious cleansing of the Muslim minority and massive population displacement in 2013 and 2014.
Despite its significant mineral deposits and other resources, such as uranium reserves, crude oil, gold, diamonds, cobalt, lumber, and hydropower, as well as significant quantities of arable land, the Central African Republic is among the ten poorest countries in the world. According to the Human Development Index (HDI), the country had the second-lowest level of human development, ranking 187th out of 188 countries.
History
Image: The Bouar Megaliths, pictured here on a 1967 Central African stamp, date back to the very late Neolithic Era (c. 3500–2700 BC).
Early history
Approximately 10,000 years ago, desertification forced hunter-gatherer societies south into the Sahel regions of northern Central Africa, where some groups settled and began farming as part of the Neolithic Revolution.McKenna, p. 4 Initial farming of white yam progressed into millet and sorghum, and before 3000 BC (Fran Osseo-Asare (2005) Food Culture in Sub Saharan Africa. Greenwood. ISBN 0313324883. p. xxi) the domestication of African oil palm improved the groups' nutrition and allowed for expansion of the local populations.McKenna, p. 5 This Agricultural Revolution, combined with a "Fish-stew Revolution" in which fishing began to take place and boats came into use, allowed for the transportation of goods. Products were often moved in ceramic pots, which are the first known examples of artistic expression from the region's inhabitants.
The Bouar Megaliths in the western region of the country indicate an advanced level of habitation dating back to the very late Neolithic Era (c. 3500–2700 BC).Methodology and African Prehistory by UNESCO International Scientific Committee for the Drafting of a General History of Africa, p. 548; UNESCO World Heritage Centre. "Les mégalithes de Bouar". UNESCO. Ironworking arrived in the region around 1000 BC from both Bantu cultures in what is today Nigeria and from the Nile city of Meroë, the capital of the Kingdom of Kush.McKenna, p. 7
During the Bantu Migrations from about 1000 BC to AD 1000, Ubangian-speaking people spread eastward from Cameroon to Sudan, Bantu-speaking people settled in the southwestern regions of the CAR, and Central Sudanic-speaking people settled along the Ubangi River in what is today Central and East CAR.
Bananas arrived in the region and added an important source of carbohydrates to the diet; they were also used in the production of alcoholic beverages. Production of copper, salt, dried fish, and textiles dominated the economic trade in the Central African region.McKenna, p. 10
16th–18th century
During the 16th and 17th centuries slave traders began to raid the region as part of the expansion of the Saharan and Nile River slave routes. Their captives were enslaved and shipped to the Mediterranean coast, Europe, Arabia, the Western Hemisphere, or to the slave ports and factories along the West and North African coasts or south along the Ubangi and Congo rivers.Alistair Boddy-Evans. Central Africa Republic Timeline – Part 1: From Prehistory to Independence (13 August 1960), A Chronology of Key Events in Central Africa Republic. About.com In the mid-19th century, the Bobangi people became major slave traders and sold their captives to the Americas using the Ubangi river to reach the coast."Central African Republic". Encyclopædia Britannica. During the 18th century Bandia-Nzakara peoples established the Bangassou Kingdom along the Ubangi River.
French colonial period
Image: The Sultan of Bangassou and his wives, 1906.
In 1875 the Sudanese sultan Rabih az-Zubayr governed Upper-Oubangui, which included present-day CAR. The European penetration of Central African territory began in the late 19th century during the Scramble for Africa.French Colonies – Central African Republic. Discoverfrance.net. Retrieved 6 April 2013. Europeans, primarily the French, Germans, and Belgians, arrived in the area in 1885. France created Ubangi-Shari territory in 1894.
In 1911 at the Treaty of Fez, France ceded a nearly 300,000 km² portion of the Sangha and Lobaye basins to the German Empire which ceded a smaller area (in present-day Chad) to France. After World War I France again annexed the territory.
In 1920 French Equatorial Africa was established and Ubangi-Shari was administered from Brazzaville.Thomas O'Toole (1997) Political Reform in Francophone Africa. Westview Press. p. 111 During the 1920s and 1930s the French introduced a policy of mandatory cotton cultivation, a network of roads was built, attempts were made to combat sleeping sickness and Protestant missions were established to spread Christianity. New forms of forced labor were also introduced and a large number of Ubangians were sent to work on the Congo-Ocean Railway. Many of these forced laborers died of exhaustion, illness, or the poor conditions which claimed between 20% and 25% of the 127,000 workers."Extreme Railways : Congo's Jungle Railway". YouTube. 2 October 2013.
Image: Charles de Gaulle in Bangui, 1940.
In 1928, a major insurrection, the Kongo-Wara rebellion or 'war of the hoe handle', broke out in Western Ubangi-Shari and continued for several years. The extent of this insurrection, which was perhaps the largest anti-colonial rebellion in Africa during the interwar years, was carefully hidden from the French public because it provided evidence of strong opposition to French colonial rule and forced labor.
In September 1940, during the Second World War, pro-Gaullist French officers took control of Ubangi-Shari and General Leclerc established his headquarters for the Free French Forces in Bangui.Central African Republic: The colonial era – Britannica Online Encyclopedia. Encyclopædia Britannica. Retrieved 6 April 2013. In 1946 Barthélémy Boganda was elected with 9,000 votes to the French National Assembly, becoming the first representative for CAR in the French government. Boganda maintained a political stance against racism and the colonial regime but gradually became disheartened with the French political system and returned to CAR to establish the Movement for the Social Evolution of Black Africa (MESAN) in 1950.
Since independence (1960–present)
In the Ubangi-Shari Territorial Assembly election in 1957, MESAN captured 347,000 out of the total 356,000 votes,Olson, p. 122. and won every legislative seat,Kalck, p. xxxi. which led to Boganda being elected president of the Grand Council of French Equatorial Africa and vice-president of the Ubangi-Shari Government Council.Kalck, p. 90. Within a year, he declared the establishment of the Central African Republic and served as the country's first prime minister. MESAN continued to exist, but its role was limited.Kalck, p. 136. After Boganda's death in a plane crash on 29 March 1959, his cousin, David Dacko, took control of MESAN and became the country's first president after the CAR had formally received independence from France. Dacko threw out his political rivals, including former Prime Minister and Mouvement d'évolution démocratique de l'Afrique centrale (MEDAC) leader Abel Goumba, whom he forced into exile in France. With all opposition parties suppressed by November 1962, Dacko declared MESAN as the official party of the state.Kalck, p. xxxii.
Bokassa and the Central African Empire (1965–1979)
Image: Jean-Bédel Bokassa, self-crowned Emperor of Central Africa. ('Cannibal' dictator Bokassa given posthumous pardon. The Guardian. 3 December 2010)
On 31 December 1965, Dacko was overthrown in the Saint-Sylvestre coup d'état by Colonel Jean-Bédel Bokassa, who suspended the constitution and dissolved the National Assembly. President Bokassa declared himself President for Life in 1972, and named himself Emperor Bokassa I of the Central African Empire (as the country was renamed) on 4 December 1976. A year later, Emperor Bokassa crowned himself in a lavish and expensive ceremony that was ridiculed by much of the world.
In April 1979, young students protested against Bokassa's decree that all school attendees would need to buy uniforms from a company owned by one of his wives. The government violently suppressed the protests, killing 100 children and teenagers. Bokassa himself may have been personally involved in some of the killings."'Good old days' under Bokassa?". BBC News. 2 January 2009 In September 1979, France overthrew Bokassa and "restored" Dacko to power (subsequently restoring the name of the country to the Central African Republic). Dacko, in turn, was again overthrown in a coup by General André Kolingba on 1 September 1981.
Central African Republic under Kolingba
Kolingba suspended the constitution and ruled with a military junta until 1985. He introduced a new constitution in 1986 which was adopted by a nationwide referendum. Membership in his new party, the Rassemblement Démocratique Centrafricain (RDC), was voluntary. In 1987 and 1988, semi-free elections to parliament were held but Kolingba's two major political opponents, Abel Goumba and Ange-Félix Patassé were not allowed to participate.
By 1990, inspired by the fall of the Berlin Wall, a pro-democracy movement arose. Pressure from the United States, France, and from a group of locally represented countries and agencies called GIBAFOR (France, the USA, Germany, Japan, the EU, the World Bank, and the UN) finally led Kolingba to agree, in principle, to hold free elections in October 1992 with help from the UN Office of Electoral Affairs. After using the excuse of alleged irregularities to suspend the results of the elections as a pretext for holding on to power, President Kolingba came under intense pressure from GIBAFOR to establish a "Conseil National Politique Provisoire de la République" (Provisional National Political Council, CNPPR) and to set up a "Mixed Electoral Commission", which included representatives from all political parties.
When a second round of elections were finally held in 1993, again with the help of the international community coordinated by GIBAFOR, Ange-Félix Patassé won in the second round of voting with 53% of the vote while Goumba won 45.6%. Patassé's party, the Mouvement pour la Libération du Peuple Centrafricain (MLPC) or Movement for the Liberation of the Central African People, gained a simple but not an absolute majority of seats in parliament, which meant Patassé's party required coalition partners.
Patassé Government (1993–2003)
Patassé purged many of the Kolingba elements from the government and Kolingba supporters accused Patassé's government of conducting a "witch hunt" against the Yakoma. A new constitution was approved on 28 December 1994 but had little impact on the country's politics. In 1996–1997, reflecting steadily decreasing public confidence in the government's erratic behaviour, three mutinies against Patassé's administration were accompanied by widespread destruction of property and heightened ethnic tension. During this time (1996) the Peace Corps evacuated all its volunteers to neighboring Cameroon. To date, the Peace Corps has not returned to the Central African Republic. The Bangui Agreements, signed in January 1997, provided for the deployment of an inter-African military mission, to Central African Republic and re-entry of ex-mutineers into the government on 7 April 1997. The inter-African military mission was later replaced by a U.N. peacekeeping force (MINURCA).
In 1998, parliamentary elections resulted in Kolingba's RDC winning 20 out of 109 seats but in 1999, in spite of widespread public anger in urban centers over his corrupt rule, Patassé won a second term in the presidential election.
On 28 May 2001, rebels stormed strategic buildings in Bangui in an unsuccessful coup attempt. The army chief of staff, Abel Abrou, and General François N'Djadder Bedaya were killed, but Patassé regained the upper hand by bringing in at least 300 troops of the Congolese rebel leader Jean-Pierre Bemba and Libyan soldiers.
In the aftermath of the failed coup, militias loyal to Patassé sought revenge against rebels in many neighborhoods of Bangui and incited unrest including the murder of many political opponents. Eventually, Patassé came to suspect that General François Bozizé was involved in another coup attempt against him, which led Bozizé to flee with loyal troops to Chad. In March 2003, Bozizé launched a surprise attack against Patassé, who was out of the country. Libyan troops and some 1,000 soldiers of Bemba's Congolese rebel organization failed to stop the rebels and Bozizé's forces succeeded in overthrowing Patassé.
Central African Republic since 2003
Image: Rebel militia in the northern countryside, 2007.
François Bozizé suspended the constitution and named a new cabinet which included most opposition parties. Abel Goumba was named vice-president, which gave Bozizé's new government a positive image. Bozizé established a broad-based National Transition Council to draft a new constitution and announced that he would step down and run for office once the new constitution was approved.
In 2004 the Central African Republic Bush War began as forces opposed to Bozizé took up arms against his government. In May 2005 Bozizé won a presidential election that excluded Patassé and in 2006 fighting continued between the government and the rebels. In November 2006, Bozizé's government requested French military support to help them repel rebels who had taken control of towns in the country's northern regions.
Though the details of the agreement initially made public pertained to logistics and intelligence, the French assistance eventually included strikes by Mirage jets against rebel positions.
The Syrte Agreement in February and the Birao Peace Agreement in April 2007 called for a cessation of hostilities, the billeting of FDPC fighters and their integration with FACA, the liberation of political prisoners, integration of FDPC into government, an amnesty for the UFDR, its recognition as a political party, and the integration of its fighters into the national army. Several groups continued to fight but other groups signed on to the agreement, or similar agreements with the government (e.g. UFR on 15 December 2008). The only major group not to sign an agreement at the time was the CPJP, which continued its activities and signed a peace agreement with the government on 25 August 2012.
In 2011 Bozizé was reelected in an election which was widely considered fraudulent.
In November 2012, Séléka, a coalition of rebel groups, took over towns in the northern and central regions of the country. These groups eventually reached a peace deal with Bozizé's government in January 2013 involving a power-sharing government, but this deal broke down and the rebels seized the capital in March 2013, and Bozizé fled the country.
Michel Djotodia took over as president and in May 2013 Central African Republic's Prime Minister Nicolas Tiangaye requested a UN peacekeeping force from the UN Security Council and on 31 May former President Bozizé was indicted for crimes against humanity and incitement of genocide."CrisisWatch N°117". crisisgroup.org.
The security situation did not improve during June–August 2013 and there were reports of over 200,000 internally displaced persons (IDPs) as well as human rights abuses"CrisisWatch N°118". crisisgroup.org. and renewed fighting between Séléka and Bozizé supporters."CrisisWatch N°119". crisisgroup.org.
Image: Eland armoured car of the Central African Multinational Force patrolling the streets of Bangui in December 2013.
French President François Hollande called on the UN Security Council and African Union to increase their efforts to stabilize the country. The Séléka government was said to be divided, and in September 2013 Djotodia officially disbanded Séléka, but many rebels refused to disarm and veered further out of government control.Smith, David (22 November 2013) Unspeakable horrors in a country on the verge of genocide The Guardian. Retrieved 23 November 2013
The conflict worsened towards the end of the year, with international warnings of a "genocide"; fighting largely consisted of reprisal attacks on civilians by Séléka's predominantly Muslim fighters and by Christian militias called "anti-balaka".
On 11 January 2014, Michel Djotodia and his prime minister, Nicolas Tiangaye, resigned as part of a deal negotiated at a regional summit in neighboring Chad. Catherine Samba-Panza was elected as interim president by the National Transitional Council, assuming office on 23 January. She became the first ever female Central African president. In January 2015, Marie-Noëlle Koyara became the first female defense minister since independence.
On 18 February 2014, United Nations Secretary-General Ban Ki-moon called on the UN Security Council to immediately deploy 3,000 troops to the country to combat what he described as innocent civilians being deliberately targeted and murdered in large numbers. The secretary-general outlined a six-point plan, including the addition of 3,000 peacekeepers to bolster the 6,000 African Union soldiers and 2,000 French troops already deployed in the country.
On 23 July 2014, following Congolese mediation efforts, Séléka and anti-balaka representatives signed a ceasefire agreement in Brazzaville."RCA : signature d’un accord de cessez-le-feu à Brazzaville". VOA. 24 July 2014. Retrieved 28 July 2014.
On 14 December 2015, Séléka rebel leaders declared an independent Republic of Logone."Rebel declares autonomous state in Central African Republic". Reuters. 16 December 2015.
Geography
Image: Falls of Boali on the Mbali River.
Image: A village in the Central African Republic.
The Central African Republic is a landlocked nation within the interior of the African continent. It is bordered by Cameroon, Chad, Sudan, South Sudan, the Democratic Republic of the Congo, and the Republic of the Congo. The country lies between latitudes 2° and 11°N, and longitudes 14° and 28°E.
Much of the country consists of flat or rolling plateau savanna approximately above sea level. Most of the northern half lies within the World Wildlife Fund's East Sudanian savanna ecoregion. In addition to the Fertit Hills in the northeast of the CAR, there are scattered hills in the southwest regions. In the northwest is the Yade Massif, a granite plateau with an altitude of .
At , the Central African Republic is the world's 45th-largest country. It is comparable in size to Ukraine.
Much of the southern border is formed by tributaries of the Congo River; the Mbomou River in the east merges with the Uele River to form the Ubangi River, which also comprises portions of the southern border. The Sangha River flows through some of the western regions of the country, while the eastern border lies along the edge of the Nile River watershed.
It has been estimated that up to 8% of the country is covered by forest, with the densest parts generally located in the southern regions. The forests are highly diverse and include commercially important species of Ayous, Sapelli and Sipo."Sold Down the River (English)". forestsmonitor.org. The deforestation rate is about 0.4% per annum, and lumber poaching is commonplace.. CARPE 13 July 2007
In 2008, Central African Republic was the world's least light pollution affected country.National Geographic Magazine, November 2008
The Central African Republic is the focal point of the Bangui Magnetic Anomaly, one of the largest magnetic anomalies on Earth.
Wildlife
In the southwest, the Dzanga-Sangha National Park is located in a rain forest area. The country is noted for its population of forest elephants and western lowland gorillas. In the north, the Manovo-Gounda St Floris National Park is well-populated with wildlife, including leopards, lions, cheetahs and rhinos, and the Bamingui-Bangoran National Park is located in the northeast of CAR. The parks have been seriously affected by the activities of poachers, particularly those from Sudan, over the past two decades.
Climate
Image: Köppen climate classification map of the Central African Republic.
The climate of the Central African Republic is generally tropical, with a wet season that lasts from June to September in the northern regions of the country, and from May to October in the south. During the wet season, rainstorms are an almost daily occurrence, and early morning fog is commonplace. Maximum annual precipitation is approximately in the upper Ubangi region.Central African Republic: Country Study Guide volume 1, p. 24.
The northern areas are hot and humid from February to May, but can be subject to the hot, dry, and dusty trade wind known as the Harmattan. The southern regions have a more equatorial climate, but they are subject to desertification, while the extreme northeast regions of the country are already desert.
Prefectures and sub-prefectures
The Central African Republic is divided into 16 administrative prefectures (préfectures), two of which are economic prefectures (préfectures economiques), and one an autonomous commune; the prefectures are further divided into 71 sub-prefectures (sous-préfectures).
The prefectures are Bamingui-Bangoran, Basse-Kotto, Haute-Kotto, Haut-Mbomou, Kémo, Lobaye, Mambéré-Kadéï, Mbomou, Nana-Mambéré, Ombella-M'Poko, Ouaka, Ouham, Ouham-Pendé and Vakaga. The economic prefectures are Nana-Grébizi and Sangha-Mbaéré, while the commune is the capital city of Bangui.
Demographics
Image: Fula women in Paoua.
The population of the Central African Republic has almost quadrupled since independence. In 1960, the population was 1,232,000; as of a 2014 UN estimate, it is approximately 4,709,000.
The United Nations estimates that approximately 11% of the population aged between 15 and 49 is HIV positive. Only 3% of the country has antiretroviral therapy available, compared to a 17% coverage in the neighbouring countries of Chad and the Republic of the Congo.ANNEX 3: Country progress indicators. 2006 Report on the Global AIDS Epidemic. unaids.org
The nation is divided into over 80 ethnic groups, each having its own language. The largest ethnic groups are the Baya, Banda, Mandjia, Sara, Mboum, M'Baka, Yakoma, and Fula or Fulani, with others including Europeans of mostly French descent.Central African Republic. CIA World Factbook
Urbanization
Religion
Image: A Christian church in the Central African Republic.
According to the 2003 national census, 80.3% of the population was Christian—51.4% Protestant and 28.9% Roman Catholic—and 15% was Muslim. Indigenous belief (animism) is also practiced, and many indigenous beliefs are incorporated into Christian and Islamic practice. A UN director described religious tensions between Muslims and Christians as being high.
The CIA World Factbook reports that approximately fifty percent of the population of CAR are Christians (Protestant 25%, Roman Catholic 25%), while 35% of the population maintain indigenous beliefs and 15% practice Islam.
There are many missionary groups operating in the country, including Lutherans, Baptists, Catholics, Grace Brethren, and Jehovah's Witnesses. While these missionaries are predominantly from the United States, France, Italy, and Spain, many are also from Nigeria, the Democratic Republic of the Congo, and other African countries. Large numbers of missionaries left the country when fighting broke out between rebel and government forces in 2002–3, but many of them have now returned to continue their work.
According to Overseas Development Institute research, during the crisis ongoing since 2012, religious leaders have mediated between communities and armed groups; they also provided refuge for people seeking shelter.Veronique Barbelet (2015) Central African Republic: addressing the protection crisis London: Overseas Development Institute
Language
The Central African Republic's two official languages are French and Sango (also spelled Sangho), a creole developed as an inter-ethnic lingua franca based on the local Ngbandi language. The CAR is one of the few African countries to have an African language as an official language.
Culture
Sports
Basketball is the country's most popular sport and a good way to connect with its people.Country Profile - Central African Republic-Sports and Activities, Indo-African Chamber of Commerce and Industry Retrieved 24 September 2015.Central African Republic — Things to Do, iExplore Retrieved 24 September 2015. Its national team won the African Championship twice and was the first Sub-Saharan African team to qualify for the Basketball World Cup.
The country also has a national football team, which is governed by the Fédération Centrafricaine de Football, and stages matches at the Barthélemy Boganda Stadium.
Government and politics
Politics in the Central African Republic formally take place in a framework of a semi-presidential republic. In this system, the President is the head of state, with a Prime Minister as head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament.
Changes in government have occurred in recent years by three methods: violence, negotiations, and elections. A new constitution was approved by voters in a referendum held on 5 December 2004. The government was rated 'Partly Free' from 1991 to 2001 and from 2004 to 2013.
Executive branch
The president is elected by popular vote for a six-year term, and the prime minister is appointed by the president. The president also appoints and presides over the Council of Ministers, which initiates laws and oversees government operations.
As of June 2014 the Central African Republic was governed by an interim government under Catherine Samba-Panza, Interim President; and André Nzapayeké, Interim Prime Minister.
Legislative branch
The National Assembly (Assemblée Nationale) has 105 members, elected for a five-year term using the two-round (or Run-off) system.
Judicial branch
Like many other former French colonies, the Central African Republic's legal system is based on French law. The Supreme Court, or Cour Supreme, is made up of judges appointed by the president. There is also a Constitutional Court, and its judges are also appointed by the president.
Foreign relations
Foreign aid and UN Involvement
The Central African Republic is heavily dependent upon foreign aid and numerous NGOs provide services that the government does not provide.
In 2006, due to ongoing violence, over 50,000 people in the country's northwest were at risk of starvationCAR: Food shortages increase as fighting intensifies in the northwest. irinnews.org, 29 March 2006 but this was averted due to assistance from the United Nations. On 8 January 2008, the UN Secretary-General Ban Ki-Moon declared that the Central African Republic was eligible to receive assistance from the Peacebuilding Fund.Central African Republic Peacebuilding Fund – Overview. United Nations. Three priority areas were identified: first, the reform of the security sector; second, the promotion of good governance and the rule of law; and third, the revitalization of communities affected by conflicts. On 12 June 2008, the Central African Republic requested assistance from the UN Peacebuilding Commission, which was set up in 2005 to help countries emerging from conflict avoid devolving back into war or chaos.
In response to concerns of a potential genocide, a peacekeeping force - the International Support Mission to the Central African Republic (MISCA) - was authorised in December 2013. This African Union force of 6,000 personnel was accompanied by the French Operation Sangaris.
Events in 2013–2014
Image: Refugees of the fighting in the Central African Republic, January 2014.
In March 2013 the Bozizé government fell to the Séléka rebel group and the rebel leader, Djotodia, proclaimed himself President, while Nicolas Tiangaye remained as the prime minister; he had recently been appointed and was allowed by the Séléka rebels to retain his post, as he was endorsed by the opposition.
A new government was appointed on 31 March 2013, which consisted of members of Séléka and representatives of the opposition to Bozizé, one pro-Bozizé individual,"Rebels, opposition form government in CentrAfrica: decree", Agence France-Presse, 31 March 2013."Centrafrique : Nicolas Tiangaye présente son gouvernement d'union nationale", Jeune Afrique, 1 April 2013. and a number of representatives of civil society. On 1 April, the former opposition parties declared that they would boycott the government.Ange Aboa, "Central African Republic opposition says to boycott new government", Reuters, 1 April 2013. After African leaders in Chad refused to recognize Djotodia as President, proposing to form a transitional council and the holding of new elections, Djotodia signed a decree on 6 April for the formation of a council that would act as a transitional parliament. The council was tasked with electing a president to serve prior to elections in 18 months."C. Africa strongman forms transition council", Agence France-Presse, 6 April 2013.
In November 2013, the United Nations Secretary-General Ban Ki-moon noted that the security situation in the country remained precarious with government authority nonexistent outside of Bangui.Rick Gladstone (18 November 2013), Central African Republic Stirs Concern The New York Times
Both the president and prime minister resigned through an announcement at a regional summit in January 2014, after which an interim leader and speaker for the provisional parliament took over. In late January Catherine Samba-Panza became the Interim President, and André Nzapayeké, the Interim Prime Minister.
Economy
thumb|Bangui shopping district
The per capita income of the Republic is often listed as being approximately $400 a year, one of the lowest in the world, but this figure is based mostly on reported sales of exports and largely ignores the unregistered sale of foods, locally produced alcoholic beverages, diamonds, ivory, bushmeat, and traditional medicine. For most Central Africans, the informal economy of the CAR is more important than the formal economy. Export trade is hindered by poor economic development and the country's landlocked position.
The currency of the Central African Republic is the CFA franc, which is shared with several other Central African states and trades at a fixed rate to the euro. Diamonds constitute the country's most important export, accounting for 40–55% of export revenues, but it is estimated that between 30% and 50% of those produced each year leave the country clandestinely.
thumb|left|Graphical depiction of Central African Republic's product exports in 28 color-coded categories
Agriculture is dominated by the cultivation and sale of food crops such as cassava, peanuts, maize, sorghum, millet, sesame, and plantain. The annual real GDP growth rate is just above 3%. The importance of food crops over exported cash crops is indicated by the fact that the total production of cassava, the staple food of most Central Africans, ranges between 200,000 and 300,000 tonnes a year, while the production of cotton, the principal exported cash crop, ranges from 25,000 to 45,000 tonnes a year. Food crops are not exported in large quantities, but still constitute the principal cash crops of the country, because Central Africans derive far more income from the periodic sale of surplus food crops than from exported cash crops such as cotton or coffee. Much of the country is self-sufficient in food crops; however, livestock development is hindered by the presence of the tsetse fly.
The Republic's primary import partner is the Netherlands (19.5%). Other imports come from Cameroon (9.7%), France (9.3%), and South Korea (8.7%). Its largest export partner is Belgium (31.5%), followed by China (27.7%), the Democratic Republic of Congo (8.6%), Indonesia (5.2%), and France (4.5%).
The CAR is a member of the Organization for the Harmonization of Business Law in Africa (OHADA). In the 2009 World Bank Group's report Doing Business, it was ranked 183rd of 183 as regards 'ease of doing business', a composite index which takes into account regulations that enhance business activity and those that restrict it.
Infrastructure
Transportation
thumb|Trucks in Bangui
Bangui is the transport hub of the Central African Republic. As of 1999, eight roads connected the city to other main towns in the country, Cameroon, Chad and South Sudan; of these, only the toll roads are paved. During the rainy season from July to October, some roads are impassable.Eur, pp. 200–202
River ferries sail from the river port at Bangui to Brazzaville and Zongo. The river can be navigated most of the year between Bangui and Brazzaville. From Brazzaville, goods are transported by rail to Pointe-Noire, Congo's Atlantic port. The river port handles the overwhelming majority of the country's international trade and has a cargo handling capacity of 350,000 tons, along with wharfs and warehousing space.
Bangui M'Poko International Airport is the Central African Republic's only international airport. As of June 2014 it had regularly scheduled direct flights to Brazzaville, Casablanca, Cotonou, Douala, Kinshasa, Lomé, Luanda, Malabo, N'Djamena, Paris, Pointe-Noire, and Yaoundé.
Since at least 2002 there have been plans to connect Bangui by rail to the Transcameroon Railway.Eur, p. 185
Energy
The Central African Republic relies primarily on hydroelectricity, as it has few other developed sources of energy and power.
Communications
The Central African Republic has active television services, radio stations, internet service providers, and mobile phone carriers; Socatel is the leading provider of both internet and mobile phone access throughout the country. The primary governmental regulator of telecommunications is the Ministère des Postes, Télécommunications et des Nouvelles Technologies. In addition, the Central African Republic receives international support for telecommunication-related operations from the ITU Telecommunication Development Sector (ITU-D) within the International Telecommunication Union to improve infrastructure.
Education
thumb|Classroom in Sam Ouandja
Public education in the Central African Republic is free and is compulsory from ages 6 to 14."Central African Republic". Findings on the Worst Forms of Child Labor (2001). Bureau of International Labor Affairs, U.S. Department of Labor (2002). This article incorporates text from this source, which is in the public domain. However, approximately half of the adult population of the country is illiterate.
Higher education
The two institutions of higher education in the Central African Republic are the University of Bangui, a public university that includes a medical school, and Euclid University, an international university; both are located in Bangui.
Healthcare
thumb|left|Mothers and babies aged between 0 and 5 years are lining up in a Health Post at Begoua, a district of Bangui, waiting for the two drops of the oral polio vaccine.
The largest hospitals in the country are located in the Bangui district. As a member of the World Health Organization, the Central African Republic receives vaccination assistance, such as a 2014 intervention for the prevention of a measles epidemic. In 2007, female life expectancy at birth was 48.2 years and male life expectancy at birth was 45.1 years.
Women's health is poor in the Central African Republic; the country has had the 4th highest maternal mortality rate in the world.
The total fertility rate in 2014 was estimated at 4.46 children born/woman. Approximately 25% of women had undergone female genital mutilation. Many births in the country are guided by traditional birth attendants, who often have little or no formal training.
Malaria is endemic in the Central African Republic, and one of the leading causes of death.
According to 2009 estimates, the HIV/AIDS prevalence rate is about 4.7% of the adult population (ages 15–49).CIA World Factbook: HIV/AIDS – adult prevalence rate. Cia.gov. Retrieved 6 April 2013. Government expenditure on health was US$20 (PPP) per person in 2006 and 10.9% of total government expenditure in 2006. There was only around 1 physician for every 20,000 persons in 2009.
Human rights
The 2009 Human Rights Report by the United States Department of State noted that human rights in CAR were poor and expressed concerns over numerous government abuses.2009 Human Rights Report: Central African Republic. U.S. Department of State, 11 March 2010. The U.S. State Department alleged that major human rights abuses such as extrajudicial executions by security forces, torture, beatings and rape of suspects and prisoners occurred with impunity. It also alleged harsh and life-threatening conditions in prisons and detention centers, arbitrary arrest, prolonged pretrial detention and denial of a fair trial, restrictions on freedom of movement, official corruption, and restrictions on workers' rights.
The State Department report also cites widespread mob violence, the prevalence of female genital mutilation, discrimination against women and Pygmies, human trafficking, forced labor, and child labor."Findings on the Worst Forms of Child Labor – Central African Republic". dol.gov. Freedom of movement is limited in the northern part of the country "because of actions by state security forces, armed bandits, and other nonstate armed entities", and due to fighting between government and anti-government forces, many persons have been internally displaced.
Violence against children and women in relation to accusations of witchcraft has also been cited as a serious problem in the country.UN human rights chief says impunity major challenge in run-up to elections in Central African Republic. ohchr.org. 19 February 2010 Witchcraft is a criminal offense under the penal code.
Freedom of speech is addressed in the country's constitution, but there have been incidents of government intimidation of the media. A report by the International Research & Exchanges Board's media sustainability index noted that "the country minimally met objectives, with segments of the legal system and government opposed to a free media system".
Approximately 68% of girls are married before they turn 18, and the United Nations' Human Development Index ranked the country 179th out of 187 countries surveyed. The Bureau of International Labor Affairs has also listed the country in its most recent edition of the List of Goods Produced by Child Labor or Forced Labor.
References
Bibliography
Further reading
Doeden, Matt, Central African Republic in Pictures (Twentyfirst Century Books, 2009).
Petringa, Maria, Brazza, A Life for Africa (2006). ISBN 978-1-4259-1198-0.
Titley, Brian, Dark Age: The Political Odyssey of Emperor Bokassa, 2002.
Woodfork, Jacqueline, Culture and Customs of the Central African Republic (Greenwood Press, 2006).
External links
Overviews
Centrafrique.com
Country Profile from BBC News
Central African Republic from UCB Libraries GovPubs
Key Development Forecasts for the Central African Republic from International Futures
News
Central African Republic news headline links from AllAfrica.com
Other
Central African Republic at Humanitarian and Development Partnership Team (HDPT)
Johann Hari in Birao, Central African Republic. "Inside France's Secret War" from The Independent, 5 October 2007
Category:French-speaking countries and territories
Category:Landlocked countries
Category:Least developed countries
Category:Member states of the Organisation internationale de la Francophonie
Category:Member states of the African Union
Category:Member states of the United Nations
Category:Republics
Category:States and territories established in 1960
Category:Central African countries
Category:1960 establishments in the Central African Republic
Predation | thumb|A polar bear (Ursus maritimus) as the predator feeding on a bearded seal in Svalbard, Norway
thumb|Indian python swallowing a small chital deer at Mudumalai National Park
thumb|Meat ants feeding on a cicada; some species can prey on individuals of far greater size, particularly when working cooperatively.
In an ecosystem, predation is a biological interaction where a predator (an organism that is hunting) feeds on its prey (the organism that is attacked).Begon, M., Townsend, C., Harper, J. (1996). Ecology: Individuals, populations and communities (Third edition). Blackwell Science, London. ISBN 0-86542-845-X, ISBN 0-632-03801-2, ISBN 0-632-04393-8. Predators may or may not kill their prey prior to feeding on them, but the act of predation often results in the death of the prey and the eventual absorption of the prey's tissue through consumption.Encyclopædia Britannica: "predation" Thus predation is often, though not always, carnivory. Other categories of consumption are herbivory (eating parts of plants), fungivory (eating parts of fungi), and detritivory (the consumption of dead organic material). All these consumption categories fall under the rubric of consumer-resource systems. It can often be difficult to separate various types of feeding behaviors. For example, some parasitic species prey on a host organism and lay their eggs on it, so that their offspring can feed on it while it is still alive and then continue to feed on its decaying corpse after it has died. The key characteristic of predation, however, is the predator's direct impact on the prey population. On the other hand, detritivores simply eat dead organic material arising from the decay of dead individuals and have no direct impact on the "donor" organism(s).
Selective pressures imposed on one another often leads to an evolutionary arms race between prey and predator, resulting in various antipredator adaptations. Ways of classifying predation surveyed here include grouping by trophic level or diet, by specialization, and by the nature of the predator's interaction with prey.
Functional classification
Classification of predators by the extent to which they feed on and interact with their prey is one way ecologists may categorize the different types of predation. Instead of focusing on what they eat, this system classifies predators by the way in which they eat, and by the general nature of the interaction between predator and prey species. Two factors are considered here: how close the predator and prey are (in the latter two cases the term prey may be replaced with host), and whether or not the prey are directly killed by the predator, with true predation and parasitoidism involving certain death.
True predation
A true predator is commonly defined as one that kills and eats another living organism. Whereas other types of predator all harm their prey in some way, this form kills them. Predators may hunt actively for prey in pursuit predation, or sit and wait for prey to approach within striking distance, as in ambush predators. Some predators kill large prey and dismember or chew it prior to eating it, such as a jaguar or a human; others may eat their (usually much smaller) prey whole, as does a bottlenose dolphin swallowing a fish, or a snake, duck or stork swallowing a frog. Some animals that kill both large and small prey for their size (domestic cats and dogs are prime examples) may do either depending upon the circumstances; either would devour a large insect whole but dismember a rabbit. Some predation entails venom that subdues the prey before the predator ingests it, either by killing it, as the box jellyfish does, or by disabling it, as seen in the cone shell. In some cases, the venom, as in rattlesnakes and some spiders, contributes to the digestion of the prey item even before the predator begins eating. In other cases, the prey organism may die in the mouth or digestive system of the predator. Baleen whales, for example, eat millions of microscopic plankton at once, the prey being broken down well after entering the whale. Seed predation and egg predation are other forms of true predation, as seeds and eggs represent potential organisms. Predators of this classification need not eat prey entirely. For example, some predators cannot digest bones, while others can. Some may eat only part of an organism, as in grazing (see below), but still consistently cause its direct death.
Grazing
Grazing organisms may also kill their prey species, but this is seldom the case. While some herbivores like zooplankton live on unicellular phytoplankton and therefore, by the individualized nature of the organism, kill their prey, many only eat a small part of the plant. Grazing livestock may pull some grass out at the roots, but most is simply grazed upon, allowing the plant to regrow once again. Kelp is frequently grazed in subtidal kelp forests, but regrows at the base of the blade continuously to cope with browsing pressure. Animals may also be 'grazed' upon; female mosquitos land on hosts briefly to gain sufficient proteins for the development of their offspring. Starfish may be grazed on, being capable of regenerating lost arms.
Parasitism
Parasites can at times be difficult to distinguish from grazers. Their feeding behavior is similar in many ways, however they are noted for their close association with their host species. While a grazing species such as an elephant may travel many kilometers in a single day, grazing on many plants in the process, parasites form very close associations with their hosts, usually having only one or at most a few in their lifetime. This close living arrangement may be described by the term symbiosis, "living together", but unlike mutualism the association significantly reduces the fitness of the host. Parasitic organisms range from the macroscopic mistletoe, a parasitic plant, to microscopic internal parasites such as cholera. Some species however have more loose associations with their hosts. Lepidoptera (butterfly and moth) larvae may feed parasitically on only a single plant, or they may graze on several nearby plants. It is therefore wise to treat this classification system as a continuum rather than four isolated forms.
Parasitoidism
Parasitoids are organisms living in or on their host and feeding directly upon it, eventually leading to its death. They are much like parasites in their close symbiotic relationship with their host or hosts. Like the previous two classifications, parasitoid predators do not kill their hosts instantly. However, unlike parasites, they are very similar to true predators in that the fate of their prey is quite inevitably death. A well-known example is the ichneumon wasps, solitary insects that live a free life as adults and then lay their eggs on or in another species such as a caterpillar. The larvae feed on the growing host, causing it little harm at first, but soon devour the internal organs until they finally destroy the nervous system, resulting in the prey's death. By this stage the young wasps are sufficiently developed to move to the next stage in their life cycle. Though limited mainly to the insect orders Hymenoptera, Diptera and Coleoptera, parasitoids make up as much as 10% of all insect species.Godfray, H.C.J. (1994). Parasitoids: Behavioral and Evolutionary Ecology. Princeton University Press, Princeton. ISBN 0-691-03325-0, ISBN 0-691-00047-6. P. 20.
Degree of specialization
thumb|right|The land flatworm Platydemus manokwari is a predator mainly specialized in land snails
thumb|left|175px|An opportunistic alligator swims with a deer.
Among predators there is a large degree of specialization. Many predators specialize in hunting only one species of prey. Others are more opportunistic and will kill and eat almost anything (examples: humans, leopards, dogs and alligators). The specialists are usually particularly well suited to capturing their preferred prey. The prey in turn, are often equally suited to escape that predator. This is called an evolutionary arms race and tends to keep the populations of both species in equilibrium. Some predators specialize in certain classes of prey, not just single species. Some will switch to other prey (with varying degrees of success) when the preferred target is extremely scarce, and they may also resort to scavenging or a herbivorous diet if possible.
Trophic level
thumb|A secondary consumer in action: a mantis (Tenodera aridifolia) eating a bee.
thumb|Coluber eating a Sheltopusik.
Predators are often another organism's prey, and likewise prey are often predators. Though blue jays prey on insects, they may in turn be prey for cats and snakes, and snakes may be the prey of hawks. One way of classifying predators is by trophic level. Organisms that feed on autotrophs, the producers of the trophic pyramid, are known as herbivores or primary consumers; those that feed on heterotrophs such as animals are known as secondary consumers. Secondary consumers are a type of carnivore, but there are also tertiary consumers eating these carnivores, quaternary consumers eating them, and so forth. Because only a fraction of energy is passed on to the next level, this hierarchy of predation must end somewhere, and very seldom goes higher than five or six levels; it may go only as high as three trophic levels (for example, a lion that preys upon large herbivores such as wildebeest, which in turn eat grasses). A predator at the top of any food chain (that is, one that is preyed upon by no organism) is called an apex predator; examples include the orca, sperm whale, anaconda, Komodo dragon, tiger, lion, tiger shark, Nile crocodile, and most eagles and owls, and even omnivorous humans and grizzly bears. An apex predator in one environment may not retain this position as a top predator if introduced to another habitat, such as a dog among alligators, a skunk in the presence of the great horned owl (which is immune to skunk spray), or a snapping turtle among jaguars; a predatory species introduced into an area where it faces no predators, such as a domestic cat or a dog in some insular environments, can become an apex predator by default.
Many organisms (of which humans are prime examples) eat from multiple levels of the food chain and, thus, make this classification problematic. A carnivore may eat both secondary and tertiary consumers, and its prey may itself be difficult to classify for similar reasons. Organisms showing both carnivory and herbivory are known as omnivores. Even herbivores such as the giant panda may supplement their diet with meat. Scavenging of carrion provides a significant part of the diet of some of the most fearsome predators. Carnivorous plants would be very difficult to fit into this classification, producing their own food but also digesting anything that they may trap. Organisms that eat detritivores or parasites would also be difficult to classify by such a scheme.
Predation as competition
An alternative view offered by Richard Dawkins is of predation as a form of competition: the genes of both the predator and prey are competing for the body (or 'survival machine') of the prey organism.Dawkins, R. (1976). The Selfish Gene. Oxford University Press. ISBN 0-19-286092-5. This is best understood in the context of the gene centered view of evolution. Another manner in which predation and competition are connected is throughout intraguild predation. Intraguild predators are those that kill and eat other predators of different species at the same trophic level, and thus that are potential competitors.
Ecological role
Predators may increase the biodiversity of communities by preventing a single species from becoming dominant. Such predators are known as keystone species and may have a profound influence on the balance of organisms in a particular ecosystem.Changing the distribution of predators and prey in an ecosystem can turn things upside down. March 1, 2013 Scientific American Introduction or removal of this predator, or changes in its population density, can have drastic cascading effects on the equilibrium of many other populations in the ecosystem. For example, grazers of a grassland may prevent a single dominant species from taking over.Botkin, D. and E. Keller (2003). Environmental Science: Earth as a living planet. John Wiley & Sons. ISBN 0-471-38914-5. P.2.
The elimination of wolves from Yellowstone National Park had profound impacts on the trophic pyramid. Without predation, herbivores began to over-graze many woody browse species, affecting the area's plant populations. In addition, wolves often kept animals from grazing in riparian areas, which protected beavers from having their food sources encroached upon. The removal of wolves had a direct effect on beaver populations, as their habitat became territory for grazing.William J. Ripple and Robert L. Beschta. "Wolves and the Ecology of Fear: Can Predation Risk Structure Ecosystems?" 2004. Furthermore, predation keeps hydrological features such as creeks and streams in normal working order. Increased browsing on willows and conifers along Blacktail Creek due to a lack of predation caused channel incision, because those plants had previously slowed the water down and held the soil in place.
Adaptations and behavior
The act of predation can be broken down into a maximum of four stages: Detection of prey, attack, capture and finally consumption. The relationship between predator and prey is one that is typically beneficial to the predator, and detrimental to the prey species. Sometimes, however, predation has indirect benefits to the prey species, though the individuals preyed upon themselves do not benefit.Dawkins, R. (2004). The Ancestor's Tale. Boston: Houghton Mifflin. ISBN 0-618-00583-8. This means that, at each applicable stage, predator and prey species are in an evolutionary arms race to maximize their respective abilities to obtain food or avoid being eaten. This interaction has resulted in a vast array of adaptations in both groups.
thumb|Camouflage of the dead leaf mantis makes it less visible to both its predators and prey.
One adaptation helping both predators and prey avoid detection is camouflage, a form of crypsis where species have an appearance that helps them blend into the background. Camouflage consists of not only color but also shape and pattern. The background upon which the organism is seen can be both its environment (e.g., the praying mantis to the right resembling dead leaves) or other organisms (e.g., zebras' stripes blend in with each other in a herd, making it difficult for lions to focus on a single target). The more convincing camouflage is, the more likely it is that the organism will go unseen.
left|thumb|Mimicry in Automeris io.
Mimicry is a related phenomenon where an organism has a similar appearance to another species. One such example is the drone fly, which looks a lot like a bee, yet is completely harmless as it cannot sting at all. Another example of batesian mimicry is the io moth, (Automeris io), which has markings on its wings that resemble an owl's eyes. When an insectivorous predator disturbs the moth, it reveals its hind wings, temporarily startling the predator and giving it time to escape. Predators may also use mimicry to lure their prey, however. Female fireflies of the genus Photuris, for example, copy the light signals of other species, thereby attracting male fireflies, which are then captured and eaten (see aggressive mimicry).
Predator
thumb|A juvenile red-tailed hawk eating a California vole
thumb|Great blue heron with prey
thumb|Lizard eating a rat.
While successful predation results in a gain of energy, hunting invariably involves energetic costs as well. When hunger is not an issue, in general most predators will not seek to attack prey since the costs outweigh the benefits. For instance, a large predatory fish like a shark that is well fed in an aquarium will typically ignore the smaller fish swimming around it (while the prey fish take advantage of the fact that the apex predator is apparently uninterested). Surplus killing represents a deviation from this type of behaviour. The treatment of consumption in terms of cost-benefit analysis is known as optimal foraging theory, and has been quite successful in the study of animal behavior. Some adaptations of successful predators include speed, weapons such as sharp teeth and claws, camouflage to avoid being seen by prey, and depth perception from eyes at the front of the head to judge size and distance. Phelan, Jay. What is Life? A Guide to Biology. New York. W.H. Freeman & Company. 2015. Text.
Social predation allows predators to kill creatures larger than those that members of the species could overpower singly. Lions, hyenas, wolves, dholes, African wild dogs, and piranhas can kill large herbivores that single animals of the same species usually don't dispatch. Social predation allows some animals to organize hunts of creatures that would easily escape a single predator; thus chimpanzees can prey upon colobus monkeys, and Harris's hawks can cut off all possible escapes for a doomed rabbit. Social predation is often complex behavior, and not all social creatures perform it. Even without complex intelligence, some ant species can destroy much larger creatures.
Size-selective predation involves predators preferring prey of a certain size. Large prey may prove troublesome for a predator, while small prey might prove hard to find and in any case provide less of a reward. This has led to a correlation between the size of predators and their prey. Size may also act as a refuge for large prey, for example adult elephants are, in general, safe from predation by lions, but juveniles are vulnerable.
Antipredator adaptations
thumb|upright|Thomson's gazelle stotting to advertise its ability to escape
Many antipredator adaptations have evolved in prey populations due to the selective pressures of predation over long periods of time. Some species mob predators cooperatively. Others such as Thomson's gazelle stot to signal to predators such as cheetahs that they will have an unprofitable chase. Many prey animals are aposematically colored or patterned as a warning to predators that they are distasteful or able to defend themselves.Bowers, M. D., Irene L. Brown, and Darryl Wheye. "Bird Predation as a Selective Agent in a Butterfly Population." Evolution 39.1 (1985): 93-103. Such distastefulness or toxicity is brought about by chemical defenses, found in a wide range of prey, especially insects, but the skunk is a dramatic mammalian example.
Population dynamics
It is fairly clear that predators tend to lower the survival and fecundity of their prey, but on a higher level of organization, populations of predator and prey species also interact. It is obvious that predators depend on prey for survival, and this is reflected in predator populations being affected by changes in prey populations. It is not so obvious, however, that predators affect prey populations. Eating a prey organism may simply make room for another if the prey population is approaching its carrying capacity.
The population dynamics of predator–prey interactions can be modelled using the Lotka–Volterra equations. These provide a mathematical model for the cycling of predator and prey populations. Predators tend to select young, weak, and ill individuals.
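In their simplest form (the symbols below follow the usual textbook convention and are not specified in this article), the Lotka–Volterra equations couple the prey population x and the predator population y:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,$$

where α is the prey's intrinsic growth rate, β the rate at which predators destroy prey, δ the rate at which captured prey are converted into new predators, and γ the predator death rate. For suitable parameter values the two populations oscillate out of phase, producing the predator–prey cycles mentioned above.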
Evolution of predation
Predation appears to have become a major selection pressure shortly before the Cambrian period, as evidenced by the almost simultaneous development of calcification in animals and algae, and of predation-avoiding burrowing. However, predators had been grazing on micro-organisms since long before then.
Humans and predation
thumb|Humans and dogs as a predatory team
Humans are to some extent predatory, fishing, hunting and trapping animals using weapons and tools. They also use other predatory species, such as dogs, cormorants, and falcons to catch prey for food or for sport.
In biological pest control, predators from a pest's natural range are introduced to control populations, at the risk of causing unforeseen problems. Besides their use in conservation biology, predators are also important for controlling pests in agriculture. Natural predators are an environmentally friendly and sustainable way of reducing damage to crops, and are one alternative to the use of chemical agents such as pesticides.
See also
Bird of prey
Built for the Kill, a major nature series on the habits of predatory animals
Consumer-resource systems
Overpopulation in wild animals
Predator–prey reversal
Prey drive
Wa-Tor
References
Further reading
Barbosa, P. and I. Castellanos (eds.) (2004). Ecology of predator-prey interactions. New York: Oxford University Press. ISBN 0-19-517120-9.
Curio, E. (1976). The ethology of predation. Berlin; New York: Springer-Verlag. ISBN 0-387-07720-0.
External links
Wolfram Demonstrations Project: Predator-Prey Equations by Eric W. Weisstein
Predators, three articles by Olivia Judson, NY Times, Sept. & Oct., 2009
Category:Predation
Category:Biological pest control
Computer security | Computer security, also known as cybersecurity or IT security, is the protection of computer systems from theft of or damage to their hardware, software or the information on them, as well as from disruption or misdirection of the services they provide.
It includes controlling physical access to the hardware, as well as protecting against harm that may come via network access, data and code injection, and due to malpractice by operators, whether intentional, accidental, or due to them being tricked into deviating from secure procedures.
The field is of growing importance due to the increasing reliance on computer systems and the Internet in most societies ("Reliance spells end of road for ICT amateurs", May 07, 2013, The Australian), on wireless networks such as Bluetooth and Wi-Fi, and on the growth of "smart" devices, including smartphones, televisions and tiny devices that form part of the Internet of Things.
Vulnerabilities and attacks
A vulnerability is a system susceptibility or flaw. Many vulnerabilities are documented in the Common Vulnerabilities and Exposures (CVE) database. An exploitable vulnerability is one for which at least one working attack or "exploit" exists.
To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the categories below:
Backdoors
A backdoor in a computer system, a cryptosystem or an algorithm, is any secret method of bypassing normal authentication or security controls. They may exist for a number of reasons, including by original design or from poor configuration. They may have been added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons; but regardless of the motives for their existence, they create a vulnerability.
Denial-of-service attack
Denial of service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users. Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of Distributed denial of service (DDoS) attacks are possible, where the attack comes from a large number of points – and defending is much more difficult. Such attacks can originate from the zombie computers of a botnet, but a range of other techniques are possible including reflection and amplification attacks, where innocent systems are fooled into sending traffic to the victim.
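To illustrate the single-source case only, a server can track recent requests per IP address and drop traffic from any address that exceeds a budget. This is a minimal sketch, not any particular product's mechanism; the window length, request budget and function name are assumptions for the example, and nothing like this helps against a genuinely distributed attack.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10      # assumed sliding-window length
MAX_REQUESTS = 100       # assumed per-IP request budget within the window

_recent = defaultdict(deque)  # ip -> timestamps of recent requests


def allow_request(ip: str, now: float = None) -> bool:
    """Return True if this request from `ip` should be served, False if dropped."""
    now = time.time() if now is None else now
    q = _recent[ip]
    # Discard timestamps that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False  # over budget: treat as a single-source flood and drop
    q.append(now)
    return True
```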
Direct-access attacks
An unauthorized user gaining physical access to a computer is most likely able to directly copy data from it. They may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless mice.Wireless mouse leave billions at risk of computer hack: cyber security firm Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module are designed to prevent these attacks.
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private conversation, typically between hosts on a network. For instance, programs such as Carnivore and NarusInsight have been used by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon via monitoring the faint electro-magnetic transmissions generated by the hardware; TEMPEST is a specification by the NSA referring to these attacks.
Spoofing
Spoofing, in general, is a fraudulent or malicious practice in which communication is sent from an unknown source disguised as a source known to the receiver. Spoofing is most prevalent in communication mechanisms that lack a high level of security.
Tampering
Tampering describes a malicious modification of products. So-called "Evil Maid" attacks and the planting of surveillance capability into routers by security services are examples.
Privilege escalation
Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level. So for example a standard computer user may be able to fool the system into giving them access to restricted data; or even to "become root" and have full unrestricted access to a system.
Phishing
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Preying on a victim's trust, phishing can be classified as a form of social engineering.
Clickjacking
Clickjacking, also known as "UI redress attack" or "User Interface redress attack", is a malicious technique in which an attacker tricks a user into clicking on a button or link on another webpage while the user intended to click on the top level page. This is done using multiple transparent or opaque layers. The attacker is basically "hijacking" the clicks meant for the top level page and routing them to some other irrelevant page, most likely owned by someone else. A similar technique can be used to hijack keystrokes. By carefully crafting a combination of stylesheets, iframes, buttons and text boxes, an attacker can lead a user into believing that they are typing a password or other information into an authentic webpage while it is actually being channeled into an invisible frame controlled by the attacker.
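One common server-side defence is to tell browsers not to render the site inside another page's frame, for example with the X-Frame-Options response header and, in newer deployments, a Content-Security-Policy frame-ancestors directive. The sketch below uses only Python's standard library; the handler class, response body and port are illustrative assumptions, not part of any particular site.

```python
from http.server import HTTPServer, BaseHTTPRequestHandler


class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>This page refuses to be framed.</body></html>"
        self.send_response(200)
        # Refuse to be embedded in frames/iframes on other origins,
        # which blocks the transparent-overlay trick described above.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8000), NoFramingHandler).serve_forever()
```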
Social engineering
Social engineering aims to convince a user to disclose secrets such as passwords, card numbers, etc. by, for example, impersonating a bank, a contractor, or a customer.
A popular and profitable cyberscam involves fake CEO emails sent to accounting and finance departments. In early 2016, the FBI reported that the scam has cost US businesses more than $2bn in about two years.
In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms.
Systems at risk
Computer security is critical in almost any industry which uses computers.
Currently, most electronic devices such as computers, laptops and cellphones come with built-in firewall security software, but this alone cannot guarantee that data is protected (Smith, Grabosky & Urbas, 2004). Computers can be attacked in many different ways: over a network, by clicking unknown links, by connecting to unfamiliar Wi-Fi, by downloading software and files from unsafe sites, or through side channels such as power consumption and electromagnetic emissions, among others. However, computers can be protected through well-built software and hardware; in particular, managing software complexity helps prevent crashes and security failures.J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at Committee on Science, House of Representatives, 2000.
Financial systems
Web sites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market.Financial Weapons of War, Minnesota Law Review (2016), available at: http://ssrn.com/abstract=2765010 In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs.
Utilities and industrial equipment
Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable to physical damage caused by malicious commands sent to industrial equipment (in that case uranium enrichment centrifuges) which are infected via removable media. In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies. Vulnerabilities in smart meters (many of which use local radio or cellular communications) can cause problems with billing fraud.
Aviation
The aviation industry is very reliant on a series of complex systems which could be attacked.P. G. Neumann, "Computer Security in Aviation," presented at International Conference on Aviation Safety and Security in the 21st Century, White House Commission on Safety and Security, 1997. A simple power outage at one airport can cause repercussions worldwide,J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65–70. much of the system relies on radio transmissions which could be disrupted, and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. There is also potential for attack from within an aircraft.
In Europe, with PENS (the Pan-European Network Service) and NewPENS, and in the US with the NextGen program, air navigation service providers are moving to create their own dedicated networks.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as exfiltration of data, network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, loss of passenger life, damages on the ground and to transportation infrastructure. A successful attack on a military aviation system that controls munitions could have even more serious consequences.
Consumer devices
Desktop computers and laptops are commonly infected with malware either to gather passwords or financial account information, or to construct a botnet to attack another target. Smart phones, tablet computers, smart watches, and other mobile devices such as Quantified Self devices like activity trackers have also become targets and many of these have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. Wifi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.
Home automation devices such as the Nest thermostat are also potential targets.
Large corporations
Large corporations are common targets. In many cases this is aimed at financial gain through identity theft and involves data breaches such as the loss of millions of clients' credit card details by Home Depot, Staples, and Target Corporation. Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.
Not all attacks are financially motivated however; for example security firm HBGary Federal suffered a serious series of attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group,
and Sony Pictures was attacked in 2014 where the motive appears to have been to embarrass with data leaks, and cripple the company by wiping workstations and servers.
Automobiles
If access is gained to a car's internal controller area network, it is possible to disable the brakes and turn the steering wheel. Computerized engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver assistance systems make these disruptions possible, and self-driving cars go even further. Connected cars may use wifi and bluetooth to communicate with onboard consumer devices, and the cell phone network to contact concierge and emergency assistance services or get navigational or entertainment information; each of these networks is a potential entry point for malware or an attacker. Researchers in 2011 were even able to use a malicious compact disc in a car's stereo system as a successful attack vector, and cars with built-in voice recognition or remote assistance features have onboard microphones which could be used for eavesdropping. In 2015 hackers remotely carjacked a Jeep from 10 miles away and drove it into a ditch.
A 2015 report by U.S. Senator Edward Markey criticized manufacturers' security measures as inadequate, and also highlighted privacy concerns about driving, location, and diagnostic data being collected, which is vulnerable to abuse by both manufacturers and hackers.
In September 2016 the United States Department of Transportation announced some safety standards for the design and development of autonomous vehicles, called states to come up with uniform policies applying to driverless cars, clarified how current regulations can be applied to driverless cars and opened the door for new similar regulations.
Marshall Heilman notes that "the government has to have some type of legislation and mandate to secure [the] environment" of self-driving cars as hackers otherwise could be able to take over cars and notes that "some type of event [...] is going to have to occur before the government actually gets involved and sets those particular standards".
Automotive cybersecurity involves not just secure production but also the discovery, proactive mitigation and patching of vulnerabilities. In 2016 Tesla pushed security fixes "over the air" into its cars' computer systems after a Chinese white-hat hacking group, apparently motivated by altruism and/or reputation, disclosed a vulnerability to the company.
Government
Government and military computer systems are commonly attacked by activists and foreign powers."NSA Accessed Mexican President's Email", October 20, 2013, Jens Glüsing, Laura Poitras, Marcel Rosenbach and Holger Stark, spiegel.de Local and regional government infrastructure such as traffic light controls, police and intelligence agency communications, personnel records, student records, and financial systems are also potential targets as they are now all largely computerized. Passports and government ID cards that control access to facilities which use RFID can be vulnerable to cloning.
Internet of Things and physical vulnerabilities
The Internet of Things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data – and concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,
it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat.Christopher Clearfield "Rethinking Security for the Internet of Things" Harvard Business Review Blog, 26 June 2013/ If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.
Medical systems
Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment and implanted devices including pacemakers and insulin pumps. There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. On 28 December 2016 the US Food and Drug Administration released its recommendations that are not legally enforceable for how medical device manufacturers should maintain the security of Internet-connected devices.
Impact of security breaches
Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."Cashell, B., Jackson, W. D., Jickling, M., & Webel, B. (2004). The Economic Impact of Cyber-Attacks. Congressional Research Service, Government and Finance Division. Washington DC: The Library of Congress.
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).
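To make the "small fraction" concrete: in the Gordon–Loeb framework, if $v$ denotes the probability that a breach occurs (the vulnerability) and $L$ the monetary loss it would cause, the expected loss is $vL$, and a central result of the model is that the optimal investment $z^*$ in protecting that information set satisfies

$$z^* \le \frac{1}{e}\, vL \approx 0.37\, vL,$$

that is, under the model's assumptions it is never optimal to spend more than roughly 37% of the expected loss.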
Attacker motivation
As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, others are activists or criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but started with amateurs such as Markus Hess who hacked for the KGB, as recounted by Clifford Stoll, in The Cuckoo's Egg.
A standard part of threat modelling for any particular system is to identify what might motivate an attack on that system, and who might be motivated to breach it. The level and detail of precautions will vary depending on the system to be secured. A home personal computer, bank, and classified military network face very different threats, even when the underlying technologies in use are similar.
Computer protection (countermeasures)
In computer security a countermeasure is an action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.RFC 2828 Internet Security GlossaryCNSS Instruction No. 4009 dated 26 April 2010InfosecToday Glossary
Some common countermeasures are listed in the following sections:
Security by design
Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered as a main feature.
Some of the techniques in this approach include:
The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way even if an attacker gains access to that part, they have only limited access to the whole system (a minimal sketch of this idea appears after this list).
Automated theorem proving to prove the correctness of crucial software subsystems.
Code reviews and unit testing, approaches to make modules more secure where formal correctness proofs are not possible.
Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
Audit trails tracking system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks.
Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept as short as possible when bugs are discovered.
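To illustrate the least-privilege principle referenced above, the sketch below shows a common Unix daemon-style pattern: a process started as root performs its one privileged step (binding a low-numbered port) and then permanently drops to an unprivileged account before doing any further work. The user name and port are assumptions for illustration only, and this is a sketch rather than a hardened implementation.

```python
import os
import pwd
import socket

UNPRIVILEGED_USER = "nobody"   # assumed low-privilege account
PRIVILEGED_PORT = 80           # binding below 1024 requires root on Unix


def drop_privileges(username: str) -> None:
    """Permanently give up root once the privileged step is done."""
    entry = pwd.getpwnam(username)
    os.setgid(entry.pw_gid)    # drop group first, then user
    os.setuid(entry.pw_uid)


if __name__ == "__main__":
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", PRIVILEGED_PORT))   # the only step that needs root
    listener.listen(5)
    drop_privileges(UNPRIVILEGED_USER)
    # From here on, a compromise of this process yields only the
    # unprivileged account's rights, not full control of the system.
```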
Security architecture
The Open Security Architecture organization defines IT security architecture as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services".Definitions: IT Security Architecture. SecurityArchitecture.org, Jan, 2006
Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:
the relationship of different components and how they depend on each other.
the determination of controls based on risk assessment, good practice, finances, and legal matters.
the standardization of controls.
Security measures
A state of computer "security" is the conceptual ideal, attained by the use of the three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
User account access controls and cryptography can protect systems files and data, respectively.
Firewalls are by far the most common prevention systems from a network security perspective as they can (if properly configured) shield access to internal network services, and block certain kinds of attacks through packet filtering. Firewalls can be both hardware- or software-based.
Intrusion Detection System (IDS) products are designed to detect network attacks in-progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
"Response" is necessarily defined by the assessed security requirements of an individual system and may cover the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, a complete destruction of the compromised system is favored, as it may happen that not all the compromised resources are detected.
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real time filtering and blocking. Another implementation is a so-called "physical firewall", which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
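The filtering idea itself can be sketched in a few lines: a rule list is checked against each connection attempt's source address and destination port, and the first matching rule decides whether the traffic passes. This is only an illustrative user-space sketch under assumed rules and field names, not how a kernel or appliance firewall is actually implemented.

```python
from ipaddress import ip_address, ip_network

# Each rule: (source network, destination port or None for any, action).
# The first matching rule wins.
RULES = [
    (ip_network("10.0.0.0/8"), 22, "allow"),    # SSH only from the internal net
    (ip_network("0.0.0.0/0"), 80, "allow"),     # HTTP from anywhere
    (ip_network("0.0.0.0/0"), None, "deny"),    # default: drop everything else
]


def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a connection attempt."""
    src = ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"  # fail secure if no rule matches


assert filter_packet("10.1.2.3", 22) == "allow"
assert filter_packet("203.0.113.5", 22) == "deny"
```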
Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats.
However, relatively few organisations maintain computer systems with effective detection systems, and fewer still have organised response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets". The primary obstacle to effective eradication of cyber crime could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic evidence gathering, using packet capture appliances, that puts criminals behind bars.
Vulnerability management
Vulnerability management is the cycle of identifying, remediating or mitigating vulnerabilities,Foreman, P: Vulnerability Management, page 1. Taylor & Francis Group, 2010. ISBN 978-1-4398-0150-5 especially in software and firmware. Vulnerability management is integral to computer security and network security.
Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, such as open ports, insecure software configuration, and susceptibility to malware.Anna-Maija Juuso and Ari Takanen, Unknown Vulnerability Management, Codenomicon whitepaper, October 2010.
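As a rough sketch of the idea, a scanner can compare an inventory of installed software against a table of versions known to be vulnerable; the package names, versions and advisory identifiers below are invented for illustration, whereas real scanners query maintained vulnerability databases.

# Hypothetical vulnerability data: package name -> (vulnerable versions, advisory id).
KNOWN_VULNERABLE = {
    "examplelib": ({"1.0.0", "1.0.1"}, "ADVISORY-0001"),
    "demoserver": ({"2.3.0"}, "ADVISORY-0002"),
}

def scan(installed):
    """Return findings for installed packages whose version matches a known-bad one."""
    findings = []
    for name, version in installed.items():
        if name in KNOWN_VULNERABLE:
            bad_versions, advisory = KNOWN_VULNERABLE[name]
            if version in bad_versions:
                findings.append(f"{name} {version} is affected by {advisory}")
    return findings

inventory = {"examplelib": "1.0.1", "demoserver": "2.4.0"}  # hypothetical host inventory
for finding in scan(inventory):
    print(finding)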
Beyond vulnerability scanning, many organisations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities. In some sectors this is a contractual requirement.
Reducing vulnerabilities
While formal verification of the correctness of computer systems is possible, it is not yet common. Operating systems that have been formally verified include seL4 and SYSGO's PikeOS,Christoph Baumann, Bernhard Beckert, Holger Blasum, and Thorsten Bormer, Ingredients of Operating System Correctness? Lessons Learned in the Formal Verification of PikeOS; "Getting it Right" by Jack Ganssle but these make up a very small percentage of the market.
Properly implemented cryptography is now virtually impossible to break directly. Breaking it requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.
Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information. It requires "something you know" (a password or PIN) and "something you have" (a card, dongle, cellphone, or other piece of hardware). This increases security, as an unauthorized person needs both of these to gain access.
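One common form of the "something you have" factor is a time-based one-time password generated on a phone or hardware token. The sketch below follows the general HMAC-based construction of RFC 6238; the base32 secret is a demonstration value, and a production system would provision a random per-user secret and compare codes in constant time.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    """Derive a time-based one-time password from a shared base32 secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # number of elapsed time steps
    message = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Demonstration secret only; both the server and the user's device hold the same secret.
print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))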
Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce relative to the sensitivity of the information. Training is often involved to help mitigate this risk, but even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates, using a security scanner, and/or hiring competent people responsible for security. The effects of data loss or damage can be reduced by careful backups and insurance.
Hardware protection mechanisms
While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously introduced during the manufacturing process, hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
USB dongles are typically used in software licensing schemes to unlock software capabilities, but they can also be seen as a way to prevent unauthorized access to a computer or other device's software. The dongle, or key, essentially creates a secure encrypted tunnel between the software application and the key. The principle is that an encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides a stronger measure of security, since it is harder to hack and replicate the dongle than to simply copy the native software to another machine and use it. Another security application for dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer.
Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access.
Computer case intrusion detection refers to a push-button switch which is triggered when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time.
Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well.
Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks.
Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field communication (NFC) on non-iOS devices and biometric validation such as thumb print readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings.
Secure operating systems
One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s the United States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC 15408, "Common Criteria", defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is Integrity-178B, which is used in the Airbus A380 and several military jets.
Secure coding
In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; this is important for cryptographic protocols, for example.
Capabilities and access control lists
Within computer systems, two of many security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based security. Using ACLs to confine programs has been proven to be insecure in many situations, such as if the host computer can be tricked into indirectly allowing restricted file access, an issue known as the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems, while commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
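The difference can be sketched in a few lines of code: in the hypothetical ACL version every access re-checks a table keyed by user identity, while in the capability version possession of an unforgeable object reference is itself the authority.

# ACL style: authority is looked up by identity on every access.
ACL = {"report.txt": {"alice": {"read"}, "bob": {"read", "write"}}}

def acl_read(user, filename):
    if "read" not in ACL.get(filename, {}).get(user, set()):
        raise PermissionError(f"{user} may not read {filename}")
    return f"contents of {filename}"

# Capability style: holding the object grants access; no ambient identity check.
class ReadCapability:
    def __init__(self, filename):
        self._filename = filename
    def read(self):
        return f"contents of {self._filename}"

print(acl_read("alice", "report.txt"))      # allowed because the ACL table says so
capability = ReadCapability("report.txt")   # handed only to authorized code
print(capability.read())                    # authority travels with the object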
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most secure systems are operating systems where security is not an add-on.
Response to breaches
Responding forcefully to attempted security breaches (in the manner that one would for attempted physical security breaches) is often very difficult for a variety of reasons:
Identifying attackers is difficult, as they are often in a different jurisdiction from the systems they attempt to breach, and operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymising procedures which make backtracing difficult and which are often located in yet another jurisdiction. If they successfully breach security, they are often able to delete logs to cover their tracks.
The sheer number of attempted attacks is so large that organisations cannot spend time pursuing each attacker (a typical home user with a permanent (e.g., cable modem) connection will be attacked at least several times per day, so more attractive targets can be presumed to see many more). Note, however, that the bulk of these attacks are made by automated vulnerability scanners and computer worms.
Law enforcement officers are often unfamiliar with information technology, and so lack the skills and interest to pursue attackers. There are also budgetary constraints. It has been argued that the high cost of technology, such as DNA testing and improved forensics, leaves less money for other kinds of law enforcement, so the overall proportion of criminals who are never dealt with rises as the cost of the technology increases. In addition, the identification of attackers across a network may require logs from various points in the network, and in many countries the release of these records to law enforcement (unless they are voluntarily surrendered by a network administrator or a system administrator) requires a search warrant; depending on the circumstances, the legal proceedings required can be drawn out to the point where the records are either routinely destroyed or the information is no longer relevant.
Notable attacks and breaches
Some illustrative examples of different types of computer security breaches are given below.
Robert Morris and the first computer worm
In 1988, only 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On November 2, 1988, many started to slow down because they were running malicious code that demanded processor time and that spread itself to other computers – the first Internet "computer worm".Jonathan Zittrain, 'The Future of The Internet', Penguin Books, 2008 The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr., who said he wanted "to count how many machines were connected to the Internet".
Rome Laboratory
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.Information Security. United States Department of Defense, 1986
TJX customer credit card details
In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.Largest Customer Info Breach Grows. MyFox Twin Cities, 29 March 2007.
Stuxnet attack
The computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges by disrupting industrial programmable logic controllers (PLCs) in a targeted attack generally believed to have been launched by Israel and the United States although neither has publicly acknowledged this.
Global surveillance disclosures
In early 2013, massive breaches of computer security by the NSA were revealed, including deliberately inserting a backdoor in a NIST standard for encryption and tapping the links between Google's data centres."New Snowden Leak: NSA Tapped Google, Yahoo Data Centers", Oct 31, 2013, Lorenzo Franceschi-Bicchierai, mashable.com These were disclosed by NSA contractor Edward Snowden.
Target and Home Depot breaches
A Russian/Ukrainian hacking ring known as "Rescator" broke into Target Corporation computers in 2013, stealing roughly 40 million credit cards, and then into Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers. Warnings were delivered to both corporations but ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could easily have been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing.
Ashley Madison breach
In July 2015, a hacker group known as "The Impact Team" successfully breached the extramarital relationship website Ashley Madison. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently. With this initial data release, the group stated "Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers' secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online." When Avid Life Media, the parent company that created the Ashley Madison website, did not take the site offline, The Impact Team released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functional.
Legal issues and global regulation
Conflict of laws in cyberspace has become a major cause of concern for the computer security community. Some of the main challenges and complaints about the antivirus industry are the lack of global web regulations and of a global base of common rules by which to judge, and eventually punish, cyber crimes and cyber criminals. There is no global cyber law or cybersecurity treaty that can be invoked to enforce global cybersecurity issues.
International legal issues of cyber attacks are complicated in nature. Even if an antivirus firm locates the cyber criminal behind the creation of a particular virus or piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. Authorship attribution for cyber crimes and cyber attacks is a major problem for all law enforcement agencies.
"[Computer viruses] switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." Use of dynamic DNS, fast flux and bullet proof servers have added own complexities to this situation.
Government
The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyber-attacks, but also to protect its own national infrastructure such as the national power-grid.
The question of whether or not the government should intervene in the regulation of cyberspace is a very polemical one. Indeed, for as long as it has existed and by definition, cyberspace has been a virtual space free of any government intervention. While everyone agrees that an improvement in cybersecurity is more than vital, is the government the best actor to solve this issue?
Many government officials and experts think that the government should step in and that there is a crucial need for regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If industry doesn't respond (to the threat), you have to follow through."
On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently.
Actions and teams in the US
Legislation
The Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of "protected computers" as defined in the statute.
Various other measures have been proposed, such as the "Cybersecurity Act of 2010 – S. 773" in 2009, and the "International Cybercrime Reporting and Cooperation Act – H.R.4962" and "Protecting Cyberspace as a National Asset Act of 2010 – S.3480" in 2010, but none of these has succeeded.
Executive order 13636 Improving Critical Infrastructure Cybersecurity was signed February 12, 2013.
Agencies
The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States called the National Cyber Security Division. The division is home to US-CERT operations and the National Cyber Alert System. The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.AFP-JiJi, "U.S. boots up cybersecurity center", October 31, 2009.
The third priority of the Federal Bureau of Investigation (FBI) is to: "Protect the United States against cyber-based attacks and high-technology crimes", and they, along with the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA) are part of the multi-agency task force, The Internet Crime Complaint Center, also known as IC3.Internet Crime Complaint Center
In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard.
The criminal division of the United States Department of Justice operates a section called the Computer Crime and Intellectual Property Section (CCIPS). The CCIPS is in charge of investigating computer crime and intellectual property crime and specializes in the search and seizure of digital evidence in computers and networks.
The United States Cyber Command, also known as USCYBERCOM, is tasked with the defense of specified Department of Defense information networks and "ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries." It has no role in the protection of civilian networks.Shachtman, Noah. "Military's Cyber Commander Swears: "No Role" in Civilian Networks", The Brookings Institution, 23 September 2010.
The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.
The Food and Drug Administration has issued guidance for medical devices, and the National Highway Traffic Safety Administration is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office, and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System. Concerns have also been raised about the future Next Generation Air Transportation System.
Computer emergency readiness team
"Computer emergency response team" is a name given to expert groups that handle computer security incidents.
In the US, two distinct organizations exist, although they do work closely together.
US-CERT: part of the National Cyber Security Division of the United States Department of Homeland Security.
CERT/CC: created by the Defense Advanced Research Projects Agency (DARPA) and run by the Software Engineering Institute (SEI).
International actions
Many different teams and organisations exist, including:
The Forum of Incident Response and Security Teams (FIRST) is the global association of CSIRTs. The US-CERT, AT&T, Apple, Cisco, McAfee, Microsoft are all members of this international team.
The Council of Europe helps protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime.
The purpose of the Messaging Anti-Abuse Working Group (MAAWG) is to bring the messaging industry together to work collaboratively and to successfully address the various forms of messaging abuse, such as spam, viruses, denial-of-service attacks and other messaging exploitations. France Telecom, Facebook, AT&T, Apple, Cisco, Sprint are some of the members of the MAAWG.
ENISA: The European Network and Information Security Agency (ENISA) is an agency of the European Union with the objective of improving network and information security in the European Union.
Europe
CSIRTs in Europe collaborate in the TERENA task force TF-CSIRT. TERENA's Trusted Introducer service provides an accreditation and certification scheme for CSIRTs in Europe. A full list of known CSIRTs in Europe is available from the Trusted Introducer website.
National teams
Here are the main computer emergency response teams around the world.
Most countries have their own team to protect network security.
Canada
On October 3, 2010, Public Safety Canada unveiled Canada's Cyber Security Strategy, following a Speech from the Throne commitment to boost the security of Canadian cyberspace. The aim of the strategy is to strengthen Canada's "cyber systems and critical infrastructure sectors, support economic growth and protect Canadians as they connect to each other and to the world." Three main pillars define the strategy: securing government systems, partnering to secure vital cyber systems outside the federal government, and helping Canadians to be secure online. The strategy involves multiple departments and agencies across the Government of Canada. The Cyber Incident Management Framework for Canada outlines these responsibilities, and provides a plan for coordinated response between government and other partners in the event of a cyber incident. The Action Plan 2010–2015 for Canada's Cyber Security Strategy outlines the ongoing implementation of the strategy.
Public Safety Canada's Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. The CCIRC provides support to mitigate cyber threats, technical support to respond and recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors. The CCIRC posts regular cyber security bulletins on the Public Safety Canada website. The CCIRC also operates an online reporting tool where individuals and organizations can report a cyber incident. Canada's Cyber Security Strategy is part of a larger, integrated approach to critical infrastructure protection, and functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.
On September 27, 2010, Public Safety Canada partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations dedicated to informing the general public on how to protect themselves online. On February 4, 2014, the Government of Canada launched the Cyber Security Cooperation Program. The program is a $1.5 million five-year initiative aimed at improving Canada's cyber systems through grants and contributions to projects in support of this objective. Public Safety Canada aims to begin an evaluation of Canada's Cyber Security Strategy in early 2015. Public Safety Canada administers and routinely updates the GetCyberSafe portal for Canadian citizens, and carries out Cyber Security Awareness Month during October.
China
China's network security and information technology leadership team was established on February 27, 2014. The leadership team is tasked with national security and long-term development, and with co-ordinating major issues related to network security and information technology. It researches network security and information technology strategy, planning and major policy in the economic, political, cultural, social and military fields, and the promotion of national network security and information technology law is constantly under study to enhance national security capabilities.
Germany
Berlin starts National Cyber Defense Initiative:
On June 16, 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense, Nationales Cyber-Abwehrzentrum), located in Bonn. The NCAZ closely cooperates with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), BKA (Federal Police Organisation, Bundeskriminalamt), BND (Federal Intelligence Service, Bundesnachrichtendienst), MAD (Military Intelligence Service, Amt für den Militärischen Abschirmdienst) and other national organisations in Germany taking care of national security aspects. According to the Minister, the primary task of the new organisation, founded on February 23, 2011, is to detect and prevent attacks against the national infrastructure; he mentioned incidents like Stuxnet as examples.
India
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.
The National Cyber Security Policy 2013 is a policy framework by Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyber attacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data".
The Indian Companies Act 2013 has also introduced cyber law and cyber security obligations on the part of Indian directors.
Pakistan
Cyber-crime has risen rapidly in Pakistan. There are about 34 million Internet users and 133.4 million mobile subscribers in Pakistan. According to the Cyber Crime Unit (CCU), a branch of the Federal Investigation Agency, only 62 cases were reported to the unit in 2007 and 287 cases in 2008; the number dropped in 2009, but in 2010 more than 312 cases were registered. However, there are many unreported incidents of cyber-crime.
"Pakistan's Cyber Crime Bill 2007", the first pertinent law, focuses on electronic crimes, for example cyber-terrorism, criminal access, electronic system fraud, electronic forgery, and misuse of encryption.
The National Response Centre for Cyber Crime (NR3C) – FIA is a law enforcement agency dedicated to fighting cybercrime. This hi-tech crime-fighting unit was established in 2007 to identify and curb the phenomenon of technological abuse in society. Certain private firms are also working in cooperation with the government to improve cyber security and curb cyberattacks.
South Korea
Following cyberattacks in the first half of 2013, when government, news-media, television station, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations.
Other countries
CERT Brazil, member of FIRST (Forum for Incident Response and Security Teams)
CARNet CERT, Croatia, member of FIRST
AE CERT, United Arab Emirates
SingCERT, Singapore
CERT-LEXSI, France, Canada, Singapore
INCIBE, Spain
ID-CERT, Indonesia
Modern warfare
Cybersecurity is becoming increasingly important as more information and technology is being made available on cyberspace. There is growing concern among governments that cyberspace will become the next theatre of warfare. As Mark Clayton from the Christian Science Monitor described in an article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.
This has led to new terms such as cyberwarfare and cyberterrorism. More and more critical infrastructure is being controlled via computer programs that, while increasing efficiency, expose new vulnerabilities. The test will be to see whether governments and corporations that control critical systems such as energy, communications and other information will be able to prevent attacks before they occur. As Jay Cross, the chief scientist of the Internet Time Group, remarked, "Connectedness begets vulnerability."
Job market
Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of being hacked or suffering a data breach. According to research from the Enterprise Strategy Group, 46% of organizations said that they had a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015. Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data, such as finance, health care, and retail. Burning Glass Technologies, "Demand for Cybersecurity Workers Outstripping Supply," July 30, 2015, accessed 2016-06-11 However, the use of the term "cybersecurity" is more prevalent in government job descriptions.
Typical cybersecurity job titles and descriptions include:
Security analyst
Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks), investigates using available tools and countermeasures to remedy the detected vulnerabilities, and recommends solutions and best practices. Analyzes and assesses damage to the data/infrastructure as a result of security incidents, examines available recovery tools and processes, and recommends solutions. Tests for compliance with security policies and procedures. May assist in the creation, implementation, and/or management of security solutions.
Security engineer
Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect security incidents, and mounts incident response. Investigates and utilizes new technologies and processes to enhance security capabilities and implement improvements. May also review code or perform other security engineering methodologies.
Security architect
Designs a security system or major components of a security system, and may head a security design team building a new security system.
Security administrator
Installs and manages organization-wide security systems. May also take on some of the tasks of a security analyst in smaller organizations.
Chief Information Security Officer (CISO)
A high-level management position responsible for the entire information security division/staff. The position may include hands-on technical work.
Chief Security Officer (CSO)
A high-level management position responsible for the entire security division/staff. A newer position now deemed needed as security risks grow.
Security Consultant/Specialist/Intelligence
Broad titles that encompass any one or all of the other roles/titles, tasked with protecting computers, networks, software, data, and/or information systems against viruses, worms, spyware, malware, intrusion, unauthorized access, denial-of-service attacks, and an ever-increasing list of attacks by hackers acting as individuals or as part of organized crime or foreign governments.
Student programs are also available to people interested in beginning a career in cybersecurity. Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.
Terminology
The following terms used with regard to engineering secure systems are explained below.
Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer – such as through an interactive login screen – or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.
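A typical building block of such authentication systems is storing a salted, deliberately slow hash of each password rather than the password itself, as in the following sketch using the Python standard library; the password shown is a placeholder.

import hashlib, hmac, os

def hash_password(password, salt=None):
    """Return (salt, key) derived with PBKDF2-HMAC-SHA256 and a random per-user salt."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, key

def verify_password(password, salt, expected_key):
    """Recompute the derived key and compare it in constant time."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected_key)

salt, stored = hash_password("placeholder passphrase")
print(verify_password("placeholder passphrase", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))             # False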
Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
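At its simplest, signature-based detection of this kind looks for known byte patterns in files, as in the sketch below; the signature and file name are made up for illustration, and real products combine signatures with heuristics and behavioural analysis.

from pathlib import Path

# Hypothetical signature database: detection name -> byte pattern to look for.
SIGNATURES = {"Demo.TestPattern": b"THIS-IS-A-FAKE-MALWARE-SIGNATURE"}

def scan_file(path):
    """Return the names of any known signatures found in the file's raw bytes."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

sample = Path("sample.bin")  # placeholder file created just for the demonstration
sample.write_bytes(b"harmless data " + b"THIS-IS-A-FAKE-MALWARE-SIGNATURE")
print(scan_file(sample))     # ['Demo.TestPattern']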
Applications with known security flaws should not be run. Either leave them turned off until they can be patched or otherwise fixed, or delete them and replace them with some other application. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
Authentication techniques can be used to ensure that communication end-points are who they say they are.
Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, tapes and more recently on the cloud. Suggested locations for backups are a fireproof, waterproof, and heat proof safe, or in a separate, offsite location than that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that backs up files over the Internet for both business and individuals, known as the cloud.
Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of disaster. Further, it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center I and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (for example, primary site in New Orleans and recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. This section discusses their use.
Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
Confidentiality is the nondisclosure of information except to another authorized person.
Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
Cyberwarfare is an internet-based conflict that involves politically motivated attacks on information and information systems. Such attacks can, for example, disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems.
Data integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record.
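A routine way to detect such alteration is to record a cryptographic digest of the data and recompute it later, as in this sketch; the file name and contents are placeholders.

import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks so large files need little memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

record = Path("record.dat")                  # placeholder data file
record.write_text("balance=100")
baseline = sha256_of(record)                 # digest stored when the record was written
record.write_text("balance=1000000")         # simulated unauthorized alteration
print("unaltered" if sha256_of(record) == baseline else "integrity check failed")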
Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message; ideally, eavesdroppers cannot.
Encryption is used to protect the message from the eyes of others. Cryptographically secure ciphers are designed to make any practical attempt of breaking infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance.
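A minimal illustration of symmetric-key encryption, assuming the third-party cryptography package is installed; the message is a placeholder, and in practice the shared key would be exchanged or derived securely rather than generated and used in a single script.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

key = Fernet.generate_key()     # random symmetric key that both parties must share securely
cipher = Fernet(key)

token = cipher.encrypt(b"placeholder message")  # ciphertext is unreadable without the key
print(token)
print(cipher.decrypt(token))                    # recovers b'placeholder message'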
Endpoint security software helps networks to prevent exfiltration (data theft) and virus infection at network entry points made vulnerable by the prevalence of potentially infected portable computing devices, such as laptops and mobile devices, and external storage devices, such as USB drives.
Firewalls are an important method for control and security on the Internet and other networks. A network firewall can be a communications processor, typically a router, or a dedicated server, along with firewall software. A firewall serves as a gatekeeper system that protects a company's intranets and other computer networks from intrusion by providing a filter and safe transfer point for access to and from the Internet and other networks. It screens all network traffic for proper passwords or other security codes and only allows authorized transmission in and out of the network. Firewalls can deter, but not completely prevent, unauthorized access (hacking) into computer networks; they can also provide some protection from online intrusion.
Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or fix vulnerabilities.
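A toy honeypot can be as simple as a listener on an otherwise unused port that records every connection attempt, as sketched below; the port number is arbitrary, and real honeypots emulate vulnerable services in far more depth.

import socket
from datetime import datetime

def run_honeypot(host="0.0.0.0", port=2222, max_events=3):
    """Listen on an unused port and log who connects; no real service is ever offered."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        for _ in range(max_events):  # stop after a few events in this demonstration
            connection, address = server.accept()
            with connection:
                print(f"{datetime.now().isoformat()} connection attempt from {address[0]}:{address[1]}")

run_honeypot()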
Intrusion-detection systems can scan a network for people that are on the network but who should not be there or are doing things that they should not be doing, for example trying a lot of passwords to gain access to the network.
A microkernel is the near-minimum amount of software that can provide the mechanisms to implement an operating system. It is used solely to provide very low-level, very precisely defined machine code upon which an operating system can be developed. A simple example is the early '90s GEMSOS (Gemini Computers), which provided extremely low-level machine code, such as "segment" management, atop which an operating system could be built. The theory (in the case of "segments") was that—rather than have the operating system itself worry about mandatory access separation by means of military-style labeling—it is safer if a low-level, independently scrutinized module can be charged solely with the management of individually labeled segments, be they memory "segments" or file system "segments" or executable text "segments." If software below the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable means for a clever hacker to subvert the labeling scheme, since the operating system per se does not provide mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application," arguably) atop the microkernel and, as such, subject to its restrictions.
Pinging: The ping application can be used by potential crackers to find out if an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
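The reachability probing described here can be approximated without raw ICMP packets by attempting TCP connections to a few ports, as in the sketch below; the target address is a documentation-range placeholder, and such probes should only ever be run against systems one is authorized to test.

import socket

def probe(host, ports=(22, 80, 443), timeout=1.0):
    """Attempt a TCP connection to each port and report which ones accept it."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds, i.e. the port is open.
            results[port] = (sock.connect_ex((host, port)) == 0)
    return results

print(probe("192.0.2.10"))  # placeholder address from the TEST-NET documentation range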
Social engineering awareness: keeping employees aware of the dangers of social engineering, and/or having a policy in place to prevent social engineering, can reduce successful breaches of the network and servers.
Scholars
See also
Further reading
References
External links
Category:E-commerce
Category:Secure communication
Category:Computer network security
Category:Crime prevention
Category:National security
Category:Cryptography
Category:Computer security exploits
Category:Cyberwarfare
Category:Weapons countermeasures
Category:Security technology
Category:Cybercrime
Category:Information governance | 7,398 | 2017-01 |
Protestantism | Protestantism is a form of Christianity which originated with the Reformation, a movement against what its followers considered to be errors in the Roman Catholic Church.Oxford Dictionary It is one of the three major divisions of Christendom, together with Roman Catholicism and Orthodoxy. The term derives from the letter of protestation from German Lutheran princes in 1529 against an edict condemning the teachings of Martin Luther as heretical.Oxford Dictionary of the Christian Church (1974) art. "Speyer (Spires), Diets of"
With its origins in Germany, the modern movement is popularly considered to have begun in 1517 when Luther published his Ninety-five Theses as a reaction against abuses in the sale of indulgences, which purported to offer remission of sin to their purchasers.Protestants: A History from Wittenberg to Pennsylvania 1517–1740, p. 15 Although there were earlier breaks from or attempts to reform the Roman Catholic Church—notably by Peter Waldo, John Wycliffe, and Jan Hus—only Luther succeeded in sparking a wider, lasting movement.James Watson: Religious Thoughts
Protestants reject the notion of papal supremacy and deny the Roman Catholic doctrine of transubstantiation, but disagree among themselves regarding the real presence of Christ in the Eucharist.Protestants: A History from Wittenberg to Pennsylvania 1517–1740, pp. 32 & 50 They emphasize the priesthood of all believers, the doctrine of justification by faith alone (sola fide) rather than by or with good works, and a belief in the Bible alone (rather than with sacred tradition) as the highest authority in matters of faith and morals (sola scriptura).Mothering the Fatherland: A Protestant Sisterhood Repents for the Holocaust by George Faithful, p.159 The "Five solae" summarize the reformers' basic differences in theological beliefs and opposition to the teaching of the Roman Catholic Church.Philip Voerding: The Trouble with Christianity: A Concise Outline of Christian History
In the 16th century, Lutheranism spread from Germany into Denmark, Norway, Sweden, Finland, the Baltic states, and Iceland.Historical Dictionary of Lutheranism by Günther Gassmann, Duane H. Larson and Mark W. Oldenburg, p.9 Reformed churches were founded in Germany, Hungary, the Netherlands, Scotland, Switzerland and France by such reformers as John Calvin, Huldrych Zwingli, and John Knox.Calvinism by Abraham Kuyper The political separation of the Church of England from Rome under King Henry VIII brought England into this broad Reformation movement. Protestants developed their own culture, which made major contributions in education, the humanities and sciences, the political and social order, the economy and the arts, and other fields.Karl Heussi, Kompendium der Kirchengeschichte, 11. Auflage (1956), Tübingen (Germany), pp. 317–319, 325–326
With more than 900 million adherents, nearly 40 percent of Christians worldwide, Protestantism is more divided theologically and ecclesiastically than either Eastern Orthodoxy or Roman Catholicism, lacking both structural unity and central human authority. Some Protestant denominations do have a worldwide scope and distribution of membership, while others are confined to a single country. A majority of Protestants are members of a handful of denominational families: Adventism, Anglicanism, Baptist churches, Reformed churches, Lutheranism, Methodism, and Pentecostalism. Nondenominational, evangelical, charismatic, independent and other churches are on the rise, and constitute a significant part of Protestant Christianity.World Council of Churches: Evangelical churches: "Evangelical churches have grown exponentially in the second half of the 20th century and continue to show great vitality, especially in the global South. This resurgence may in part be explained by the phenomenal growth of Pentecostalism and the emergence of the charismatic movement, which are closely associated with evangelicalism. However, there can be no doubt that the evangelical tradition "per se" has become one of the major components of world Christianity. Evangelicals also constitute sizable minorities in the traditional Protestant and Anglican churches. In regions like Africa and Latin America, the boundaries between "evangelical" and "mainline" are rapidly changing and giving way to new ecclesial realities."Religion in Global Civil Society by Santa Barbara Mark Juergensmeyer Professor of Sociology and Director of the Global and International Studies Program University of California
Terminology
Memorial Church in Speyer, Germany
Six princes of the Holy Roman Empire and rulers of fourteen Imperial Free Cities, who issued a protest or dissent against the edict of the Diet of Speyer, were the first to be called Protestants.Protestant – Online Etymology Dictionary The edict reversed concessions made to the Lutherans with the approval of Holy Roman Emperor Charles V three years earlier.
During the Reformation, the term was hardly used outside of German politics. The word evangelical (), which refers to the gospel, was much more widely used for those involved in the religious movement. Nowadays, this word is still preferred among some of the historical Protestant denominations in the Lutheran and Calvinist traditions in Europe, and those with strong ties to them (e.g. Evangelical Lutheran Church in America). Above all, the term is used by Protestant bodies in the German-speaking area, such as the EKD. In continental Europe, an Evangelical is either a Lutheran or a Calvinist. The German word evangelisch means Protestant, and is different from the German evangelikal, which refers to churches shaped by Evangelicalism. The English word evangelical usually refers to Evangelical Protestant churches, and therefore not to Protestantism as a whole. It traces its roots back to the Puritans in England, where Evangelicalism originated, and then was brought to the United States. The word reformatorisch is used as an alternative for evangelisch in German, and is different from English reformed (), which refers to churches shaped by ideas of John Calvin, Huldrych Zwingli and other Reformed theologians.
Protestantism as a general term is now used in contradistinction to the other major Christian traditions, i.e. Roman Catholicism and Eastern Orthodoxy.
Initially, Protestant became a general term to mean any adherent of the Reformation movement in Germany and was taken up by Lutherans, even though Martin Luther himself insisted on Christian or Evangelical as the only acceptable names for individuals who professed Christ.
The term Protestant later acquired a broader sense, referring to a member of any Western church which subscribed to the main Protestant principles. However, it is often misused to mean any church outside the Roman and Eastern Orthodox communions.
Theology
Main principles
Various experts on the subject have tried to determine what makes a Christian denomination a part of Protestantism. The common consensus among most of them is that if a Christian denomination is to be considered Protestant, it must acknowledge the following three fundamental principles of Protestantism.
Scripture alone The belief in the Bible as the highest source of authority for the church. The early churches of the Reformation believed in a critical, yet serious, reading of scripture and holding the Bible as a source of authority higher than that of church tradition. The many abuses that had occurred in the Western Church before the Protestant Reformation led the Reformers to reject much of its tradition, though some would maintain tradition has been maintained and reorganized in the liturgy and in the confessions of the Protestant churches of the Reformation. In the early 20th century, a less critical reading of the Bible developed in the United States, leading to a "fundamentalist" reading of Scripture. Christian fundamentalists read the Bible as the "inerrant, infallible" Word of God, as do the Roman Catholic, Eastern Orthodox, Anglican and Lutheran churches, but interpret it in a literalist fashion without using the historical critical method.
Justification by faith alone The belief that believers are justified, or pardoned for sin, solely on condition of faith in Christ rather than a combination of faith and good works. For Protestants, good works are a necessary consequence rather than cause of justification.Johann Jakob Herzog, Philip Schaff, Albert. The New Schaff-Herzog Encyclopedia of Religious Knowledge. 1911, page 419. https://books.google.com/books?id=AmYAAAAAMAAJ&pg=PA419
Universal priesthood of believers The universal priesthood of believers implies the right and duty of the Christian laity not only to read the Bible in the vernacular, but also to take part in the government and all the public affairs of the Church. It is opposed to the hierarchical system which puts the essence and authority of the Church in an exclusive priesthood, and makes ordained priests the necessary mediators between God and the people.
Trinity
Protestants who adhere to the Nicene Creed believe in three persons (God the Father, Jesus the Son, and the Holy Spirit) as one God. Others, beginning with the Polish Brethren reject the Trinity.
Movements emerging around the time of the Protestant Reformation, but not a part of Protestantism, e.g. Unitarianism also reject the Trinity. Unitarianism continues to have a presence mainly in Transylvania, England and the United States, as well as elsewhere.
Five solae
The Five solae are five Latin phrases (or slogans) that emerged during the Protestant Reformation and summarize the reformers' basic differences in theological beliefs in opposition to the teaching of the Catholic Church of the day. The Latin word sola means "alone", "only", or "single".
The use of the phrases as summaries of teaching emerged over time during the Reformation, based on the overarching principle of sola scriptura (by scripture alone). This idea contains the four main doctrines on the Bible: that its teaching is needed for salvation (necessity); that all the doctrine necessary for salvation comes from the Bible alone (sufficiency); that everything taught in the Bible is correct (inerrancy); and that, by the Holy Spirit overcoming sin, believers may read and understand truth from the Bible itself, though understanding is difficult, so the means used to guide individual believers to the true teaching is often mutual discussion within the church (clarity).
The necessity and inerrancy were well-established ideas, garnering little criticism, though they later came under debate from outside during the Enlightenment. The most contentious idea at the time though was the notion that anyone could simply pick up the Bible and learn enough to gain salvation. Though the reformers were concerned with ecclesiology (the doctrine of how the church as a body works), they had a different understanding of the process in which truths in scripture were applied to life of believers, compared to the Catholics' idea that certain people within the church, or ideas that were old enough, had a special status in giving understanding of the text.
The second main principle, sola fide (by faith alone), states that faith in Christ is sufficient alone for eternal salvation. Though argued from scripture, and hence logically consequent to sola scriptura, this is the guiding principle of the work of Luther and the later reformers. Because sola scriptura placed the Bible as the only source of teaching, sola fide epitomises the main thrust of the teaching the reformers wanted to get back to, namely the direct, close, personal connection between Christ and the believer, hence the reformers' contention that their work was Christocentric.
The other solas, as statements, emerged later, but the thinking they represent was also part of the early Reformation.
Solus Christus: Christ alone
The Protestants characterize the dogma concerning the Pope as Christ's representative head of the Church on earth, the concept of works made meritorious by Christ, and the Catholic idea of a treasury of the merits of Christ and his saints, as a denial that Christ is the only mediator between God and man. Catholics, on the other hand, maintained the traditional understanding of Judaism on these questions, and appealed to the universal consensus of Christian tradition.
Sola Gratia: Grace alone
Protestants perceived Roman Catholic salvation to be dependent upon the grace of God and the merits of one's own works. The reformers posited that salvation is a gift of God (i.e., God's act of free grace), dispensed by the Holy Spirit owing to the redemptive work of Jesus Christ alone. Consequently, they argued that a sinner is not accepted by God on account of the change wrought in the believer by God's grace, and that the believer is accepted without regard for the merit of his works, for no one deserves salvation.
Soli Deo Gloria: Glory to God alone
All glory is due to God alone since salvation is accomplished solely through his will and action—not only the gift of the all-sufficient atonement of Jesus on the cross but also the gift of faith in that atonement, created in the heart of the believer by the Holy Spirit. The reformers believed that human beings—even saints canonized by the Catholic Church, the popes, and the ecclesiastical hierarchy—are not worthy of the glory.
Christ's presence in the Eucharist
A Lutheran depiction of the Last Supper by Lucas Cranach the Elder, 1547
The Protestant movement began to diverge into several distinct branches in the mid-to-late 16th century. One of the central points of divergence was controversy over the Eucharist. Early Protestants rejected the Roman Catholic dogma of transubstantiation, which teaches that the bread and wine used in the sacrificial rite of the Mass lose their natural substance by being transformed into the body, blood, soul, and divinity of Christ. They disagreed with one another concerning the presence of Christ and his body and blood in Holy Communion.
Lutherans hold that within the Lord's Supper the consecrated elements of bread and wine are the true body and blood of Christ "in, with, and under the form" of bread and wine for all those who eat and drink it, a doctrine that the Formula of Concord calls the Sacramental union.Engelder, T.E.W., Popular Symbolics. St. Louis: Concordia Publishing House, 1934. p. 95, Part XXIV. "The Lord's Supper", paragraph 131. God earnestly offers to all who receive the sacrament forgiveness of sins and eternal salvation.
The Reformed churches emphasize the real spiritual presence, or sacramental presence, of Christ, saying that the sacrament is a means of saving grace through which only the elect believer actually partakes of Christ, but merely with the bread and wine rather than in the elements. Calvinists deny the Lutheran assertion that all communicants, both believers and unbelievers, orally receive Christ's body and blood in the elements of the sacrament but instead affirm that Christ is united to the believer through faith—toward which the supper is an outward and visible aid. This is often referred to as dynamic presence.
A Protestant holding a popular simplification of the Zwinglian view, without concern for theological intricacies as hinted at above, may see the Lord's Supper merely as a symbol of the shared faith of the participants, a commemoration of the facts of the crucifixion, and a reminder of their standing together as the body of Christ (a view referred to somewhat derisively as memorialism).
History
Pre-Reformation
Execution of Jan Hus in 1415
In the late 1130s, Arnold of Brescia, an Italian canon regular, became one of the first theologians to attempt to reform the Roman Catholic Church. After his death, his teachings on apostolic poverty gained currency among Arnoldists, and later more widely among Waldensians and the Spiritual Franciscans, though no written word of his has survived the official condemnation. In the early 1170s, Peter Waldo founded the Waldensians. He advocated an interpretation of the Gospel that led to conflicts with the Roman Catholic Church. By 1215, the Waldensians were declared heretical and subject to persecution. Despite that, the movement continues to exist to this day in Italy, as a part of the wider Reformed tradition.
In the 1370s, John Wycliffe—later dubbed the "Morning Star of Reformation"—started his activity as an English reformer. He rejected papal authority over secular power, translated the Bible into vernacular English, and preached anticlerical and biblically-centred reforms.
Beginning in the first decade of the 15th century, Jan Hus—a Roman Catholic priest, Czech reformist and professor—influenced by John Wycliffe's writings, founded the Hussite movement. He strongly advocated his reformist Bohemian religious denomination. He was excommunicated and burned at the stake in Constance (Bishopric of Constance) in 1415 by secular authorities for unrepentant and persistent heresy. After his execution, a revolt erupted. The Hussites defeated five consecutive crusades proclaimed against them by the Pope.
Later on, theological disputes caused a split within the Hussite movement, which divided broadly into moderate and radical parties. The moderate Utraquists maintained that both the bread and the wine should be administered to the people during the Eucharist. Another major faction were the radical Taborites, who opposed the Utraquists in the Battle of Lipany during the Hussite Wars. Other smaller regional Hussite branches in Bohemia included the Adamites, Orebites, Orphans and Praguers.
The Hussite Wars concluded with the victory of Holy Roman Emperor Sigismund, his Catholic allies and moderate Hussites and the defeat of the radical Hussites. After the war, both moderate and radical Hussitism was increasingly persecuted by the Catholics.
Starting in 1475, the Italian Dominican friar Girolamo Savonarola called for a Christian renewal. Later on, Martin Luther himself read some of the friar's writings and praised him as a martyr and forerunner whose ideas on faith and grace anticipated Luther's own doctrine of justification by faith alone.
Some of Hus' followers founded the Unitas Fratrum—"Unity of the Brethren"—which was renewed under the leadership of Count Nicolaus von Zinzendorf in Herrnhut, Saxony in 1722 after its almost total destruction in the Thirty Years' War and the Counter-Reformation. Today, it is usually referred to in English as the Moravian Church and in German as the Herrnhuter Brüdergemeine.
Reformation proper
The Protestant Reformation began as an attempt to reform the Roman Catholic Church.
On 31 October 1517 (All Hallows' Eve) Martin Luther allegedly nailed his Ninety-five Theses (Disputation on the Power of Indulgences) to the door of All Saints' Church, the Castle Church in Wittenberg, Germany, detailing doctrinal and practical abuses of the Roman Catholic Church, especially the selling of indulgences. The theses debated and criticized many aspects of the Church and the papacy, including the doctrine of purgatory, particular judgment, and the authority of the pope. Luther would later write works against the Catholic devotion to the Virgin Mary, the intercession of and devotion to the saints, the sacraments, mandatory clerical celibacy, monasticism, the authority of the pope, the ecclesiastical law, censure and excommunication, the role of secular rulers in religious matters, the relationship between Christianity and the law, and good works.Schofield Martin Luther p. 122
The Reformation was a triumph of literacy and the new printing press invented by Johannes Gutenberg.Cameron European Reformation Luther's translation of the Bible into German was a decisive moment in the spread of literacy, and it also stimulated the printing and distribution of religious books and pamphlets. From 1517 onward, religious pamphlets flooded much of Europe.Edwards Printing, Propaganda, and Martin Luther
Following the excommunication of Luther and condemnation of the Reformation by the Pope, the work and writings of John Calvin were influential in establishing a loose consensus among various groups in Switzerland, Scotland, Hungary, Germany and elsewhere. After Geneva expelled its bishop in 1526, and after the unsuccessful attempts of the Bern reformer William Farel, Calvin was asked to use the organisational skill he had gathered as a student of law to discipline the city of Geneva. His Ordinances of 1541 involved a collaboration of Church affairs with the city council and consistory to bring morality to all areas of life. After the establishment of the Geneva academy in 1559, Geneva became the unofficial capital of the Protestant movement, providing refuge for Protestant exiles from all over Europe and educating them as Calvinist missionaries. The faith continued to spread after Calvin's death in 1564.
Protestantism also spread from the German lands into France, where the Protestants were nicknamed Huguenots. Calvin continued to take an interest in the French religious affairs from his base in Geneva. He regularly trained pastors to lead congregations there. Despite heavy persecution, the Reformed tradition made steady progress across large sections of the nation, appealing to people alienated by the obduracy and the complacency of the Catholic establishment. French Protestantism came to acquire a distinctly political character, made all the more obvious by the conversions of nobles during the 1550s. This established the preconditions for a series of conflicts, known as the French Wars of Religion. The civil wars gained impetus with the sudden death of Henry II of France in 1559. Atrocity and outrage became the defining characteristics of the time, illustrated at their most intense in the St. Bartholomew's Day massacre of August 1572, when the Roman Catholic party annihilated between 30,000 and 100,000 Huguenots across France. The wars only concluded when Henry IV of France issued the Edict of Nantes, promising official toleration of the Protestant minority, but under highly restricted conditions. Roman Catholicism remained the official state religion, and the fortunes of French Protestants gradually declined over the next century, culminating in Louis XIV's Edict of Fontainebleau which revoked the Edict of Nantes and made Roman Catholicism the sole legal religion once again. In response to the Edict of Fontainebleau, Frederick William I, Elector of Brandenburg declared the Edict of Potsdam, giving free passage to Huguenot refugees. In the late 17th century many Huguenots fled to England, the Netherlands, Prussia, Switzerland, and the English and Dutch overseas colonies. A significant community in France remained in the Cévennes region.
Parallel to events in Germany, a movement began in Switzerland under the leadership of Huldrych Zwingli. Zwingli was a scholar and preacher, who in 1518 moved to Zurich. Although the two movements agreed on many issues of theology, some unresolved differences kept them separate. A long-standing resentment between the German states and the Swiss Confederation led to heated debate over how much Zwingli owed his ideas to Lutheranism. The German Prince Philip of Hesse saw potential in creating an alliance between Zwingli and Luther. A meeting was held in his castle in 1529, now known as the Colloquy of Marburg, which has become infamous for its failure. The two men could not come to any agreement due to their disputation over one key doctrine.
In 1534, King Henry VIII put an end to all papal jurisdiction in England, after the Pope failed to annul his marriage to Catherine of Aragon;William P. Haugaard "The History of Anglicanism I" in The Study of Anglicanism Stephen Sykes and John Booty (eds) (SPCK 1987) pp. 6–7 this opened the door to reformational ideas. Reformers in the Church of England alternated between sympathies for ancient Catholic tradition and more Reformed principles, gradually developing into a tradition considered a middle way (via media) between the Roman Catholic and Protestant traditions. The English Reformation followed a particular course. The different character of the English Reformation came primarily from the fact that it was driven initially by the political necessities of Henry VIII. King Henry decided to remove the Church of England from the authority of Rome. In 1534, the Act of Supremacy recognized Henry as the only Supreme Head on earth of the Church of England. Between 1535 and 1540, under Thomas Cromwell, the policy known as the Dissolution of the Monasteries was put into effect. Following a brief Roman Catholic restoration during the reign of Mary I, a loose consensus developed during the reign of Elizabeth I. The Elizabethan Religious Settlement largely formed Anglicanism into a distinctive church tradition. The compromise was uneasy and was capable of veering between extreme Calvinism on the one hand and Roman Catholicism on the other. It was relatively successful until the Puritan Revolution or English Civil War in the 17th century.
The success of the Counter-Reformation on the Continent and the growth of a Puritan party dedicated to further Protestant reform polarised the Elizabethan Age. The early Puritan movement was a movement for reform in the Church of England. The desire was for the Church of England to resemble more closely the Protestant churches of Europe, especially Geneva. The later Puritan movement, often referred to as dissenters and nonconformists, eventually led to the formation of various Reformed denominations.
The Scottish Reformation of 1560 decisively shaped the Church of Scotland.Article 1, of the Articles Declaratory of the Constitution of the Church of Scotland 1921 states 'The Church of Scotland adheres to the Scottish Reformation'. The Reformation in Scotland culminated ecclesiastically in the establishment of a church along Reformed lines, and politically in the triumph of English influence over that of France. John Knox is regarded as the leader of the Scottish Reformation. The Scottish Reformation Parliament of 1560 repudiated the pope's authority by the Papal Jurisdiction Act 1560, forbade the celebration of the Mass and approved a Protestant Confession of Faith. It was made possible by a revolution against French hegemony under the regime of the regent Mary of Guise, who had governed Scotland in the name of her absent daughter.
Some of the most important activists of the Protestant Reformation included Jacobus Arminius, Theodore Beza, Martin Bucer, Andreas von Carlstadt, Heinrich Bullinger, Balthasar Hubmaier, Thomas Cranmer, William Farel, Thomas Müntzer, Laurentius Petri, Olaus Petri, Philipp Melanchthon, Menno Simons, Louis de Berquin, Primož Trubar and John Smyth.
In the course of this religious upheaval, the German Peasants' War of 1524–25 swept through the Bavarian, Thuringian and Swabian principalities. After the Eighty Years' War in the Low Countries and the French Wars of Religion, the confessional division of the states of the Holy Roman Empire eventually erupted in the Thirty Years' War between 1618 and 1648. It devastated much of Germany, killing between 25% and 40% of its population."History of Europe – Demographics". Encyclopædia Britannica. The main tenets of the Peace of Westphalia, which ended the Thirty Years' War, were:
All parties would now recognise the Peace of Augsburg of 1555, by which each prince would have the right to determine the religion of his own state, the options being Roman Catholicism, Lutheranism, and now Calvinism (the principle of cuius regio, eius religio).
Christians living in principalities where their denomination was not the established church were guaranteed the right to practice their faith in public during allotted hours and in private at their will.
The treaty also effectively ended the papacy's pan-European political power. Pope Innocent X declared the treaty "null, void, invalid, iniquitous, unjust, damnable, reprobate, inane, empty of meaning and effect for all times" in his bull Zelo Domus Dei. European sovereigns, Roman Catholic and Protestant alike, ignored his verdict.Cross, (ed.) "Westphalia, Peace of" Oxford Dictionary of the Christian Church
Post-Reformation
The Great Awakenings were periods of rapid and dramatic religious revival in Anglo-American religious history.
The First Great Awakening was an evangelical and revitalization movement that swept through Protestant Europe and British America, especially the American colonies in the 1730s and 1740s, leaving a permanent impact on American Protestantism. It resulted from powerful preaching that gave listeners a sense of deep personal revelation of their need of salvation by Jesus Christ. Pulling away from ritual, ceremony, sacramentalism and hierarchy, it made Christianity intensely personal to the average person by fostering a deep sense of spiritual conviction and redemption, and by encouraging introspection and a commitment to a new standard of personal morality.Thomas S. Kidd, The Great Awakening: The Roots of Evangelical Christianity in Colonial America (2009)
thumb|right|270px|1839 Methodist camp meeting during the Second Great Awakening in the U.S.
The Second Great Awakening began around 1790. It gained momentum by 1800. After 1820, membership rose rapidly among Baptist and Methodist congregations, whose preachers led the movement. It was past its peak by the late 1840s. It has been described as a reaction against skepticism, deism, and rationalism, although why those forces became pressing enough at the time to spark revivals is not fully understood.Nancy Cott, "Young Women in the Great Awakening in New England," Feminist Studies 3, no. 1/2 (Autumn 1975): 15. It enrolled millions of new members in existing evangelical denominations and led to the formation of new denominations.
The Third Great Awakening refers to a hypothetical historical period that was marked by religious activism in American history and spans the late 1850s to the early 20th century.William G. McLoughlin, Revivals Awakenings and Reform (1980) It affected pietistic Protestant denominations and had a strong element of social activism.Mark A. Noll, A History of Christianity in the United States and Canada (1992) pp. 286–310 It gathered strength from the postmillennial belief that the Second Coming of Christ would occur after mankind had reformed the entire earth. It was affiliated with the Social Gospel Movement, which applied Christianity to social issues and gained its force from the Awakening, as did the worldwide missionary movement. New groupings emerged, such as the Holiness, Nazarene, and Christian Science movements.Robert William Fogel, The Fourth Great Awakening and the Future of Egalitarianism (2000)
The Fourth Great Awakening was a Christian religious awakening that some scholars—most notably, Robert Fogel—say took place in the United States in the late 1960s and early 1970s, while others look at the era following World War II. The terminology is controversial. Thus, the idea of a Fourth Great Awakening itself has not been generally accepted.Robert William Fogel (2000), The Fourth Great Awakening & the Future of Egalitarianism; see the review by Randall Balmer, Journal of Interdisciplinary History 2002 33(2): 322–325
In 1814, Le Réveil swept through Calvinist regions in Switzerland and France.
In 1904, a Protestant revival in Wales had tremendous impact on the local population. A part of British modernization, it drew many people to churches, especially Methodist and Baptist ones.
A noteworthy development in 20th-century Protestant Christianity was the rise of the modern Pentecostal movement. Sprung from Methodist and Wesleyan roots, it arose out of meetings at an urban mission on Azusa Street in Los Angeles. From there it spread around the world, carried by those who experienced what they believed to be miraculous moves of God there. Pentecost-like manifestations have been in evidence throughout Christian history, for example in the two Great Awakenings. Pentecostalism, which in turn birthed the Charismatic movement within already established denominations, continues to be an important force in Western Christianity.
In the United States and elsewhere in the world, there has been a marked rise in the evangelical wing of Protestant denominations, especially those that are more exclusively evangelical, and a corresponding decline in the mainstream liberal churches. In the post–World War I era, Liberal Christianity was on the rise, and a considerable number of seminaries held and taught from a liberal perspective as well. In the post–World War II era, the trend began to swing back towards the conservative camp in America's seminaries and church structures.
In Europe, there has been a general move away from religious observance and belief in Christian teachings and a move towards secularism. The Enlightenment is largely responsible for the spread of secularism. Several scholars have argued for a link between the rise of secularism and Protestantism, attributing it to the wide-ranging freedom in the Protestant countries.Has Lutheranism caused secularism? In North America, South America and Australia Christian religious observance is much higher than in Europe. The United States remains particularly religious in comparison to other developed countries. South America, historically Roman Catholic, has experienced a large Evangelical and Pentecostal infusion in the 20th and 21st centuries.
Radical Reformation
thumb|Dissatisfaction with the outcome of a disputation in 1525 prompted Swiss Brethren to part ways with Huldrych Zwingli.
Unlike mainstream Lutheran, Calvinist and Zwinglian movements, the Radical Reformation, which had no state sponsorship, generally abandoned the idea of the "Church visible" as distinct from the "Church invisible". It was a rational extension of the state-approved Protestant dissent, which took the value of independence from constituted authority a step further, arguing the same for the civic realm. The Radical Reformation was non-mainstream, though in parts of Germany, Switzerland and Austria, a majority would sympathize with the Radical Reformation despite the intense persecution it faced from both Roman Catholics and Magisterial Protestants.
The early Anabaptists believed that their reformation must purify not only theology but also the actual lives of Christians, especially their political and social relationships.Gonzalez, A History of Christian Thought, 88. Therefore, the church should not be supported by the state, neither by tithes and taxes, nor by the use of the sword; Christianity was a matter of individual conviction, which could not be forced on anyone, but rather required a personal decision for it. Protestant ecclesial leaders such as Hubmaier and Hofmann preached the invalidity of infant baptism, advocating baptism as following conversion ("believer's baptism") instead. This was not a doctrine new to the reformers, but was taught by earlier groups, such as the Albigenses in 1147. Though most of the Radical Reformers were Anabaptist, some did not identify themselves with the mainstream Anabaptist tradition. Thomas Müntzer was involved in the German Peasants' War. Andreas Karlstadt disagreed theologically with Huldrych Zwingli and Martin Luther, teaching nonviolence and refusing to baptize infants while not rebaptizing adult believers. Kaspar Schwenkfeld and Sebastian Franck were influenced by German mysticism and spiritualism.
In the view of many associated with the Radical Reformation, the Magisterial Reformation had not gone far enough. The Radical Reformer Andreas von Bodenstein Karlstadt, for example, referred to the Lutheran theologians at Wittenberg as the "new papists".The Magisterial Reformation. Since the term "magister" also means "teacher", the Magisterial Reformation is also characterized by an emphasis on the authority of a teacher. This is made evident in the prominence of Luther, Calvin, and Zwingli as leaders of the reform movements in their respective areas of ministry. Because of their authority, they were often criticized by Radical Reformers as being too much like the Roman popes. A more political side of the Radical Reformation can be seen in the thought and practice of Hans Hut, although typically Anabaptism has been associated with pacifism.
Anabaptism, in the shape of its various diversifications such as the Amish, Mennonites and Hutterites, came out of the Radical Reformation. Later in history, the Schwarzenau Brethren, Bruderhof, and the Apostolic Christian Church would emerge in Anabaptist circles.
Denominations
thumb|upright=1.25|right|Protestantism as state religion:
Protestants refer to specific groupings of congregations or churches that share common foundational doctrines and a common group name as denominations. DIANE Publishing Company: Occupational Outlook Handbook, 1996–1997 The term denomination (national body) is to be distinguished from branch (denominational family; tradition), communion (international body) and congregation (church). An example (there is no universal way to classify Protestant churches, as their structures may vary broadly) to show the difference:
Branch/denominational family/tradition: Methodism
Communion/international body: World Methodist Council
Denomination/national body: United Methodist Church
Congregation/church: First United Methodist Church (Paintsville, Kentucky)
Protestants reject the Roman Catholic Church's doctrine that it is the one true church, believing in the invisible church, which consists of all who profess faith in Jesus Christ.Fr. John Morris: An Orthodox Response to the Recent Roman Catholic Declaration on the Nature of the Church Some Protestant denominations are less accepting of other denominations, and the basic orthodoxy of some is questioned by most of the others. Individual denominations also have formed over very subtle theological differences. Other denominations are simply regional or ethnic expressions of the same beliefs. Because the five solas are the main tenets of the Protestant faith, non-denominational groups and organizations are also considered Protestant.
Various ecumenical movements have attempted cooperation or reorganization of the various divided Protestant denominations, according to various models of union, but divisions continue to outpace unions, as there is no overarching authority to which any of the churches owe allegiance, which can authoritatively define the faith. Most denominations share common beliefs in the major aspects of the Christian faith while differing in many secondary doctrines, although what is major and what is secondary is a matter of idiosyncratic belief.
Several countries have established their national churches, linking the ecclesiastical structure with the state. Jurisdictions where a Protestant denomination has been established as a state religion include several Nordic countries: Denmark (including Greenland),Denmark – Constitution: Section 4 State Church, International Constitutional Law. the Faroe Islands (its church being independent since 2007),Fólkakirkjan's official website (in Faroese) IcelandConstitution of the Republic of Iceland: Article 62, Government of Iceland. and NorwayLøsere bånd, men fortsatt statskirke, ABC NyheterStaten skal ikke lenger ansette biskoper, NRKSlik blir den nye statskirkeordningen have established Evangelical Lutheran churches. Tuvalu has the only established church in the Reformed tradition in the world, while Tonga has one in the Methodist tradition.www.reformiert-online.net/adressen/detail.php?id=13338&lg=eng Te Ekalesia Kelisiano Tuvalu The Church of England is the officially established religious institution in England, and also the Mother Church of the worldwide Anglican Communion.
In 1869, Finland was the first Nordic country to disestablish its Evangelical Lutheran church by introducing the Church Act. Although the church still maintains a special relationship with the state, it is not described as a state religion in the Finnish Constitution or other laws passed by the Finnish Parliament.Finland – Constitution, Section 76 The Church Act, http://servat.unibe.ch/icl/fi00000_.html. In 2000, Sweden was the second Nordic country to do so.MAARIT JÄNTERÄ-JAREBORG: Religion and the Secular State in Sweden
United and uniting churches
United and uniting churches are churches formed from the merger or other form of union of two or more different Protestant denominations.
Historically, unions of Protestant churches were enforced by the state, usually in order to have stricter control over the religious sphere of its people, but also for other organizational reasons. As modern Christian ecumenism progresses, unions between various Protestant traditions are becoming more and more common, resulting in a growing number of united and uniting churches. Some of the recent major examples are the United Protestant Church of France (2013) and the Protestant Church in the Netherlands (2004). As mainline Protestantism shrinks in Europe and North America due to the rise of secularism, Reformed and Lutheran denominations merge, often creating large nationwide denominations. The phenomenon is much less common among evangelical, nondenominational and charismatic churches, as new ones arise and plenty of them remain independent of each other.
Perhaps the oldest official united church is found in Germany, where the Evangelical Church in Germany is a federation of Lutheran, United (Prussian Union) and Reformed churches, a union dating back to 1817. The first of the series of unions was at a synod in Idstein to form the Protestant Church in Hesse and Nassau in August 1817, commemorated in naming the church of Idstein Unionskirche one hundred years later.
Around the world, each united or uniting church comprises a different mix of predecessor Protestant denominations. Trends are visible, however, as most united and uniting churches have one or more predecessors with heritage in the Reformed tradition and many are members of the World Alliance of Reformed Churches.
Major branches
Protestants can be differentiated according to how they have been influenced by important movements since the Reformation, today regarded as branches. Some of these movements have a common lineage, sometimes directly spawning individual denominations. Due to the earlier stated multitude of denominations, this section discusses only the largest denominational families, or branches, widely considered to be a part of Protestantism. These are, in alphabetical order: Adventist, Anglican, Baptist, Calvinist (Reformed), Lutheran, Methodist and Pentecostal. A small but historically significant Anabaptist branch is also discussed.
The chart below shows the mutual relations and historical origins of the main Protestant denominational families, or their parts.
650px|center|Historical chart of the main Protestant branches
Adventism
Adventism began in the 19th century in the context of the Second Great Awakening revival in the United States. The name refers to belief in the imminent Second Coming (or "Second Advent") of Jesus Christ. William Miller started the Adventist movement in the 1830s. His followers became known as Millerites.
Although the Adventist churches hold much in common, their theologies differ on whether the intermediate state is unconscious sleep or consciousness, whether the ultimate punishment of the wicked is annihilation or eternal torment, the nature of immortality, whether or not the wicked are resurrected after the millennium, and whether the sanctuary refers to the one in heaven or one on earth. The movement has encouraged the examination of the whole Bible, leading Seventh-day Adventists and some smaller Adventist groups to observe the Sabbath. The General Conference of Seventh-day Adventists has compiled that church's core beliefs in the 28 Fundamental Beliefs (1980 and 2005), which use Biblical references as justification.
In 2010, Adventism claimed some 22 million believers scattered in various independent churches.Christianity report The largest church within the movement—the Seventh-day Adventist Church—has more than 18 million members.
Anabaptism
Anabaptism traces its origins to the Radical Reformation. Anabaptists believe in delaying baptism until the candidate confesses his or her faith. Although some consider this movement to be an offshoot of Protestantism, others see it as a distinct one. The Amish, Hutterites, and Mennonites are direct descendants of the movement. Schwarzenau Brethren, Bruderhof, and the Apostolic Christian Church are considered later developments among the Anabaptists.
The name Anabaptist, meaning "one who baptizes again", was given to them by their persecutors in reference to the practice of re-baptizing converts who already had been baptized as infants. Anabaptists required that baptismal candidates be able to make their own confessions of faith and so rejected baptism of infants. The early members of this movement did not accept the name Anabaptist, claiming that since infant baptism was unscriptural and null and void, the baptizing of believers was not a re-baptism but in fact their first real baptism. As a result of their views on the nature of baptism and other issues, Anabaptists were heavily persecuted during the 16th century and into the 17th by both Magisterial Protestants and Roman Catholics.Since the middle of the 20th century, the German-speaking world no longer uses the term "Wiedertäufer" (translation: "Re-baptizers") considering it biased. The term "Täufer" (translation: "Baptizers") is now used, which is considered more impartial. From the perspective of their persecutors, the "Baptizers" baptized for the second time those "who as infants had already been baptized". Since the denigrative term Anabaptist signifies re-baptizing, it is considered a polemic term and therefore has been dropped from use in modern German. However, in the English-speaking world it is still in use in order to distinguish the "Baptizers" more clearly from the "Baptists" who emerged later. While most Anabaptists adhered to a literal interpretation of the Sermon on the Mount, which precluded taking oaths, participating in military actions, and participating in civil government, some who practiced re-baptism felt otherwise.For example, the followers of Thomas Müntzer and Balthasar Hubmaier. They were thus technically Anabaptists, even though conservative Amish, Mennonites, and Hutterites and some historians tend to consider them as outside of true Anabaptism. Anabaptist reformers of the Radical Reformation are divided into Radical and the so-called Second Front. Some important Radical Reformation theologians were John of Leiden, Thomas Müntzer, Kaspar Schwenkfeld, Sebastian Franck, and Menno Simons. Second Front Reformers included Hans Denck, Conrad Grebel, Balthasar Hubmaier and Felix Manz.
Anglicanism
Anglicanism comprises the Church of England and churches which are historically tied to it or hold similar beliefs, worship practices and church structures. The word Anglican originates in ecclesia anglicana, a medieval Latin phrase dating to at least 1246 that means the English Church. There is no single "Anglican Church" with universal juridical authority, since each national or regional church has full autonomy. As the name suggests, the communion is an association of churches in full communion with the Archbishop of Canterbury. The great majority of Anglicans are members of churches which are part of the international Anglican Communion, which has 85 million adherents.The Anglican Communion official website – "Provincial Registry"
The Church of England declared its independence from the Catholic Church at the time of the Elizabethan Religious Settlement. Many of the new Anglican formularies of the mid-16th century corresponded closely to those of contemporary Reformed tradition. These reforms were understood by one of those most responsible for them, the then Archbishop of Canterbury, Thomas Cranmer, as navigating a middle way between two of the emerging Protestant traditions, namely Lutheranism and Calvinism.Diarmaid MacCulloch, Thomas Cranmer: A Life, Yale University Press, p.617 (1996). By the end of the century, the retention in Anglicanism of many traditional liturgical forms and of the episcopate was already seen as unacceptable by those promoting the most developed Protestant principles.
Unique to Anglicanism is the Book of Common Prayer, the collection of services that worshippers in most Anglican churches used for centuries. While it has since undergone many revisions and Anglican churches in different countries have developed other service books, the Book of Common Prayer is still acknowledged as one of the ties that bind the Anglican Communion together.
Baptists
Baptists subscribe to a doctrine that baptism should be performed only for professing believers (believer's baptism, as opposed to infant baptism), and that it must be done by complete immersion (as opposed to affusion or sprinkling). Other tenets of Baptist churches include soul competency (liberty), salvation through faith alone, Scripture alone as the rule of faith and practice, and the autonomy of the local congregation. Baptists recognize two ministerial offices, pastors and deacons. Baptist churches are widely considered to be Protestant churches, though some Baptists disavow this identity.Buescher, John. "Baptist Origins." Teaching History. Retrieved 23 September 2011.
Diverse from their beginning, those identifying as Baptists today differ widely from one another in what they believe, how they worship, their attitudes toward other Christians, and their understanding of what is important in Christian discipleship.
Historians trace the earliest church labeled Baptist back to 1609 in Amsterdam, with English Separatist John Smyth as its pastor.Gourley, Bruce. "A Very Brief Introduction to Baptist History, Then and Now." The Baptist Observer. In accordance with his reading of the New Testament, he rejected baptism of infants and instituted baptism only of believing adults. Baptist practice spread to England, where the General Baptists considered Christ's atonement to extend to all people, while the Particular Baptists believed that it extended only to the elect. In 1638, Roger Williams established the first Baptist congregation in the North American colonies. In the mid-18th century, the First Great Awakening increased Baptist growth in both New England and the South."Baptist." 2010. Encyclopædia Britannica Online. The Second Great Awakening in the South in the early 19th century increased church membership, as did the preachers' lessening of support for abolition and manumission of slavery, which had been part of the 18th-century teachings. Baptist missionaries have spread their church to every continent.
The Baptist World Alliance reports more than 41 million members in more than 150,000 congregations. In 2002, there were over 100 million Baptists and Baptistic group members worldwide and over 33 million in North America. The largest Baptist association is the Southern Baptist Convention, with the membership of associated churches totaling more than 15 million.
Calvinism
Calvinism, also called the Reformed tradition, was advanced by several theologians such as Martin Bucer, Heinrich Bullinger, Peter Martyr Vermigli, and Huldrych Zwingli, but this branch of Christianity bears the name of the French reformer John Calvin because of his prominent influence on it and because of his role in the confessional and ecclesiastical debates throughout the 16th century.
Today, this term also refers to the doctrines and practices of the Reformed churches of which Calvin was an early leader. Less commonly, it can refer to the individual teaching of Calvin himself. The particulars of Calvinist theology may be stated in a number of ways. Perhaps the best known summary is contained in the five points of Calvinism, though these points identify the Calvinist view on soteriology rather than summarizing the system as a whole. Broadly speaking, Calvinism stresses the sovereignty or rule of God in all things—in salvation but also in all of life. This concept is seen clearly in the doctrines of predestination and total depravity.
The largest Reformed association is the World Communion of Reformed Churches, with more than 80 million members in 211 member denominations around the world. There are also more conservative Reformed federations, such as the World Reformed Fellowship and the International Conference of Reformed Churches, as well as independent churches.
Lutheranism
Lutheranism identifies with the theology of Martin Luther—a German friar, ecclesiastical reformer, and theologian.
Lutheranism advocates a doctrine of justification "by grace alone through faith alone on the basis of Scripture alone", the doctrine that scripture is the final authority on all matters of faith, denying the belief of the Catholic Church defined at the Council of Trent concerning authority coming from both the Scriptures and Tradition.Canons and Decrees of the Council of Trent, Fourth Session, Decree on Sacred Scripture (Denzinger 783 [1501]; Schaff 2:79-81). For a history of the discussion of various interpretations of the Tridentine decree, see Selby, Matthew L., The Relationship Between Scripture and Tradition according to the Council of Trent, unpublished Master's thesis, University of St Thomas, July 2013. In addition, Lutheranism accepts the teachings of the first four ecumenical councils of the undivided Christian Church.
Unlike the Reformed tradition, Lutherans retain many of the liturgical practices and sacramental teachings of the pre-Reformation Church, with a particular emphasis on the Eucharist, or Lord's Supper. Lutheran theology differs from Reformed theology in Christology, the purpose of God's Law, the divine grace, the concept of perseverance of the saints, and predestination.
Today, Lutheranism is one of the largest branches of Protestantism. With approximately 80 million adherents, it constitutes the third most common Protestant confession after historically Pentecostal denominations and Anglicanism. The Lutheran World Federation, the largest global communion of Lutheran churches, represents over 72 million people. Additionally, there are also many smaller bodies such as the International Lutheran Council and the Confessional Evangelical Lutheran Conference, as well as independent churches.
Methodism
Methodism identifies principally with the theology of John Wesley—an Anglican priest and evangelist. This evangelical movement originated as a revival within the 18th-century Church of England and became a separate Church following Wesley's death. Because of vigorous missionary activity, the movement spread throughout the British Empire, the United States, and beyond, today claiming approximately 80 million adherents worldwide. Originally it appealed especially to labourers and slaves.
Soteriologically, most Methodists are Arminian, emphasizing that Christ accomplished salvation for every human being, and that humans must exercise an act of the will to receive it (as opposed to the traditional Calvinist doctrine of monergism). Methodism is traditionally low church in liturgy, although this varies greatly between individual congregations; the Wesleys themselves greatly valued the Anglican liturgy and tradition. Methodism is known for its rich musical tradition; John Wesley's brother, Charles, was instrumental in writing much of the hymnody of the Methodist Church, and many other eminent hymn writers come from the Methodist tradition.
Pentecostalism
Pentecostalism is a movement that places special emphasis on a direct personal experience of God through the baptism with the Holy Spirit. The term Pentecostal is derived from Pentecost, the Greek name for the Jewish Feast of Weeks. For Christians, this event commemorates the descent of the Holy Spirit upon the followers of Jesus Christ, as described in the second chapter of the Book of Acts.
This branch of Protestantism is distinguished by belief in the baptism with the Holy Spirit as an experience separate from conversion that enables a Christian to live a Holy Spirit–filled and empowered life. This empowerment includes the use of spiritual gifts such as speaking in tongues and divine healing—two other defining characteristics of Pentecostalism. Because of their commitment to biblical authority, spiritual gifts, and the miraculous, Pentecostals tend to see their movement as reflecting the same kind of spiritual power and teachings that were found in the Apostolic Age of the early church. For this reason, some Pentecostals also use the term Apostolic or Full Gospel to describe their movement.
Pentecostalism eventually spawned hundreds of new denominations, including large groups such as the Assemblies of God and the Church of God in Christ, both in the United States and elsewhere. There are over 279 million Pentecostals worldwide, and the movement is growing in many parts of the world, especially the global South. Since the 1960s, Pentecostalism has increasingly gained acceptance from other Christian traditions, and Pentecostal beliefs concerning Spirit baptism and spiritual gifts have been embraced by non-Pentecostal Christians in Protestant and Catholic churches through the Charismatic Movement. Together, Pentecostal and Charismatic Christianity numbers over 500 million adherents.
Other Protestants
There are many other Protestant denominations that do not fit neatly into the mentioned branches, and are far smaller in membership. Some groups of individuals who hold basic Protestant tenets identify themselves simply as "Christians" or "born-again Christians". They typically distance themselves from the confessionalism and/or creedalism of other Christian communitiesConfessionalism is a term employed by historians to refer to "the creation of fixed identities and systems of beliefs for separate churches which had previously been more fluid in their self-understanding, and which had not begun by seeking separate identities for themselves—they had wanted to be truly Catholic and reformed." (MacCulloch, The Reformation: A History, p. xxiv.) by calling themselves "non-denominational" or "evangelical". Often founded by individual pastors, they have little affiliation with historic denominations.
Hussitism follows the teachings of Czech reformer Jan Hus, who became the best-known representative of the Bohemian Reformation and one of the forerunners of the Protestant Reformation. This predominantly religious movement was propelled by social issues and strengthened Czech national awareness. Among present-day Christians, Hussite traditions are represented in the Moravian Church, Unity of the Brethren, and the refounded Czechoslovak Hussite churches.Nĕmec, Ludvík "The Czechoslovak heresy and schism: the emergence of a national Czechoslovak church," American Philosophical Society, Philadelphia, 1975, ISBN 0-87169-651-7
The Plymouth Brethren are a conservative, low church, evangelical movement, whose history can be traced to Dublin, Ireland, in the late 1820s, originating from Anglicanism. Among other beliefs, the group emphasizes sola scriptura. Brethren generally see themselves not as a denomination, but as a network, or even as a collection of overlapping networks, of like-minded independent churches. Although the group refused for many years to take any denominational name to itself—a stance that some of them still maintain—the title The Brethren is one that many of their number are comfortable with, in that the Bible designates all believers as brethren.
The Holiness movement refers to a set of beliefs and practices emerging from 19th-century Methodism, and a number of evangelical denominations, parachurch organizations, and movements which emphasized those beliefs as a central doctrine. There are an estimated 12 million adherents in Holiness movement churches. The Salvation Army and The Wesleyan Church are notable examples.
Quakers, or Friends, are members of a family of religious movements collectively known as the Religious Society of Friends. The central unifying doctrine of these movements is the priesthood of all believers. Many Friends view themselves as members of a Christian denomination. They include those with evangelical, holiness, liberal, and traditional conservative Quaker understandings of Christianity. Unlike many other groups that emerged within Christianity, the Religious Society of Friends has actively tried to avoid creeds and hierarchical structures.The Trouble With "Ministers" by Chuck Fager gives an overview of the hierarchy Friends had until it began to be abolished in the mid-eighteenth century. Retrieved 25 April 2014.
Unitarianism is sometimes considered Protestant due to its origins in the Reformation and strong cooperation with other Protestants since the 16th century. It is excluded due to its Nontrinitarian nature. Unitarians can be regarded as Nontrinitarian Protestants, or simply Nontrinitarians. Unitarianism has been popular in the region of Transylvania within today's Romania, England, and the United States. It originated almost simultaneously in Transylvania and the Polish-Lithuanian Commonwealth.
Interdenominational movements
There are also Christian movements which cross denominational lines and even branches, and which cannot be classified on the same level as the previously mentioned forms. Evangelicalism is a prominent example. Some of those movements are active exclusively within Protestantism, some are Christian-wide. Transdenominational movements are sometimes capable of affecting parts of the Roman Catholic Church, as the Charismatic Movement does, which aims to incorporate beliefs and practices similar to Pentecostals into the various branches of Christianity. Neo-charismatic churches are sometimes regarded as a subgroup of the Charismatic Movement. Both are put under a common label of Charismatic Christianity (so-called Renewalists), along with Pentecostals. Nondenominational churches and various house churches often adopt, or are akin to, one of these movements.
Megachurches are usually influenced by interdenominational movements. Globally, these large congregations are a significant development in Protestant Christianity. In the United States, the phenomenon has more than quadrupled in the past two decades.http://www.secularhumanism.org/index.php?section=library&page=tflynn_26_5 It has since spread worldwide.
The chart below shows the mutual relations and historical origins of the main interdenominational movements and other developments within Protestantism.
650px|center|Links between interdenominational movements and other developments within Protestantism
Evangelicalism
Evangelicalism, or Evangelical Protestantism, is a worldwide, transdenominational movement which maintains that the essence of the gospel consists in the doctrine of salvation by grace through faith in Jesus Christ's atonement.
Evangelicals are Christians who believe in the centrality of the conversion or "born again" experience in receiving salvation, believe in the authority of the Bible as God's revelation to humanity and have a strong commitment to evangelism or sharing the Christian message.
It gained great momentum in the 18th and 19th centuries with the emergence of Methodism and the Great Awakenings in Britain and North America. The origins of Evangelicalism are usually traced back to the English Methodist movement, Nicolaus Zinzendorf, the Moravian Church, Lutheran pietism, Presbyterianism and Puritanism. Among leaders and major figures of the Evangelical Protestant movement were John Wesley, George Whitefield, Jonathan Edwards, Billy Graham, Harold John Ockenga, John Stott and Martyn Lloyd-Jones.
There are an estimated 285,480,000 Evangelicals, corresponding to 13.1% of the Christian population and 4.1% of the total world population. The Americas, Africa and Asia are home to the majority of Evangelicals. The United States has the largest concentration of Evangelicals. Evangelicalism is gaining popularity both in and outside the English-speaking world, especially in Latin America and the developing world.
Charismatic movement
The Charismatic movement is the international trend of historically mainstream congregations adopting beliefs and practices similar to Pentecostals. Fundamental to the movement is the use of spiritual gifts. Among Protestants, the movement began around 1960.
In America, the Episcopalian Dennis Bennett is sometimes cited as one of the charismatic movement's seminal influences. In the United Kingdom, Colin Urquhart, Michael Harper, David Watson and others were in the vanguard of similar developments. The Massey conference in New Zealand in 1964 was attended by several Anglicans, including the Rev. Ray Muller, who went on to invite Bennett to New Zealand in 1966 and played a leading role in developing and promoting the Life in the Spirit seminars. Other Charismatic movement leaders in New Zealand include Bill Subritzky.
Larry Christenson, a Lutheran theologian based in San Pedro, California, did much in the 1960s and 1970s to interpret the charismatic movement for Lutherans. A very large annual conference on the matter was held in Minneapolis. Charismatic Lutheran congregations in Minnesota became especially large and influential, notably "Hosanna!" in Lakeville and North Heights in St. Paul. The next generation of Lutheran charismatics clusters around the Alliance of Renewal Churches. There is considerable charismatic activity among young Lutheran leaders in California, centered around an annual gathering at Robinwood Church in Huntington Beach. Richard A. Jensen's Touched by the Spirit, published in 1974, played a major role in shaping the Lutheran understanding of the charismatic movement.
In Congregational and Presbyterian churches which profess a traditionally Calvinist or Reformed theology, there are differing views regarding present-day continuation or cessation of the gifts (charismata) of the Spirit. Generally, however, Reformed charismatics distance themselves from renewal movements with tendencies which could be perceived as overemotional, such as Word of Faith, the Toronto Blessing, the Brownsville Revival and the Lakeland Revival. Prominent Reformed charismatic denominations are the Sovereign Grace Churches and the Every Nation Churches in the USA; in Great Britain, there is the Newfrontiers network of churches, whose leading figure is Terry Virgo.http://www.tateville.com/churches.html
A minority of Seventh-day Adventists today are charismatic. They are strongly associated with those holding more "progressive" Adventist beliefs. In the early decades of the church charismatic or ecstatic phenomena were commonplace.
Neo-charismatic churches
Neo-charismatic churches are a category of churches in the Christian Renewal movement. Neo-charismatics include the Third Wave, but are broader. They are now more numerous than Pentecostals (first wave) and charismatics (second wave) combined, owing to the remarkable growth of postdenominational and independent charismatic groups.
Neo-charismatics believe in and stress the post-Biblical availability of gifts of the Holy Spirit, including glossolalia, healing, and prophecy. They practice laying on of hands and seek the "infilling" of the Holy Spirit. However, a specific experience of baptism with the Holy Spirit may not be requisite for experiencing such gifts. No single form, governmental structure, or style of church service characterizes all neo-charismatic services and churches.
Some nineteen thousand denominations, with approximately 295 million individual adherents, are identified as neo-charismatic. Neo-charismatic tenets and practices are found in many independent, nondenominational or post-denominational congregations, with strength of numbers centered in the African independent churches, among the Han Chinese house-church movement, and in Latin American churches.
Other Protestant developments
Plenty of other movements and schools of thought, distinct from the widespread transdenominational ones and from the branches, have appeared within Protestant Christianity. Some of them are still in evidence today. Others appeared during the centuries following the Reformation and gradually disappeared over time, such as much of Pietism. Some inspired the current transdenominational ones, such as Evangelicalism, which has its foundation in Christian fundamentalism.
Arminianism
thumb|right|200px|Jacobus Arminius was a Dutch Reformed theologian, whose views influenced parts of Protestantism. A small Remonstrant community remains in the Netherlands.
Arminianism is based on the theological ideas of the Dutch Reformed theologian Jacobus Arminius (1560–1609) and his historic supporters known as Remonstrants. His teachings held to the five solae of the Reformation, but they were distinct from particular teachings of Martin Luther, Huldrych Zwingli, John Calvin, and other Protestant Reformers. Jacobus Arminius was a student of Theodore Beza at the Theological University of Geneva. Arminianism is known to some as a soteriological diversification of Calvinism."Chambers Biographical Dictionary," ed. Magnus Magnusson (Chambers: Cambridge University Press, 1995), 62. However, to others, Arminianism is a reclamation of early Church theological consensus.Kenneth D. Keathley, "The Work of God: Salvation," in A Theology for the Church, ed. Daniel L. Akin (Nashville: B&H Academic, 2007), 703. Dutch Arminianism was originally articulated in the Remonstrance (1610), a theological statement signed by 45 ministers and submitted to the States General of the Netherlands. Many Christian denominations have been influenced by Arminian views on the will of man being freed by grace prior to regeneration, notably the Baptists in the 17th century,Robert G. Torbet, A History of the Baptists, third edition the Methodists in the 18th century and the Seventh-day Adventist Church in the 19th century.
The original beliefs of Jacobus Arminius himself are commonly defined as Arminianism, but more broadly, the term may embrace the teachings of Hugo Grotius, John Wesley, and others as well. Classical Arminianism and Wesleyan Arminianism are the two main schools of thought. Wesleyan Arminianism is often identified with Methodism. The two systems of Calvinism and Arminianism share both a common history and many doctrines. However, because of their differences over the doctrines of divine predestination and election, many people view these schools of thought as opposed to each other. In short, the difference can be seen ultimately by whether God allows His desire to save all to be resisted by an individual's will (in the Arminian doctrine) or whether God's grace is irresistible and limited to only some (in Calvinism). Some Calvinists assert that the Arminian perspective presents a synergistic system of salvation and therefore is not only by grace, while Arminians firmly reject this conclusion. Many consider the theological differences to be crucial differences in doctrine, while others find them to be relatively minor.Gonzalez, Justo L. The Story of Christianity, Vol. Two: The Reformation to the Present Day (New York: Harpercollins Publishers, 1985; reprint – Peabody: Prince Press, 2008) 180
Pietism
Pietism was an influential movement within Lutheranism that combined the 17th century Lutheran principles with the Reformed emphasis on individual piety and living a vigorous Christian life.In places, such as parts of England and America, where Pietism was frequently juxtaposed with Roman Catholicism, Catholics also became naturally influenced by Pietism, helping to foster a stronger tradition of congregational hymn-singing, including among Pietists who converted to Catholicism and brought their pietistic inclination with them, such as Frederick William Faber.
It began in the late 17th century, reached its zenith in the mid-18th century, and declined through the 19th century, having almost vanished in America by the end of the 20th century. While declining as an identifiable Lutheran group, some of its theological tenets influenced Protestantism generally, inspiring the Anglican priest John Wesley to begin the Methodist movement and Alexander Mack to begin the Brethren movement among Anabaptists.
Though Pietism shares an emphasis on personal behavior with the Puritan movement, and the two are often confused, there are important differences, particularly in the concept of the role of religion in government.Calvinist Puritans believed that government was ordained by God to enforce Christian behavior upon the world; pietists see the government as a part of the world, and believers were called to voluntarily live faithful lives independent of government.
Puritanism, English dissenters and nonconformists
The Puritans were a group of English Protestants in the 16th and 17th centuries, which sought to purify the Church of England of what they considered to be Roman Catholic practices, maintaining that the church was only partially reformed. Puritanism in this sense was founded by some of the returning clergy exiled under Mary I shortly after the accession of Elizabeth I of England in 1558, as an activist movement within the Church of England.
Puritans were blocked from changing the established church from within, and were severely restricted in England by laws controlling the practice of religion. Their beliefs, however, were transported by the emigration of congregations to the Netherlands (and later to New England), and by evangelical clergy to Ireland (and later into Wales), and were spread into lay society and parts of the educational system, particularly certain colleges of the University of Cambridge. They took on distinctive beliefs about clerical dress and in opposition to the episcopal system, particularly after the 1619 conclusions of the Synod of Dort were resisted by the English bishops. They largely adopted Sabbatarianism in the 17th century, and were influenced by millennialism.
They formed, and identified with various religious groups advocating greater purity of worship and doctrine, as well as personal and group piety. Puritans adopted a Reformed theology, but they also took note of radical criticisms of Zwingli in Zurich and Calvin in Geneva. In church polity, some advocated for separation from all other Christians, in favor of autonomous gathered churches. These separatist and independent strands of Puritanism became prominent in the 1640s, when the supporters of a Presbyterian polity in the Westminster Assembly were unable to forge a new English national church.
Nonconforming Protestants along with the Protestant refugees from continental Europe were the primary founders of the United States of America.
Neo-orthodoxy and Paleo-orthodoxy
Karl Barth, often regarded as the greatest Protestant theologian of the twentieth century
A non-fundamentalist rejection of liberal Christianity, associated primarily with Karl Barth and Jürgen Moltmann, neo-orthodoxy sought to counteract the tendency of liberal theology to make theological accommodations to modern scientific perspectives. It is sometimes called "Crisis theology", reflecting the influence of philosophical existentialism on some important segments of the movement; somewhat confusingly, it is also sometimes called neo-evangelicalism.
Paleo-orthodoxy is a movement similar in some respects to neo-evangelicalism but emphasizing the ancient Christian consensus of the undivided church of the first millennium AD, including in particular the early creeds and church councils as a means of properly understanding the scriptures. The movement is cross-denominational; its most notable exponent is United Methodist theologian Thomas Oden.
Christian fundamentalism
In reaction to liberal biblical criticism, fundamentalism arose in the 20th century, primarily in the United States, among those denominations most affected by Evangelicalism.
Fundamentalist theology tends to stress Biblical inerrancy and Biblical literalism.
Toward the end of the 20th century, some came to confuse evangelicalism and fundamentalism; however, the labels represent very distinct differences of approach that both groups are diligent to maintain. Because of fundamentalism's dramatically smaller size, it is often classified simply as an ultra-conservative branch of evangelicalism.
Modernism and liberalism
Modernism and liberalism do not constitute rigorous and well-defined schools of theology, but are rather an inclination by some writers and teachers to integrate Christian thought into the spirit of the Age of Enlightenment. New understandings of history and the natural sciences of the day led directly to new approaches to theology. Its opposition to the fundamentalist teaching resulted in religious debates, such as the Fundamentalist–Modernist Controversy within the Presbyterian Church in the United States of America in the 1920s.
Protestant culture
Although the Reformation was a religious movement, it also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences, the political and social order, the economy, and the arts. Protestant churches reject the idea of a celibate priesthood and thus allow their clergy to marry. Many of their families contributed to the development of intellectual elites in their countries.Karl Heussi, Kompendium der Kirchengeschichte, p. 319 Since about 1950, women have entered the ministry, and some have assumed leading positions (e.g. bishops), in most Protestant churches.
As the Reformers wanted all members of the church to be able to read the Bible, education on all levels got a strong boost. By the middle of the eighteenth century, the literacy rate in England was about 60 per cent, in Scotland 65 per cent, and in Sweden eight of ten men and women were able to read and to write.Heinrich August Winkler (2012), Geschichte des Westens. Von den Anfängen in der Antike bis zum 20. Jahrhundert, Third, Revised Edition, Munich (Germany), p. 233 Colleges and universities were founded. For example, the Puritans who established Massachusetts Bay Colony in 1628 founded Harvard College only eight years later. About a dozen other colleges followed in the 18th century, including Yale (1701). Pennsylvania also became a centre of learning.Clifton E. Olmstead (1960), History of Religion in the United States, Prentice-Hall, Englewood Cliffs, N.J., pp. 69–80, 88–89, 114–117, 186–188M. Schmidt, Kongregationalismus, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band III (1959), Tübingen (Germany), col. 1770
Members of mainline Protestant denominations have played leadership roles in many aspects of American life, including politics, business, science, the arts, and education. They founded most of the country's leading institutes of higher education.McKinney, William. "Mainline Protestantism 2000." Annals of the American Academy of Political and Social Science, Vol. 558, Americans and Religions in the Twenty-First Century (July, 1998), pp. 57–66.
Thought and work ethic
The Protestant concept of God and man allows believers to use all their God-given faculties, including the power of reason. That means that they are allowed to explore God's creation and, according to Genesis 2:15, make use of it in a responsible and sustainable way. Thus a cultural climate was created that greatly enhanced the development of the humanities and the sciences.Gerhard Lenski (1963), The Religious Factor: A Sociological Study of Religion's Impact on Politics, Economics, and Family Life, Revised Edition, A Doubleday Anchor Book, Garden City, New York, pp. 348–351 Another consequence of the Protestant understanding of man is that the believers, in gratitude for their election and redemption in Christ, are to follow God's commandments. Industry, frugality, calling, discipline, and a strong sense of responsibility are at the heart of their moral code.Cf. Robert Middlekauff (2005), The Glorious Cause: The American Revolution, 1763–1789, Revised and Expanded Edition, Oxford University Press, ISBN 978-0-19-516247-9, p. 52Jan Weerda, Soziallehre des Calvinismus, in Evangelisches Soziallexikon, 3. Auflage (1958), Stuttgart (Germany), col. 934 In particular, Calvin rejected luxury. Therefore, craftsmen, industrialists, and other businessmen were able to reinvest the greater part of their profits in the most efficient machinery and the most modern production methods that were based on progress in the sciences and technology. As a result, productivity grew, which led to increased profits and enabled employers to pay higher wages. In this way, the economy, the sciences, and technology reinforced each other. The chance to participate in the economic success of technological inventions was a strong incentive to both inventors and investors.Eduard Heimann, Kapitalismus, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band III (1959), Tübingen (Germany), col. 1136–1141Hans Fritz Schwenkhagen, Technik, in Evangelisches Soziallexikon, 3. Auflage, col. 1029–1033Georg Süßmann, Naturwissenschaft und Christentum, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band IV, col. 1377–1382C. Graf von Klinckowstroem, Technik. Geschichtlich, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band VI, col. 664–667 The Protestant work ethic was an important force behind the unplanned and uncoordinated mass action that influenced the development of capitalism and the Industrial Revolution. This idea is also known as the "Protestant ethic thesis."
In a factor analysis of the latest wave of World Values Survey data, Arno Tausch (Corvinus University of Budapest) found that Protestantism comes very close to combining religion with the traditions of liberalism. The Global Value Development Index, calculated by Tausch, relies on World Values Survey dimensions such as trust in the rule of law, no support for the shadow economy, postmaterial activism, support for democracy, non-acceptance of violence, xenophobia and racism, trust in transnational capital and universities, confidence in the market economy, support for gender justice, and engagement in environmental activism.Tausch A. (2015) Towards new maps of global human values, based on World Values Survey (6) data. Corvinus University of Budapest
Episcopalians and Presbyterians, as well as other WASPs, tend to be considerably wealthier and better educated (with more graduate and post-graduate degrees per capita) than most other religious groups in the United States,Irving Lewis Allen, "WASP—From Sociological Concept to Epithet," Ethnicity, 1975 154+ and are disproportionately represented in the upper reaches of American business, law and politics, especially the Republican Party. Many of the wealthiest and most affluent American families, such as the Vanderbilts, the Astors, the Rockefellers, the Du Ponts, the Roosevelts, the Forbeses, the Whitneys, the Morgans and the Harrimans, are Mainline Protestant families.
Science
Columbia University, established by the Church of England
Protestantism has had an important influence on science. According to the Merton Thesis, there was a positive correlation between the rise of English Puritanism and German Pietism on the one hand and early experimental science on the other.Sztompka, 2003 The Merton Thesis has two separate parts: Firstly, it presents a theory that science changes due to an accumulation of observations and improvement in experimental technique and methodology; secondly, it puts forward the argument that the popularity of science in 17th-century England and the religious demography of the Royal Society (English scientists of that time were predominantly Puritans or other Protestants) can be explained by a correlation between Protestantism and the scientific values.Gregory, 1998 Merton focused on English Puritanism and German Pietism as having been responsible for the development of the scientific revolution of the 17th and 18th centuries. He explained that the connection between religious affiliation and interest in science was the result of a significant synergy between the ascetic Protestant values and those of modern science.Becker, 1992 Protestant values encouraged scientific research by allowing science to identify God's influence on the world—his creation—and thus providing a religious justification for scientific research.
According to Scientific Elite: Nobel Laureates in the United States by Harriet Zuckerman, a review of American Nobel prizes awarded between 1901 and 1972, 72% of American Nobel Prize laureates identified a Protestant background.Harriet Zuckerman, Scientific Elite: Nobel Laureates in the United States, New York, The Free Press, 1977, p. 68: Protestants turn up among the American-reared laureates in slightly greater proportion to their numbers in the general population. Thus 72 percent of the seventy-one laureates, but about two thirds of the American population, were reared in one or another Protestant denomination. Overall, 84.2% of all the Nobel Prizes awarded to Americans in Chemistry, 60% in Medicine, and 58.6% in Physics between 1901 and 1972 were won by Protestants.
According to 100 Years of Nobel Prizes (2005), a review of Nobel prizes awarded between 1901 and 2000, 65.4% of Nobel Prize laureates have identified Christianity in its various forms as their religious preference (423 prizes).Baruch A. Shalev, 100 Years of Nobel Prizes (2003), Atlantic Publishers & Distributors, p. 57: between 1901 and 2000 reveals that 654 laureates belong to 28 different religions; most (65.4%) have identified Christianity in its various forms as their religious preference.
While separating Roman Catholic from Protestants among Christians proved difficult in some cases, available information suggests that more Protestants were involved in the scientific categories and more Catholics were involved in the Literature and Peace categories.
Atheists, agnostics, and freethinkers comprise 10.5% of total Nobel Prize winners; but in the category of Literature, these preferences rise sharply to about 35%. A striking fact involving religion is the high number of laureates of the Jewish faith – over 20% of total Nobel Prizes (138), including 17% in Chemistry, 26% in Medicine and Physics, 40% in Economics and 11% in Peace and Literature each. The numbers are especially startling in light of the fact that only some 14 million people (about 0.2% of the world's population) are Jewish. By contrast, only 5 Nobel laureates have been of the Muslim faith – 0.8% of the total number of Nobel Prizes awarded – from a population base of about 1.2 billion (20% of the world's population). Meanwhile, 32% of laureates have identified with Protestantism in its various forms (208 prizes), although Protestants comprise only 11.6% to 13% of the world's population.
Government
In the Middle Ages, the Church and the worldly authorities were closely related. Martin Luther separated the religious and the worldly realms in principle (doctrine of the two kingdoms).Heinrich Bornkamm, Toleranz. In der Geschichte des Christentums in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band VI (1962), col. 937 The believers were obliged to use reason to govern the worldly sphere in an orderly and peaceful way. Luther's doctrine of the priesthood of all believers upgraded the role of laymen in the church considerably. The members of a congregation had the right to elect a minister and, if necessary, to vote for his dismissal (Treatise On the right and authority of a Christian assembly or congregation to judge all doctrines and to call, install and dismiss teachers, as testified in Scripture; 1523).Original German title: Dass eine christliche Versammlung oder Gemeine Recht und Macht habe, alle Lehre zu beurteilen und Lehrer zu berufen, ein- und abzusetzen: Grund und Ursach aus der Schrift Calvin strengthened this basically democratic approach by including elected laymen (church elders, presbyters) in his representative church government.Clifton E. Olmstead, History of Religion in the United States, pp. 4–10 The Huguenots added regional synods and a national synod, whose members were elected by the congregations, to Calvin's system of church self-government. This system was taken over by the other reformed churches.Karl Heussi, Kompendium der Kirchengeschichte, 11. Auflage, p. 325
Politically, Calvin favoured a mixture of aristocracy and democracy. He appreciated the advantages of democracy: "It is an invaluable gift, if God allows a people to freely elect its own authorities and overlords."Quoted in Jan Weerda, Calvin, in Evangelisches Soziallexikon, 3. Auflage (1958), Stuttgart (Germany), col. 210 Calvin also thought that earthly rulers lose their divine right and must be put down when they rise up against God. To further protect the rights of ordinary people, Calvin suggested separating political powers in a system of checks and balances (separation of powers). Thus he and his followers resisted political absolutism and paved the way for the rise of modern democracy.Clifton E. Olmstead, History of Religion in the United States, p. 10 Besides England, the Netherlands were, under Calvinist leadership, the freest country in Europe in the seventeenth and eighteenth centuries. It granted asylum to philosophers like Baruch Spinoza and Pierre Bayle. Hugo Grotius was able to teach his natural-law theory and a relatively liberal interpretation of the Bible.Karl Heussi, Kompendium der Kirchengeschichte, pp. 396–397
Consistent with Calvin's political ideas, Protestants created both the English and the American democracies. In seventeenth-century England, the most important persons and events in this process were the English Civil War, Oliver Cromwell, John Milton, John Locke, the Glorious Revolution, the English Bill of Rights, and the Act of Settlement.Cf. M. Schmidt, England. Kirchengeschichte, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band II (1959), Tübingen (Germany), col. 476–478 Later, the British took their democratic ideals to their colonies, e.g. Australia, New Zealand, and India. In North America, Plymouth Colony (Pilgrim Fathers; 1620) and Massachusetts Bay Colony (1628) practised democratic self-rule and separation of powers.Nathaniel Philbrick (2006), Mayflower: A Story of Courage, Community, and War, Penguin Group, New York, N.Y., ISBN 0-670-03760-5Clifton E. Olmstead, History of Religion in the United States, pp. 65–76Christopher Fennell (1998), Plymouth Colony Legal StructureHanover Historical Texts Project <http://history.hanover.edu/texts/masslib.html> These Congregationalists were convinced that the democratic form of government was the will of God.M. Schmidt, Pilgerväter, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band V (1961), col. 384 The Mayflower Compact was a social contract.Christopher Fennell, Plymouth Colony Legal StructureAllen Weinstein and David Rubel (2002), The Story of America: Freedom and Crisis from Settlement to Superpower, DK Publishing, Inc., New York, N.Y., ISBN 0-7894-8903-1, p. 61
Rights and liberty
Protestant and Enlightenment philosopher John Locke argued for individual conscience, free from state control.
Protestants also took the initiative in advocating for religious freedom. Freedom of conscience had high priority on the theological, philosophical, and political agendas since Luther refused to recant his beliefs before the Diet of the Holy Roman Empire at Worms (1521). In his view, faith was a free work of the Holy Spirit and could, therefore, not be forced on a person.Clifton E. Olmstead, History of Religion in the United States, p. 5 The persecuted Anabaptists and Huguenots demanded freedom of conscience, and they practised separation of church and state.Heinrich Bornkamm, Toleranz. In der Geschichte des Christentums, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band VI (1962), col. 937-938 In the early seventeenth century, Baptists like John Smyth and Thomas Helwys published tracts in defense of religious freedom.H. Stahl, Baptisten, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band I, col. 863 Their thinking influenced John Milton and John Locke's stance on tolerance.G. Müller-Schwefe, Milton, John, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band IV, col. 955Karl Heussi, Kompendium der Kirchengeschichte, p. 398 Under the leadership of Baptist Roger Williams, Congregationalist Thomas Hooker, and Quaker William Penn, respectively, Rhode Island, Connecticut, and Pennsylvania combined democratic constitutions with freedom of religion. These colonies became safe havens for persecuted religious minorities, including Jews.Clifton E. Olmstead, History of Religion in the United States, pp. 99–106, 111–117, 124Edwin S. Gaustad (1999), Liberty of Conscience: Roger Williams in America, Judson Press, Valley Forge, p. 28Hans Fantel (1974), William Penn: Apostle of Dissent, William Morrow & Co., New York, N.Y., pp. 150–153 The United States Declaration of Independence, the United States Constitution, and the American Bill of Rights with its fundamental human rights made this tradition permanent by giving it a legal and political framework.Robert Middlekauff (2005), The Glorious Cause: The American Revolution, 1763–1789, Revised and Expanded Edition, Oxford University Press, New York, N.Y., ISBN 978-0-19-516247-9, pp. 4–6, 49–52, 622–685 The great majority of American Protestants, both clergy and laity, strongly supported the independence movement. All major Protestant churches were represented in the First and Second Continental Congresses.Clifton E. Olmstead, History of Religion in the United States, pp. 192–209 In the nineteenth and twentieth centuries, the American democracy became a model for numerous other countries and regions throughout the world (e.g., Latin America, Japan, and Germany). The strongest link between the American and French Revolutions was Marquis de Lafayette, an ardent supporter of the American constitutional principles. The French Declaration of the Rights of Man and of the Citizen was mainly based on Lafayette's draft of this document.Cf. R. Voeltzel, Frankreich. Kirchengeschichte, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band II (1958), col. 1039 The United Nations Declaration and Universal Declaration of Human Rights also echo the American constitutional tradition.Douglas K. Stevenson (1987), American Life and Institutions, Ernst Klett Verlag, Stuttgart (Germany), p. 34G. Jasper, Vereinte Nationen, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band VI, col. 1328–1329Cf. G. Schwarzenberger, Völkerrecht, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band VI, col. 1420–1422
Democracy, social-contract theory, separation of powers, religious freedom, separation of church and state – these achievements of the Reformation and early Protestantism were elaborated on and popularized by Enlightenment thinkers. Some of the philosophers of the English, Scottish, German, and Swiss Enlightenment – Thomas Hobbes, John Locke, John Toland, David Hume, Gottfried Wilhelm Leibniz, Christian Wolff, Immanuel Kant, and Jean-Jacques Rousseau – had Protestant backgrounds.Karl Heussi, Kompendium der Kirchengeschichte, 11. Auflage, pp. 396–399, 401–403, 417–419 For example, John Locke, whose political thought was based on "a set of Protestant Christian assumptions",Jeremy Waldron (2002), God, Locke, and Equality: Christian Foundations in Locke's Political Thought, Cambridge University Press, New York, N.Y., ISBN 978-0521-89057-1, p. 13 derived the equality of all humans, including the equality of the genders ("Adam and Eve"), from Genesis 1, 26–28. As all persons were created equally free, all governments needed "the consent of the governed."Jeremy Waldron, God, Locke, and Equality, pp. 21–43, 120 These Lockean ideas were fundamental to the United States Declaration of Independence, which also deduced human rights from the biblical belief in creation: "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness."
Some Protestants also advocated for other human rights. For example, torture was abolished in Prussia in 1740, slavery in Britain in 1834 and in the United States in 1865 (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln – against Southern Protestants).Allen Weinstein and David Rubel, The Story of America, pp. 189–309Karl Heussi, Kompendium der Kirchengeschichte, 11. Auflage, pp. 403, 425 Hugo Grotius and Samuel Pufendorf were among the first thinkers who made significant contributions to international law.M. Elze, Grotius, Hugo, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band II, col. 1885–1886H. Hohlwein, Pufendorf, Samuel, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band V, col. 721 The Geneva Convention, an important part of humanitarian international law, was largely the work of Henry Dunant, a reformed pietist. He also founded the Red Cross.R. Pfister, Schweiz. Seit der Reformation, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band V (1961), col. 1614–1615
Social teaching
Protestants have founded hospitals, homes for disabled or elderly people, educational institutions, organizations that give aid to developing countries, and other social welfare agencies.Clifton E. Olmstead, History of Religion in the United States, pp. 484–494H. Wagner, Diakonie, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band I, col. 164–167J.R.H. Moorman, Anglikanische Kirche, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band I, col. 380–381 In the nineteenth century, throughout the Anglo-American world, numerous dedicated members of all Protestant denominations were active in social reform movements such as the abolition of slavery, prison reforms, and woman suffrage.Clifton E.Olmstead, History of Religion in the United States, pp. 461–465Allen Weinstein and David Rubel, The Story of America, pp. 274–275M. Schmidt, Kongregationalismus, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band III, col. 1770 As an answer to the "social question" of the nineteenth century, Germany under Chancellor Otto von Bismarck introduced insurance programs that led the way to the welfare state (health insurance, accident insurance, disability insurance, old-age pensions). To Bismarck this was "practical Christianity".K. Kupisch, Bismarck, Otto von, in Die Religion in Geschichte und Gegenwart, 3. Auflage, Band I, col. 1312–1315P. Quante, Sozialversicherung, in Die Religion in Geschichte und Gegenwart, Band VI, col. 205–206 These programs, too, were copied by many other nations, particularly in the Western world.
Arts
The arts have been strongly inspired by Protestant beliefs.
Martin Luther, Paul Gerhardt, George Wither, Isaac Watts, Charles Wesley, William Cowper, and many other authors and composers created well-known church hymns.
Musicians like Heinrich Schütz, Johann Sebastian Bach, George Frideric Handel, Henry Purcell, Johannes Brahms, and Felix Mendelssohn-Bartholdy composed great works of music.
Prominent painters with Protestant background were, for example, Albrecht Dürer, Hans Holbein the Younger, Lucas Cranach the Elder, Lucas Cranach the Younger, Rembrandt, and Vincent van Gogh.
World literature was enriched by the works of Edmund Spenser, John Milton, John Bunyan, John Donne, John Dryden, Daniel Defoe, William Wordsworth, Jonathan Swift, Johann Wolfgang Goethe, Friedrich Schiller, Samuel Taylor Coleridge, Edgar Allan Poe, Matthew Arnold, Conrad Ferdinand Meyer, Theodor Fontane, Washington Irving, Robert Browning, Emily Dickinson, Emily Brontë, Charles Dickens, Nathaniel Hawthorne, Thomas Stearns Eliot, John Galsworthy, Thomas Mann, William Faulkner, John Updike, and many others.
Catholic and Eastern Orthodox responses
The view of the Roman Catholic Church is that Protestant denominations cannot be considered churches but rather that they are ecclesial communities or specific faith-believing communities because their ordinances and doctrines are not historically the same as the Catholic sacraments and dogmas, and the Protestant communities have no sacramental ministerial priesthood and therefore lack true apostolic succession.Responses to Some Questions Regarding Certain Aspects of the Doctrine on the Church, 29 June 2007, Congregation for the Doctrine of the Faith. According to Bishop Hilarion (Alfeyev) the Eastern Orthodox Church shares the same view on the subject.
Contrary to how the Protestant Reformers were often characterized, the concept of a catholic or universal Church was not brushed aside during the Protestant Reformation. On the contrary, the visible unity of the catholic or universal church was seen by the Protestant reformers as an important and essential doctrine of the Reformation. The Magisterial reformers, such as Martin Luther, John Calvin, and Huldrych Zwingli, believed that they were reforming the Roman Catholic Church, which they viewed as having become corrupted. Each of them took very seriously the charges of schism and innovation, denying these charges and maintaining that it was the Roman Catholic Church that had left them.The Protestant Reformers formed a new and radically different theological opinion on ecclesiology, that the visible Church is "catholic" (lower-case "c") rather than "Catholic" (upper-case "C"). Accordingly, there is not an indefinite number of parochial, congregational or national churches, constituting, as it were, so many ecclesiastical individualities, but one great spiritual republic of which these various organizations form a part, although they each have very different opinions. This was markedly far-removed from the traditional and historic Roman Catholic understanding that the Roman Catholic Church was the one true Church of Christ.
Yet in the Protestant understanding, the visible church is not a genus, so to speak, with so many species under it. It is thus you may think of the State, but the visible church is a totum integrale, it is an empire, with an ethereal emperor, rather than a visible one. The churches of the various nationalities constitute the provinces of this empire; and though they are so far independent of each other, yet they are so one, that membership in one is membership in all, and separation from one is separation from all.... This conception of the church, of which, in at least some aspects, we have practically so much lost sight, had a firm hold of the Scottish theologians of the seventeenth century. James Walker in The Theology of Theologians of Scotland. (Edinburgh: Rpt. Knox Press, 1982) Lecture iv. pp.95–6. In order to justify their departure from the Roman Catholic Church, Protestants often posited a new argument, saying that there was no real visible Church with divine authority, only a spiritual, invisible, and hidden church—this notion began in the early days of the Protestant Reformation.
Wherever the Magisterial Reformation, which received support from the ruling authorities, took place, the result was a reformed national Protestant church envisioned to be a part of the whole invisible church, but disagreeing, in certain important points of doctrine and doctrine-linked practice, with what had until then been considered the normative reference point on such matters, namely the Papacy and central authority of the Roman Catholic Church. The Reformed churches thus believed in some form of Catholicity, founded on their doctrines of the five solas and a visible ecclesiastical organization based on the 14th and 15th century Conciliar movement, rejecting the papacy and papal infallibility in favor of ecumenical councils, but rejecting the latest ecumenical council, the Council of Trent. Religious unity therefore became not one of doctrine and identity but one of invisible character, wherein the unity was one of faith in Jesus Christ, not common identity, doctrine, belief, and collaborative action.
Today there is a growing movement of Protestants, especially of the Reformed tradition, that either rejects or downplays the designation Protestant because of the negative ideas the word invokes in addition to its primary meaning, preferring the designation Reformed, Evangelical or even Reformed Catholic, expressive of what they call a Reformed Catholicity, and defending their arguments from the traditional Protestant confessions.The Canadian Reformed Magazine, 18 (20–27 September, 4–11 October, 18, 1, 8 November 1969) http://spindleworks.com/library/faber/008_theca.htm
Ecumenism
The Edinburgh Missionary Conference is considered the symbolic starting point of the contemporary ecumenical movement.History – World Council of Churches
The ecumenical movement has had an influence on mainline churches, beginning at least in 1910 with the Edinburgh Missionary Conference. Its origins lay in the recognition of the need for cooperation on the mission field in Africa, Asia and Oceania. Since 1948, the World Council of Churches has been influential, but ineffective in creating a united church. There are also ecumenical bodies at regional, national and local levels across the globe; but schisms still far outnumber unifications. One expression of the ecumenical movement, though not the only one, has been the move to form united churches, such as the Church of South India, the Church of North India, the US-based United Church of Christ, the United Church of Canada, the Uniting Church in Australia and the United Church of Christ in the Philippines, which have rapidly declining memberships. There has been a strong engagement of Orthodox churches in the ecumenical movement, though the reaction of individual Orthodox theologians has ranged from tentative approval of the aim of Christian unity to outright condemnation of the perceived effect of watering down Orthodox doctrine.
A Protestant baptism is held to be valid by the Catholic Church if given with the trinitarian formula and with the intent to baptize. However, as the ordination of Protestant ministers is not recognized, due to the lack of apostolic succession and the disunity from the Catholic Church, all other sacraments (except marriage) performed by Protestant denominations and ministers are not recognized as valid. Therefore, Protestants desiring full communion with the Catholic Church are not re-baptized (although they are confirmed) and Protestant ministers who become Catholics may be ordained to the priesthood after a period of study.
In 1999, representatives of the Lutheran World Federation and the Catholic Church signed the Joint Declaration on the Doctrine of Justification, apparently resolving the conflict over the nature of justification which was at the root of the Protestant Reformation, although Confessional Lutherans reject this statement; this is understandable, since there is no compelling central authority within Protestantism. On 18 July 2006, delegates to the World Methodist Conference voted unanimously to adopt the Joint Declaration.
Spread and demographics
There are more than 900 million Protestants worldwide,Hans J. Hillerbrand: Encyclopedia of Protestantism: 4-volume SetPeter B. Clarke, Peter Beyer: "The World's Religions: Continuities and Transformations"Stephen F. Brown: "Protestantism"Mark A. Noll: "Protestantism: A Very Short Introduction"Diamond, Larry, Plattner, Marc F. and Costopoulos, Philip J. World Religions and Democracy. 2005, page 119 (saying "Not only do Protestants presently constitute 13 percent of the world's population—about 800 million people—but since 1900 Protestantism has spread rapidly in Africa, Asia, and Latin America.") among approximately 2.4 billion Christians.33.39% of ~7.2 billion world population (under the section 'People') In 2010, of the total of more than 800 million, 300 million lived in Sub-Saharan Africa, 260 million in the Americas, 140 million in the Asia-Pacific region, 100 million in Europe and 2 million in the Middle East–North Africa. Protestants account for nearly forty percent of Christians worldwide and more than one tenth of the total human population. Various estimates put the percentage of Protestants in relation to the total number of the world's Christians at 33%, 36%,Protestant Demographics and Fragmentations 36.7%, and 40%, while in relation to the world's population at 11.6% and 13%.
In European countries which were most profoundly influenced by the Reformation, Protestantism still remains the most practiced religion. These include the Nordic countries and the United Kingdom. In other historical Protestant strongholds such as Germany, the Netherlands, Switzerland, Latvia, Estonia and Hungary, it remains one of the most popular religions.Edgar Thorpe: "The Pearson General Knowledge Manual 2012" Although the Czech Republic was the site of one of the most significant pre-Reformation movements,Protestantism in Bohemia and Moravia (Czech Republic) it has only a few Protestant adherents today, mainly due to historical reasons such as the persecution of Protestants by the Catholic Habsburgs,Hana Mastrini: "Frommer's Prague & the Best of the Czech Republic" restrictions during Communist rule, and ongoing secularization. Over the last several decades, religious practice has been declining as secularization has increased. According to a 2012 Eurobarometer study on religiosity in the European Union, Protestants made up 12% of the EU population. According to Pew Research Center, Protestants constituted nearly one fifth (or 17.8%) of the continent's Christian population in 2010. Clarke and Beyer estimate that Protestants constituted 15% of all Europeans in 2009, while Noll claims that less than 12% of them lived in Europe in 2010.
Changes in worldwide Protestantism over the last century have been significant.John Witte, Frank S. Alexander: "The Teachings of Modern Protestantism on Law, Politics, and Human Nature" Since 1900, Protestantism has spread rapidly in Africa, Asia, Oceania and Latin America.Encyclopedia of Protestantism That caused Protestantism to be called a primarily non-Western religion. Much of the growth has occurred after World War II, when decolonization of Africa and abolition of various restrictions against Protestants in Latin American countries occurred. According to one source, Protestants constituted respectively 2.5%, 2% and 0.5% of Latin Americans, Africans and Asians. In 2000, the percentage of Protestants on these continents was 17%, more than 27% and 5.5%, respectively. According to Mark A. Noll, 79% of Anglicans lived in the United Kingdom in 1910, while most of the remainder was found in the United States and across the British Commonwealth. By 2010, 59% of Anglicans were found in Africa. In 2010, more Protestants lived in India than in the UK or Germany, while Protestants in Brazil accounted for as many people as Protestants in the UK and Germany combined. Almost as many lived in each of Nigeria and China as in all of Europe. China is home to the world's largest Protestant minority.
Protestantism is growing in Africa,Study: Christianity grows exponentially in AfricaThe Battle for Latin America's Soul Asia,In China, Protestantism's Simplicity Yields More Converts Than Catholicism Latin America,Evangelicals rise in Latin America and Oceania, while declining in Anglo AmericaAmerica's Changing Religious Landscape, by Pew Research Center, 12 May 2015 and Europe,Loek Halman, Ole Riis: "Religion in a Secularizing Society: The Europeans' Religion at the End of the 20th Century" with some exceptions such as France,Religious Newcomers and the Nation State: Political Culture and Organized Religion in France and the Netherlands where it was eradicated after the abolition of the Edict of Nantes by the Edict of Fontainebleau and the following persecution of Huguenots, but now is claimed to be stable in number or even growing slightly. According to some, Russia is another country to see a Protestant revival.Protestantism in Postsoviet Russia: An Unacknowledged Triumph
In 2010, the largest Protestant denominational families were historically Pentecostal denominations (10.8%), Anglican (10.6%), Lutheran (9.7%), Baptist (9%), United and uniting churches (unions of different denominations) (7.2%), Presbyterian or Reformed (7%), Methodist (3.4%), Adventist (2.7%), Congregationalist (0.5%), Brethren (0.5%), The Salvation Army (0.3%) and Moravian (0.1%). Other denominations accounted for 38.2% of Protestants.
The United States is home to approximately 20% of Protestants. According to a 2012 study, the Protestant share of the U.S. population dropped to 48%, thus ending its status as the religion of the majority for the first time."Nones" on the Rise: One-in-Five Adults Have No Religious Affiliation, Pew Research Center (The Pew Forum on Religion & Public Life), 9 October 2012US Protestants no longer a majority – studyFor the first time ever, Protestants are not the majority in U.S. – due to rising number of Americans with 'no religion' The decline is attributed mainly to the dropping membership of the Mainline Protestant churches,Benton Johnson, Dean R. Hoge & Donald A. Luidens "Mainline Churches: The Real Reason for Decline" while Evangelical Protestant and Black churches are stable or continue to grow.
By 2050, Protestantism is projected to rise to slightly more than half of the world's total Christian population.Johnstone, Patrick, "The Future of the Global Church: History, Trends and Possibilities", p. 100, fig 4.10 & 4.11 According to other experts such as Hans J. Hillerbrand, Protestants will be as numerous as Catholics.Hillerbrand, Hans J., "Encyclopedia of Protestantism: 4-volume Set", p. 1815, "Observers carefully comparing all these figures in the total context will have observed the even more startling finding that for the first time ever in the history of Protestantism, Wider Protestants will by 2050 have become almost exactly as numerous as Roman Catholics – each with just over 1.5 billion followers, or 17 percent of the world, with Protestants growing considerably faster than Catholics each year."
According to Mark Jürgensmeyer of the University of California, popular Protestantism is the most dynamic religious movement in the contemporary world, alongside resurgent Islam.
See also
Anti-Catholicism
Anti-Protestantism
Criticism of the Catholic Church
European wars of religion
Islamic Protestantism
Jehovah's Witnesses
Latter Day Saint movement
Messianic Judaism
Protestantism and Islam
Restorationism
Stone-Campbell Restoration Movement
Unitarianism
Universalism
Unitarian Universalism
Oneness Pentecostalism
The New Church
Christadelphians
Notes
References
Further reading
Cook, Martin L. (1991). The Open Circle: Confessional Method in Theology. Minneapolis, Minn.: Fortress Press. xiv, 130 p. N.B.: Discusses the place of Confessions of Faith in Protestant theology, especially in Lutheranism. ISBN 0-8006-2482-3
Dillenberger, John, and Claude Welch (1988). Protestant Christianity, Interpreted through Its Development. Second ed. New York: Macmillan Publishing Co. ISBN 0-02-329601-1
Giussani, Luigi (1969), trans. Damian Bacich (2013). American Protestant Theology: A Historical Sketch. Montreal: McGill-Queens UP.
Nash, Arnold S., ed. (1951). Protestant Thought in the Twentieth Century: Whence & Whither? New York: Macmillan Co.
External links
Protestantism (Encyclopedia.com)
"Protestantism" from the 1917 Catholic Encyclopedia''
The Historyscoper
World Council of Churches World body for mainline Protestant churches
Category:Christian terminology
Russian Soviet Federative Socialist Republic
Russia, officially known as the Russian Soviet Federative Socialist Republic (Russian SFSR or RSFSR) and the Russian Federation,See for example, the log of the meeting of the Supreme Soviet of the USSR on February 19, 1954 (in Russian) commonly referred to as Soviet Russia,Declaration of Rights of the laboring and exploited people (original VTsIK variant, III Congress revision), article I was a sovereign state in 1917–22 and 1991–93, the largest, most populous, and most economically developed republic of the Soviet Union in 1922–91, and a sovereign part of the Soviet Union with its own legislation in 1990–91.The Free Dictionary Russian Soviet Federated Socialist Republic. Encyclopedia2.thefreedictionary.com. Retrieved on 22 June 2011. The Republic comprised sixteen autonomous republics, five autonomous oblasts, ten autonomous okrugs, six krais, and forty oblasts. Russians formed the largest ethnic group. The capital of the Russian SFSR was Moscow and the other major urban centers included Leningrad, Novosibirsk, Yekaterinburg, Nizhny Novgorod and Samara.
The Russian Soviet Republic was proclaimed on November 7, 1917 (October Revolution) as a sovereign state and the world's first constitutionally socialist state with the ideology of Communism. The first Constitution was adopted in 1918. In 1922 the Russian SFSR signed the Treaty on the Creation of the USSR.
The economy of Russia became heavily industrialized, accounting for about two-thirds of the electricity produced in the USSR. It was, by 1961, the third largest producer of petroleum due to new discoveries in the Volga-Urals region and Siberia, trailing only the United States and Saudi Arabia. In 1974, there were 475 institutes of higher education in the republic providing education in 47 languages to some 23,941,000 students. A network of territorially organized public-health services provided health care. After 1985, the restructuring policies of the Gorbachev administration relatively liberalised the economy, which had become stagnant since the late 1970s, with the introduction of non-state owned enterprises such as cooperatives. The effects of market policies led to the failure of many enterprises and total instability by 1990.
On June 12, 1990, the Congress of People's Deputies adopted the Declaration of State Sovereignty. On June 12, 1991, Boris Yeltsin was elected the first President. On December 8, 1991, the heads of Russia, Ukraine and Belarus signed the Belavezha Accords. The agreement declared the dissolution of the USSR by its founder states (i.e., denunciation of the 1922 Treaty on the Creation of the USSR) and established the Commonwealth of Independent States (CIS). On December 12, the agreement was ratified by the Russian Parliament; the Russian SFSR thereby denounced the Treaty on the Creation of the USSR and de facto declared Russia's independence from the USSR.
On December 25, 1991, following the resignation of Mikhail Gorbachev as president of the Soviet Union, the Russian SFSR was renamed the Russian Federation, re-establishing it as a sovereign state. On December 26, 1991, the USSR dissolved itself by a vote of the Soviet of Nationalities, which by that time was the only functioning house of the Supreme Soviet (the other house, the Soviet of the Union, had already lost its quorum after the recall of its members by the union republics). After the dissolution of the USSR, Russia declared that it assumed the rights and obligations of the dissolved central Soviet government, including UN membership.
The new Russian constitution, adopted on December 12, 1993 after a constitutional crisis, abolished the Soviet system of government in its entirety.
Nomenclature
Under the leadership of Vladimir Lenin, the Bolsheviks established the Soviet state on November 7, 1917, immediately after the Russian Provisional Government, which governed the Russian Republic, was overthrown during the October Revolution. Initially, the state did not have an official name and was not recognized by neighboring countries for five months. Meanwhile, anti-Bolsheviks coined the mocking label "Sovdepia" for the nascent state of the "Soviets of Workers' and Peasants' Deputies".
On January 25, 1918 the third meeting of the All-Russian Congress of Soviets renamed the unrecognized state the Soviet Russian Republic.Declaration on the rights of working and exploited people. Hist.msu.ru. Retrieved on June 22, 2011. The Treaty of Brest-Litovsk was signed on March 3, 1918, giving away much of the land of the former Russian Empire to Germany in exchange for peace during the rest of World War I. On July 10, 1918, the Russian Constitution of 1918 renamed the country the Russian Socialist Federative Soviet Republic.Soviet Russia information. Russians.net (August 23, 1943). Retrieved on June 22, 2011. By 1918, during the Russian Civil War, several states within the former Russian Empire seceded, reducing the size of the country even more.
Internationally, in 1920, the RSFSR was recognized as an independent state only by Estonia, Finland, Latvia and Lithuania in the Treaty of Tartu and by the short-lived Irish Republic.Carr, EH The Bolshevik Revolution 1917–23, vol 3 Penguin Books, London, 4th reprint (1983), pp. 257–258. The draft treaty was published for propaganda purposes in the 1921 British document Intercourse between Bolshevism and Sinn Féin (Cmd 1326).
On December 30, 1922, with the creation of the Soviet Union, Russia became one of the four founding republics of the Union of Soviet Socialist Republics. The final Soviet name for the republic, the Russian Soviet Federative Socialist Republic, was adopted in the Soviet Constitution of 1936. By that time, Soviet Russia had gained roughly the same borders as the old Tsardom of Russia before the Great Northern War of 1700.
For most of the Soviet Union's existence, it was commonly referred to as "Russia," even though technically "Russia" was only one republic within the larger union—albeit by far the largest, most powerful and most highly developed.
On December 25, 1991, following the collapse of the Soviet Union (officially on 26 December), the republic was renamed the Russian Federation, which it remains to this day.Chronicle of Events. Marxistsfr.org. Retrieved on June 22, 2011. This name and "Russia" were specified as the official state names in the April 21, 1992, amendment to the existing constitution and were retained as such in the 1993 Constitution of Russia.
Geography
At a total of 17,125,200 km² (6,612,100 sq mi), the Russian S.F.S.R. was the largest of the Soviet Union's fifteen republics, with its southerly neighbor, the Kazakh S.S.R., being second.
The international borders of the RSFSR touched Poland on the west; Norway and Finland on the northwest; and to its southeast were the Democratic People's Republic of Korea, Mongolian People's Republic, and the People's Republic of China. Within the Soviet Union, the RSFSR bordered the Ukrainian, Belarusian, Estonian, Latvian and Lithuanian SSRs to its west and Azerbaijan, Georgian and Kazakh SSRs to the south.
Roughly 70% of the area in the RSFSR consisted of broad plains, with mountainous tundra regions mainly concentrated in the east. The area is rich in mineral resources, including petroleum, natural gas, and iron ore.
History
Early years (1917–1920)
The Soviet government first came to power on November 7, 1917, immediately after the Russian Provisional Government, which governed the Russian Republic, was overthrown in the October Revolution. The state it governed, which did not have an official name, would be unrecognized by neighboring countries for another five months.
On January 25, 1918, at the third meeting of the All-Russian Congress of Soviets, the unrecognized state was renamed the Soviet Russian Republic. On March 3, 1918, the Treaty of Brest-Litovsk was signed, giving away much of the land of the former Russian Empire to Germany, in exchange for peace in World War I. On July 10, 1918, the Russian Constitution of 1918 renamed the country the Russian Socialist Federative Soviet Republic. By 1918, during the Russian Civil War, several states within the former Russian Empire had seceded, reducing the size of the country even more.
The RSFSR was recognized as an independent state internationally by only Estonia, Finland, Latvia, and Lithuania, in the Treaty of Tartu in 1920.
1920s
The Russian SFSR in 1922.
The Russian SFSR in 1924.
The Russian SFSR in 1929.
On December 30, 1922, the First Congress of the Soviets of the USSR approved the Treaty on the Creation of the USSR, by which Russia was united with the Ukrainian Soviet Socialist Republic, the Byelorussian Soviet Socialist Republic, and the Transcaucasian Soviet Federal Socialist Republic into a single federal state, the Soviet Union. The treaty was later included in the 1924 Soviet Constitution, adopted on January 31, 1924 by the Second Congress of Soviets of the USSR.
Paragraph 3 of Chapter 1 of the 1925 Constitution of the RSFSR stated the following:Constitution (Basic Law) of the Russian Socialist Federative Soviet Republic (approved by Twelfth All-Russian Congress of Soviets on May 11, 1925).
By the will of the peoples of the Russian Socialist Federative Soviet Republic, who decided on the formation of the Union of Soviet Socialist Republics during the Tenth All-Russian Congress of Soviets, the Russian Socialist Federative Soviet Republic, being a part of the Union of Soviet Socialist Republics, devolves to the Union the powers which according to Article 1 of the Constitution of the Union of Soviet Socialist Republics are included within the scope of responsibilities of the government bodies of the Union of Soviet Socialist Republics.
1930s
The Russian SFSR in 1936.
Many regions in Russia were affected by the Soviet famine of 1932–1933: Volga; Central Black Soil Region; North Caucasus; the Urals; the Crimea; part of Western Siberia; and the Kazak ASSR. With the adoption of the 1936 Soviet Constitution on December 5, 1936, the size of the RSFSR was significantly reduced. The Kazakh ASSR and Kirghiz ASSR were transformed into the Kazakh and Kirghiz Soviet Socialist Republics. The Karakalpak Autonomous Socialist Soviet Republic was transferred to the Uzbek SSR.
The final name for the republic during the Soviet era was adopted by the Russian Constitution of 1937, which renamed it the Russian Soviet Federative Socialist Republic.
1940s
The Russian SFSR in 1940.
In 1943, the Karachay Autonomous Oblast was dissolved by Joseph Stalin, when the Karachays were exiled to Central Asia for their alleged collaboration with the Germans, and the territory was incorporated into the Georgian SSR.
On March 3, 1944, on the orders of Stalin, the Chechen-Ingush ASSR was disbanded and its population forcibly deported on accusations of collaboration with the invaders and separatism. The territory of the ASSR was divided between other administrative units of the Russian SFSR and the Georgian SSR.
On October 11, 1944, the Tuvan People's Republic joined the Russian SFSR as the Tuvan Autonomous Oblast, in 1961 becoming an Autonomous Soviet Socialist Republic.
After reconquering Estonia and Latvia in 1944, the Russian SFSR annexed their easternmost territories around Ivangorod and within the modern Pechorsky and Pytalovsky Districts in 1944–1945.
At the end of World War II Soviet troops occupied southern Sakhalin Island and the Kuril Islands, making them part of the RSFSR. The status of the southernmost Kurils remains in dispute with Japan.
On April 17, 1946, the Kaliningrad Oblast — the northern portion of the former German province of East Prussia—was annexed by the Soviet Union and made part of the Russian SFSR.
1950s
After the death of Joseph Stalin on March 5, 1953, Georgy Malenkov became the new leader of the USSR.
In January 1954, Malenkov transferred Crimea from the Russian SFSR to the Ukrainian SSR.
On February 8, 1955, Malenkov was officially demoted to deputy Prime Minister. As First Secretary of the Central Committee of the Communist Party, Nikita Khrushchev's authority was significantly enhanced by Malenkov's demotion.
On January 9, 1957, Karachay Autonomous Oblast and Chechen-Ingush Autonomous Soviet Socialist Republic were restored by Khrushchev and they were transferred from the Georgian SSR back to the Russian SFSR.
The Karelo-Finnish SSR was transferred back to the RSFSR as the Karelian ASSR in 1956.
1960s–1980s
In 1964, Nikita Khrushchev was removed from power and replaced with Leonid Brezhnev. Under his rule, the Russian SFSR and the rest of the Soviet Union went through an era of stagnation. Even after Brezhnev died in 1982, the era did not end until Mikhail Gorbachev took power in March 1985 and introduced liberal reforms in Soviet society.
Early 1990s
Flag adopted by the Russian SFSR national parliament in 1991.
On May 29, 1990, at his third attempt, Boris Yeltsin was elected the chairman of the Supreme Soviet of the Russian SFSR. The Congress of People's Deputies of the Republic adopted the Declaration of State Sovereignty of the Russian SFSR on June 12, 1990, which was the beginning of the "War of Laws", pitting the Soviet Union against the Russian Federation and other constituent republics.
On March 17, 1991, an all-Russian referendum created the post of President of the RSFSR. On June 12, Boris Yeltsin was elected President of Russia by popular vote. During an unsuccessful coup attempt on August 19–21, 1991 in Moscow, the capital of the Soviet Union and Russia, President of Russia Yeltsin strongly supported the President of the Soviet Union, Mikhail Gorbachev.
After the failure of GKChP, in the presence of Gorbachev, on August 23, 1991, Yeltsin signed a decree suspending all activity by the Communist Party of the Russian SFSR in the territory of Russia.Decree of the President of the Russian SFSR of August 23, 1991 No. 79 On November 6, he went further, banning the Communist Parties of the USSR and the RSFSR from the territory of the RSFSR.Decree of the President of the Russian SFSR 06.11. 1991 N169 "On activity of the CPSU and the Communist Party of the Russian SFSR"
On December 8, 1991, at Viskuli near Brest (Belarus), the President of the Russian SFSR and the heads of the Byelorussian SSR and Ukrainian SSR signed the "Agreement on the Establishment of the Commonwealth of Independent States" (known in the media as the Belavezha Accords). The document, consisting of a preamble and fourteen articles, stated that the Soviet Union had ceased to exist as a subject of international law and geopolitical reality. However, based on the historical community of peoples and the relations between them, given the bilateral treaties, the desire for a democratic rule of law, and the intention to develop their relations on the basis of mutual recognition and respect for state sovereignty, the parties agreed to the formation of the Commonwealth of Independent States. On December 12, the agreement was ratified by the Supreme Soviet of the Russian SFSR by an overwhelming majority: 188 votes for, 6 against, 7 abstentions. On the same day, the Supreme Soviet of the Russian SFSR denounced the Treaty on the Creation of the USSR and recalled all Russian deputies from the Supreme Soviet of the Soviet Union. The legality of this act is the subject of debate because, according to the 1978 Constitution (Basic Law) of the Russian SFSR, the Russian Supreme Soviet had no right to do so.The Russian SFSR had the constitutional right to "freely secede from the Soviet Union" (art. 69 of the RSFSR Constitution, Article 72 of the USSR Constitution), but according to USSR laws 1409-I (enacted on April 3, 1990) and 1457-I (enacted on April 26, 1990) this could be done only by a referendum and only if two-thirds of all registered voters of the republic had supported that motion. No special referendum on secession from the USSR was held in the RSFSR. However, by this time the Soviet government had been rendered more or less impotent and was in no position to object. Although the December 12 vote is sometimes reckoned as the moment that the RSFSR seceded from the collapsing Soviet Union, this is not the case. It appears that the RSFSR took the line that it was not possible to secede from an entity that no longer existed.
On December 24, Yeltsin informed the Secretary-General of the United Nations that, by agreement of the member states of the CIS, the Russian Federation would assume the membership of the Soviet Union in all UN organs (including permanent membership in the UN Security Council). Thus, Russia is considered to be an original member of the UN (since October 24, 1945) along with Ukraine (Ukrainian SSR) and Belarus (Byelorussian SSR). On December 25, just hours after Gorbachev resigned as president of the Soviet Union, the Russian SFSR was renamed the Russian Federation (Russia), reflecting that it was now a sovereign state, with Yeltsin assuming the presidency.Supreme Soviet of the Russian SFSR approved the Law of the RSFSR #2094-I of December 25, 1991 "On renaming of the Russian Soviet Federative Socialist Republic" // Congress of People's Deputies of the Russian SFSR and Supreme Soviet of the Russian SFSR Daily. – 1992. – № 2. – Article 62 That same night, the Soviet flag was lowered and replaced with the tricolor. The Soviet Union officially ceased to exist the next day. The change was originally published on January 6, 1992 (Rossiyskaya Gazeta). According to the law, the old name of the RSFSR was permitted for official business (forms, seals and stamps) during 1992.
Russia made a significant turn toward developing a market economy by implementing basic tenets such as market-determined prices. Two fundamental and interdependent goals, macroeconomic stabilization and economic restructuring, marked the transition from central planning to a market-based economy. The former entailed implementing fiscal and monetary policies that promote economic growth in an environment of stable prices and exchange rates. The latter required establishing the commercial and institutional entities (banks, private property, and commercial legal codes) that permit the economy to operate efficiently. Opening domestic markets to foreign trade and investment, thus linking the economy with the rest of the world, was an important aid in reaching these goals. The Gorbachev regime had failed to address these fundamental goals. At the time of the Soviet Union's demise, the Yeltsin government of the Russian Republic had begun to attack the problems of macroeconomic stabilization and economic restructuring. By mid-1996, the results were mixed.
The struggle for the center of power in post-Soviet Russia and for the nature of the economic reforms culminated in a political crisis and bloodshed in the fall of 1993. Yeltsin, who represented a course of radical privatization, was opposed by the parliament. Confronted with opposition to the presidential power of decree and threatened with impeachment, he "dissolved" the parliament on September 21, in contravention of the existing constitution, and ordered new elections and a referendum on a new constitution. The parliament then declared Yeltsin deposed and appointed Aleksandr Rutskoy acting president on September 22. Tensions built quickly, and matters came to a head after street riots on October 2–October 3. On October 4, Yeltsin ordered Special Forces and elite army units to storm the parliament building, the "White House" as it is called. With tanks thrown against the small-arms fire of the parliamentary defenders, the outcome was not in doubt. Rutskoy, Ruslan Khasbulatov, and the other parliamentary supporters surrendered and were immediately arrested and jailed. The official count was 187 dead, 437 wounded (with several men killed and wounded on the presidential side).
These events brought an end to Russia's first constitutional period, which had been defined by the much-amended constitution adopted by the Supreme Soviet of the Russian Soviet Federative Socialist Republic in 1978. A new post-Soviet constitution, creating a strong presidency, was approved by referendum on December 12, 1993.
Government
The Government was known officially as the Council of People's Commissars (1917–1946), Council of Ministers (1946–1978) and Council of Ministers–Government (1978–1991). The first government was headed by Vladimir Lenin as "Chairman of the Council of People's Commissars of the Russian SFSR" and the last by Boris Yeltsin as both head of government and head of state under the title "President".
The Russian SFSR was controlled by the Communist Party of the Soviet Union, until the abortive 1991 August coup, which prompted President Yeltsin to suspend the recently created Communist Party of the Russian Soviet Federative Socialist Republic.
Autonomous Soviet Socialist Republics (ASSRs) within the Russian SFSR
Turkestan ASSR – Formed on April 30, 1918, on the territory of the former Turkestan General-Governorate. As part of the delimitation programme of Soviet Central Asia, the Turkestan ASSR along with the Khorezm SSR and the Bukharan PSR were disbanded on October 27, 1924, and in their place came the Union republics of Turkmen SSR and Uzbek SSR. The latter contained the Tajik ASSR until December 1929 when it too became a full Union republic, the Tajik SSR. The RSFSR retained the newly formed Kara-Kirghiz and the Kara-Kalpak Autonomous Oblasts. The latter was part of the Kirgiz, then the Kazak ASSR until 1930, when it was directly subordinated to Moscow.
Bashkir ASSR – Formed on March 23, 1919 from several northern districts of the Orenburg Guberniya populated by Bashkirs. On October 11, 1990, it declared its sovereignty, as the Bashkir SSR, which was renamed in 1992 the Republic of Bashkortostan.
Tatar ASSR – Formed on May 27, 1920 on the territory of the western two-thirds of the Kazan Governorate populated by Tatars. On October 30, 1990, it declared sovereignty as the Republic of Tatarstan, and on October 18, 1991 it declared its independence. The Russian constitutional court overturned the declaration on March 13, 1992. In February 1994, a separate agreement was reached with Moscow on the status of Tatarstan as an associate state of Russia with confederate status.
Kirgiz ASSR – Formed on August 26, 1920, from the Ural, Turgay, and Semipalatinsk Oblasts, and parts of Transcaspia, the Bukey Horde and Orenburg Guberniya populated by Kirgiz-Kaysaks (a former name of the Kazakh people). It was further enlarged in 1921 upon gaining land from Omsk Guberniya, and again in 1924 from parts of Jetysui Guberniya and the Syr Darya and Samarkand Oblasts. On April 19, 1925, it was renamed the Kazak ASSR (see below).
Mountain ASSR – Formed on January 20, 1921, after the Bolshevik Red Army evicted the short-lived Mountainous Republic of the Northern Caucasus. It was initially composed of several national districts; one by one these left the republic until November 7, 1924, when what remained of the republic was partitioned into the Ingush Autonomous Oblast, the North Ossetian Autonomous Oblast and the Sunzha Cossack district (all subordinated to the North Caucasus Kray).
Dagestan ASSR – Formed on January 20, 1921, from the former Dagestan Oblast. On September 17, 1991, it declared sovereignty as the Dagestan SSR.
Crimean ASSR – Formed on October 18, 1921, on the territory of the Crimean peninsula, following the Red Army's eviction of Baron Wrangel's army, which ended the Russian Civil War in Europe. On May 18, 1944, it was reduced to the status of an oblast, alongside the deportation of the Crimean Tatars as collective punishment for alleged collaboration with the Nazi occupation regime in the Taurida Subdistrict. On February 19, 1954, it was transferred to the Ukrainian SSR. Re-established on February 12, 1991, it declared sovereignty on September 4 of that year. On May 5, 1992, it declared independence as the Republic of Crimea; on May 13, the Verkhovna Rada of Ukraine overturned the declaration but compromised on an Autonomous Republic of Crimea within Ukraine. After the 2014 Ukrainian revolution, an internationally disputed referendum and Russian military intervention, Crimea was annexed by Russia in March 2014.
Yakut ASSR – Formed on February 16, 1922, upon the elevation of the Yakut Autonomous Oblast into an ASSR. On September 27, 1990, it declared sovereignty as the Yakut-Sakha Soviet Socialist Republic. Since December 21, 1991, it has been known as the Republic of Sakha (Yakutia).
Buryat ASSR – Formed on March 30, 1923 through the merger of the Mongol-Buryat Autonomous Oblast of the RSFSR and the Buryat-Mongol Autonomous Oblast of the Far Eastern Republic. Until July 7, 1958, it was known as the Mongol-Buryat ASSR. On March 27, 1991, it became the Republic of Buryatia.
Karelian ASSR – Formed on July 23, 1923, when the Karelian Labor Commune was integrated into the RSFSR's administrative structure. On March 31, 1940, it was elevated into a full Union republic as the Karelo-Finnish SSR. On July 16, 1956, it was downgraded to the status of an ASSR and re-subordinated to the RSFSR. It declared sovereignty on October 13, 1991, as the Republic of Karelia.
Volga German ASSR – Formed on December 19, 1924, upon elevation of the Volga German Autonomous Oblast into an ASSR. On August 28, 1941, upon the deportation of Volga Germans to Central Asia, the ASSR was disbanded. The territory was partitioned between the Saratov and Stalingrad Oblasts.
Kazak ASSR was formed on April 19, 1925, when the first Kirgiz ASSR was renamed and partitioned. Upon the ratification of the new Soviet constitution, the ASSR was elevated into a full Union Republic on December 3, 1936. On October 25, 1990, it declared sovereignty and on December 16, 1991 its independence as the Republic of Kazakhstan.
Chuvash ASSR – Formed on April 21, 1925, upon the elevation of the Chuvash Autonomous Oblast into an ASSR. It declared sovereignty on October 26, 1990, as the Chuvash SSR.
Kirghiz ASSR was formed on February 1, 1926 upon elevation of the Kirghiz Autonomous Oblast. Upon the ratification of the new Soviet constitution, the ASSR was elevated into a full Union Republic on December 3, 1936. On December 12, 1990, it declared sovereignty as the Republic of Kyrgyzstan and on August 31, 1991 its independence.
Kara-Kalpak ASSR – Formed on March 20, 1932, upon elevation of the Kara-Kalpak Autonomous Oblast into the Kara-Kalpak ASSR; from December 5, 1936, a part of the Uzbek SSR. In 1964, it was renamed the Karakalpak ASSR. It declared sovereignty on December 14, 1990.
Mordovian ASSR – Formed on December 20, 1934 upon the elevation of Mordovian Autonomous Oblast into an ASSR. It declared sovereignty on December 13, 1990 as the Mordovian SSR. Since January 25, 1991 it has been known as the Republic of Mordovia.
Udmurt ASSR was formed on December 28, 1934 upon the elevation of Udmurt Autonomous Oblast into an ASSR. It declared sovereignty on September 20, 1990. Since October 11, 1991 it has been known as the Udmurt Republic.
Kalmyk ASSR was formed on October 20, 1935, upon the elevation of the Kalmyk Autonomous Oblast into an ASSR. On December 27, 1943, upon the deportation of the Kalmyks, the ASSR was disbanded and split between the newly established Astrakhan Oblast and parts adjoined to Rostov Oblast, Krasnodar Krai, and Stavropol Krai. On January 9, 1957, the Kalmyk Autonomous Oblast was re-established in its present borders, first as a part of Stavropol Krai and from July 19, 1958, as the Kalmyk ASSR. On October 18, 1990, it declared sovereignty as the Kalmyk SSR.
Kabardino-Balkar ASSR – Formed on 5 December 1936, upon the departure of the Kabardino-Balkar Autonomous Oblast from the North Caucasus Kray. After the deportation of the Balkars on 8 April 1944, the republic was renamed the Kabardin ASSR and parts of its territory were transferred to the Georgian SSR; upon the return of the Balkars, the KBASSR was reinstated on 9 January 1957. On 31 January 1991, the republic declared sovereignty as the Kabardino-Balkar SSR, and from 10 March 1992 it has been known as the Kabardino-Balkarian Republic.
Northern Ossetian ASSR – Formed on 5 December 1936, when the North Caucasus Kray was disbanded and its constituent North Ossetian Autonomous Oblast was elevated into an ASSR. It declared sovereignty on 26 December 1990 as the North Ossetian SSR.
Chechen-Ingush ASSR – Formed on 5 December 1936, when the North Caucasus Kray was disestablished and its constituent Chechen-Ingush Autonomous Oblast was elevated into an ASSR and subordinated to Moscow. Following the mass deportation of the Chechens and Ingush, the ChIASSR was disbanded on 7 March 1944, and the Grozny Okrug was temporarily administered by Stavropol Kray until 22 March, when the territory was partitioned between the North Ossetian and Dagestan ASSRs and the Georgian SSR. The remaining land was merged with Stavropol Kray's Kizlyar district and organised as Grozny Oblast, which existed until 9 January 1957, when the ChIASSR was re-established, though only the original shape of its southern border was retained. It declared sovereignty on 27 November 1990 as the Chechen-Ingush Republic. On 8 June 1991, the 2nd Chechen National Congress proclaimed a separate Chechen Republic (Noxchi-Cho), and on September 6 began a coup which overthrew the local Soviet government. De facto, all authority passed to the self-proclaimed government, which was renamed the Chechen Republic of Ichkeria in early 1993. In response, after a referendum on 28 November 1991, the western Ingush districts were organised into an Ingush Republic, which was officially established on 4 June 1992 by decree of the Russian President as the Republic of Ingushetia. The same decree de jure created a Chechen republic, although it would be established only on 3 June 1994 and carried out partial governance during the First Chechen War. The Khasavyurt Accord again suspended the government on 15 November 1996. The present government of the Chechen Republic was re-established on 15 October 1999.
Komi ASSR – Formed on 5 December 1936 upon the elevation of the Komi (Zyryan) Autonomous Oblast into an ASSR. It declared sovereignty on 23 November 1990 as the Komi SSR, and from 26 May 1992 it has been known as the Republic of Komi.
Mari ASSR – Formed on 5 December 1936 upon the elevation of the Mari Autonomous Oblast into an ASSR. Declared sovereignty on 22 December 1990 as the Mari Soviet Socialist Republic (Mari El).
Tuva ASSR – Formed on 10 October 1961, when the Tuva Autonomous Oblast was elevated into an ASSR. On December 12, 1990, it declared sovereignty as the Soviet Republic of Tyva.
Gorno-Altai ASSR was formed on October 25, 1990, when Gorno-Altai Autonomous Oblast declared sovereignty; since July 3, 1991 it has been known as the Gorno-Altai SSR.
Karachayevo-Cherkessian ASSR was formed on November 17, 1990, when Karachay-Cherkess Autonomous Oblast was elevated into an ASSR and, instead of Stavropol Krai, subordinated directly to the RSFSR. It declared sovereignty on July 3, 1991 as the Karachay-Cherkess SSR.
Culture
National holidays and symbols
The public holidays of the Russian SFSR included Defender of the Fatherland Day (February 23), which honors Russian men, especially those serving in the army; International Women's Day (March 8), which combines the traditions of Mother's Day and Valentine's Day; Spring and Labor Day (May 1); Victory Day; and, like all other Soviet republics, the anniversary of the Great October Socialist Revolution (November 7).
Victory Day is the second most popular holiday in Russia; it commemorates the victory over Nazism in the Great Patriotic War. A huge military parade, hosted by the President of Russia, is annually organised in Moscow on Red Square. Similar parades take place in all major Russian cities and cities with the status Hero city or City of Military Glory.
Matryoshka doll taken apart
Patrioticheskaya Pesnya became the anthem of the Russian SFSR in 1990; before then the anthem shared its music, though not its lyrics, with the Soviet Anthem, and before 1944 The Internationale served as the anthem. The motto Proletarians of all countries, unite! was commonly used and shared with other Soviet republics. The hammer and sickle and the full Soviet coat of arms were still widely seen in Russian cities as part of old architectural decorations until their slow, gradual removal beginning in 1991. Soviet Red Stars are also encountered, often on military equipment and war memorials. The Red Banner continues to be honored, especially the Banner of Victory of 1945.
The Matryoshka doll is a recognizable symbol of the Russian SFSR (and of the Soviet Union as a whole), and the towers of the Moscow Kremlin and Saint Basil's Cathedral in Moscow are the Russian SFSR's main architectural icons. The chamomile is the national flower, while the birch is the national tree. The Russian bear is an animal symbol and a national personification of Russia; though this image has a Western origin, Russians themselves have accepted it. The native Soviet Russian national personification is Mother Russia.
Flag history
References
External links
Full Texts and All Laws Amending Constitutions of the Russian SFSR
Russian Federation; The Whole Republic a Construction Site by D. S. Polyanski.
Full 1918 RSFSR Constitution
Category:Republics of the Soviet Union
Category:Communism in Russia
Category:Former Slavic countries
Category:Former socialist republics
Category:Russian-speaking countries and territories
Category:20th century in Russia
Category:States and territories established in 1917
Category:States and territories disestablished in 1991
Category:States and territories disestablished in 1993
Category:1917 establishments in Russia
Category:1991 disestablishments in the Soviet Union
Category:1993 disestablishments in Russia
Category:Former member states of the United Nations
Category:North Asian countries
Category:Northeast Asian countries
Category:Northern European countries
Category:Eastern European countries
Category:Federal republics | 24,795,561 | 2017-01 |
Israel | Israel, officially known as the State of Israel, is a country in the Middle East, on the southeastern shore of the Mediterranean Sea and the northern shore of the Red Sea. It has land borders with Lebanon to the north, Syria to the northeast, Jordan on the east, the Palestinian territories of the West Bank and Gaza Strip to the east and west, respectively, and Egypt to the southwest. The country contains geographically diverse features within its relatively small area. Israel's financial and technology center is Tel Aviv, while its seat of government and proclaimed capital is Jerusalem, although the state's sovereignty over the city of Jerusalem is internationally unrecognized.The Jerusalem Law states that "Jerusalem, complete and united, is the capital of Israel" and the city serves as the seat of the government, home to the President's residence, government offices, supreme court, and parliament. United Nations Security Council Resolution 478 (20 August 1980; 14–0, U.S. abstaining) declared the Jerusalem Law "null and void" and called on member states to withdraw their diplomatic missions from Jerusalem. The United Nations and all member nations refuse to accept the Jerusalem Law and maintain their embassies in other cities such as Tel Aviv, Ramat Gan, and Herzliya (see the CIA Factbook and Map of Israel). The U.S. Congress subsequently adopted the Jerusalem Embassy Act, which said that the U.S. embassy should be relocated to Jerusalem and that it should be recognized as the capital of Israel. However, the US Justice Department Office of Legal Counsel concluded that the provisions of the act "invade exclusive presidential authorities in the field of foreign affairs and are unconstitutional". Since passage of the act, all presidents serving in office have determined that moving forward with the relocation would be detrimental to U.S. national security concerns and opted to issue waivers suspending any action on this front. The Palestinian Authority sees East Jerusalem as the capital of a future Palestinian state. The city's final status awaits future negotiations between Israel and the Palestinian Authority (see "Negotiating Jerusalem," Palestine–Israel Journal). See Positions on Jerusalem for more information.
On 29 November 1947, the United Nations General Assembly adopted a Partition Plan for Mandatory Palestine. This specified borders for new Arab and Jewish states and an area of Jerusalem which was to be administered by the UN under an international regime. The end of the British Mandate for Palestine was set for midnight on 14 May 1948. That day, David Ben-Gurion, the executive head of the Zionist Organization and president of the Jewish Agency for Palestine, declared "the establishment of a Jewish state in Eretz Israel, to be known as the State of Israel", which would start to function from the termination of the mandate. The borders of the new state were not specified in the declaration.Declaration of Establishment of State of Israel Israel Ministry of Foreign Affairs Neighboring Arab armies invaded the former British mandate on the next day and fought the Israeli forces.The Arab-Israeli War of 1948 (US Department of State, Office of the Historian)"Arab forces joining the Palestinian Arabs in attacking territory in the former Palestinian mandate."Yoav Gelber, Palestine 1948, 2006 — Chap.8 "The Arab Regular Armies' Invasion of Palestine". Israel has since fought several wars with neighboring Arab states, in the course of which it has occupied the West Bank, Sinai Peninsula (1956–57, 1967–82), part of Southern Lebanon (1982–2000), Gaza Strip (1967–2005; still considered occupied after 2005 disengagement) and the Golan Heights. It extended its laws to the Golan Heights and East Jerusalem, but not the West Bank. Efforts to resolve the Israeli–Palestinian conflict have not resulted in peace. However, peace treaties between Israel and both Egypt and Jordan have successfully been signed. Israel's occupation of Gaza, the West Bank and East Jerusalem is the world's longest military occupation in modern times.
The population of Israel, as defined by the Israel Central Bureau of Statistics, was estimated in 2016 to be 8,602,000 people. It is the world's only Jewish-majority state, with 6,430,500 citizens, or 74.8%, being designated as Jewish. The country's second largest group of citizens are Arabs, numbering 1,789,800 people (including the Druze and most East Jerusalem Arabs). The great majority of Israeli Arabs are Sunni Muslims, including significant numbers of semi-settled Negev Bedouins; the rest are Christians and Druze. Other minorities include Arameans, Assyrians, Samaritans, Armenians, Circassians, Dom people, Maronites and Vietnamese. The Black Hebrew Israelites are subject to a slow process of deeper integration, but are still in their majority permanent residents rather than citizens.Leader of African Hebrew Israelites of Jerusalem dies By Jeremy Sharon, 12/28/2014 Israel also hosts a significant population of non-citizen foreign workers and asylum seekers from Africa and Asia, including illegal migrants from Sudan, Eritrea and other Sub-Saharan African countries.
In its Basic Laws, Israel defines itself as a Jewish and democratic state. Israel is a representative democracy with a parliamentary system, proportional representation and universal suffrage. "A current list of liberal democracies includes: Andorra, Argentina, ... , Cyprus, ... , Israel, ..." The prime minister is head of government and the Knesset is the legislature. Israel is a developed country and an OECD member, with the 35th-largest economy in the world by nominal gross domestic product. The country benefits from a highly skilled workforce and is among the most educated countries in the world, with one of the highest percentages of citizens holding a tertiary education degree. The country has the highest standard of living in the Middle East and the third highest in Asia, and has one of the highest life expectancies in the world.
Etymology
The Merneptah Stele (13th century BC). The majority of biblical archeologists translate a set of hieroglyphs as "Israel," the first instance of the name in the record.
Upon independence in 1948, the country formally adopted the name "State of Israel" (Medinat Yisrael) after other proposed historical and religious names including Eretz Israel ("the Land of Israel"), Zion, and Judea, were considered and rejected. In the early weeks of independence, the government chose the term "Israeli" to denote a citizen of Israel, with the formal announcement made by Minister of Foreign Affairs Moshe Sharett.
The names Land of Israel and Children of Israel have historically been used to refer to the biblical Kingdom of Israel and the entire Jewish people respectively. The name "Israel" (Standard Yisraʾel, Isrāʾīl; Septuagint Israēl; 'El(God) persists/rules' though, after Hosea 12:4 often interpreted as "struggle with God"William G. Dever, Did God Have a Wife?: Archaeology and Folk Religion in Ancient Israel, Wm. B. Eerdmans Publishing, 2005 p.186.Geoffrey W. Bromiley, 'Israel,' in International Standard Bible Encyclopedia: E-J,Wm. B. Eerdmans Publishing, 1995 p.907.R. L. Ottley, The Religion of Israel: A Historical Sketch, Cambridge University Press, 2013 pp.31-2 note 5. entry "Jacob".) in these phrases refers to the patriarch Jacob who, according to the Hebrew Bible, was given the name after he successfully wrestled with the angel of the Lord."And he said, Thy name shall be called no more Jacob, but Israel: for as a prince hast thou power with God and with men, and hast prevailed." (Genesis, 32:28, 35:10). See also Hosea 12:5. Jacob's twelve sons became the ancestors of the Israelites, also known as the Twelve Tribes of Israel or Children of Israel. Jacob and his sons had lived in Canaan but were forced by famine to go into Egypt for four generations, lasting 430 years, until Moses, a great-great grandson of Jacob, led the Israelites back into Canaan during the "Exodus". The earliest known archaeological artifact to mention the word "Israel" is the Merneptah Stele of ancient Egypt (dated to the late 13th century BCE).. "The Merneptah Stele ... is arguably the oldest evidence outside the Bible for the existence of Israel as early as the 13th century BCE."
The area is also known as the Holy Land, being holy for all Abrahamic religions including Judaism, Christianity, Islam and the Bahá'í Faith. From 1920 until the Israeli Declaration of Independence of 1948, the whole region was known as Palestine under the British Mandate, rendered in Hebrew as פלשתינה (א״י), that is, Palestine (Eretz Israel). Through the centuries, the territory was known by a variety of other names, including Judea, Samaria, Southern Syria, Syria Palaestina, Kingdom of Jerusalem, Iudaea Province, Coele-Syria, Djahy, and Canaan.
History
Prehistory
The oldest evidence of early humans in the territory of modern Israel, dating to 1.5 million years ago, was found in Ubeidiya near the Sea of Galilee. Other notable Paleolithic sites include the Tabun, Qesem and Manot caves. The oldest fossils of anatomically modern humans found outside Africa are the Skhul and Qafzeh hominids, who lived in northern Israel 120,000 years ago. Around the 10th millennium BCE, the Natufian culture existed in the area.
Antiquity
Map of the Kingdom of Israel, 1020 BCE–930 BCE, as imagined from the Bible narrative
The notion of the "Land of Israel", known in Hebrew as Eretz Yisrael, has been important and sacred to the Jewish people since Biblical times. According to the Torah, God promised the land to the three Patriarchs of the Jewish people."And the Lord thy God will bring thee into the land which thy fathers possessed, and thou shalt possess it; and he will do thee good, and multiply thee above thy fathers." ()."But if ye return unto me, and keep my commandments and do them, though your dispersed were in the uttermost part of the heaven, yet will I gather them from thence, and will bring them unto the place that I have chosen to cause my name to dwell there." (). On the basis of scripture, the period of the three Patriarchs has been placed somewhere in the early 2nd millennium BCE, and the first Kingdom of Israel was established around the 11th century BCE. Subsequent Israelite kingdoms and states ruled intermittently over the next four hundred years, and are known from various extra-biblical sources.. "For a thousand years Jerusalem was the seat of Jewish sovereignty, the household site of kings, the location of its legislative councils and courts."
The first record of the name Israel (as ) occurs in the Merneptah stele, erected for Egyptian Pharaoh Merneptah c. 1209 BCE, "Israel is laid waste and his seed is not."Stager in Coogan 1998, p. 91.
This "Israel" was a cultural and probably political entity of the central highlands, well enough established to be perceived by the Egyptians as a possible challenge to their hegemony, but an ethnic group rather than an organised state;
Ancestors of the Israelites may have included Semites native to Canaan and the Sea Peoples.Miller 1986, pp. 78–9. McNutt says, "It is probably safe to assume that sometime during Iron Age a population began to identify itself as 'Israelite'", differentiating itself from the Canaanites through such markers as the prohibition of intermarriage, an emphasis on family history and genealogy, and religion.McNutt 1999, p. 35.
Villages had populations of up to 300 or 400,McNutt 1999, p. 70.Miller 2005, p. 98. which lived by farming and herding, and were largely self-sufficient;McNutt 1999, p. 72. economic interchange was prevalent.Miller 2005, p. 99. Writing was known and available for recording, even in small sites.Miller 2005, p. 105. The archaeological evidence indicates a society of village-like centres, but with more limited resources and a small population.Lehman in Vaughn 1992, pp. 156–62. Modern scholars see Israel arising peacefully and internally from existing people in the highlands of Canaan.Gnuse 1997, pp.28,31
The Large Stone Structure, archaeological site of ancient Jerusalem
Around 930 BCE, the kingdom split into a southern Kingdom of Judah and a northern Kingdom of Israel. From the middle of the 8th century BCE, Israel came into increasing conflict with the expanding neo-Assyrian empire, which, under Tiglath-Pileser III, first split Israel's territory into several smaller units and then destroyed its capital, Samaria (722 BCE).
An Israelite revolt (724–722 BCE) was crushed after the siege and capture of Samaria by the Assyrian king Sargon II. Sargon's son, Sennacherib, tried and failed to conquer Judah. Assyrian records say he leveled 46 walled cities and besieged Jerusalem, leaving after receiving extensive tribute.column 2 line 61 to column 3 line 49
In 586 BCE King Nebuchadnezzar II of Babylon conquered Judah.
According to the Hebrew Bible, he destroyed Solomon's Temple and exiled the Jews to Babylon. The defeat was also recorded in the Babylonian Chronicles.See http://www.livius.org/cg-cm/chronicles/abc5/jerusalem.html reverse side, line 12. In 538 BCE, Cyrus the Great of Persia conquered Babylon and took over its empire. Cyrus issued a proclamation granting subjugated nations (including the people of Judah) religious freedom (for the original text, which corroborates the biblical narrative only in very broad terms, see the Cyrus Cylinder). According to the Hebrew Bible 50,000 Judeans, led by Zerubabel, returned to Judah and rebuilt the temple. A second group of 5,000, led by Ezra and Nehemiah, returned to Judah in 456 BCE although non-Jews wrote to Cyrus to try to prevent their return.
Classical period
Portion of the Temple Scroll, one of the Dead Sea Scrolls written during the Second Temple period
Under successive Persian rule, the region, divided between the province of Coele-Syria and later the autonomous Yehud Medinata, gradually developed back into an urban society, largely dominated by Judeans. The Greek conquests passed over the region with little resistance or interest. Incorporated into the Ptolemaic and finally the Seleucid empires, the southern Levant was heavily hellenized, building tensions between Judeans and Greeks. The conflict erupted in 167 BCE with the Maccabean Revolt, which succeeded in establishing an independent Hasmonean Kingdom in Judah; it later expanded over much of modern Israel as the Seleucids gradually lost control in the region.
Masada fortress, location of the final battle in the First Jewish–Roman War
The Roman Empire invaded the region in 63 BCE, first taking control of Syria, and then intervening in the Hasmonean civil war. The struggle between pro-Roman and pro-Parthian factions in Judea eventually led to the installation of Herod the Great and consolidation of the Herodian Kingdom as a vassal Judean state of Rome.
With the decline of the Herodian dynasty, Judea, transformed into a Roman province, became the site of a violent struggle of Jews against Greco-Romans, culminating in the Jewish-Roman Wars, which ended in wide-scale destruction, expulsions, and genocide. Jewish presence in the region significantly dwindled after the failure of the Bar Kokhba revolt against the Roman Empire in 132 CE.Oppenheimer, A'haron and Oppenheimer, Nili. Between Rome and Babylon: Studies in Jewish Leadership and Society. Mohr Siebeck, 2005, p. 2. Nevertheless, there was a continuous small Jewish presence and Galilee became its religious center. The Mishnah and part of the Talmud, central Jewish texts, were composed during the 2nd to 4th centuries CE in Tiberias and Jerusalem. The region came to be populated predominantly by Greco-Romans on the coast and Samaritans in the hill-country. Christianity gradually supplanted Roman paganism while the area was under Byzantine rule. Through the 5th and 6th centuries, the dramatic events of the repeated Samaritan revolts reshaped the land, with massive destruction of Byzantine Christian and Samaritan societies and a resulting decrease in population. After the Persian conquest and the installation of a short-lived Jewish Commonwealth in 614 CE, the Byzantine Empire reconquered the country in 628.
Middle Ages and modern history
Kfar Bar'am, an ancient Jewish village, abandoned some time between the 7th–13th centuries AD.Judaism in late antiquity, Jacob Neusner, Bertold Spuler, Hady R Idris, BRILL, 2001, p. 155
In 634–641 CE, the region, including Jerusalem, was conquered by the Arabs who had just recently adopted Islam. Control of the region transferred between the Rashidun Caliphs, Umayyads, Abbasids, Fatimids, Seljuks, Crusaders, and Ayyubids throughout the next six centuries.
During the siege of Jerusalem by the First Crusade in 1099, the Jewish inhabitants of the city fought side by side with the Fatimid garrison and the Muslim population who tried in vain to defend the city against the Crusaders. When the city fell, about 60,000 people were massacred, including 6,000 Jews seeking refuge in a synagogue. At this time, a full thousand years after the fall of the Jewish state, there were Jewish communities all over the country. Fifty of them are known and include Jerusalem, Tiberias, Ramleh, Ashkelon, Caesarea, and Gaza.Carmel, Alex. The History of Haifa Under Turkish Rule. Haifa: Pardes, 2002 (ISBN 965-7171-05-9), pp. 16–17 According to Albert of Aachen, the Jewish residents of Haifa were the main fighting force of the city, and "mixed with Saracen [Fatimid] troops", they fought bravely for close to a month until forced into retreat by the Crusader fleet and land army. However, Joshua Prawer expressed doubt over the story, noting that Albert did not attend the Crusades and that such a prominent role for the Jews is not mentioned by any other source.
The 15th-century Abuhav synagogue, established by Sephardic Jews in Safed, Northern Israel.The Abuhav Synagogue, Jewish Virtual Library.
In 1165, Maimonides visited Jerusalem and prayed on the Temple Mount, in the "great, holy house."Sefer HaCharedim Mitzvat Tshuva Chapter 3. Maimonides established a yearly holiday for himself and his sons, 6 Cheshvan, commemorating the day he went up to pray on the Temple Mount, and another, 9 Cheshvan, commemorating the day he merited to pray at the Cave of the Patriarchs in Hebron. In 1141 the Spanish-Jewish poet Yehuda Halevi issued a call for Jews to migrate to the Land of Israel, a journey he undertook himself. In 1187 Sultan Saladin, founder of the Ayyubid dynasty, defeated the Crusaders in the Battle of Hattin and subsequently captured Jerusalem and almost all of Palestine. In time, Saladin issued a proclamation inviting Jews to return and settle in Jerusalem, and according to Judah al-Harizi, they did: "From the day the Arabs took Jerusalem, the Israelites inhabited it." Al-Harizi compared Saladin's decree allowing Jews to re-establish themselves in Jerusalem to the one issued by the Persian king Cyrus the Great over 1,600 years earlier.
In 1211, the Jewish community in the country was strengthened by the arrival of a group headed by over 300 rabbis from France and England, among them Rabbi Samson ben Abraham of Sens.Samson ben Abraham of Sens, Jewish Encyclopedia. Nachmanides, the 13th-century Spanish rabbi and recognised leader of Jewry greatly praised the land of Israel and viewed its settlement as a positive commandment incumbent on all Jews. He wrote "If the gentiles wish to make peace, we shall make peace and leave them on clear terms; but as for the land, we shall not leave it in their hands, nor in the hands of any nation, not in any generation."
In 1260, control passed to the Mamluk sultans of Egypt. The country was located between the two centres of Mamluk power, Cairo and Damascus, and only saw some development along the postal road connecting the two cities. Jerusalem, although left without the protection of any city walls since 1219, also saw a flurry of new construction projects centred around the Al-Aqsa Mosque compound on the Temple Mount. In 1266 the Mamluk Sultan Baybars converted the Cave of the Patriarchs in Hebron into an exclusive Islamic sanctuary and banned Christians and Jews from entering, who previously had been able to enter it for a fee. The ban remained in place until Israel took control of the building in 1967.International Dictionary of Historic Places: Middle East and Africa by Trudy Ring, Robert M. Salkin, Sharon La Boda, pp. 336–339
Jews at the Western Wall, 1870s
In 1470, Isaac b. Meir Latif arrived from Italy and counted 150 Jewish families in Jerusalem.
Thanks to Joseph Saragossi, who had arrived in the closing years of the 15th century, Safed and its environs had developed into the largest concentration of Jews in Palestine. With the help of Sephardic immigration from Spain, the Jewish population had increased to 10,000 by the early 16th century.
In 1516, the region was conquered by the Ottoman Empire; it remained under Turkish rule until the end of the First World War, when Britain defeated the Ottoman forces and set up a military administration across the former Ottoman Syria. In 1920 the territory was divided between Britain and France under the mandate system, and the British-administered area which included modern day Israel was named Mandatory Palestine."Mandate for Palestine," Encyclopaedia Judaica, Vol. 11, p. 862, Keter Publishing House, Jerusalem, 1972
Zionism and British mandate
Theodor Herzl, visionary of the Jewish state
Since the existence of the earliest Jewish diaspora, many Jews have aspired to return to "Zion" and the "Land of Israel", "Zionism, the urge of the Jewish people to return to Palestine, is almost as ancient as the Jewish diaspora itself. Some Talmudic statements ... Almost a millennium later, the poet and philosopher Yehuda Halevi ... In the 19th century ..." though the amount of effort that should be spent towards such an aim was a matter of dispute. The hopes and yearnings of Jews living in exile are an important theme of the Jewish belief system. After the Jews were expelled from Spain in 1492, some communities settled in Palestine.. "Jews sought a new homeland here after their expulsions from Spain (1492) ..." During the 16th century, Jewish communities struck roots in the Four Holy Cities—Jerusalem, Tiberias, Hebron, and Safed—and in 1697, Rabbi Yehuda Hachasid led a group of 1,500 Jews to Jerusalem. In the second half of the 18th century, Eastern European opponents of Hasidism, known as the Perushim, settled in Palestine.
The first wave of modern Jewish migration to Ottoman-ruled Palestine, known as the First Aliyah, began in 1881, as Jews fled pogroms in Eastern Europe. Although the Zionist movement already existed in practice, Austro-Hungarian journalist Theodor Herzl is credited with founding political Zionism, "How did Theodor Herzl, an assimilated German nationalist in the 1880s, suddenly in the 1890s become the founder of Zionism?" a movement which sought to establish a Jewish state in the Land of Israel, thus offering a solution to the so-called Jewish Question of the European states, in conformity with the goals and achievements of other national projects of the time. In 1896, Herzl published Der Judenstaat (The Jewish State), offering his vision of a future Jewish state; the following year he presided over the First Zionist Congress.
The Second Aliyah (1904–14) began after the Kishinev pogrom; some 40,000 Jews settled in Palestine, although nearly half of them left eventually. Both the first and second waves of migrants were mainly Orthodox Jews, "As with the First Aliyah, most Second Aliyah migrants were non-Zionist orthodox Jews ..." although the Second Aliyah included socialist groups who established the kibbutz movement. During World War I, British Foreign Secretary Arthur Balfour sent the Balfour Declaration of 1917 to Baron Rothschild (Walter Rothschild, 2nd Baron Rothschild), a leader of the British Jewish community, which stated that Britain intended the creation of a Jewish "national home" within the Palestinian Mandate.
In 1918, the Jewish Legion, a group primarily of Zionist volunteers, assisted in the British conquest of Palestine. Arab opposition to British rule and Jewish immigration led to the 1920 Palestine riots and the formation of a Jewish militia known as the Haganah (meaning "The Defense" in Hebrew), from which the Irgun and Lehi, or the Stern Gang, paramilitary groups later split off.. "During the First and Second Aliyot, there were many Arab attacks against Jewish settlements ... In 1920, Hashomer was disbanded and Haganah ("The Defense") was established." In 1922, the League of Nations granted Britain a mandate over Palestine under terms which included the Balfour Declaration with its promise to the Jews, and with similar provisions regarding the Arab Palestinians. The population of the area at this time was predominantly Arab and Muslim, with Jews accounting for about 11%, and Arab Christians at about 9.5% of the population.
The Third (1919–23) and Fourth Aliyahs (1924–29) brought an additional 100,000 Jews to Palestine. The rise of Nazism and the increasing persecution of Jews in 1930s Europe led to the Fifth Aliyah, with an influx of a quarter of a million Jews. This was a major cause of the Arab revolt of 1936–39 during which the British Mandate authorities alongside the Zionist militias of Haganah and Irgun killed 5,032 Arabs and wounded 14,760, resulting in over ten percent of the adult male Palestinian Arab population killed, wounded, imprisoned or exiled.Khalidi, Walid (1987). From Haven to Conquest: Readings in Zionism and the Palestine Problem Until 1948. Institute for Palestine Studies. ISBN 978-0-88728-155-6 The British introduced restrictions on Jewish immigration to Palestine with the White Paper of 1939. With countries around the world turning away Jewish refugees fleeing the Holocaust, a clandestine movement known as Aliyah Bet was organized to bring Jews to Palestine. By the end of World War II, the Jewish population of Palestine had increased to 33% of the total population.
After World War II
UN Map, "Palestine plan of partition with economic union"
After World War II, Britain found itself in intense conflict with the Jewish community over Jewish immigration limits, as well as continued conflict with the Arab community over those limits. The Haganah joined Irgun and Lehi in an armed struggle against British rule. At the same time, hundreds of thousands of Jewish Holocaust survivors and refugees sought a new life far from their destroyed communities in Europe. The Yishuv attempted to bring these refugees to Palestine, but many were turned away or rounded up and placed in detention camps in Atlit and Cyprus by the British.
On 22 July 1946, Irgun attacked the British administrative headquarters for Palestine, which was housed in the southern wingThe Terrorism Ahead: Confronting Transnational Violence in the Twenty-First | By Paul J. Smith | M.E. Sharpe, 10 Sep 2007 | pg 27 of the King David Hotel in Jerusalem.Encyclopedia of Terrorism, Harvey W. Kushner, Sage, 2003 p.181Encyclopædia Britannica article on the Irgun Zvai LeumiThe British Empire in the Middle East, 1945–1951: Arab Nationalism, the United States, and Postwar Imperialism. William Roger Louis, Oxford University Press, 1986, p. 430 A total of 91 people of various nationalities were killed and 46 were injured.Clarke, Thurston. By Blood and Fire, G. P. Puttnam's Sons, New York, 1981 The hotel was the site of the Secretariat of the Government of Palestine and the Headquarters of the British Armed Forces in Palestine and Transjordan. The attack initially had the approval of the Haganah. It was conceived as a response to Operation Agatha (a series of widespread raids, including one on the Jewish Agency, conducted by the British authorities) and was the deadliest directed at the British during the Mandate era. It was characterized as one of the "most lethal terrorist incidents of the twentieth century." In 1947, the British government announced it would withdraw from Palestine, stating it was unable to arrive at a solution acceptable to both Arabs and Jews.
On 15 May 1947, the General Assembly of the newly formed United Nations resolved that the United Nations Special Committee on Palestine be created "to prepare for consideration at the next regular session of the Assembly a report on the question of Palestine." In the Report of the Committee dated 3 September 1947 to the General Assembly, the majority of the Committee in Chapter VI proposed a plan to replace the British Mandate with "an independent Arab State, an independent Jewish State, and the City of Jerusalem ... the last to be under an International Trusteeship System." On 29 November 1947, the General Assembly adopted Resolution 181 (II) recommending the adoption and implementation of the Plan of Partition with Economic Union. The plan attached to the resolution was essentially that proposed by the majority of the Committee in the report of 3 September. The Jewish Agency, which was the recognized representative of the Jewish community, accepted the plan. The Arab League and Arab Higher Committee of Palestine rejected it, and indicated that they would reject any other plan of partition. On the following day, 1 December 1947, the Arab Higher Committee proclaimed a three-day strike, and Arab gangs began attacking Jewish targets. The Jews were initially on the defensive as civil war broke out, but in early April 1948 moved onto the offensive.Morris, 2008, p. 77-78 The Arab Palestinian economy collapsed and 250,000 Palestinian Arabs fled or were expelled.
David Ben-Gurion proclaiming the Israeli Declaration of Independence on 14 May 1948
On 14 May 1948, the day before the expiration of the British Mandate, David Ben-Gurion, the head of the Jewish Agency, declared "the establishment of a Jewish state in Eretz-Israel, to be known as the State of Israel."Clifford, Clark, "Counsel to the President: A Memoir", 1991, p. 20. The only reference in the text of the Declaration to the borders of the new state is the use of the term Eretz-Israel ("Land of Israel"). The following day, the armies of four Arab countries—Egypt, Syria, Transjordan and Iraq—entered what had been British Mandatory Palestine, launching the 1948 Arab–Israeli War; contingents from Yemen, Morocco, Saudi Arabia and Sudan joined the war.Morris, 2008, p. 205
The apparent purpose of the invasion was to prevent the establishment of the Jewish state at its inception, and some Arab leaders talked about driving the Jews into the sea. According to Benny Morris, Jews felt that the invading Arab armies aimed to slaughter them. The Arab League stated that the invasion was to restore law and order and to prevent further bloodshed.
Raising of the Ink Flag, marking the end of the 1948 Arab–Israeli War
After a year of fighting, a ceasefire was declared and temporary borders, known as the Green Line, were established. Jordan annexed what became known as the West Bank, including East Jerusalem, and Egypt took control of the Gaza Strip. The United Nations estimated that more than 700,000 Palestinians were expelled by or fled from advancing Israeli forces during the conflict—what would become known in Arabic as the Nakba ("catastrophe").
Early years of the State of Israel
Israel was admitted as a member of the United Nations by majority vote on 11 May 1949. Both Israel and Jordan were genuinely interested in a peace agreement but the British acted as a brake on the Jordanian effort in order to avoid damaging British interests in Egypt. In the early years of the state, the Labor Zionist movement led by Prime Minister David Ben-Gurion dominated Israeli politics. The Kibbutzim, or collective farming communities, played a pivotal role in establishing the new state.
Immigration to Israel during the late 1940s and early 1950s was aided by the Israeli Immigration Department and the non-government sponsored Mossad LeAliyah Bet ("Institution for Illegal Immigration"). Both groups facilitated regular immigration logistics like arranging transportation, but the latter also engaged in clandestine operations in countries, particularly in the Middle East and Eastern Europe, where the lives of Jews were believed to be in danger and exit from those places was difficult. Mossad LeAliyah Bet was disbanded in 1953. The immigration was in accordance with the One Million Plan. The immigrants came for differing reasons. Some believed in a Zionist ideology or did it for the promise of a better life in Israel, while others moved to escape persecution or were expelled.
An influx of Holocaust survivors and Jews from Arab and Muslim countries to Israel during the first three years increased the number of Jews from 700,000 to 1,400,000. By 1958, the population of Israel rose to two million. Between 1948 and 1970, approximately 1,150,000 Jewish refugees relocated to Israel. Some new immigrants arrived as refugees with no possessions and were housed in temporary camps known as ma'abarot; by 1952, over 200,000 people were living in these tent cities. Jews of European background were often treated more favorably than Jews from Middle Eastern and North African countries—housing units reserved for the latter were often re-designated for the former, with the result that Jews newly arrived from Arab lands generally ended up staying in transit camps for longer.Clive Jones, Emma Murphy, Israel: Challenges to Identity, Democracy, and the State, Routledge 2002 p. 37: "Housing units earmarked for the Oriental Jews were often reallocated to European Jewish immigrants; Consigning Oriental Jews to the privations of ma'aborot (transit camps) for longer periods." Tensions that developed between the two groups over such discrimination persist to the present day. During this period, food, clothes and furniture had to be rationed in what became known as the austerity period. The need to solve the crisis led Ben-Gurion to sign a reparations agreement with West Germany that triggered mass protests by Jews angered at the idea that Israel could accept monetary compensation for the Holocaust.
U.S. newsreel on the trial of Adolf Eichmann
During the 1950s, Israel was frequently attacked by Palestinian fedayeen, nearly always targeting civilians, mainly from the Egyptian-occupied Gaza Strip, leading to several Israeli counter-raids. In 1956, Great Britain and France aimed at regaining control of the Suez Canal, which the Egyptians had nationalized. The continued blockade of the Suez Canal and the Straits of Tiran to Israeli shipping, together with the growing number of fedayeen attacks against Israel's southern population and recent grave and threatening Arab statements, prompted Israel to attack Egypt. Israel joined a secret alliance with Great Britain and France and overran the Sinai Peninsula but was pressured to withdraw by the United Nations in return for guarantees of Israeli shipping rights in the Red Sea via the Straits of Tiran and the Canal. The war, known as the Suez Crisis, resulted in a significant reduction of Israeli border infiltration. In the early 1960s, Israel captured Nazi war criminal Adolf Eichmann in Argentina and brought him to Israel for trial. The trial had a major impact on public awareness of the Holocaust. "... the Eichmann trial, which did so much to raise public awareness of the Holocaust ..." Eichmann remains the only person executed in Israel following conviction by an Israeli civilian court.
Territory held by Israel: The Sinai Peninsula was returned to Egypt in 1982.
Since 1964, Arab countries, concerned over Israeli plans to divert waters of the Jordan River into the coastal plain,"The Politics of Miscalculation in the Middle East", by Richard B. Parker (1993 Indiana University Press) pp. 38 had been trying to divert the headwaters to deprive Israel of water resources, provoking tensions between Israel on the one hand, and Syria and Lebanon on the other. Arab nationalists led by Egyptian President Gamal Abdel Nasser refused to recognize Israel, and called for its destruction. By 1966, Israeli-Arab relations had deteriorated to the point of actual battles taking place between Israeli and Arab forces. In May 1967, Egypt massed its army near the border with Israel, expelled UN peacekeepers, stationed in the Sinai Peninsula since 1957, and blocked Israel's access to the Red Sea. Other Arab states mobilized their forces. Israel reiterated that these actions were a casus belli and, on 5 June, launched a pre-emptive strike against Egypt. Jordan, Syria and Iraq responded and attacked Israel. In a Six-Day War, Israel defeated Jordan and captured the West Bank, defeated Egypt and captured the Gaza Strip and Sinai Peninsula, and defeated Syria and captured the Golan Heights.. "Nasser, the Egyptian president, decided to mass troops in the Sinai ... casus belli by Israel." Jerusalem's boundaries were enlarged, incorporating East Jerusalem, and the 1949 Green Line became the administrative boundary between Israel and the occupied territories.
Following the 1967 war and the "three nos" resolution of the Arab League, during the 1967–1970 War of Attrition Israel faced attacks from the Egyptians in the Sinai, and from Palestinian groups targeting Israelis in the occupied territories, in Israel proper, and around the world. Most important among the various Palestinian and Arab groups was the Palestine Liberation Organization (PLO), established in 1964, which initially committed itself to "armed struggle as the only way to liberate the homeland". In the late 1960s and early 1970s, Palestinian groups launched a wave of attacks against Israeli and Jewish targets around the world, including a massacre of Israeli athletes at the 1972 Summer Olympics in Munich. The Israeli government responded with an assassination campaign against the organizers of the massacre, as well as a bombing and a raid on the PLO headquarters in Lebanon.
On 6 October 1973, as Jews were observing Yom Kippur, the Egyptian and Syrian armies launched a surprise attack against Israeli forces in the Sinai Peninsula and Golan Heights, opening the Yom Kippur War. The war ended on 25 October with Israel successfully repelling the Egyptian and Syrian forces, but having suffered over 2,500 soldiers killed in a war that collectively took between 10,000 and 35,000 lives in about 20 days. An internal inquiry exonerated the government of responsibility for failures before and during the war, but public anger forced Prime Minister Golda Meir to resign. In July 1976, an airliner was hijacked during its flight from Israel to France by Palestinian guerrillas and landed at Entebbe, Uganda. Israeli commandos carried out an operation in which 102 of the 106 Israeli hostages were successfully rescued.
Further conflict and peace process
The 1977 Knesset elections marked a major turning point in Israeli political history as Menachem Begin's Likud party took control from the Labor Party. "In hindsight we can say that 1977 was a turning point ..." Later that year, Egyptian President Anwar El Sadat made a trip to Israel and spoke before the Knesset in what was the first recognition of Israel by an Arab head of state. In the two years that followed, Sadat and Begin signed the Camp David Accords (1978) and the Israel–Egypt Peace Treaty (1979). In return, Israel withdrew from the Sinai Peninsula and agreed to enter negotiations over autonomy for Palestinians in the West Bank and the Gaza Strip.
On 11 March 1978, a PLO guerrilla raid from Lebanon led to the Coastal Road massacre. Israel responded by launching an invasion of southern Lebanon to destroy the PLO bases south of the Litani River. Most PLO fighters withdrew, but Israel was able to secure southern Lebanon until a UN force and the Lebanese army could take over. The PLO soon resumed its policy of attacks against Israel. In the next few years, the PLO infiltrated the south and kept up sporadic shelling across the border. Israel carried out numerous retaliatory attacks by air and on the ground.
thumb|Israel's 1980 law declared that "Jerusalem, complete and united, is the capital of Israel."
Meanwhile, Begin's government provided incentives for Israelis to settle in the occupied West Bank, increasing friction with the Palestinians in that area. The Basic Law: Jerusalem, Capital of Israel, passed in 1980, was believed by some to reaffirm Israel's 1967 annexation of Jerusalem by government decree, and reignited international controversy over the status of the city. No Israeli legislation has defined the territory of Israel and no act specifically included East Jerusalem therein. The position of the majority of UN member states is reflected in numerous resolutions declaring that actions taken by Israel to settle its citizens in the West Bank, and impose its laws and administration on East Jerusalem, are illegal and have no validity.See for example UN General Assembly resolution 63/30, passed 163 for, 6 against In 1981 Israel annexed the Golan Heights, although the annexation was not recognized internationally. Israel's population diversity expanded in the 1980s and 1990s. Several waves of Ethiopian Jews have immigrated to Israel since the 1980s, while between 1990 and 1994, immigration from the post-Soviet states increased Israel's population by twelve percent.
On 7 June 1981, the Israeli air force destroyed Iraq's sole nuclear reactor under construction just outside Baghdad, in order to impede Iraq's nuclear weapons program. Following a series of PLO attacks in 1982, Israel invaded Lebanon that year to destroy the bases from which the PLO launched attacks and missiles into northern Israel. In the first six days of fighting, the Israelis destroyed the military forces of the PLO in Lebanon and decisively defeated the Syrians. An Israeli government inquiry, the Kahan Commission, would later hold Begin, Sharon and several Israeli generals indirectly responsible for the Sabra and Shatila massacre. In 1985, Israel responded to a Palestinian terrorist attack in Cyprus by bombing the PLO headquarters in Tunisia. Israel withdrew from most of Lebanon in 1986, but maintained a borderland buffer zone in southern Lebanon until 2000, from where Israeli forces engaged in conflict with Hezbollah. The First Intifada, a Palestinian uprising against Israeli rule, broke out in 1987, with waves of uncoordinated demonstrations and violence occurring in the occupied West Bank and Gaza. Over the following six years, the Intifada became more organised and included economic and cultural measures aimed at disrupting the Israeli occupation. More than a thousand people were killed in the violence.. "Toward the end of 1991 ... were the result of internal Palestinian terror." During the 1991 Gulf War, the PLO supported Saddam Hussein and Iraqi Scud missile attacks against Israel. Despite public outrage, Israel heeded American calls to refrain from retaliating and did not participate in that war.
thumb|left|U.S. President Bill Clinton watches Jordan's King Hussein (left) and Israeli Prime Minister Yitzhak Rabin (right) sign the Israel–Jordan peace treaty
In 1992, Yitzhak Rabin became Prime Minister following an election in which his party called for compromise with Israel's neighbors. The following year, Shimon Peres, on behalf of Israel, and Mahmoud Abbas, for the PLO, signed the Oslo Accords, which gave the Palestinian National Authority the right to govern parts of the West Bank and the Gaza Strip. The PLO also recognized Israel's right to exist and pledged an end to terrorism. In 1994, the Israel–Jordan peace treaty was signed, making Jordan the second Arab country to normalize relations with Israel.. "Even though Jordan in 1994 became the second country, after Egypt to sign a peace treaty with Israel ..." Arab public support for the Accords was damaged by the continuation of Israeli settlements and checkpoints, and the deterioration of economic conditions. Israeli public support for the Accords waned as Israel was struck by Palestinian suicide attacks. In November 1995, while leaving a peace rally, Yitzhak Rabin was assassinated by Yigal Amir, a far-right-wing Jew who opposed the Accords.
thumb|The site of the 2001 Tel Aviv Dolphinarium discotheque massacre, in which 21 Israelis were killed.
Under the leadership of Benjamin Netanyahu at the end of the 1990s, Israel withdrew from Hebron and signed the Wye River Memorandum, giving greater control to the Palestinian National Authority. Ehud Barak, elected Prime Minister in 1999, began the new millennium by withdrawing forces from Southern Lebanon and conducting negotiations with Palestinian Authority Chairman Yasser Arafat and U.S. President Bill Clinton at the 2000 Camp David Summit. During the summit, Barak offered a plan for the establishment of a Palestinian state. The proposed state included the entirety of the Gaza Strip and over 90% of the West Bank, with Jerusalem as a shared capital. Each side blamed the other for the failure of the talks. After a controversial visit by Likud leader Ariel Sharon to the Temple Mount, the Second Intifada began. Some commentators contend that the uprising was pre-planned by Arafat due to the collapse of peace talks. Sharon became prime minister in a 2001 special election. During his tenure, Sharon carried out his plan to unilaterally withdraw from the Gaza Strip and also spearheaded the construction of the Israeli West Bank barrier, ending the Intifada. By this time, 1,100 Israelis had been killed, mostly in suicide bombings.https://www.jewishvirtuallibrary.org/jsource/Terrorism/victims.html#2000; The Psychology of Strategic Terrorism: Public and Government Responses to Attack, Shepherd, Ben, p. 172 Palestinian fatalities from 2000 to 2008 reached 4,791 killed by Israeli security forces, 44 killed by Israeli civilians, and 609 killed by Palestinians.
In July 2006, a Hezbollah artillery assault on Israel's northern border communities and a cross-border abduction of two Israeli soldiers precipitated the month-long Second Lebanon War.Escalation of hostilities in Lebanon and in Israel since Hizbollah's attack on Israel on 12 July 2006 On 6 September 2007, the Israeli Air Force destroyed a nuclear reactor in Syria. At the end of 2008, Israel entered another conflict as a ceasefire between Hamas and Israel collapsed. The 2008–09 Gaza War lasted three weeks and ended after Israel announced a unilateral ceasefire. Hamas announced its own ceasefire, with its own conditions of complete withdrawal and opening of border crossings. Although neither the rocket launches nor the Israeli retaliatory strikes stopped completely, the fragile ceasefire remained in place. In what Israel described as a response to more than a hundred Palestinian rocket attacks on southern Israeli cities, Israel began an operation in Gaza on 14 November 2012, lasting eight days. Israel started another operation in Gaza following an escalation of rocket attacks by Hamas in July 2014.
Geography and environment
Israel is at the eastern end of the Mediterranean Sea, bounded by Lebanon to the north, Syria to the northeast, Jordan and the West Bank to the east, and Egypt and the Gaza Strip to the southwest. It lies between latitudes 29° and 34° N, and longitudes 34° and 36° E.
The sovereign territory of Israel (according to the demarcation lines of the 1949 Armistice Agreements and excluding all territories captured by Israel during the 1967 Six-Day War) is approximately 20,770 square kilometers (8,019 sq mi) in area, of which two percent is water. However, Israel is so narrow that its exclusive economic zone in the Mediterranean is double the land area of the country. The total area under Israeli law, including East Jerusalem and the Golan Heights, is 22,072 square kilometers (8,522 sq mi), and the total area under Israeli control, including the military-controlled and partially Palestinian-governed territory of the West Bank, is 27,799 square kilometers (10,733 sq mi). Despite its small size, Israel is home to a variety of geographic features, from the Negev desert in the south to the inland fertile Jezreel Valley and the mountain ranges of the Galilee, Carmel, and the Golan in the north. The Israeli coastal plain on the shores of the Mediterranean is home to most of the nation's population. East of the central highlands lies the Jordan Rift Valley, which forms a small part of the Great Rift Valley.
The Jordan River runs along the Jordan Rift Valley, from Mount Hermon through the Hulah Valley and the Sea of Galilee to the Dead Sea, the lowest point on the surface of the Earth. Further south is the Arabah, ending with the Gulf of Eilat, part of the Red Sea. Unique to Israel and the Sinai Peninsula are makhteshim, or erosion cirques. The largest makhtesh in the world is Ramon Crater in the Negev,. "The extraordinary Makhtesh Ramon – the largest natural crater in the world ..." which measures about 40 km (25 mi) in length. A report on the environmental status of the Mediterranean Basin states that Israel has the largest number of plant species per square meter of all the countries in the basin.
Tectonics and seismicity
The Jordan Rift Valley is the result of tectonic movements within the Dead Sea Transform (DSF) fault system. The DSF forms the transform boundary between the African Plate to the west and the Arabian Plate to the east. The Golan Heights and all of Jordan are part of the Arabian Plate, while the Galilee, West Bank, Coastal Plain, and Negev, along with the Sinai Peninsula, are on the African Plate. This tectonic disposition leads to relatively high seismic activity in the region. The entire Jordan Valley segment is thought to have ruptured repeatedly, for instance during the last two major earthquakes along this structure in 749 and 1033. The deficit in slip that has built up since the 1033 event is sufficient to cause an earthquake of Mw ~7.4.
The most catastrophic earthquakes known to have occurred were in 31 BCE, 363, 749, and 1033 CE, that is, roughly every 400 years on average.American Friends of the Tel Aviv University, Earthquake Experts at Tel Aviv University Turn to History for Guidance (October 4, 2007). Quote: The major ones were recorded along the Jordan Valley in the years 31 B.C.E., 363 C.E., 749 C.E., and 1033 C.E. "So roughly, we are talking about an interval of every 400 years. If we follow the patterns of nature, a major quake should be expected any time because almost a whole millennium has passed since the last strong earthquake of 1033." (Tel Aviv University Associate Professor Dr. Shmuel (Shmulik) Marco). Destructive earthquakes leading to serious loss of life strike about every 80 years.Zafrir Renat, Israel Is Due, and Ill Prepared, for Major Earthquake, Haaretz, 15 January 2010. "On average, a destructive earthquake takes place in Israel once every 80 years, causing serious casualties and damage." While stringent construction regulations are currently in place and recently built structures are earthquake-safe, the majority of buildings in Israel predate these regulations, and many public buildings as well as 50,000 residential buildings do not meet the new standards and are "expected to collapse" if exposed to a strong quake. Given the fragile political situation of the Middle East region and the presence of major holy sites there, a quake reaching magnitude 7 on the Richter scale could have dire consequences for world peace.
Climate
thumb|Köppen climate classification map of Israel.
Temperatures in Israel vary widely, especially during the winter. Coastal areas, such as those of Tel Aviv and Haifa, have a typical Mediterranean climate with cool, rainy winters and long, hot summers. The area of Beersheba and the Northern Negev has a semi-arid climate with hot summers, cool winters and fewer rainy days than the Mediterranean climate. The Southern Negev and the Arava areas have a desert climate with very hot, dry summers and mild winters with few days of rain. The highest temperature recorded in Asia, 54.0 °C (129.2 °F), was measured in 1942 at the Tirat Zvi kibbutz in the northern Jordan river valley.
At the other extreme, mountainous regions can be windy and cold, and areas at an elevation of 750 meters or more (the same elevation as Jerusalem) will usually receive at least one snowfall each year. From May to September, rain in Israel is rare. With scarce water resources, Israel has developed various water-saving technologies, including drip irrigation. Israelis also take advantage of the considerable sunlight available for solar energy, making Israel the leading nation in solar energy use per capita (practically every house uses solar panels for water heating).
Four different phytogeographic regions exist in Israel, due to the country's location between the temperate and the tropical zones, bordering the Mediterranean Sea in the west and the desert in the east. For this reason the flora and fauna of Israel are extremely diverse. There are 2,867 known species of plants found in Israel. Of these, at least 253 species are introduced and non-native. There are 380 Israeli nature reserves.
Demographics
In 2016, Israel's population was an estimated 8,602,000 people, of whom 6,430,500 (74.8%) were recorded by the civil government as Jews. 1,789,800 Arabs comprised 20.8% of the population, while non-Arab Christians and people who have no religion listed in the civil registry made up 4.4%. Over the last decade, large numbers of migrant workers from Romania, Thailand, China, Africa, and South America have settled in Israel. Exact figures are unknown, as many of them are living in the country illegally, but estimates run in the region of 203,000.Adriana Kemp, "Labour migration and racialisation: labour market mechanisms and labour migration control policies in Israel", Social Identities 10:2, 267–292, 2004 By June 2012, approximately 60,000 African migrants had entered Israel. About 92% of Israelis live in urban areas.
300px|thumb|Immigration to Israel in the years 1948–2015. The two peaks were in 1949 and 1990.
Israel was established as a homeland for the Jewish people and is often referred to as a Jewish state. The country's Law of Return grants all Jews and those of Jewish ancestry the right to Israeli citizenship. Israel's retention of its population since 1948 is about even with, or greater than, that of other countries with mass immigration. Jewish emigration from Israel (called yerida in Hebrew), primarily to the United States and Canada, is described by demographers as modest, but is often cited by Israeli government ministries as a major threat to Israel's future.
Three quarters, or 74.8%, of the population are Jews from a diversity of Jewish backgrounds. Approximately 76% of Israeli Jews were born in Israel, 16% are immigrants from Europe and the Americas, and 8% are immigrants from Asia and Africa (including the Arab World). Jews from Europe and the former Soviet Union and their descendants born in Israel, including Ashkenazi Jews, constitute approximately 50% of Jewish Israelis. Jews who left or fled Arab and Muslim countries and their descendants, including both Mizrahi and Sephardi Jews, form most of the rest of the Jewish population. Intermarriage rates between Jews of different communities run at over 35%, and recent studies suggest that the percentage of Israelis descended from both Sephardi and Ashkenazi Jews increases by 0.5 percent every year, with over 25% of school children now originating from both communities. Around 4% of Israelis (300,000), ethnically defined as "others", are Russian-speaking immigrants and their descendants of Jewish origin or family who are not Jewish according to rabbinical law, but were eligible for Israeli citizenship under the Law of Return.
Approximately 385,900 Israelis lived in West Bank settlements, including those that predated the establishment of the State of Israel and were re-established after the Six-Day War, in places such as Hebron and the Gush Etzion bloc. In addition, there were more than 200,000 Jews living in East Jerusalem, and 20,000 in Golan Heights settlements. The total number of Israeli settlers is over 600,000 (~10% of the Jewish Israeli population). Approximately 7,800 Israelis lived in settlements in the Gaza Strip, known as Gush Katif, until they were evacuated by the government as part of its 2005 disengagement plan.
Language
thumb|Road sign in Hebrew, Arabic, and English
Israel has two official languages, Hebrew and Arabic. Hebrew is the primary language of the state and is spoken every day by the majority of the population. Arabic is spoken by the Arab minority, with Hebrew taught in Arab schools.
As a country of immigrants, many languages can be heard on the streets. Due to mass immigration from the former Soviet Union and Ethiopia (some 130,000 Ethiopian Jews live in Israel),Israel Central Bureau of Statistics: The Ethiopian Community in Israel Russian and Amharic are widely spoken. More than one million Russian-speaking immigrants arrived in Israel from the post-Soviet states between 1990 and 2004. French is spoken by around 700,000 Israelis, mostly originating from France and North Africa (see Maghrebi Jews). English was an official language during the Mandate period; it lost this status after the establishment of Israel, but retains a role comparable to that of an official language, as may be seen in road signs and official documents. Many Israelis communicate reasonably well in English, as many television programs are broadcast in English with subtitles and the language is taught from the early grades in elementary school. In addition, Israeli universities offer courses in the English language on various subjects.
Religion
thumb|left|The Dome of the Rock and the Western Wall, Jerusalem.|alt=A large open area with people bounded by old stone walls. To the left is a mosque with large golden dome.
Israel comprises a major part of the Holy Land, a region of significant importance to all Abrahamic religions – Judaism, Christianity, Islam, the Druze faith and the Bahá'í Faith.
The religious affiliation of Israeli Jews varies widely: a social survey indicates that 49% self-identify as Hiloni (secular), 29% as Masorti (traditional), 13% as Dati (Orthodox) and 9% as Haredi (ultra-Orthodox). Haredi Jews are expected to represent more than 20% of Israel's Jewish population by 2028.
thumb|upright|9th Station of the Cross on the Via Dolorosa street in Jerusalem. The Church of the Holy Sepulchre in the background is venerated by Christians as the site of the Burial of Jesus.
Making up 17.6% of the population, Muslims constitute Israel's largest religious minority. About 2% of the population is Christian and 1.6% is Druze. The Christian population primarily comprises Arab Christians, but also includes post-Soviet immigrants, foreign laborers of multinational origins, and followers of Messianic Judaism, considered by most Christians and Jews to be a form of Christianity. Members of many other religious groups, including Buddhists and Hindus, maintain a presence in Israel, albeit in small numbers. Out of more than one million immigrants from the former Soviet Union, about 300,000 are considered not Jewish by the Chief Rabbinate of Israel.
The city of Jerusalem is of special importance to Jews, Muslims and Christians as it is the home of sites that are pivotal to their religious beliefs, such as the Old City that incorporates the Western Wall and the Temple Mount, the Al-Aqsa Mosque and the Church of the Holy Sepulchre. Other locations of religious importance in Israel are Nazareth (holy in Christianity as the site of the Annunciation of Mary), Tiberias and Safed (two of the Four Holy Cities in Judaism), the White Mosque in Ramla (holy in Islam as the shrine of the prophet Saleh), and the Church of Saint George in Lod (holy in Christianity and Islam as the tomb of Saint George or Al Khidr). A number of other religious landmarks are located in the West Bank, among them Joseph's Tomb in Nablus, the birthplace of Jesus and Rachel's Tomb in Bethlehem, and the Cave of the Patriarchs in Hebron. The administrative center of the Bahá'í Faith and the Shrine of the Báb are located at the Bahá'í World Centre in Haifa; the leader of the faith is buried in Acre. Apart from maintenance staff, there is no Bahá'í community in Israel, although it is a destination for pilgrimages. Bahá'í staff in Israel do not teach their faith to Israelis, in keeping with a strict policy. A few miles south of the Bahá'í World Centre is the Mahmood Mosque, affiliated with the reformist Ahmadiyya movement. Kababir, Haifa's mixed neighbourhood of Jews and Ahmadi Arabs, is the only one of its kind in the country.
Education
thumb|left|Students at Ben-Gurion University of the Negev
Education is highly valued in Israeli culture and was viewed as a fundamental building block of ancient Israelite society. Jewish communities in the Levant were the first to introduce compulsory education, for which the organized community, no less than the parents, was responsible. Many international business leaders and organizations, such as Microsoft founder Bill Gates, have praised Israel for its high quality of education in helping spur Israel's economic development and technological boom. In 2015, the country ranked third among OECD members (after Canada and Japan) for the percentage of 25–64-year-olds that have attained tertiary education, with 49% compared with the OECD average of 35%. In 2012, the country ranked third in the world in the number of academic degrees per capita (20 percent of the population).
Israel has a school life expectancy of 16 years and a literacy rate of 97.8%. The State Education Law, passed in 1953, established five types of schools: state secular, state religious, ultra-Orthodox, communal settlement schools, and Arab schools. State secular schools form the largest school group, attended by the majority of Jewish and non-Arab pupils in Israel. Most Arabs send their children to schools where Arabic is the language of instruction. Education is compulsory in Israel for children between the ages of three and eighteen. Schooling is divided into three tiers – primary school (grades 1–6), middle school (grades 7–9), and high school (grades 10–12) – culminating with Bagrut matriculation exams. Proficiency in core subjects such as mathematics, the Hebrew language, Hebrew and general literature, the English language, history, Biblical scripture and civics is necessary to receive a Bagrut certificate. In Arab, Christian and Druze schools, the exam on Biblical studies is replaced by an exam on Muslim, Christian or Druze heritage. Maariv described the Christian Arab sector as "the most successful in the education system", since Christians fared best in terms of education in comparison to any other religious group in Israel. Israeli children from Russian-speaking families have a higher bagrut pass rate at high-school level."The Concept of Russian Israel – Ten Programmatic Theses", 19 June 2015, Cursorinfo, Alexander Goldenstein Among immigrant children born in the former Soviet Union, however, the bagrut pass rate is highest among families from European FSU states, at 62.6%, and lower among those from Central Asian and Caucasian FSU states. In 2003, over half of all Israeli twelfth graders earned a matriculation certificate.
thumb|Hebrew University of Jerusalem
Israel has nine public universities that are subsidized by the state and 49 private colleges. The Hebrew University of Jerusalem, Israel's second-oldest university after the Technion, houses the National Library of Israel, the world's largest repository of Judaica and Hebraica. The Technion and the Hebrew University have consistently been ranked among the world's 100 top universities by the prestigious ARWU academic ranking. Other major universities in the country include the Weizmann Institute of Science, Tel Aviv University, Ben-Gurion University of the Negev, Bar-Ilan University, the University of Haifa and the Open University of Israel. Ariel University, in the West Bank, is the newest university institution, upgraded from college status and the first new university established in over thirty years.
Politics
thumb|The Knesset chamber, home to the Israeli parliament
Israel operates under a parliamentary system as a democratic republic with universal suffrage. A member of parliament supported by a parliamentary majority becomes the prime minister—usually this is the chair of the largest party. The prime minister is the head of government and head of the cabinet.In 1996, direct elections for the prime minister were inaugurated, but the system was declared unsatisfactory and the old one reinstated. See Israel is governed by a 120-member parliament, known as the Knesset. Membership of the Knesset is based on proportional representation of political parties, with a 3.25% electoral threshold, which in practice has resulted in coalition governments.
Parliamentary elections are scheduled every four years, but unstable coalitions or a no-confidence vote by the Knesset can dissolve a government earlier. The Basic Laws of Israel function as an uncodified constitution. In 2003, the Knesset began to draft an official constitution based on these laws. The president of Israel is head of state, with limited and largely ceremonial duties.
Israel has no official religion, but the definition of the state as "Jewish and democratic" creates a strong connection with Judaism, as well as a conflict between state law and religious law. Interaction between the political parties keeps the balance between state and religion largely as it existed during the British Mandate.
Legal system
thumb|left|Supreme Court of Israel, Givat Ram, Jerusalem
Israel has a three-tier court system. At the lowest level are magistrate courts, situated in most cities across the country. Above them are district courts, serving as both appellate courts and courts of first instance; they are situated in five of Israel's six districts. The third and highest tier is the Supreme Court, located in Jerusalem; it serves a dual role as the highest court of appeals and the High Court of Justice. In the latter role, the Supreme Court rules as a court of first instance, allowing individuals, both citizens and non-citizens, to petition against the decisions of state authorities. Although Israel supports the goals of the International Criminal Court, it has not ratified the Rome Statute, citing concerns about the court's ability to remain politically impartial.
Israel's legal system combines three legal traditions: English common law, civil law, and Jewish law. It is based on the principle of stare decisis (precedent) and is an adversarial system, in which the parties to a suit bring evidence before the court. Court cases are decided by professional judges rather than juries. Marriage and divorce are under the jurisdiction of the religious courts: Jewish, Muslim, Druze, and Christian. The election of judges is carried out by a committee of two Knesset members, three Supreme Court justices, two Israeli Bar members and two ministers (one of whom, Israel's justice minister, serves as the committee's chair). The committee's Knesset members are elected by secret ballot of the Knesset, and one of them is traditionally a member of the opposition; the committee's Supreme Court justices are chosen by tradition from all Supreme Court justices by seniority; the Israeli Bar members are elected by the bar; and the second minister is appointed by the Israeli cabinet. The current justice minister and chairwoman of the committee is Ayelet Shaked. Administration of Israel's courts (both the "General" courts and the Labor Courts) is carried out by the Administration of Courts, situated in Jerusalem. Both General and Labor courts are paperless: court files and decisions are stored electronically. Israel's Basic Law: Human Dignity and Liberty seeks to defend human rights and liberties in Israel.
Administrative divisions
The State of Israel is divided into six main administrative districts, known as mehozot (מחוזות; singular: mahoz) – Center, Haifa, Jerusalem, North, South, and Tel Aviv districts, as well as the Judea and Samaria Area in the West Bank. All of the Judea and Samaria Area and parts of the Jerusalem and Northern districts are not recognized internationally as part of Israel. Districts are further divided into fifteen sub-districts known as nafot (נפות; singular: nafa), which are themselves partitioned into fifty natural regions.
There are four metropolitan areas: Gush Dan (Tel Aviv metropolitan area; population 3,785,000), the Jerusalem metropolitan area (population 1,223,800), the Haifa metropolitan area (population 913,700), and the Beersheba metropolitan area (population 369,200). Israel's largest municipality, in both population and area, is Jerusalem. Israeli government statistics on Jerusalem include the population and area of East Jerusalem, which is widely recognized as part of the Palestinian territories under Israeli occupation. Although East Jerusalem and the Golan Heights have been brought directly under Israeli law, by acts that amount to annexation, both of these areas continue to be viewed by the international community as occupied, and their status as regards the applicability of international rules is in most respects identical to that of the West Bank and Gaza. Tel Aviv and Haifa rank as Israel's next most populous cities.
District – Capital – Largest city:
Jerusalem – Jerusalem – Jerusalem
North – Nazareth Illit – Nazareth
Haifa – Haifa – Haifa
Center – Ramla – Rishon LeZion
Tel Aviv – Tel Aviv – Tel Aviv
South – Beersheba – Ashdod
Judea and Samaria – Ariel – Modi'in Illit
Including 201,170 Jews and 313,350 Arabs in East Jerusalem.
Israeli citizens only.
Israeli-occupied territories
thumb|300px|left|Map of Israel showing the West Bank, the Gaza Strip, and the Golan Heights
In 1967, as a result of the Six-Day War, Israel captured and occupied the West Bank, including East Jerusalem, the Gaza Strip and the Golan Heights. Israel also captured the Sinai Peninsula, but returned it to Egypt as part of the 1979 Egypt–Israel Peace Treaty. Between 1982 and 2000, Israel occupied part of southern Lebanon, in what was known as the Security Belt. Since Israel's capture of these territories, Israeli settlements and military installations have been built within each of them, except Lebanon. Israel has applied civilian law to the Golan Heights and East Jerusalem and granted their inhabitants permanent residency status and the ability to apply for citizenship. The West Bank, outside of the Israeli settlements within the territory, has remained under direct military rule, and Palestinians in this area cannot become Israeli citizens. Israel withdrew its military forces and dismantled the Israeli settlements in the Gaza Strip as part of its disengagement from Gaza, though it continues to maintain control of the territory's airspace and waters.
The UN Security Council has declared the annexation of the Golan Heights and East Jerusalem to be "null and void" and continues to view the territories as occupied. The International Court of Justice, principal judicial organ of the United Nations, asserted, in its 2004 advisory opinion on the legality of the construction of the Israeli West Bank barrier, that the lands captured by Israel in the Six-Day War, including East Jerusalem, are occupied territory. The status of East Jerusalem in any future peace settlement has at times been a difficult issue in negotiations between Israeli governments and representatives of the Palestinians, as Israel views it as its sovereign territory, as well as part of its capital. Most negotiations relating to the territories have been on the basis of United Nations Security Council Resolution 242, which emphasises "the inadmissibility of the acquisition of territory by war", and calls on Israel to withdraw from occupied territories in return for normalization of relations with Arab states, a principle known as "Land for peace".
thumb|Israeli West Bank barrier separating Israel and the West Bank
The West Bank was annexed by Jordan in 1950, following the Arab rejection of the UN decision to create two states in Palestine. Only Britain recognized this annexation, and Jordan has since ceded its claim to the territory to the PLO. The population is mainly Palestinian, including refugees of the 1948 Arab–Israeli War. From the start of the occupation in 1967 until 1993, the Palestinians living in these territories were under Israeli military administration. Since the Israel–PLO letters of recognition, most of the Palestinian population and cities have been under the internal jurisdiction of the Palestinian Authority, and only partial Israeli military control, although Israel has on several occasions redeployed its troops and reinstated full military administration during periods of unrest. In response to increasing attacks during the Second Intifada, the Israeli government started to construct the Israeli West Bank barrier. When completed, approximately 13% of the barrier will run along the Green Line or inside Israel, with 87% inside the West Bank.
The Gaza Strip was occupied by Egypt from 1948 to 1967 and then by Israel after 1967. In 2005, as part of Israel's unilateral disengagement plan, Israel removed all of its settlers and forces from the territory. Israel does not consider the Gaza Strip to be occupied territory and declared it a "foreign territory". That view has been disputed by numerous international humanitarian organizations and various bodies of the United Nations. Following the 2007 Battle of Gaza, when Hamas assumed power in the Gaza Strip, Israel tightened its control of the Gaza crossings along its border, as well as by sea and air, and prevented persons from entering and exiting the area except for isolated cases it deemed humanitarian. Gaza has a border with Egypt and an agreement between Israel, the European Union and the PA governed how border crossing would take place (it was monitored by European observers).
Foreign relations
Israel maintains diplomatic relations with 158 countries and has 107 diplomatic missions around the world; the countries with which it has no diplomatic relations include most Muslim countries. Only three members of the Arab League have normalized relations with Israel: Egypt and Jordan signed peace treaties in 1979 and 1994, respectively, and Mauritania opted for full diplomatic relations with Israel in 1999. Despite the peace treaty between Israel and Egypt, Israel is still widely considered an enemy country among Egyptians."Massive Israel protests hit universities" (Egyptian Mail, 16 March 2010) "According to most Egyptians, almost 31 years after a peace treaty was signed between Egypt and Israel, having normal ties between the two countries is still a potent accusation and Israel is largely considered to be an enemy country" Under Israeli law, Lebanon, Syria, Saudi Arabia, Iraq, Iran, Sudan, and Yemen are enemy countries, and Israeli citizens may not visit them without permission from the Ministry of the Interior. Iran had diplomatic relations with Israel under the Pahlavi dynasty but withdrew its recognition of Israel during the Islamic Revolution. As a result of the 2008–09 Gaza War, Mauritania, Qatar, Bolivia, and Venezuela suspended political and economic ties with Israel.
The United States and the Soviet Union were the first two countries to recognize the State of Israel, having declared recognition roughly simultaneously. The United States regards Israel as its "most reliable partner in the Middle East," based on "common democratic values, religious affinities, and security interests". The United States has provided $68 billion in military assistance and $32 billion in grants to Israel since 1967, under the Foreign Assistance Act (period beginning 1962), more than to any other country for that period until 2003. The United Kingdom is seen as having a "natural" relationship with Israel on account of the British Mandate for Palestine. Relations between the two countries were also made stronger by former prime minister Tony Blair's efforts for a two-state resolution. Germany has paid 25 billion euros in reparations to the Israeli state and individual Israeli Holocaust survivors. Israel is included in the European Union's European Neighbourhood Policy (ENP), which aims at bringing the EU and its neighbours closer.
Although Turkey and Israel did not establish full diplomatic relations until 1991,. "However, it was not until 1991 that the two countries established full diplomatic relations." Turkey has cooperated with the Jewish state since its recognition of Israel in 1949. Turkey's ties to the other Muslim-majority nations in the region have at times resulted in pressure from Arab and Muslim states to temper its relationship with Israel. Relations between Turkey and Israel took a downturn after the 2008–09 Gaza War and Israel's raid of the Gaza flotilla. Relations between Greece and Israel have improved since 1995 due to the decline of Israeli-Turkish relations. The two countries have a defense cooperation agreement and in 2010, the Israeli Air Force hosted Greece's Hellenic Air Force in a joint exercise at the Uvda base. The joint Cyprus-Israel oil and gas explorations centered on the Leviathan gas field are an important factor for Greece, given its strong links with Cyprus. Cooperation in the world's longest sub-sea electric power cable, the EuroAsia Interconnector, has strengthened relations between Cyprus and Israel.
Azerbaijan is one of the few majority Muslim countries to develop bilateral strategic and economic relations with Israel. Azerbaijan supplies Israel with a substantial amount of its oil needs, and Israel has helped modernize the Armed Forces of Azerbaijan. India established full diplomatic ties with Israel in 1992 and has fostered a strong military, technological and cultural partnership with the country since then. According to an international opinion survey conducted in 2009 on behalf of the Israel Ministry of Foreign Affairs, India is the most pro-Israel country in the world. India is the largest customer of Israeli military equipment, and Israel is the second-largest military partner of India after Russia. Ethiopia is Israel's main ally in Africa due to common political, religious and security interests. Israel provides expertise to Ethiopia on irrigation projects, and thousands of Ethiopian Jews live in Israel.
International humanitarian efforts
Israel ranks low among OECD nations in foreign aid, spending less than 0.1% of its GNI on development assistance, as opposed to the recommended 0.7%. The country also ranked 43rd in the 2016 World Giving Index. However, Israel has a history of providing emergency aid and humanitarian response teams to disasters across the world. Israel's humanitarian efforts officially began in 1957, with the establishment of Mashav, Israel's Agency for International Development Cooperation. There are additional Israeli humanitarian and emergency response groups that work with the Israeli government, including IsraAid, a joint programme run by 14 Israeli organizations and North American Jewish groups,Haim Yacobi, Israel and Africa: A Genealogy of Moral Geography, Routledge, 2015 p.113. ZAKA, The Fast Israeli Rescue and Search Team (FIRST),Ueriel Hellman,"Israeli aid effort helps Haitians – and Israel's image", Jewish Telegraphic Agency 19 January 2010 Israeli Flying Aid (IFA), Save a Child's Heart (SACH) and Latet.
Between 1985 and 2015, Israel sent 24 delegations of the IDF's search and rescue unit, the Home Front Command, to 22 countries. In Haiti, immediately following the 2010 earthquake, Israel was the first country to set up a field hospital capable of performing surgical operations. Israel sent over 200 medical doctors and personnel to start treating injured Haitians at the scene. At the conclusion of its humanitarian mission 11 days later,Marcy Oster, Israeli delegation leaves Haiti Jewish Telegraphic Agency January 27, 2010. the Israeli delegation had treated more than 1,110 patients, conducted 319 successful surgeries, delivered 16 births and rescued or assisted in the rescue of four individuals. Despite radiation concerns, Israel was one of the first countries to send a medical delegation to Japan following the 2011 earthquake and tsunami disaster. Israel dispatched a medical team to the tsunami-stricken city of Kurihara in 2011. A medical clinic run by an IDF team of some 50 members featured pediatric, surgical, maternity and gynecological, and otolaryngology wards, together with an optometry department, a laboratory, a pharmacy and an intensive care unit. After treating 200 patients in two weeks, the departing emergency team donated its equipment to the Japanese.Kinue Tokudome, 'Promise fulfilled Israel's Medical Team in Japan,' Jerusalem Post 18 April 2015.
Military
The Israel Defense Forces (IDF) is the sole military wing of the Israeli security forces, and is headed by its Chief of General Staff, the Ramatkal, subordinate to the Cabinet. The IDF consists of the army, air force and navy. It was founded during the 1948 Arab–Israeli War by consolidating paramilitary organizations—chiefly the Haganah—that preceded the establishment of the state. The IDF also draws upon the resources of the Military Intelligence Directorate (Aman), which works with Mossad and Shabak. The IDF has been involved in several major wars and border conflicts in its short history, making it one of the most battle-trained armed forces in the world.
Most Israelis are drafted into the military at the age of 18. Men serve two years and eight months and women two years. Following mandatory service, Israeli men join the reserve forces and usually do up to several weeks of reserve duty every year until their forties. Most women are exempt from reserve duty. Arab citizens of Israel (except the Druze) and those engaged in full-time religious studies are exempt from military service, although the exemption of yeshiva students has been a source of contention in Israeli society for many years. An alternative for those who receive exemptions on various grounds is Sherut Leumi, or national service, which involves a program of service in hospitals, schools and other social welfare frameworks. As a result of its conscription program, the IDF maintains approximately 176,500 active troops and an additional 445,000 reservists.
thumb|Iron Dome is the world's first operational anti-artillery rocket defense system.
The nation's military relies heavily on high-tech weapons systems designed and manufactured in Israel as well as some foreign imports. The Arrow missile is one of the world's few operational anti-ballistic missile systems. The Python air-to-air missile series is often considered one of the most crucial weapons in Israel's military history.Israeli Mirage III and Nesher Aces, By Shlomo Aloni, (Osprey 2004), page 60 Israel's Spike missile is one of the most widely exported ATGMs in the world.Spike Anti-Tank Missile, Israel army-technology.com Israel's Iron Dome anti-missile air defense system gained worldwide acclaim after intercepting hundreds of Qassam, 122 mm Grad and Fajr-5 artillery rockets fired by Palestinian militants from the Gaza Strip. Since the Yom Kippur War, Israel has developed a network of reconnaissance satellites. The success of the Ofeq program has made Israel one of seven countries capable of launching such satellites.
Israel is widely believed to possess nuclear weapons as well as chemical and biological weapons of mass destruction. Israel has not signed the Treaty on the Non-Proliferation of Nuclear Weapons and maintains a policy of deliberate ambiguity toward its nuclear capabilities.Ziv, Guy, "To Disclose or Not to Disclose: The Impact of Nuclear Ambiguity on Israeli Security," Israel Studies Forum, Vol. 22, No. 2 (Winter 2007): 76–94 The Israeli Navy's Dolphin submarines are believed to be armed with nuclear Popeye Turbo missiles, offering second-strike capability. Since the Gulf War in 1991, when Israel was attacked by Iraqi Scud missiles, all homes in Israel are required to have a reinforced security room, Merkhav Mugan, impermeable to chemical and biological substances.
As of 2015, Israel has the 15th largest military expenditure in the world and the 7th highest as a percentage of GDP. The country also ranked 8th globally for arms exports. The majority of Israel's arms exports are unreported for security reasons.Israel reveals more than $7 billion in arms sales, but few names By Gili Cohen | 9 January 2014, Haaretz Since 1967, the United States has been a particularly notable foreign contributor of military aid to Israel: the US is expected to provide the country with $3.15 billion per year from 2013 to 2018. Israel is consistently rated low in the Global Peace Index, ranking 144th out of 163 nations for peacefulness in 2016.
Economy
thumb|left|Israeli new shekel banknotes and coins (currently being replaced)
Israel is considered the most advanced country in Southwest Asia and the Middle East in economic and industrial development. Israel's quality university education and the establishment of a highly motivated and educated populace are largely responsible for spurring the country's high technology boom and rapid economic development. In 2010, it joined the OECD. The country is ranked 52nd worldwide on the World Bank's Doing Business index and 24th in the World Economic Forum's Global Competitiveness Report. It has the second-largest number of startup companies in the world (after the United States) and the third-largest number of NASDAQ-listed companies after the U.S. and China. In 2016, Israel ranked 21st among the world's most competitive nations, according to the IMD's World Competitiveness Yearbook. Israel was also ranked 4th in the world by share of people in high-skilled employment. The Bank of Israel holds $78 billion of foreign-exchange reserves.
thumb|upright|Tel Aviv Stock Exchange. Its building is optimized for computer trading, with systems located in an underground bunker to keep the exchange active during emergencies.Tel Aviv Stock Exchange inaugurates trading in new building, By GLOBES, NIV ELIS, 09/08/2014
Despite limited natural resources, intensive development of the agricultural and industrial sectors over the past decades has made Israel largely self-sufficient in food production, apart from grains and beef. Imports to Israel, totaling $77.59 billion in 2012, include raw materials, military equipment, investment goods, rough diamonds, fuels, grain, and consumer goods. Leading exports include electronics, software, computerized systems, communications technology, medical equipment, pharmaceuticals, fruits, chemicals, military technology, and cut diamonds; in 2012, Israeli exports reached $64.74 billion. Israel has an impressive record of creating profit-driven technologies, making the country a top choice for many business leaders and high-technology industry giants. Intel and Microsoft built their first overseas research and development centers in Israel, and other high-tech multi-national corporations, such as IBM, Google, Apple, HP, Cisco Systems, Facebook and Motorola, have opened R&D facilities in the country.
In July 2007, American investor Warren Buffett's holding company Berkshire Hathaway bought an Israeli company, Iscar, its first acquisition outside the United States, for $4 billion. Since the 1970s, Israel has received military aid from the United States, as well as economic assistance in the form of loan guarantees, which now account for roughly half of Israel's external debt. Israel has one of the lowest external debts in the developed world, and is a net lender in terms of net external debt (the total value of assets vs. liabilities in debt instruments owed abroad), which stood at a surplus of US$118 billion.
Days of working time in Israel are Sunday through Thursday (for a five-day workweek), or through Friday (for a six-day workweek). In observance of Shabbat, in places where Friday is a work day and the majority of the population is Jewish, Friday is a "short day", usually lasting until 14:00 in the winter or 16:00 in the summer. Several proposals have been raised to align the Israeli work week with that of the majority of the world, making Sunday a non-working day while extending the working time of other days, or replacing Friday with Sunday as a work day.
Science and technology
thumb|upright|Dan Shechtman, a materials science professor from the Technion, one of six Israelis to win the Nobel Prize in Chemistry in under a decade.
Israel is a leading nation in scientific research, particularly in the natural sciences, engineering, and health sciences, and scientific research is one of the country's most developed sectors. Israel's development of cutting-edge technologies in software, communications and the life sciences has evoked comparisons with Silicon Valley. Israel ranks fifth among the most innovative countries in the Bloomberg Innovation Index. Israel is ranked 2nd in the world in expenditure on research and development (R&D) as a percentage of GDP. Israel boasts the highest number of scientists, technicians, and engineers per capita in the world, with 140 scientists, technicians, and engineers per 10,000 employees. In comparison, the figure is 85 per 10,000 in the United States and 83 per 10,000 in Japan.Investing in Israel Israeli universities are ranked among the 50 top world universities in computer science (Technion and Tel Aviv University), mathematics (Hebrew University of Jerusalem) and chemistry (Weizmann Institute of Science). Israel has produced six Nobel Prize-winning scientists since 2002 and has been frequently ranked as one of the countries with the highest ratios of scientific papers per capita in the world. Israel has led the world in stem-cell research papers per capita since 2000.
The Israeli Space Agency coordinates all Israeli space research programs with scientific and commercial goals. In 2012 Israel was ranked ninth in the world by the Futron's Space Competitiveness Index. Israel is one of only seven countries that both build their own satellites and launch their own launchers. The Shavit is a space launch vehicle produced by Israel to launch small satellites into low earth orbit. It was first launched in 1988, making Israel the eighth nation to have a space launch capability. Shavit rockets are launched from the spaceport at the Palmachim Airbase by the Israeli Space Agency. Since 1988 Israel Aerospace Industries have indigenously designed and built at least 13 commercial, research and spy satellites. Some of Israel's satellites are ranked among the world's most advanced space systems. In 2003, Ilan Ramon became Israel's first astronaut, serving as payload specialist of STS-107, the fatal mission of the Space Shuttle Columbia.
Israel is one of the world's technological leaders in water technology. In 2011, its water technology industry was worth around $2 billion a year, with annual exports of products and services in the tens of millions of dollars. The ongoing shortage of water in the country has spurred innovation in water conservation techniques, and a substantial agricultural modernization, drip irrigation, was invented in Israel. Israel is also at the technological forefront of desalination and water recycling. The Ashkelon seawater reverse osmosis (SWRO) plant, the largest in the world, was voted 'Desalination Plant of the Year' in the Global Water Awards in 2006. Israel hosts an annual Water Technology Exhibition and Conference (WaTec) that attracts thousands of people from across the world. By 2014, Israel's desalination programs provided roughly 35% of Israel's drinking water, and they were expected to supply 40% by 2015 and 70% by 2050. As of May 29, 2015, more than 50 percent of the water for Israeli households, agriculture and industry is artificially produced. As a result of innovations in reverse osmosis technology, Israel is set to become a net exporter of water in the coming years.
thumb|alt=A horizontal parabolic dish, with a triangular structure on its top.|The world's largest solar parabolic dish at the Ben-Gurion National Solar Energy Center.
Israel has embraced solar energy; its engineers are on the cutting edge of solar energy technology and its solar companies work on projects around the world. Over 90% of Israeli homes use solar energy for hot water, the highest rate per capita in the world. According to government figures, the country saves 8% of its electricity consumption per year because of its solar energy use in heating. The high annual incident solar irradiance at its geographic latitude creates ideal conditions for what is an internationally renowned solar research and development industry in the Negev Desert. Israel had a modern electric car infrastructure involving a countrywide network of recharging stations to facilitate the charging and exchange of car batteries. It was thought that this would lower Israel's oil dependency and lower the fuel costs of hundreds of Israeli motorists who used cars powered only by electric batteries. The Israeli model was being studied by several countries and was being implemented in Denmark and Australia. However, Israel's trailblazing electric car company Better Place shut down in 2013.
Transportation
thumb|Reception hall at Ben Gurion Airport
Israel has 18,096 kilometers (11,244 mi) of paved roads, and 2.4 million motor vehicles. The number of motor vehicles per 1,000 persons was 324, relatively low with respect to developed countries. Israel has 5,715 buses on scheduled routes, operated by several carriers, the largest of which is Egged, serving most of the country. Railways stretch across 949 kilometers (590 mi) and are operated solely by government-owned Israel Railways (All figures are for 2008). Following major investments beginning in the early to mid-1990s, the number of train passengers per year has grown from 2.5 million in 1990, to 35 million in 2008; railways are also used to transport 6.8 million tons of cargo, per year.
Israel is served by two international airports, Ben Gurion International Airport, the country's main hub for international air travel near Tel Aviv-Yafo, and Ovda Airport, which serves the southernmost port city of Eilat. There are several small domestic airports as well. Ben Gurion, Israel's largest airport, handled over 12.1 million passengers in 2010. On the Mediterranean coast, Haifa Port is the country's oldest and largest port, while Ashdod Port is one of the few deep water ports in the world built on the open sea. In addition to these, the smaller Port of Eilat is situated on the Red Sea, and is used mainly for trading with Far East countries.
Tourism
thumb|The Bahá'í holy places in Haifa, a popular tourist attraction.
Tourism, especially religious tourism, is an important industry in Israel, with the country's temperate climate, beaches, archaeological, other historical and biblical sites, and unique geography also drawing tourists. Israel's security problems have taken their toll on the industry, but the number of incoming tourists is on the rebound. In 2013, a record 3.54 million tourists visited Israel; the most popular attraction was the Western Wall, visited by 68% of tourists.
Energy
In 2009, a natural gas reserve, Tamar, was found near the coast of Israel. A second reserve, Leviathan, was discovered in 2010. In 2015, massive oil reserves were located in the occupied Golan Heights.
Culture
Israel's diverse culture stems from the diversity of its population: Jews from diaspora communities around the world have brought their cultural and religious traditions back with them, creating a melting pot of Jewish customs and beliefs. Israel is the only country in the world where life revolves around the Hebrew calendar. Work and school holidays are determined by the Jewish holidays, and the official day of rest is Saturday, the Jewish Sabbath. Israel's substantial Arab minority has also left its imprint on Israeli culture in such spheres as architecture, music, and cuisine.
Literature
thumb|upright|Shmuel Yosef Agnon, laureate of the Nobel Prize in Literature
thumb|upright|Amos Oz's works have been translated into 36 languages, more than any other Israeli writer.
Israeli literature is primarily poetry and prose written in Hebrew, as part of the renaissance of Hebrew as a spoken language since the mid-19th century, although a small body of literature is published in other languages, such as English. By law, two copies of all printed matter published in Israel must be deposited in the National Library of Israel at the Hebrew University of Jerusalem. In 2001, the law was amended to include audio and video recordings, and other non-print media. In 2013, 91 percent of the 7,863 books transferred to the library were in Hebrew. The Hebrew Book Week is held each June and features book fairs, public readings, and appearances by Israeli authors around the country. During the week, Israel's top literary award, the Sapir Prize, is presented.
In 1966, Shmuel Yosef Agnon shared the Nobel Prize in Literature with German Jewish author Nelly Sachs. Leading Israeli poets have been Yehuda Amichai, Nathan Alterman and Rachel Bluwstein. Internationally famous contemporary Israeli novelists include Amos Oz, Etgar Keret and David Grossman. The Israeli-Arab satirist Sayed Kashua (who writes in Hebrew) is also internationally known. Israel has also been the home of two leading Palestinian poets and writers: Emile Habibi, whose novel The Secret Life of Saeed the Pessoptimist, and other writings, won him the Israel prize for Arabic literature; and Mahmoud Darwish, considered by many to be "the Palestinian national poet." Darwish was born and raised in northern Israel, but lived his adult life abroad after joining the Palestine Liberation Organization.
Music and dance
thumb|left|Israel Philharmonic Orchestra conducted by Zubin Mehta|alt=Several dozen musicians in formal dress, holding their instruments, behind a conductor
Israeli music contains musical influences from all over the world; Sephardic music, Hasidic melodies, Belly dancing music, Greek music, jazz, and pop rock are all part of the music scene. Among Israel's world-renowned orchestras is the Israel Philharmonic Orchestra, which has been in operation for over seventy years and today performs more than two hundred concerts each year. Israel has also produced many musicians of note, some achieving international stardom. Itzhak Perlman, Pinchas Zukerman and Ofra Haza are among the internationally acclaimed musicians born in Israel. Israel has participated in the Eurovision Song Contest nearly every year since 1973, winning the competition three times and hosting it twice. Eilat has hosted its own international music festival, the Red Sea Jazz Festival, every summer since 1987.
thumb|upright|Celebrated Israeli ballet dancers Valery and Galina Panov, who founded the Ballet Panov in Ashdod (The Cambridge Dictionary of Judaism and Jewish Culture, Cambridge University Press 2011, edited by Judith R. Baskin, page 125)
The nation's canonical folk songs, known as "Songs of the Land of Israel," deal with the experiences of the pioneers in building the Jewish homeland. The Hora circle dance introduced by early Jewish settlers was originally popular in the Kibbutzim and outlying communities. It became a symbol of the Zionist reconstruction and of the ability to experience joy amidst austerity. It now plays a significant role in modern Israeli folk dancing and is regularly performed at weddings and other celebrations, and in group dances throughout Israel. Modern dance in Israel is a flourishing field, and several Israeli choreographers such as Ohad Naharin, Rami Beer, Barak Marshall and many others, are considered to be among the most versatile and original international creators working today. Famous Israeli companies include the Batsheva Dance Company and the Kibbutz Contemporary Dance Company.
Israel is home to many Palestinian musicians, including internationally acclaimed oud and violin virtuoso Taiseer Elias, singer Amal Murkus, and brothers Samir and Wissam Joubran. Israeli Arab musicians have achieved fame beyond Israel's borders: Elias and Murkus frequently play to audiences in Europe and America, and oud player Darwish Darwish (Prof. Elias's student) was awarded first prize in the all-Arab oud contest in Egypt in 2003. The Jerusalem Academy of Music and Dance has an advanced degree program, headed by Taiseer Elias, in Arabic music.
Cinema and theatre
thumb|right|Habima Theatre, in Tel Aviv
Ten Israeli films have been final nominees for Best Foreign Language Film at the Academy Awards since the establishment of Israel. The 2009 movie Ajami was the third consecutive nomination of an Israeli film. Palestinian Israeli filmmakers have made a number of films dealing with the Arab-Israel conflict and the status of Palestinians within Israel, such as Mohammed Bakri's 2002 film Jenin, Jenin and The Syrian Bride.
Continuing the strong theatrical traditions of the Yiddish theatre in Eastern Europe, Israel maintains a vibrant theatre scene. Founded in 1918, Habima Theatre in Tel Aviv is Israel's oldest repertory theater company and national theater.
Media
The 2016 Freedom of the Press annual report by Freedom House ranked Israel as the Middle East and North Africa's most free country, and 65th globally. In the 2016 Press Freedom Index by Reporters Without Borders, Israel (ranked together with "Israel extraterritorial" since the 2013 ranking) was placed 101st of 180 countries, and third in the Middle East and North Africa region, behind Tunisia (96th) and Lebanon (98th).
Museums
thumb|Shrine of the Book, repository of the Dead Sea Scrolls in Jerusalem
The Israel Museum in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with an extensive collection of Judaica and European art. Israel's national Holocaust museum, Yad Vashem, is the world central archive of Holocaust-related information. Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv University, is an interactive museum devoted to the history of Jewish communities around the world. Apart from the major museums in large cities, there are high-quality artspaces in many towns and kibbutzim. Mishkan Le'Omanut on Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country.
Israel has the highest number of museums per capita in the world. Several Israeli museums are devoted to Islamic culture, including the Rockefeller Museum and the L. A. Mayer Institute for Islamic Art, both in Jerusalem. The Rockefeller specializes in archaeological remains from the Ottoman and other periods of Middle East history. It is also the home of the first hominid fossil skull found in Western Asia called Galilee Man. A cast of the skull is on display at the Israel Museum.
Cuisine
thumb|A meal including falafel, hummus, French fries and Israeli salad
Israeli cuisine includes local dishes as well as dishes brought to the country by Jewish immigrants from the diaspora. Since the establishment of the State in 1948, and particularly since the late 1970s, an Israeli fusion cuisine has developed. Roughly half of the Israeli-Jewish population attests to keeping kosher at home (Uzi Rebhun, Lilakh Lev Ari, American Israelis: Migration, Transnationalism, and Diasporic Identity, BRILL, 2010, pp. 112–113; Julia Bernstein, Food for Thought: Transnational Contested Identities and Food Practices of Russian-Speaking Jewish Migrants in Israel and Germany, Campus Verlag, 2010, pp. 227, 233–234). Kosher restaurants, though rare in the 1960s, make up around 25% of the total, perhaps reflecting the largely secular values of those who dine out (Yael Raviv, Falafel Nation, University of Nebraska Press, 2015). Hotel restaurants are much more likely to serve kosher food. The non-kosher retail market was traditionally sparse, but grew rapidly and considerably following the influx of immigrants from Eastern Europe and Russia during the 1990s. Together with non-kosher fish, rabbits and ostriches, pork—often called "white meat" in Israel (Bernstein, pp. 231–233)—is produced and consumed, though it is forbidden by both Judaism and Islam.
Israeli cuisine has adopted, and continues to adapt, elements of various styles of Jewish cuisine, particularly the Mizrahi, Sephardic, and Ashkenazi styles of cooking, along with Moroccan Jewish, Iraqi Jewish, Ethiopian Jewish, Indian Jewish, Iranian Jewish and Yemeni Jewish influences. It incorporates many foods traditionally eaten in the Arab, Middle Eastern and Mediterranean cuisines, such as falafel, hummus, shakshouka, couscous, and za'atar, which have become common ingredients in Israeli cuisine. Schnitzel, pizza, hamburgers, French fries, rice and salad are also very common in Israel.
Sports
Israel has won nine Olympic medals since its first win in 1992, including a gold medal in windsurfing at the 2004 Summer Olympics. Israel has won over 100 gold medals in the Paralympic Games and is ranked about 15th in the all-time medal count. The 1968 Summer Paralympics were hosted by Israel. The Maccabiah Games, an Olympic-style event for Jewish and Israeli athletes, were inaugurated in the 1930s and have been held every four years since then.
thumb|left|Teddy Stadium of Jerusalem
The most popular spectator sports in Israel are association football and basketball. The Israeli Premier League is the country's premier football league, and the Israeli Basketball Super League is the premier basketball league. Maccabi Haifa, Maccabi Tel Aviv, Hapoel Tel Aviv and Beitar Jerusalem are the largest sports clubs. Maccabi Tel Aviv, Maccabi Haifa and Hapoel Tel Aviv have competed in the UEFA Champions League and Hapoel Tel Aviv reached the UEFA Cup quarter-finals. Maccabi Tel Aviv B.C. has won the European championship in basketball six times. In 2016, the country was chosen as a host for the official 2017 EuroBasket.
In 1964 Israel hosted and won the Asian Nations Cup; in 1970 the Israel national football team qualified for the FIFA World Cup, the only time it has participated in the World Cup. The 1974 Asian Games, held in Tehran, were the last Asian Games in which Israel participated; they were marred by the refusal of Arab countries to compete against Israel. Israel was excluded from the 1978 Asian Games and has not competed in Asian sporting events since. In 1994, UEFA agreed to admit Israel, and Israeli football teams now compete in Europe.
thumb|Boris Gelfand, chess Grandmaster
Chess is a leading sport in Israel and is enjoyed by people of all ages. There are many Israeli grandmasters, and Israeli chess players have won a number of youth world championships. Israel stages an annual international championship and hosted the World Team Chess Championship in 2005. The Ministry of Education and the World Chess Federation agreed upon a project of teaching chess within Israeli schools, and it has been introduced into the curriculum of some schools. The city of Beersheba has become a national chess center, with the game being taught in the city's kindergartens. Owing partly to Soviet immigration, it is home to the largest number of chess grandmasters of any city in the world. The Israeli chess team won the silver medal at the 2008 Chess Olympiad and the bronze, coming in third among 148 teams, at the 2010 Olympiad. Israeli grandmaster Boris Gelfand won the Chess World Cup in 2009 and the 2011 Candidates Tournament for the right to challenge the world champion. He lost the World Chess Championship 2012 to reigning world champion Anand only after a speed-chess tiebreaker.
Israeli tennis champion Shahar Pe'er was ranked 11th in the world on 31 January 2011. Krav Maga, a martial art developed by Jewish ghetto defenders during the struggle against fascism in Europe, is used by the Israeli security forces and police. Its effectiveness and practical approach to self-defense have won it widespread admiration and adherence around the world.
See also
Index of Israel-related articles
Outline of Israel
Notes
References
Bibliography
External links
Government
Government services and information website
About Israel at the Israel Ministry of Foreign Affairs
Official website of the Israel Prime Minister's Office
Official website of the Israel Ministry of Tourism
Official website of the Israel Central Bureau of Statistics
General information
Israel at the Jewish Virtual Library
Key Development Forecasts for Israel from International Futures
Maps
Category:Arabic-speaking countries and territories
Category:Hebrew words and phrases
Category:Liberal democracies
Category:Member states of the Union for the Mediterranean
Category:Member states of the United Nations
Category:Middle Eastern countries
Category:Near Eastern countries
Category:Republics
Category:States and territories established in 1948
Category:Articles containing video clips | 9,282,173 | 2017-01 |
Neoclassical architecture | thumb|300px|The Cathedral of Vilnius
Neoclassical architecture is an architectural style produced by the neoclassical movement that began in the mid-18th century. In its purest form, it is a style principally derived from the architecture of classical antiquity, the Vitruvian principles, and the work of the Italian architect Andrea Palladio.
In form, Neoclassical architecture emphasizes the wall rather than chiaroscuro and maintains separate identities for each of its parts. The style is manifested both in its details, as a reaction against the Rococo style of naturalistic ornament, and in its architectural formulae, as an outgrowth of some classicising features of Late Baroque. Neoclassical architecture is still designed today, but may be labelled New Classical Architecture for contemporary buildings.
In Central and Eastern Europe, the style is usually referred to as Classicism, while the newer revival styles of the 19th century until today are called Neoclassical.
History
Intellectually, Neoclassicism was symptomatic of a desire to return to the perceived "purity" of the arts of Rome, to the more vague perception ("ideal") of Ancient Greek arts and, to a lesser extent, 16th-century Renaissance Classicism, which was also a source for academic Late Baroque architecture.
Many early 19th-century neoclassical architects were influenced by the drawings and projects of Étienne-Louis Boullée and Claude Nicolas Ledoux. The many graphite drawings of Boullée and his students depict spare geometrical architecture that emulates the eternality of the universe. There are links between Boullée's ideas and Edmund Burke's conception of the sublime. Ledoux addressed the concept of architectural character, maintaining that a building should immediately communicate its function to the viewer: taken literally such ideas give rise to "architecture parlante".
Palladianism
thumb|right|250px|Palladian revival: Stourhead House, designed by Colen Campbell and completed in 1720. The design is based on Palladio's Villa Emo.
A return to more classical architectural forms as a reaction to the Rococo style can be detected in some European architecture of the earlier 18th century, most vividly represented in the Palladian architecture of Georgian Britain and Ireland.
The baroque style had never truly been to the English taste. Four influential books were published in the first quarter of the 18th century which highlighted the simplicity and purity of classical architecture: Vitruvius Britannicus (Colen Campbell 1715), Palladio's Four Books of Architecture (1715), De Re Aedificatoria (1726) and The Designs of Inigo Jones... with Some Additional Designs (1727). The most popular was the four-volume Vitruvius Britannicus by Colen Campbell. The book contained architectural prints of famous British buildings that had been inspired by the great architects from Vitruvius to Palladio. At first the book mainly featured the work of Inigo Jones, but the later tomes contained drawings and plans by Campbell and other 18th-century architects. Palladian architecture became well established in 18th-century Britain.
thumb|left|225px|Woburn Abbey, an excellent example of English Palladianism, designed by Burlington's student Henry Flitcroft in 1746.
At the forefront of the new school of design was the aristocratic "architect earl", Richard Boyle, 3rd Earl of Burlington; in 1729, he and William Kent designed Chiswick House. The house was a reinterpretation of Palladio's Villa Capra, but purified of 16th-century elements and ornament. This severe lack of ornamentation was to be a feature of Palladianism. In 1734 William Kent and Lord Burlington designed one of England's finest examples of Palladian architecture, Holkham Hall in Norfolk. The main block of this house followed Palladio's dictates quite closely, but Palladio's low, often detached, wings of farm buildings were elevated in significance.
This classicising vein was also detectable, to a lesser degree, in the Late Baroque architecture in Paris, such as in Perrault's east range of the Louvre. This shift was even visible in Rome at the redesigned façade for S. Giovanni in Laterano.
Neoclassicism
thumb|250px|right|Altes Museum, built by Karl Friedrich Schinkel in Berlin.
By the mid-18th century, the movement broadened to incorporate a greater range of Classical influences, including those from Ancient Greece. The shift to neoclassical architecture is conventionally dated to the 1750s. It first gained influence in England and France; in England, Sir William Hamilton's excavations at Pompeii and other sites, the influence of the Grand Tour, and the work of William Chambers and Robert Adam were pivotal in this regard. In France, the movement was propelled by a generation of French art students trained in Rome, and was influenced by the writings of Johann Joachim Winckelmann. The style was also adopted by progressive circles in other countries such as Sweden and Russia.
Neoclassical architecture was an international style, exemplified in Karl Friedrich Schinkel's buildings, especially the Old Museum in Berlin; Sir John Soane's Bank of England in London; and the newly built White House and Capitol in Washington, D.C., of the nascent American Republic.
A second neoclassic wave, more severe, more studied and more consciously archaeological, is associated with the height of the Napoleonic Empire. In France, the first phase of neoclassicism was expressed in the "Louis XVI style", and the second in the styles called "Directoire" or Empire. The Rococo style remained popular in Italy until the Napoleonic regimes brought the new archaeological classicism, which was embraced as a political statement by young, progressive, urban Italians with republican leanings.
In the decorative arts, neoclassicism is exemplified in French furniture of the Empire style; the English furniture of Chippendale, George Hepplewhite and Robert Adam, Wedgwood's bas reliefs and "black basaltes" vases, and the Biedermeier furniture of Austria. The Scottish architect Charles Cameron created palatial Italianate interiors for the German-born Catherine II the Great in St. Petersburg.
Interior design
thumb|Château de Malmaison, 1800, room for the Empress Joséphine, on the cusp between Directoire style and Empire style
Indoors, neoclassicism brought a rediscovery of the genuine classical interior, inspired by the finds at Pompeii and Herculaneum. These excavations had begun in the late 1740s, but only achieved a wide audience in the 1760s, with the first luxurious volumes of Le Antichità di Ercolano (The Antiquities of Herculaneum), issued under tightly controlled distribution. The antiquities of Herculaneum showed that even the most classicising interiors of the Baroque, or the most "Roman" rooms of William Kent, were based on basilica and temple exterior architecture turned outside in, hence their often bombastic appearance to modern eyes: pedimented window frames turned into gilded mirrors, fireplaces topped with temple fronts.
The new interiors sought to recreate an authentically Roman and genuinely interior vocabulary. Techniques employed in the style included flatter, lighter motifs, sculpted in low frieze-like relief or painted in monotones en camaïeu ("like cameos"), isolated medallions or vases or busts or bucrania or other motifs, suspended on swags of laurel or ribbon, with slender arabesques against backgrounds, perhaps, of "Pompeiian red" or pale tints, or stone colours. The style in France was initially a Parisian style, the Goût grec ("Greek style"), not a court style; when Louis XVI acceded to the throne in 1774, Marie Antoinette, his fashion-loving Queen, brought the "Louis XVI" style to court.
thumb|300px|left|Interior of Home House in London, designed by Robert Adam in 1777 in the Adam style.
However, there was no real attempt to employ the basic forms of Roman furniture until around the turn of the century, and furniture-makers were more likely to borrow from ancient architecture, just as silversmiths were more likely to take from ancient pottery and stone-carving than metalwork: "Designers and craftsmen ... seem to have taken an almost perverse pleasure in transferring motifs from one medium to another".Honour, 110–111, 110 quoted
A new phase in neoclassical design was inaugurated by Robert and James Adam, who travelled in Italy and Dalmatia in the 1750s, observing the ruins of the classical world. On their return to Britain, they published a book entitled The Works in Architecture in installments between 1773 and 1779. This book of engraved designs made the Adam repertory available throughout Europe. The Adam brothers aimed to simplify the rococo and baroque styles which had been fashionable in the preceding decades, to bring what they felt to be a lighter and more elegant feel to Georgian houses. The Works in Architecture illustrated the main buildings the Adam brothers had worked on and crucially documented the interiors, furniture and fittings, designed by the Adams.
Greek revival
thumb|left|300px|Saint Isaac's Cathedral in Saint Petersburg
From about 1800 a fresh influx of Greek architectural examples, seen through the medium of etchings and engravings, gave a new impetus to neoclassicism, the Greek Revival. There was little to no direct knowledge of Greek civilization before the middle of the 18th century in Western Europe, when an expedition funded by the Society of Dilettanti in 1751 and led by James Stuart and Nicholas Revett began serious archaeological enquiry. Stuart was commissioned after his return from Greece by George Lyttelton to produce the first Greek building in England, the garden temple at Hagley Hall (1758–59).Though Giles Worsley detects the first Grecian influenced architectural element in the windows of Nuneham Park from 1756, see Giles Worsley, "The First Greek Revival Architecture", The Burlington Magazine, Vol. 127, No. 985 (April 1985), pp. 226–229. A number of British architects in the second half of the century took up the expressive challenge of the Doric from their aristocratic patrons, including Joseph Bonomi and John Soane, but it was to remain the private enthusiasm of connoisseurs up to the first decade of the 19th century.
thumb|right|Thomas Hamilton's design for the Royal High School, Edinburgh, 1831.
Seen in its wider social context, Greek Revival architecture sounded a new note of sobriety and restraint in public buildings in Britain around 1800 as an assertion of nationalism attendant on the Act of Union, the Napoleonic Wars, and the clamour for political reform. It was to be William Wilkins's winning design for the public competition for Downing College, Cambridge that announced the Greek style was to be the dominant idiom in architecture. Wilkins and Robert Smirke went on to build some of the most important buildings of the era, including the Theatre Royal, Covent Garden (1808–09), the General Post Office (1824–29) and the British Museum (1823–48), Wilkins University College London (1826–30) and the National Gallery (1832–38). In Scotland, Thomas Hamilton (1784–1858), in collaboration with the artists Andrew Wilson (1780–1848) and Hugh William Williams (1773–1829) created monuments and buildings of international significance; the Burns Monument at Alloway (1818) and the (Royal) High School in Edinburgh (1823–29).
At the same time the Empire style in France was a more grandiose wave of neoclassicism in architecture and the decorative arts. Mainly based on Imperial Roman styles, it originated in, and took its name from, the rule of Napoleon I in the First French Empire, where it was intended to idealize Napoleon's leadership and the French state. The style corresponds to the more bourgeois Biedermeier style in the German-speaking lands, Federal style in the United States, the Regency style in Britain, and the Napoleonstil in Sweden. According to the art historian Hugh Honour "so far from being, as is sometimes supposed, the culmination of the Neo-classical movement, the Empire marks its rapid decline and transformation back once more into a mere antique revival, drained of all the high-minded ideas and force of conviction that had inspired its masterpieces".Honour, 171–184, 171 quoted
Neoclassicism continued to be a major force in academic art through the 19th century and beyond—a constant antithesis to Romanticism or Gothic revivals— although from the late 19th century on it had often been considered anti-modern, or even reactionary, in influential critical circles. The centres of several European cities, notably St Petersburg and Munich, came to look much like museums of Neoclassical architecture.
Characteristics
thumb|A. Rinaldi. The White hall of the Gatchina palace. 1760s. An early example of the Italianate neoclassical interior design in Russian architecture.
High neoclassicism was an international movement. Though neoclassical architecture employed the same classical vocabulary as Late Baroque architecture, it tended to emphasize its planar qualities, rather than sculptural volumes. Projections and recessions and their effects of light and shade were more flat; sculptural bas-reliefs were flatter and tended to be enframed in friezes, tablets or panels. Its clearly articulated individual features were isolated rather than interpenetrating, autonomous and complete in themselves.
thumb|200px|left|The L'Enfant Plan for Washington, D.C., as revised by Andrew Ellicott in 1792.
Neoclassicism also influenced city planning; the ancient Romans had used a consolidated scheme for city planning for both defence and civil convenience; however, the roots of this scheme go back to even older civilizations. At its most basic, the grid system of streets, a central forum with city services, two main slightly wider boulevards, and the occasional diagonal street were characteristic of the very logical and orderly Roman design. Ancient façades and building layouts were oriented to these city design patterns, and they tended to work in proportion with the importance of public buildings.
Many of these urban planning patterns found their way into the first modern planned cities of the 18th century. Exceptional examples include Karlsruhe and Washington, D.C. Not all planned cities and planned neighbourhoods are designed on neoclassical principles, however. Opposing models may be found in Modernist designs exemplified by Brasília, the garden city movement, Levittowns, and New Urbanism.
Regional trends
Britain
thumb|left|250px|The central courtyard of Sir William Chambers' Somerset House in London.
From the middle of the 18th century, exploration and publication changed the course of British architecture towards a purer vision of the Ancient Greco-Roman ideal. James 'Athenian' Stuart's work The Antiquities of Athens and Other Monuments of Greece was very influential in this regard, as were Robert Wood's Palmyra and Baalbec. A combination of simple forms and high levels of enrichment was adopted by the majority of contemporary British architects and designers. The revolution begun by Stuart was soon to be eclipsed by the work of the Adam Brothers, James Wyatt, Sir William Chambers, George Dance, James Gandon and provincially based architects such as John Carr and Thomas Harrison of Chester.
In the early 20th century, the writings of Albert Richardson were responsible for a re-awakening of interest in pure neoclassical design. Vincent Harris (compare Harris's colonnaded and domed interior of Manchester Central Reference Library to the colonnaded and domed interior by John Carr and R R Duke), Bradshaw Gass & Hope and Percy Thomas were among those who designed public buildings in the neoclassical style in the interwar period. In the British Raj in India, Sir Edwin Lutyens' monumental city planning for New Delhi marked the sunset of neoclassicism.
In Scotland and the north of England, where the Gothic Revival was less strong, architects continued to develop the neoclassical style of William Henry Playfair. The works of Cuthbert Brodrick and Alexander Thomson show that by the end of the 19th century the results could be powerful and eccentric.
France
thumb|250px|left|Saint Louis church in La Roche-sur-Yon 1812/1830
thumb|right|250px|Château de Montmusard (1765), by Charles de Wailly.
The first phase of neoclassicism in France is expressed in the "Louis XVI style" of architects like Ange-Jacques Gabriel (Petit Trianon, 1762–68); the second phase, in the styles called Directoire and "Empire", might be characterized by Jean Chalgrin's severe astylar Arc de Triomphe (designed in 1806). In England the two phases might be characterized first by the structures of Robert Adam, the second by those of Sir John Soane. The interior style in France was initially a Parisian style, the "Goût grec" ("Greek style"), not a court style. Only when Louis XVI acceded to the throne in 1774 did Marie Antoinette, his fashion-loving Queen, bring the "Louis XVI" style to court.
Although several European cities — notably St Petersburg, Athens, Berlin and Munich — were transformed into veritable museums of Greek Revival architecture, the Greek Revival in France was never popular with either the State or the public.
What little there was started with Charles de Wailly's crypt in the church of St Leu-St Gilles (1773–80) and Claude Nicolas Ledoux's Barrière des Bonshommes (1785–89). First-hand evidence of Greek architecture was of very little importance to the French, due to the influence of Marc-Antoine Laugier's doctrines that sought to discern the principles of the Greeks instead of their mere practices. It would take until Labrouste's Néo-Grec of the Second Empire for the Greek Revival to flower briefly in France.
Hungary
thumb|left|200px|Cathedral of Vác by I. M. A. Ganneval, 1762–1777
thumb|right|250px|Hungarian National Museum, Budapest by Mihály Pollack, 1837-1847
The earliest examples of neoclassical architecture in Hungary may be found in Vác. In this town the triumphal arch and the neoclassical façade of the baroque Cathedral were designed by the French architect Isidor Marcellus Amandus Ganneval (Isidore Canevale) in the 1760s. The garden façade of the Esterházy Palace (1797–1805) in Kismarton (today Eisenstadt in Austria) is also the work of a French architect, Charles Moreau. The two principal architects of Neoclassicism in Hungary were Mihály Pollack and József Hild. Pollack's major work is the Hungarian National Museum (1837–1844). Hild is famous for his designs for the cathedrals of Eger and Esztergom. The Reformed Great Church of Debrecen is an outstanding example of the many Protestant churches that were built in the first half of the 19th century. This was the time of the first iron structures in Hungarian architecture, the most important of which is the Chain Bridge (Budapest) by William Tierney Clark.
Malta
thumb|250px|The Rotunda of Mosta, which was built between 1833 and 1860
Neoclassical architecture was introduced in Malta in the late 18th century, during the final years of Hospitaller rule. Early examples include the Bibliotheca (1786), the De Rohan Arch (1798) and the Hompesch Gate (1801). However, neoclassical architecture only became popular in Malta following the establishment of British rule in the early 19th century. In 1814, a neoclassical portico decorated with the British coat of arms was added to the Main Guard building so as to serve as a symbol of British Malta. Other 19th century neoclassical buildings include RNH Bighi (1832), St Paul's Pro-Cathedral (1844), the Rotunda of Mosta (1860) and the now destroyed Royal Opera House (1866).
Neoclassicism gave way to other architectural styles by the late 19th century. Few buildings were built in the neoclassical style during the 20th century, such as the Domvs Romana museum (1922), and the Courts of Justice building in Valletta (1965–71).
Polish–Lithuanian Commonwealth
The center of Polish Neoclassicism was Warsaw under the rule of the last Polish king Stanisław August Poniatowski. Vilnius University was another important center of the Neoclassical architecture in Europe, led by notable professors of architecture Marcin Knackfus, Laurynas Gucevicius and Karol Podczaszyński. The style was expressed in the shape of main public buildings, such as the University's Observatory, Vilnius Cathedral and the town hall.
The best-known architects and artists, who worked in Polish–Lithuanian Commonwealth were Dominik Merlini, Jan Chrystian Kamsetzer, Szymon Bogumił Zug, Jakub Kubicki, Antonio Corazzi, Efraim Szreger, Christian Piotr Aigner and Bertel Thorvaldsen.
thumb|left|A Russian Orthodox church near Lake Baikal in Siberia (built in 1816).
Russia
In the Russian Empire at the end of the 19th century, neoclassical architecture was virtually synonymous with Saint Petersburg architecture, since a great number of buildings in the city were designed in this style.
In the Soviet Union (1917–1991), neoclassical architecture was very popular among the political elite, as it effectively expressed state power, and a vast array of neoclassical buildings was erected all over the country.
Soviet neoclassical architecture was exported to other socialist countries of the Eastern Bloc, as a gift from the Soviet Union. Examples of this include the Palace of Culture and Science in Warsaw, Poland, and the Shanghai Exhibition Centre (originally the Sino-Soviet Friendship Building) in Shanghai, China.
Spain
thumb|Prado Museum in Madrid, by Juan de Villanueva
Spanish Neoclassicism was exemplified by the work of Juan de Villanueva, who adapted Burke's theories of beauty and the sublime to the requirements of Spanish climate and history. He built the Prado Museum, which combined three functions — an academy, an auditorium and a museum — in one building with three separate entrances.
This was part of the ambitious program of Charles III, who intended to make Madrid the capital of the arts and sciences. Very close to the museum, Villanueva built the Astronomical Observatory. He also designed several summer houses for the kings in El Escorial and Aranjuez and reconstructed the Plaza Mayor, the main square of Madrid, among other important works. Villanueva's pupils expanded the Neoclassical style in Spain.
The Third Reich
Neoclassical architecture was the preferred style by the leaders of the National Socialist movement in the Third Reich, especially admired by Adolf Hitler himself. Hitler commissioned his favourite architect, Albert Speer, to plan a re-design of Berlin as a city comprising imposing neoclassical structures, which would be renamed as Welthauptstadt Germania, the centrepiece of Hitler's Thousand Year Reich.
These plans never came to fruition due to the eventual downfall of Nazi Germany and the suicide of its leader.
thumb|left|The Lincoln Memorial, an early 20th century example of American Renaissance neoclassical architecture.
United States
In the new republic, Robert Adam's neoclassical manner was adapted for the local late 18th- and early 19th-century style, called "Federal architecture". One of the pioneers of this style was English-born Benjamin Henry Latrobe, who is often noted as one of America's first formally trained professional architects and the father of American architecture. The Baltimore Basilica, the first Roman Catholic cathedral in the United States, is considered by many experts to be Latrobe's masterpiece.
The widespread use of neoclassicism in American architecture, as well as by French revolutionary regimes, and the general tenor of rationalism associated with the movement, all created a link between neoclassicism and republicanism and radicalism in much of Europe. The Gothic Revival can be seen as an attempt to present a monarchist and conservative alternative to neoclassicism.
In later 19th-century American architecture, neoclassicism was one expression of the American Renaissance movement, ca 1880–1917. Its last manifestation was in Beaux-Arts architecture (1885–1920), and its very last, large public projects in the United States include the Lincoln Memorial (1922), the National Gallery in Washington, D.C. (1937), and the American Museum of Natural History's Roosevelt Memorial (1936).
Today, there is a small revival of Classical Architecture, as evidenced by groups such as the Institute of Classical Architecture and Classical America. The School of Architecture at the University of Notre Dame currently teaches a fully Classical curriculum.
Neoclassical architecture today
thumb|right|The Keating Millennium Centre at St. Francis Xavier University, Canada, completed in 2001
After a lull during the period of modern architectural dominance (roughly post-World War II until the mid-1980s), neoclassicism has seen something of a resurgence. This rebirth can be traced to the New Urbanism movement and postmodern architecture's ironic embrace of classical elements, especially in light of the dominance of Modernism. While some continued to work with classicism ironically, some architects, such as Thomas Gordon Smith, began to consider classicism seriously.
While some schools had interest in classical architecture, such as the University of Virginia, no school was purely dedicated to classical architecture. In the early 1990s a program in classical architecture was started by Smith and Duncan Stroik at the University of Notre Dame that continues successfully.School of Architecture at the University of Notre Dame "Twenty years ago the curriculum was reformed to focus on traditional and classical architecture and urbanism." Programs at the University of Miami, Andrews University, Judson University and The Prince's Foundation for Building Community have trained a number of new classical architects since this resurgence. Today one can find numerous buildings embracing neoclassical style, since a generation of architects trained in this discipline shapes urban planning.
As of the first decade of the 21st century, contemporary neoclassical architecture is usually classed under the umbrella term of New Classical Architecture. Sometimes it is also referred to as Neo-Historicism/Revivalism, Traditionalism or simply neoclassical architecture, like the historical style (Neo-classicist Architecture. Traditionalism. Historicism.). For sincere traditional-style architecture that sticks to regional architecture, materials and craftsmanship, the term Traditional Architecture (or vernacular) is mostly used. The Driehaus Architecture Prize is awarded to major contributors in the field of 21st-century traditional or classical architecture, and comes with prize money twice as high as that of the modernist Pritzker Prize (Driehaus Prize for New Classical Architecture at Notre Dame SoA: "Together, the $200,000 Driehaus Prize and the $50,000 Reed Award represent the most significant recognition for classicism in the contemporary built environment"; retrieved 7 March 2014).
Regional developments
In the United States various contemporary public buildings are built in neoclassical style, with the 2006 Schermerhorn Symphony Center in Nashville being an example.
In Britain a number of architects are active in the neoclassical style. Two new university libraries, Quinlan Terry's Maitland Robinson Library at Downing College and ADAM Architecture's Sackler Library, illustrate that the approach taken can range from the traditional, in the former case, to the unconventional, in the latter. Recently, Prince Charles attracted controversy for promoting a classically designed development on the site of the former Chelsea Barracks in London. Writing to the Qatari royal family (who were funding the development through the property development company Qatari Diar), he condemned the approved modernist plans, instead advocating a classical approach. His appeal was met with success and the plans were withdrawn. A new design by architecture house Dixon Jones is currently being drafted.
See also
Neo-Historism
New Urbanism
Federal Period
Nordic Classicism
Neoclassical architecture in Milan
John Carr
Robert Adam
Sir William Chambers
References
Further reading
Détournelle, Athanase, Recueil d'architecture nouvelle, A Paris : Chez l'auteur, 1805
Dowling, Elizabeth Meredith, New Classicism, Rizzoli, 2004 ISBN 978-0-8478-2660-5
Gabriel, Jean-François, Classical Architecture for the Twenty-first Century, Norton, 2004
Groth, Håkan, Neoclassicism in the North: Swedish Furniture and Interiors, 1770–1850
Honour, Hugh, Neoclassicism
Irwin, David, Neoclassicism (in series Art and Ideas) Phaidon, paperback, 1997
Lorentz, Stanislaw, Neoclassicism in Poland (Series History of art in Poland)
McCormick, Thomas, Charles-Louis Clérisseau and the Genesis of Neoclassicism Architectural History Foundation, 1991
Praz, Mario. On Neoclassicism
Rawle, Tim (author), Tim Rawle and Louis Sinclair (photographers), John Adamson (editor), A Classical Adventure: The Architectural History of Downing College, Cambridge, Cambridge, The Oxbridge Portfolio, 2015, 200 pp. ISBN 978 0 9572867 4 0
Skurman, Andrew, Contemporary Classical: The Architecture of Andrew Skurman, Princeton Architectural Press, 2012 ISBN 978-1-61689-088-9
External links
Institute of Classical Architecture and Art
Traditional Architecture Group
A01
Category:Revival architectural styles
Category:Architectural styles
Category:18th-century architecture
Category:19th-century architecture
Category:20th-century architecture | 2,682,331 | 2017-01 |
Elevator | right|thumb|A set of lifts in the lower level of Borough station on the London Underground Northern line. The "up" and "down" arrows indicate each lift's position and direction of travel. Notice how the next lift is indicated with a right and left arrow by the words "Next Lift" at the top.
thumb|This elevator to the Alexanderplatz U-Bahn station in Berlin is built with glass walls, exposing the inner workings.
thumb|Glass elevator traveling up the facade of Westport Plaza. An HVAC unit is on top of the car because the elevator is completely outside.
thumb|Freight elevator at North Carolina State University. The doors open vertically.
An elevator (US and Canada) or lift (UK; Australia: http://www.plslifts.com.au/ http://www.residentiallift.com.au/ http://www.harwellifts.com.au/; Ireland: http://www.ennislifts.ie/ http://www.pickerings.ie/; New Zealand: http://www.cremerlifts.co.nz/ http://www.nzes.co.nz/; South Africa: http://www.shortslifts.co.za/profile.htm) is a type of vertical transportation that moves people or goods between floors (levels, decks) of a building, vessel, or other structure. Elevators are generally powered by electric motors that either drive traction cables or counterweight systems like a hoist, or pump hydraulic fluid to raise a cylindrical piston like a jack.
In agriculture and manufacturing, an elevator is any type of conveyor device used to lift materials in a continuous stream into bins or silos. Several types exist, such as the chain and bucket elevator, grain auger screw conveyor using the principle of Archimedes' screw, or the chain and paddles or forks of hay elevators.
Languages other than English may have loanwords based on either elevator or lift.
thumb|Elevator lobby at the Forest Glen Washington Metro station in Silver Spring, Maryland
Because of wheelchair access laws, elevators are often a legal requirement in new multistory buildings, especially where wheelchair ramps would be impractical.
History
Pre-industrial era
thumb|left|Elevator design by the German engineer Konrad Kyeser (1405)
The earliest known reference to an elevator is in the works of the Roman architect Vitruvius, who reported that Archimedes (c. 287 BC – c. 212 BC) built his first elevator probably in 236 BC."Laying the foundation for today's skyscrapers". San Francisco Chronicle. August 23, 2008. Some sources from later historical periods mention elevators as cabs on a hemp rope powered by hand or by animals.
In 1000, the Book of Secrets by al-Muradi in Islamic Spain described the use of an elevator-like lifting device to raise a large battering ram to destroy a fortress. In the 17th century, prototype elevators were installed in palace buildings in England and France. Louis XV of France had a so-called 'flying chair' built for one of his mistresses at the Chateau de Versailles in 1743.
Ancient and medieval elevators used drive systems based on hoists or winders. The invention of a system based on the screw drive was perhaps the most important step in elevator technology since ancient times, leading to the creation of modern passenger elevators. The first screw drive elevator was built by Ivan Kulibin and installed in Winter Palace in 1793. Several years later another of Kulibin's elevators was installed in Arkhangelskoye near Moscow.
Industrial era
The development of elevators was led by the need for movement of raw materials including coal and lumber from hillsides. The technology developed by these industries and the introduction of steel beam construction worked together to provide the passenger and freight elevators in use today.
Starting in the coal mines, by the mid-19th century elevators were operated with steam power and were used for moving goods in bulk in mines and factories. These steam driven devices were soon being applied to a diverse set of purposes – in 1823, two architects working in London, Burton and Hormer, built and operated a novel tourist attraction, which they called the "ascending room". It elevated paying customers to a considerable height in the center of London, allowing them a magnificent panoramic view of downtown.
Early, crude steam-driven elevators were refined in the ensuing decade; in 1835 an innovative elevator called the "Teagle" was developed by the company Frost and Stutt in England. The elevator was belt-driven and used a counterweight for extra power.
The hydraulic crane was invented by Sir William Armstrong in 1846, primarily for use at the Tyneside docks for loading cargo. These quickly supplanted the earlier steam driven elevators: exploiting Pascal's law, they provided a much greater force. A water pump supplied a variable level of water pressure to a plunger encased inside a vertical cylinder, allowing the level of the platform (carrying a heavy load) to be raised and lowered. Counterweights and balances were also used to increase the lifting power of the apparatus.
Henry Waterman of New York is credited with inventing the "standing rope control" for an elevator in 1850.
In 1845, the Neapolitan architect Gaetano Genovese installed in the Royal Palace of Caserta the "Flying Chair", an elevator ahead of its time, covered with chestnut wood outside and with maple wood inside. It included a light, two benches and a hand operated signal, and could be activated from the outside, without any effort on the part of the occupants. Traction was controlled by a motor mechanic utilizing a system of toothed wheels. A safety system was designed to take effect if the cords broke. It consisted of a beam pushed outwards by a steel spring.
thumb|Elisha Otis demonstrating his safety system, Crystal Palace, 1853
In 1852, Elisha Otis introduced the safety elevator, which prevented the fall of the cab if the cable broke. The design of the Otis safety elevator is somewhat similar to one type still used today. A governor device engages knurled roller(s), locking the elevator to its guides should the elevator descend at excessive speed. He demonstrated it at the New York exposition in the Crystal Palace in a dramatic, death-defying presentation in 1854,"Skyscrapers," Magical Hystory Tour: The Origins of the Commonplace & Curious in America (September 1, 2010). and the first such passenger elevator was installed at 488 Broadway in New York City on March 23, 1857.
thumb|Elisha Otis's elevator patent drawing, 15 January 1861
The first elevator shaft preceded the first elevator by four years. Construction for Peter Cooper's Cooper Union Foundation building in New York began in 1853. An elevator shaft was included in the design, because Cooper was confident that a safe passenger elevator would soon be invented. The shaft was cylindrical because Cooper thought it was the most efficient design. Later, Otis designed a special elevator for the building. Today the Otis Elevator Company, now a subsidiary of United Technologies Corporation, is the world's largest manufacturer of vertical transport systems.
The Equitable Life Building completed in 1870 in New York City was the first office building to have passenger elevators.
The first electric elevator was built by Werner von Siemens in 1880 in Germany. The inventor Anton Freissler developed the ideas of von Siemens and built up a successful enterprise in Austria-Hungary. The safety and speed of electric elevators were significantly enhanced by Frank Sprague who added floor control, automatic elevators, acceleration control of cars, and safeties. His elevator ran faster and with larger loads than hydraulic or steam elevators, and 584 electric elevators were installed before Sprague sold his company to the Otis Elevator Company in 1895. Sprague also developed the idea and technology for multiple elevators in a single shaft.
In 1882, when hydraulic power was a well established technology, a company later named the London Hydraulic Power Company was formed by Edward B. Ellington and others. It constructed a network of high-pressure mains on both sides of the Thames which, ultimately, extended to 184 miles and powered some 8,000 machines, predominantly elevators (lifts) and cranes.Ralph Turvey, London Lifts and Hydraulic Power, Transactions of the Newcomen Society, Vol. 65, 1993–94, pp. 147–164
In 1874, J.W. Meaker patented a method which permitted elevator doors to open and close safely. In 1887, American Inventor Alexander Miles of Duluth, Minnesota patented an elevator with automatic doors that would close off the elevator shaft.
The first elevator in India was installed at the Raj Bhavan in Calcutta (now Kolkata) by Otis in 1892.
By 1900, completely automated elevators were available, but passengers were reluctant to use them. A 1945 elevator operator strike in New York City, together with the addition of an emergency stop button, an emergency telephone, and a soothing explanatory automated voice, helped win riders over (Remembering When Driverless Elevators Drew Skepticism).
In 2000, the first vacuum elevator was offered commercially in Argentina.
Design
Elevators are generally considered to have begun as simple rope or chain hoists (see Traction elevators below). An elevator is essentially a platform that is either pulled or pushed up by mechanical means. A modern-day elevator consists of a cab (also called a "cage", "carriage" or "car") mounted on a platform within an enclosed space called a shaft or sometimes a "hoistway". In the past, elevator drive mechanisms were powered by steam, water hydraulic pistons, or by hand. In a "traction" elevator, cars are pulled up by means of rolling steel ropes over a deeply grooved pulley, commonly called a sheave in the industry. The weight of the car is balanced by a counterweight. Sometimes two elevators are built so that their cars always move synchronously in opposite directions, and are each other's counterweight.
The friction between the ropes and the pulley furnishes the traction which gives this type of elevator its name.
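The effect of the counterweight can be illustrated with a short calculation. The sketch below is illustrative only: the masses are hypothetical, and sizing the counterweight at the car mass plus roughly 40–50% of the rated load is a common industry rule of thumb rather than a figure taken from this article.

# Illustrative sketch (hypothetical values) of how the counterweight
# reduces the net load the traction machine must move.
car_mass = 1000.0                                # kg, empty car (assumed)
rated_load = 1000.0                              # kg, rated passenger load (assumed)
counterweight = car_mass + 0.45 * rated_load     # rule of thumb: car + 40-50% of rated load

for load in (0.0, 0.5 * rated_load, rated_load):
    imbalance = (car_mass + load) - counterweight    # net mass the machine actually lifts
    print(f"passenger load {load:6.0f} kg -> net imbalance {imbalance:+6.0f} kg")

Near half load the car side and the counterweight side nearly balance, so the motor chiefly overcomes friction and accelerates the moving masses rather than lifting the full weight of a loaded car.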
Hydraulic elevators use the principles of hydraulics (in the sense of hydraulic power) to pressurize an above ground or in-ground piston to raise and lower the car (see Hydraulic elevators below). Roped hydraulics use a combination of both ropes and hydraulic power to raise and lower cars. Recent innovations include permanent magnet motors, machine room-less rail mounted gearless machines, and microprocessor controls.
The technology used in new installations depends on a variety of factors. Hydraulic elevators are cheaper, but installing cylinders greater than a certain length becomes impractical for very-high lift hoistways. For buildings of much over seven floors, traction elevators must be employed instead. Hydraulic elevators are usually slower than traction elevators.
Elevators are a candidate for mass customization. There are economies to be made from mass production of the components, but each building comes with its own requirements, such as the number of floors, the dimensions of the well, and usage patterns.
Elevator doors
Elevator doors protect riders from falling into the shaft. The most common configuration is to have two panels that meet in the middle, and slide open laterally. In a cascading telescopic configuration (potentially allowing wider entryways within limited space), the doors roll on independent tracks so that while open, they are tucked behind one another, and while closed, they form cascading layers on one side. This can be configured so that two sets of such cascading doors operate like the center opening doors described above, allowing for a very wide elevator cab. In less expensive installations the elevator can also use one large "slab" door: a single panel door the width of the doorway that opens to the left or right laterally. Some buildings have elevators with the single door on the shaft way, and double cascading doors on the cab.
Machine room-less (MRL) elevators
thumb|Kone EcoDisc. The entire drive system is in the hoistway.
Machine room-less elevators are designed so that most of the components fit within the shaft containing the elevator car, while a small cabinet houses the elevator controller. Other than the machinery being in the hoistway, the equipment is similar to that of a normal traction or hole-less hydraulic elevator. The world's first machine room-less elevator, the Kone MonoSpace, was introduced by Kone in 1996. The benefits are:
Creates more usable space
Uses less energy (70–80% less than standard hydraulic elevators)
Uses no oil (assuming it is a traction elevator)
All components are above ground, similar to roped hydraulic type elevators (this removes the environmental concern created by the hydraulic cylinder of direct hydraulic type elevators being stored underground)
Slightly lower cost than other elevators; significantly so for the hydraulic MRL elevator
Can operate at faster speeds than hydraulics, but not normal traction units
Detriments
Equipment can be harder to service and maintain.
No code has been approved for the installation of residential MRL elevator equipment.
Codes for machine room-less hydraulic elevators are not yet universal.
Facts
Noise level is 50–55 dBA (A-weighted decibels), which is lower than that of some, but not all, other elevator types.
Usually used for low-rise to mid-rise buildings
The motor mechanism is placed in the hoistway itself
The US was slow to accept the commercial MRL elevator because of building codes.
National and local building codes did not initially address elevators without machine rooms. Residential MRL elevators are still not allowed by the ASME A17 code in the US, although MRL elevators in general were recognized in the 2005 supplement to the 2004 A17.1 Elevator Code.
Today, some machine room-less hydraulic elevators by Otis and ThyssenKrupp exist; they do not involve the use of a piston located underground or a machine room, mitigating environmental concerns. However, codes do not yet permit them in all parts of the United States.https://thyssenkruppelevator.com/elevator-products/enduraMRL
Elevator traffic calculations
Round-trip time calculations
The majority of elevator designs are developed from Up Peak Round Trip Time calculations as described in the following publications:
CIBSE Guide D: Transportation Systems in Buildings
Elevator Traffic Handbook: Theory and Practice. Gina Barney
The Vertical Transportation Handbook. George Strakosch
Traditionally, these calculations have formed the basis of establishing the Handling Capacity of an elevator system.
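As an illustration, the following is a minimal sketch of the classical up-peak round-trip-time estimate in the spirit of the texts listed above. The function and parameter names are chosen for this example only, and the probabilistic expressions for the expected number of stops and the highest reversal floor assume equal demand on every floor served.

```python
def up_peak_rtt(n_floors, p_passengers, floor_height_m, rated_speed_mps,
                t_stop_s, t_passenger_s):
    """Classical up-peak round-trip time (RTT) estimate for a single car.

    n_floors        floors served above the main lobby
    p_passengers    average passengers carried per trip
    floor_height_m  average interfloor height
    rated_speed_mps rated car speed
    t_stop_s        time lost per stop (door operation, levelling)
    t_passenger_s   transfer time per passenger, per direction
    """
    # Expected number of stops S and highest reversal floor H,
    # assuming equal demand on every floor served.
    s = n_floors * (1 - (1 - 1 / n_floors) ** p_passengers)
    h = n_floors - sum((i / n_floors) ** p_passengers for i in range(1, n_floors))
    t_per_floor = floor_height_m / rated_speed_mps
    return 2 * h * t_per_floor + (s + 1) * t_stop_s + 2 * p_passengers * t_passenger_s


def five_minute_handling_capacity(rtt_s, p_passengers, n_cars):
    """Passengers the elevator group can move in five minutes (300 s)."""
    return 300 * p_passengers * n_cars / rtt_s


rtt = up_peak_rtt(n_floors=12, p_passengers=10, floor_height_m=3.3,
                  rated_speed_mps=1.6, t_stop_s=10.0, t_passenger_s=1.2)
print(round(rtt), "s round trip;",
      round(five_minute_handling_capacity(rtt, 10, n_cars=4)), "persons per 5 min")
```

A group's five-minute handling capacity follows directly from the round-trip time, which is why the RTT calculation has traditionally been the starting point for sizing an installation.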
Modern installations with more complex elevator arrangements have led to the development of more specific formulae, such as the General Analysis calculation.
Subsequently, this has been extended for double-deck elevators.
Simulation
Elevator traffic simulation software can be used to model complex traffic patterns and elevator arrangements that cannot necessarily be analyzed by RTT calculations.
Elevator traffic patterns
There are four main types of elevator traffic patterns that can be observed in most modern office installations. They are up peak traffic, down peak traffic, lunch time (two way) traffic and interfloor traffic.
Types of hoist mechanisms
Elevators can be rope dependent or rope-free. There are at least four means of moving an elevator:
Traction elevators
Geared and gearless traction elevators
Geared traction machines are driven by AC or DC electric motors. Geared machines use worm gears to control mechanical movement of elevator cars by "rolling" steel hoist ropes over a drive sheave which is attached to a gearbox driven by a high-speed motor. These machines are generally the best option for basement or overhead traction use for speeds up to .
Historically, AC motors were used for single or double speed elevator machines on the grounds of cost and lower usage applications where car speed and passenger comfort were less of an issue, but for higher speed, larger capacity elevators, the need for infinitely variable speed control over the traction machine becomes an issue. Therefore, DC machines powered by an AC/DC motor generator were the preferred solution. The MG set also typically powered the relay controller of the elevator, which has the added advantage of electrically isolating the elevators from the rest of a building's electrical system, thus eliminating the transient power spikes in the building's electrical supply caused by the motors starting and stopping (causing lighting to dim every time the elevators are used for example), as well as interference to other electrical equipment caused by the arcing of the relay contactors in the control system.
The widespread availability of variable frequency AC drives has allowed AC motors to be used universally, bringing with it the advantages of the older motor-generator, DC-based systems, without the penalties in terms of efficiency and complexity. The older MG-based installations are gradually being replaced in older buildings due to their poor energy efficiency.
Gearless traction machines are low-speed (low-RPM), high-torque electric motors powered either by AC or DC. In this case, the drive sheave is directly attached to the end of the motor. Gearless traction elevators can reach speeds of up to . A brake is mounted between the motor and the drive sheave (or gearbox), or at the end of the drive sheave, to hold the elevator stationary at a floor. This brake is usually an external drum type and is actuated by spring force and held open electrically; a power failure will cause the brake to engage and prevent the elevator from falling (see inherent safety and safety engineering). The brake can also be a disc type, with one or more calipers acting on a disc at one end of the motor shaft or drive sheave. Disc brakes are used in high-speed, high-rise and large-capacity elevators with machine rooms for braking power, compactness and redundancy (assuming there are at least two calipers on the disc), and with a single caliper in machine room-less elevators for the same reasons. An exception is the Kone MonoSpace's EcoDisc, which is not a high-speed, high-rise, large-capacity machine and is machine room-less, but uses a thinner version of a conventional gearless traction machine design.
In each case, cables are attached to a hitch plate on top of the cab, or may be "underslung" below the cab, and then looped over the drive sheave to a counterweight attached to the opposite end of the cables, which reduces the amount of power needed to move the cab. The counterweight is located in the hoistway and rides a separate railway system; as the car goes up, the counterweight goes down, and vice versa. This action is powered by the traction machine, which is directed by the controller, typically a relay logic or computerized device that directs starting, acceleration, deceleration and stopping of the elevator cab. The weight of the counterweight is typically equal to the weight of the elevator cab plus 40–50% of the capacity of the elevator. The grooves in the drive sheave are specially designed to prevent the cables from slipping. "Traction" is provided to the ropes by the grip of the grooves in the sheave, hence the name. As the ropes age and the traction grooves wear, some traction is lost and the ropes must be replaced and the sheave repaired or replaced. Sheave and rope wear may be significantly reduced by ensuring that all ropes have equal tension, thus sharing the load evenly. Rope tension equalization may be achieved using a rope tension gauge, and is a simple way to extend the lifetime of the sheaves and ropes.
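To see why the counterweight reduces the power required, consider a small worked example; the figures below are purely illustrative and do not come from any particular installation.

```python
# Illustrative figures only: a hypothetical 1,000 kg car with an 800 kg rated load
# and a counterweight sized at car weight plus 45% of rated capacity.
car_mass_kg = 1000.0
rated_load_kg = 800.0
counterweight_kg = car_mass_kg + 0.45 * rated_load_kg      # 1,360 kg

# Worst-case out-of-balance load the traction machine must hold or move:
full_car_imbalance = (car_mass_kg + rated_load_kg) - counterweight_kg   # 440 kg
empty_car_imbalance = counterweight_kg - car_mass_kg                    # 360 kg

print(f"Counterweight: {counterweight_kg:.0f} kg")
print(f"Imbalance with full car:  {full_car_imbalance:.0f} kg")
print(f"Imbalance with empty car: {empty_car_imbalance:.0f} kg")
# Without a counterweight, the machine would have to lift the full 1,800 kg of
# car plus load; with one, it only works against a few hundred kilograms of
# imbalance (plus friction and rope weight).
```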
Elevators with more than of travel have a system called compensation. This is a separate set of cables or a chain attached to the bottom of the counterweight and the bottom of the elevator cab. This makes it easier to control the elevator, as it compensates for the differing weight of cable between the hoist and the cab. If the elevator cab is at the top of the hoist-way, there is a short length of hoist cable above the car and a long length of compensating cable below the car and vice versa for the counterweight. If the compensation system uses cables, there will be an additional sheave in the pit below the elevator, to guide the cables. If the compensation system uses chains, the chain is guided by a bar mounted between the counterweight railway lines.
Hydraulic elevators
Conventional hydraulic elevators use an underground hydraulic cylinder. They are quite common for low-rise buildings with two to five floors (sometimes, though seldom, up to six to eight floors), and have speeds of up to . For higher-rise applications, a telescopic hydraulic cylinder can be used.
Holeless hydraulic elevators were developed in the 1970s, and use a pair of above ground cylinders, which makes it practical for environmentally or cost sensitive buildings with two, three, or four floors.
Roped hydraulic elevators use both above ground cylinders and a rope system, allowing the elevator to travel further than the piston has to move.
The low mechanical complexity of hydraulic elevators in comparison to traction elevators makes them ideal for low rise, low traffic installations. They are less energy efficient as the pump works against gravity to push the car and its passengers upwards; this energy is lost when the car descends on its own weight. The high current draw of the pump when starting up also places higher demands on a building’s electrical system. There are also environmental concerns should the lifting cylinder leak fluid into the ground.
The modern generation of low-cost, machine room-less traction elevators, made possible by advances in miniaturization of the traction motor and control systems, challenges the supremacy of the hydraulic elevator in its traditional market niche.
Climbing elevator
A climbing elevator is a self-ascending elevator with its own propulsion, which can be provided by an electric or a combustion engine. Climbing elevators are used in guyed masts or towers to give easy access to parts of these structures, such as flight safety lamps, for maintenance. An example is the Moonlight towers in Austin, Texas, where the elevator holds only one person and equipment for maintenance. The Glasgow Tower, an observation tower in Glasgow, Scotland, also makes use of two climbing elevators. The ThyssenKrupp MULTI elevator system is based on this principle and uses a linear motor, like the ones used in maglev trains.
Pneumatic elevator
An elevator of this kind uses a vacuum on top of the cab and a valve on the top of the "shaft" to move the cab upwards, closing the valve to keep the cab at the same level. A diaphragm or a piston is used as a "brake" if there is a sudden increase in pressure above the cab. To go down, the valve is opened so that air can pressurize the top of the "shaft", allowing the cab to descend under its own weight; this also means that in case of a power failure the cab will automatically go down. The "shaft" is made of acrylic and is always round, due to the shape of the vacuum pump turbine. Rubber seals are used to keep the air inside the cab. Due to technical limitations, these elevators have a low capacity; they usually allow 1–3 passengers and up to 525 lbs.
Controlling elevators
Manual controls
thumb|Otis 1920s controller, operational in NYC apartment building
In the first half of the twentieth century, almost all elevators had no automatic positioning at the floor on which the cab would stop. Some of the older freight elevators were controlled by switches operated by pulling on adjacent ropes. In general, most elevators before WWII were manually controlled by elevator operators using a rheostat connected to the motor. This rheostat (see picture) was enclosed within a cylindrical container about the size and shape of a cake. It was mounted upright or sideways on the cab wall and operated via a projecting handle, which could slide around the top half of the cylinder.
The elevator motor was located at the top of the shaft or beside the bottom of the shaft. Pushing the handle forward would cause the cab to rise; backwards would make it sink. The harder the pressure, the faster the elevator would move. The handle also served as a dead man switch: if the operator let go of the handle, it would return to its upright position, causing the elevator cab to stop. In time, safety interlocks would ensure that the inner and outer doors were closed before the elevator was allowed to move.
This lever would allow some control over the energy supplied to the motor and so enabled the elevator to be accurately positioned — if the operator was sufficiently skilled. More typically, the operator would have to "jog" the control, moving the cab in small increments until the elevator was reasonably close to the landing point. Then the operator would direct the outgoing and incoming passengers to "watch the step".
thumb|Manual pushbutton elevator controls
Automatic elevators began to appear as early as the 1930s, their development hastened by strikes of elevator operators, which brought large cities dependent on skyscrapers (and therefore their elevators), such as New York and Chicago, to their knees. These electromechanical systems used relay logic circuits of increasing complexity to control the speed, position and door operation of an elevator or bank of elevators.
The Otis Autotronic system of the early 1950s brought the earliest predictive systems, which could anticipate traffic patterns within a building to deploy elevator movement in the most efficient manner. Relay-controlled elevator systems remained common until the 1980s, when they were gradually replaced by solid-state, microprocessor-based controls, which are now the industry standard. Most older, manually operated elevators have been retrofitted with automatic or semi-automatic controls.
thumb|Typical freight elevator control station
thumb|Typical passenger elevator control station
thumb|Using the emergency call button in an elevator. There is Braille text for visually impaired people and the button glows to alert a hearing impaired person that the bell is ringing and the call is being placed.
General controls
A typical modern passenger elevator will have:
Space to stand in, guardrails, seating cushion (luxury)
Overload sensor — prevents the elevator from moving until excess load has been removed. It may trigger a voice prompt or buzzer alarm. This may also trigger a "full car" indicator, indicating the car's inability to accept more passengers until some are unloaded.
Electric fans or air conditioning units to enhance circulation and comfort.
A control panel with various buttons. In the United States and other countries, button text and icons are raised to allow blind users to operate the elevator; many also have Braille text. Buttons include:
Call buttons to choose a floor. Some of these may be key switches (to control access). In some elevators, certain floors are inaccessible unless one swipes a security card or enters a passcode (or both).
Door open and Door close buttons.
The operation of the door open button is transparent, immediately opening and holding the door, typically until a timeout occurs and the door closes. The operation of the door close button is less transparent, and it often appears to do nothing, leading to frequent but incorrect reports that the door close button is a placebo button: either not wired up at all, or inactive in normal service. Working door open and door close buttons are required by code in many jurisdictions, including the United States, specifically for emergency operation: in independent mode, the door open and door close buttons are used to manually open or close the door.ASME A17.1 – 2000, Safety Code for Elevators and Escalators, Requirements 2.27.3.3, "Phase II Emergency In-Car Operation" Beyond this, programming varies significantly, with some door close buttons immediately closing the door, but in other cases being delayed by an overall timeout, so the door cannot be closed until a few seconds after opening. In this case (hastening normal closure), the door close button has no effect. However, the door close button will cause a hall call to be ignored (so the door won't reopen), and once the timeout has expired, the door close will immediately close the door, for example to cancel a door open push. The minimum timeout for automatic door closing in the US is 5 seconds,ASME A17.1 – 2000, Safety Code for Elevators and Escalators, Requirements 4.10.7 – Door and Signal Timing for Hall Calls, "The minimum acceptable notification time shall be 5 seconds." which is a noticeable delay if not overridden.
An alarm button or switch, which passengers can use to warn the premises manager that they have been trapped in the elevator.
A set of doors kept locked on each floor to prevent unintentional access into the elevator shaft by the unsuspecting individual. The door is unlocked and opened by a machine sitting on the roof of the car, which also drives the doors that travel with the car. Door controls are provided to close immediately or reopen the doors, although the button to close them immediately is often disabled during normal operations, especially on more recent elevators. Objects in the path of the moving doors will either be detected by sensors or physically activate a switch that reopens the doors. Otherwise, the doors will close after a preset time. Some elevators are configured to remain open at the floor until they are required to move again.
Elevators in high traffic buildings often have a "nudge" function (the Otis Autotronic system first introduced this feature) which will close the doors at a reduced speed, and sound a buzzer if the "door open" button is being deliberately held down, or if the door sensors have been blocked for too long a time.
A stop switch (not allowed under British regulations) to halt the elevator while in motion and often used to hold an elevator open while freight is loaded. Keeping an elevator stopped for too long may set off an alarm. Unless local codes require otherwise, this will most likely be a key switch.
Some elevators may have one or more of the following:
An elevator telephone, which can be used (in addition to the alarm) by a trapped passenger to call for help. This may consist of a transceiver, or simply a button.
Hold button: This button delays the door closing timer, useful for loading freight and hospital beds.
Call cancellation: A destination floor may be deselected by double clicking.
Access restriction by key switches, RFID reader, code keypad, hotel room card, etc.
One or more additional sets of doors. This is primarily used to serve different floor plans: on each floor only one set of doors opens. For example, in an elevated crosswalk setup, the front doors may open on the street level, and the rear doors open on the crosswalk level. This is also common in garages, rail stations, and airports. Alternatively, both doors may open on a given floor. This is sometimes timed so that one side opens first for getting off, and then the other side opens for getting on, to improve boarding/exiting speed. This is particularly useful when passengers have luggage or carts, as at an airport, due to reduced maneuverability.
thumb|Dual Door open and Door close buttons, in an elevator with two sets of doors
In the case of dual doors, there may be two sets of Door open and Door close buttons, with one pair controlling the front doors, from the perspective of the console, typically denoted <> and ><, and the other pair controlling the rear doors, typically denoted with a line in the middle, <|> and >|< (Otis panels may use double lines, such as |<>| and >||<). This second set is required in the US if both doors can be opened at the same landing, so that the doors can both be controlled in independent service.ASME A17.1 – 2000, Safety Code for Elevators and Escalators, Requirements 2.27.3.3.1.d "On cars with two entrances, a separate door-close button shall be provided for each entrance if both entrances can be opened at the same landing."
Security camera
Plain walls or mirrored walls.
Glass windowpane providing a view of the building interior or onto the streets.
An audible signal button, labeled "S": in the US, for elevators installed between 1991 and 2012 (initial passage of ADA and coming into force of 2010 revision), a button which if pushed, sounds an audible signal as each floor is passed, to assist visually impaired passengers. No longer used on new elevators, where the sound is obligatory.
Other controls, which are generally inaccessible to the public (either because they are key switches, or because they are kept behind a locked panel), include:
Fireman's service, phase II key switch
Switch to enable or disable the elevator.
An inspector's switch, which places the elevator in inspection mode (this may be situated on top of the elevator)
Manual up/down controls for elevator technicians, to be used in inspection mode, for example.
An independent service/exclusive mode (also known as "Car Preference"), which will prevent the car from answering to hall calls and only arrive at floors selected via the panel. The door should stay open while parked on a floor. This mode may be used for temporarily transporting goods.
Attendant service mode.
Large buildings with multiple elevators of this type also had an elevator dispatcher stationed in the lobby to direct passengers and to signal the operator to leave with the use of a mechanical "cricket" noisemaker.
External controls
thumb|An external control panel
Elevators are typically controlled from the outside by a call box, which has up and down buttons, at each stop. When pressed at a certain floor, the button calls the elevator to pick up more passengers. If the particular elevator is currently serving traffic in a certain direction, it will only answer calls in the same direction unless there are no more calls beyond that floor.
In a group of two or more elevators, the call buttons may be linked to a central dispatch computer, such that they illuminate and cancel together. This is done to ensure that only one car is called at one time.
Key switches may be installed on the ground floor so that the elevator can be remotely switched on or off from the outside.
In destination control systems, one selects the intended destination floor (in lieu of pressing "up" or "down") and is then notified which elevator will serve their request.
Floor numbering
thumb|Elevator buttons showing the missing 13th floor
The elevator algorithm
The elevator algorithm, a simple algorithm by which a single elevator can decide where to stop, is summarized as follows:
Continue traveling in the same direction while there are remaining requests in that same direction.
If there are no further requests in that direction, then stop and become idle, or change direction if there are requests in the opposite direction.
The elevator algorithm has found an application in computer operating systems as an algorithm for scheduling hard disk requests.
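A minimal sketch of this rule for a single car is given below; the data structures and function name are illustrative only, and real controllers layer many refinements on top of this basic behaviour.

```python
def next_stop(current_floor, direction, requests):
    """Pick the next floor to stop at under the simple elevator (SCAN) rule.

    current_floor  the car's current floor
    direction      +1 when travelling up, -1 when travelling down, 0 when idle
    requests       set of requested floors (car calls and hall calls combined)

    Returns (floor, new_direction), or (None, 0) if there is nothing to do.
    """
    if not requests:
        return None, 0
    if direction >= 0:
        ahead = [f for f in requests if f >= current_floor]
        if ahead:                        # keep going up while requests remain above
            return min(ahead), +1
        return max(requests), -1         # otherwise reverse and serve the calls below
    else:
        ahead = [f for f in requests if f <= current_floor]
        if ahead:                        # keep going down while requests remain below
            return max(ahead), -1
        return min(requests), +1         # otherwise reverse and serve the calls above


# Example: car at floor 5 travelling up, with calls at floors 2, 7 and 9.
print(next_stop(5, +1, {2, 7, 9}))       # -> (7, 1): continue upward to the nearest call
```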
Modern elevators use more complex heuristic algorithms to decide which request to service next. An introduction to these algorithms can be found in the "Elevator traffic handbook: theory and practice" given in the references below.
Destination control system
Some skyscraper buildings and other types of installation feature a destination operating panel where a passenger registers their floor calls before entering the car. The system lets them know which car to wait for, instead of everyone boarding the next car. In this way, travel time is reduced as the elevator makes fewer stops for individual passengers, and the computer distributes adjacent stops to different cars in the bank. Although travel time is reduced, passenger waiting times may be longer as they will not necessarily be allocated the next car to depart. During the down peak period the benefit of destination control will be limited as passengers have a common destination.
It can also improve accessibility, as a mobility-impaired passenger can move to his or her designated car in advance.
Inside the elevator there are no call buttons to push, or the buttons are present but cannot be pushed (apart from the door open and alarm buttons); they only indicate the stopping floors.
The idea of destination control was originally conceived by Leo Port from Sydney in 1961,Port, L.W. (1961), Elevator System Commonwealth of Australia Patent Specification, Application Number 1421/61, 14 February 1961 but at that time elevator controllers were implemented in relays and were unable to optimize the performance of destination control allocations.
The system was first pioneered by Schindler Elevator in 1992 as the Miconic 10. Manufacturers of such systems claim that average traveling time can be reduced by up to 30%.
However, performance enhancements cannot be generalized as the benefits and limitations of the system are dependent on many factors. One problem is that the system is subject to gaming. Sometimes, one person enters the destination for a large group of people going to the same floor. The dispatching algorithm is usually unable to completely cater for the variation, and latecomers may find the elevator they are assigned to is already full. Also, occasionally, one person may press the floor multiple times. This is common with up/down buttons when people believe this to be an effective way to hurry elevators. However, this will make the computer think multiple people are waiting and will allocate empty cars to serve this one person.
To prevent this problem, in one implementation of destination control every user gets an RFID card to identify themselves, so the system knows every user call and can cancel the first call if the passenger decides to travel to another destination, preventing empty calls. The newest systems even know where people are located and how many are on each floor, thanks to this identification, whether for the purpose of evacuating the building or for security reasons. Another way to prevent this issue is to treat everyone travelling from one floor to another as one group and to allocate only one car for that group.
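The following is a heavily simplified sketch of such a grouping and allocation step. The names and the cost model (counting only the extra stops a group adds to a car) are assumptions made for this example; it is not any manufacturer's actual dispatching algorithm, and a real dispatcher would also weigh car load and expected waiting times.

```python
from collections import defaultdict

def allocate(calls, cars):
    """Assign registered destination calls to cars.

    calls  list of (origin_floor, destination_floor) tuples, one per passenger
    cars   dict mapping car id -> set of floors the car is already committed to

    Returns a dict mapping car id -> list of (origin, destination, passengers).
    """
    # Everyone travelling between the same pair of floors is treated as one
    # group, which also damps the effect of one person registering repeatedly.
    groups = defaultdict(int)
    for origin, dest in calls:
        groups[(origin, dest)] += 1

    assignment = defaultdict(list)
    for (origin, dest), count in groups.items():
        # Cost model: how many new stops would this group add to each car?
        def extra_stops(car_id):
            return len({origin, dest} - cars[car_id])

        best = min(cars, key=extra_stops)
        cars[best].update({origin, dest})
        assignment[best].append((origin, dest, count))
    return dict(assignment)


# Two idle cars; three passengers register lobby (0) -> 8, one registers 0 -> 3.
print(allocate([(0, 8), (0, 8), (0, 8), (0, 3)], {"A": set(), "B": set()}))
```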
The same destination scheduling concept can also be applied to public transit such as in group rapid transit.
thumb|upright|A destination dispatch control station, outside of the car, on which the user presses a button to indicate the desired destination floor, and the panel indicates which car will be dispatched
Special operating modes
Anti-crime protection
The anti-crime protection (ACP) feature will force each car to stop at a pre-defined landing and open its doors. This allows a security guard or a receptionist at the landing to visually inspect the passengers. The car stops at this landing as it passes it, then continues to serve further demand.
Up peak
During up-peak mode (also called moderate incoming traffic), elevator cars in a group are recalled to the lobby to provide expeditious service to passengers arriving at the building, most typically in the morning as people arrive for work or at the conclusion of a lunch-time period. Elevators are dispatched one-by-one when they reach a pre-determined passenger load, or when they have had their doors opened for a certain period of time. The next elevator to be dispatched usually has its hall lantern or a "this car leaving next" sign illuminated to encourage passengers to make maximum use of the available elevator system capacity. Some elevator banks are programmed so that at least one car will always return to the lobby floor and park whenever it becomes free.
The commencement of up-peak may be triggered by a time clock, by the departure of a certain number of fully loaded cars leaving the lobby within a given time period, or by a switch manually operated by a building attendant.
Down peak
During down-peak mode, elevator cars in a group are sent away from the lobby towards the highest floor served, after which they commence running down the floors in response to hall calls placed by passengers wishing to leave the building. This allows the elevator system to provide maximum passenger handling capacity for people leaving the building.
The commencement of down-peak may be triggered by a time clock, by the arrival of a certain number of fully loaded cars at the lobby within a given time period, or by a switch manually operated by a building attendant.
Sabbath service
thumb|upright|left|A switch to turn Sabbath elevator mode on or off
In areas with large populations of observant Jews or in facilities catering to Jews, one may find a "Sabbath elevator". In this mode, an elevator will stop automatically at every floor, allowing people to step on and off without having to press any buttons. This prevents violation of the Sabbath prohibition against operating electrical devices when Sabbath is in effect for those who observe this ritual.
However, Sabbath mode has the side effect of using considerable amounts of energy, running the elevator car sequentially up and down every floor of a building and repeatedly servicing floors where it is not needed. For a tall building with many floors, the car must move frequently enough that potential users, who will not touch the controls, are not unduly delayed as it opens its doors on every floor up the building.
Some taller buildings may have the Sabbath elevator alternate floors in order to save time and energy; for example, an elevator may stop at only even-numbered floors on the way up, and then the odd-numbered floors on the way down.
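As a purely illustrative sketch, such an alternating schedule for a hypothetical ten-floor building could be generated as follows.

```python
def sabbath_stop_sequence(top_floor):
    """Even floors on the way up, odd floors on the way down (illustrative only)."""
    up = list(range(2, top_floor + 1, 2))
    start_down = top_floor if top_floor % 2 else top_floor - 1
    down = list(range(start_down, 0, -2))
    return up + down

print(sabbath_stop_sequence(10))
# [2, 4, 6, 8, 10, 9, 7, 5, 3, 1]
```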
Independent service
Independent service is a special service mode found on most elevators. It is activated by a key switch either inside the elevator itself or on a centralized control panel in the lobby. When an elevator is placed on independent service, it will no longer respond to hall calls. (In a bank of elevators, traffic is rerouted to the other elevators, while in a single elevator, the hall buttons are disabled). The elevator will remain parked on a floor with its doors open until a floor is selected and the door close button is held until the elevator starts to travel. Independent service is useful when transporting large goods or moving groups of people between certain floors.
Inspection service
Inspection service is designed to provide access to the hoistway and car top for inspection and maintenance purposes by qualified elevator mechanics. It is first activated by a key switch on the car operating panel usually labeled 'Inspection', 'Car Top', 'Access Enable' or 'HWENAB'. When this switch is activated the elevator will come to a stop if moving, car calls will be canceled (and the buttons disabled), and hall calls will be assigned to other elevator cars in the group (or canceled in a single elevator configuration). The elevator can now only be moved by the corresponding 'Access' key switches, usually located at the highest (to access the top of the car) and lowest (to access the elevator pit) landings. The access key switches will allow the car to move at reduced inspection speed with the hoistway door open. This speed can range from anywhere up to 60% of normal operating speed on most controllers, and is usually defined by local safety codes.
Elevators have a car top inspection station that allows the car to be operated by a mechanic in order to move it through the hoistway. Generally, there are three buttons: UP, RUN, and DOWN. Both the RUN and a direction button must be held to move the car in that direction, and the elevator will stop moving as soon as the buttons are released. Most other elevators have an up/down toggle switch and a RUN button. The inspection panel also has standard power outlets for work lamps and powered tools.
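A minimal sketch of the "hold both buttons to move" interlock described above is shown below; it is an illustrative model, not any controller's actual logic.

```python
def inspection_drive_command(run_pressed, up_pressed, down_pressed):
    """Return 'up', 'down' or 'stop' for car-top inspection operation.

    The car moves only while RUN and exactly one direction button are held;
    releasing any button stops the car immediately.
    """
    if run_pressed and up_pressed and not down_pressed:
        return "up"
    if run_pressed and down_pressed and not up_pressed:
        return "down"
    return "stop"

print(inspection_drive_command(True, True, False))    # 'up'
print(inspection_drive_command(True, False, False))   # 'stop' (RUN alone does nothing)
```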
Fire service
Depending on the location of the elevator, the fire service code will vary from state to state and from country to country. Fire service is usually split into two modes: phase one and phase two. These are separate modes that the elevator can go into.
Phase one mode is activated by a corresponding smoke sensor or heat sensor in the building. Once an alarm has been activated, the elevator will automatically go into phase one. The elevator will wait an amount of time, then proceed to go into nudging mode to tell everyone the elevator is leaving the floor. Once the elevator has left the floor, depending on where the alarm was set off, the elevator will go to the fire-recall floor. However, if the alarm was activated on the fire-recall floor, the elevator will have an alternate floor to recall to. When the elevator is recalled, it proceeds to the recall floor and stops with its doors open. The elevator will no longer respond to calls or move in any direction. Located on the fire-recall floor is a fire-service key switch. The fire-service key switch has the ability to turn fire service off, turn fire service on or to bypass fire service. The only way to return the elevator to normal service is to switch it to bypass after the alarms have reset.
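A simplified sketch of the recall-floor decision follows; it is illustrative only, since the actual behaviour is dictated by local code and the specific controller.

```python
def phase_one_recall_floor(alarm_floor, main_recall_floor, alternate_recall_floor):
    """Choose where to recall the car when a phase-one alarm is received.

    If the alarm originated on the designated (main) recall floor, the car is
    sent to the alternate recall floor instead; otherwise it returns to the
    main recall floor, where it parks with its doors open.
    """
    if alarm_floor == main_recall_floor:
        return alternate_recall_floor
    return main_recall_floor

print(phase_one_recall_floor(alarm_floor=1, main_recall_floor=1, alternate_recall_floor=2))  # 2
print(phase_one_recall_floor(alarm_floor=7, main_recall_floor=1, alternate_recall_floor=2))  # 1
```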
thumb|KONE Ecodisc elevator in fireman's mode
Phase-two mode can only be activated by a key switch located inside the elevator on the centralized control panel. This mode was created for firefighters so that they may rescue people from a burning building. The phase-two key switch located on the COP has three positions: off, on, and hold. By turning phase two on, the firefighter enables the car to move. However, like independent-service mode, the car will not respond to a car call unless the firefighter manually pushes and holds the door close button. Once the elevator gets to the desired floor it will not open its doors unless the firefighter holds the door open button. This is in case the floor is burning and the firefighter can feel the heat and knows not to open the door. The firefighter must hold door open until the door is completely opened. If for any reason the firefighter wishes to leave the elevator, they will use the hold position on the key switch to make sure the elevator remains at that floor. If the firefighter wishes to return to the recall floor, they simply turn the key off and close the doors.
Medical emergency/code-blue service
Commonly found in hospitals, code-blue service allows an elevator to be summoned to any floor for use in an emergency situation. Each floor will have a code-blue recall key switch, and when activated, the elevator system will immediately select the elevator car that can respond the fastest, regardless of direction of travel and passenger load. Passengers inside the elevator will be notified with an alarm and indicator light to exit the elevator when the doors open.
Once the elevator arrives at the floor, it will park with its doors open and the car buttons will be disabled to prevent a passenger from taking control of the elevator. Medical personnel must then activate the code-blue key switch inside the car, select their floor and close the doors with the door close button. The elevator will then travel non-stop to the selected floor, and will remain in code-blue service until switched off in the car. Some hospital elevators will feature a 'hold' position on the code-blue key switch (similar to fire service) which allows the elevator to remain at a floor locked out of service until code blue is deactivated.
Emergency power operation
Many elevator installations now feature emergency power systems which allow elevator use in blackout situations and prevent people from becoming trapped in elevators.
Traction elevators
When power is lost in a traction elevator system, all elevators will initially come to a halt. One by one, each car in the group will return to the lobby floor, open its doors and shut down. People in the remaining elevators may see an indicator light or hear a voice announcement informing them that the elevator will return to the lobby shortly. Once all cars have successfully returned, the system will then automatically select one or more cars to be used for normal operations and these cars will return to service. The car(s) selected to run under emergency power can be manually overridden by a key or strip switch in the lobby. In order to help prevent entrapment, when the system detects that it is running low on power, it will bring the running cars to the lobby or nearest floor, open the doors and shut down.
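A rough sketch of this sequencing is shown below; the function and its parameters are illustrative assumptions, since real installations are driven by the building's emergency power controller and the elevator group controller.

```python
import time

def emergency_power_sequence(cars, cars_to_keep_running=1, step_delay_s=0.0):
    """Return cars to the lobby one at a time, then re-enable a subset.

    cars                  list of car ids, e.g. ["A", "B", "C", "D"]
    cars_to_keep_running  how many cars the emergency supply can power at once
    """
    for car in cars:                       # one by one, to limit generator load
        print(f"Car {car}: returning to lobby, opening doors, shutting down")
        time.sleep(step_delay_s)
    running = cars[:cars_to_keep_running]  # the selection can also be manually overridden
    for car in running:
        print(f"Car {car}: returned to service on emergency power")
    return running

emergency_power_sequence(["A", "B", "C", "D"], cars_to_keep_running=1)
```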
Hydraulic elevators
In hydraulic elevator systems, emergency power will lower the elevators to the lowest landing and open the doors to allow passengers to exit. The doors then close after an adjustable time period and the car remains unusable until reset, usually by cycling the elevator main power switch. Typically, due to the high current draw when starting the pump motor, hydraulic elevators are not run using standard emergency power systems; buildings like hospitals and nursing homes usually size their emergency generators to accommodate this draw. However, the increasing use of current-limiting motor starters, commonly known as "soft-start" contactors, avoids much of this problem, and the current draw of the pump motor is less of a limiting concern.
Elevator modernization
thumb|An elevator test tower
Most elevators are built to provide about 20 years of service, as long as the manufacturer's specified service intervals and periodic maintenance and inspections are followed. As an elevator ages and its equipment becomes increasingly difficult to find or replace, and as codes change and ride performance deteriorates, a complete overhaul of the elevator may be suggested to the building owners.
A typical modernization consists of controller equipment, electrical wiring and buttons, position indicators and direction arrows, hoist machines and motors (including door operators), and sometimes door hanger tracks. Rarely are car slings, rails, or other heavy structures changed. The cost of an elevator modernization can range greatly depending on which type of equipment is to be installed.
Modernization can greatly improve operational reliability by replacing mechanical relays and contacts with solid-state electronics. Ride quality can be improved by replacing motor-generator-based drive designs with Variable-Voltage, Variable Frequency (V3F) drives, providing near-seamless acceleration and deceleration. Passenger safety is also improved by updating systems and equipment to conform to current codes.
Elevator safety
Cable-borne elevators
Statistically speaking, cable-borne elevators are extremely safe; their safety record is unsurpassed by any other vehicle system. In 1998, it was estimated that approximately eight millionths of one percent (1 in 12 million) of elevator rides result in an anomaly, and the vast majority of these were minor things such as the doors failing to open. Of the 20 to 30 elevator-related deaths each year, most are maintenance-related (for example, technicians leaning too far into the shaft or getting caught between moving parts), and most of the rest are attributed to other kinds of accidents, such as people stepping blindly through doors that open onto empty shafts or being strangled by scarves caught in the doors.
In fact, prior to the September 11th terrorist attacks, the only known free-fall incident in a modern cable-borne elevator happened in 1945, when a B-25 bomber struck the Empire State Building in fog, severing the cables of an elevator cab, which fell from the 75th floor all the way to the bottom of the building, seriously injuring (though not killing) the sole occupant, the elevator operator. However, there was an incident in 2007 at a Seattle children's hospital, where a ThyssenKrupp ISIS machine-room-less elevator free-fell until the safety brakes were engaged. This was due to a design flaw in which the cables were connected at one common point, and the Kevlar ropes had a tendency to overheat and cause slipping (or, in this case, a free fall).
While it is possible (though extraordinarily unlikely) for an elevator's cable to snap, all elevators in the modern era have been fitted with several safety devices which prevent the elevator from simply free-falling and crashing. An elevator cab is typically borne by 2 to 6 (up to 12 or more in high-rise installations) hoist cables or belts, each of which is capable on its own of supporting the full load of the elevator plus twenty-five percent more weight. In addition, there is a device which detects whether the elevator is descending faster than its maximum designed speed; if this happens, the device causes copper (or silicon nitride in high-rise installations) brake shoes to clamp down along the vertical rails in the shaft, stopping the elevator quickly, but not so abruptly as to cause injury. This device is called the governor, and was invented by Elisha Graves Otis. In addition, an oil/hydraulic, spring, polyurethane or telescopic oil/hydraulic buffer, or a combination of these (depending on the travel height and travel speed), is installed at the bottom of the shaft (or in the bottom of the cab, and sometimes also in the top of the cab or shaft) to somewhat cushion any impact. Nevertheless, in Thailand in November 2012, a woman was killed in a free-falling elevator, in what was reported as the "first legally recognised death caused by a falling lift".
Hydraulic elevators
Past problems with hydraulic elevators include underground electrolytic destruction of the cylinder and bulkhead, pipe failures, and control failures. Single-bulkhead cylinders, typically built prior to a 1972 ASME A17.1 Elevator Safety Code change requiring a second dished bulkhead, were subject to possible catastrophic failure; the code previously permitted only single-bottom hydraulic cylinders. In the event of a cylinder breach, the fluid loss results in uncontrolled downward movement of the elevator. This creates two significant hazards: being subject to an impact at the bottom when the elevator stops suddenly, and a potential shear hazard at the entrance if a rider is partly in the elevator. Because it is impossible to verify the system at all times, the code requires periodic testing of the pressure capability. Another solution to protect against a cylinder blowout is to install a plunger gripping device. One commercially available device is known by the marketing name "LifeJacket"; in the event of an uncontrolled downward acceleration, it nondestructively grips the plunger and stops the car. A device known as an overspeed or rupture valve is attached to the hydraulic inlet/outlet of the cylinder and is adjusted for a maximum flow rate. If a pipe or hose breaks (ruptures), the flow through the rupture valve exceeds a set limit and the valve mechanically stops the outlet flow of hydraulic fluid, thus stopping the plunger and the car in the down direction.
In addition to the safety concerns for older hydraulic elevators, there is risk of leaking hydraulic oil into the aquifer and causing potential environmental contamination. This has led to the introduction of PVC liners (casings) around hydraulic cylinders which can be monitored for integrity.
In the past decade, innovations in inverted hydraulic jacks have eliminated the costly process of drilling the ground to install a borehole jack. This also eliminates the threat of corrosion to the system and increases safety.
Mine-shaft elevators
Safety testing of mine shaft elevator ropes is routinely undertaken. The method involves destructive testing of a segment of the cable: the ends of the segment are frayed, then set in conical zinc molds, and each end is secured in a large hydraulic stretching machine. The segment is then placed under increasing load to the point of failure. Data about elasticity, load, and other factors are compiled and a report is produced. The report is then analyzed to determine whether or not the entire rope is safe to use.
Elevator accidents
In June 2014, a ThyssenKrupp elevator brake released in Santiago, Chile, sending the elevator and a passenger up to the 31st floor at 50 mph, where it crashed into the top of the shaft. The man suffered head and leg injuries. This occurred eight months after the building was completed and the elevator had been installed.
In September 2014, a student died when an elevator at Huaqiao University in China unexpectedly rose as he was exiting it on the third floor, crushing him between the elevator and the shaft wall.https://www.youtube.com/watch?v=s9mUVpCSGl4 http://www.chinadaily.com.cn/china/2014-09/17/content_18614526.htm
In November 2014, a waitress in Russia was severely injured when her head was crushed by a dumbwaiter as she tried to grab food while the lift moved away. She was found by a security guard and taken to hospital, where she was treated for head and spine injuries.http://www.dailymail.co.uk/news/article-2831443/Russian-waitress-rescued-getting-head-stuck-food-service-lift.html
In December 2015, an elevator suddenly started ascending while an 84-year-old man was between the car and the door. The man died of his injuries.
Uses of elevators
thumb|A Fujitec traction elevator in Block 192, Bishan, Singapore
Passenger service
A passenger elevator is designed to move people between a building's floors.
Passenger elevator capacity is related to the available floor space. Generally passenger elevators are available in capacities from in increments. Generally passenger elevators in buildings of eight floors or fewer are hydraulic or electric, which can reach speeds up to hydraulic and up to electric. In buildings up to ten floors, electric and gearless elevators are likely to have speeds up to , and above ten floors speeds range .
Sometimes passenger elevators are used as a city transport along with funiculars. For example, there is a 3-station underground public elevator in Yalta, Ukraine, which takes passengers from the top of a hill above the Black Sea on which hotels are perched, to a tunnel located on the beach below. At Casco Viejo station in the Bilbao Metro, the elevator that provides access to the station from a hilltop neighborhood doubles as city transportation: the station's ticket barriers are set up in such a way that passengers can pay to reach the elevator from the entrance in the lower city, or vice versa. See also the Elevators for urban transport section.
Types of passenger elevators
thumb|The former World Trade Center's twin towers used skylobbies, located on the 44th and 78th floors of each tower
Passenger elevators may be specialized for the service they perform, including: hospital emergency (code blue), front and rear entrances, a television in high-rise buildings, double-decker, and other uses. Cars may be ornate in their interior appearance, may have audio visual advertising, and may be provided with specialized recorded voice announcements. Elevators may also have loudspeakers in them to play calm, easy listening music. Such music is often referred to as elevator music.
An express elevator does not serve all floors. For example, it moves between the ground floor and a skylobby, or it moves from the ground floor or a skylobby to a range of floors, skipping floors in between. These are especially popular in eastern Asia.
Capacity
Residential elevators may be small enough to only accommodate one person while some are large enough for more than a dozen. Wheelchair, or platform elevators, a specialized type of elevator designed to move a wheelchair or less, can often accommodate just one person in a wheelchair at a time with a load of . Viewed August 2013.
Freight elevators
thumb|A specialized elevator from 1905 for lifting narrow gauge railroad cars between a railroad freight house and the Chicago Tunnel Company tracks below
thumb|The interior of a freight elevator. It is very basic yet rugged for freight loading.
A freight elevator, or goods lift, is an elevator designed to carry goods, rather than passengers. Freight elevators are generally required to display a written notice in the car that the use by passengers is prohibited (though not necessarily illegal), though certain freight elevators allow dual use through the use of an inconspicuous riser. In order for an elevator to be legal to carry passengers in some jurisdictions it must have a solid inner door. Freight elevators are typically larger and capable of carrying heavier loads than a passenger elevator, generally from 2,300 to 4,500 kg. Freight elevators may have manually operated doors, and often have rugged interior finishes to prevent damage while loading and unloading. Although hydraulic freight elevators exist, electric elevators are more energy efficient for the work of freight lifting.
Sidewalk elevators
A sidewalk elevator is a special type of freight elevator. Sidewalk elevators are used to move materials between a basement and a ground-level area, often the sidewalk just outside the building. They are controlled via an exterior switch and emerge from a metal trap door at ground level. Sidewalk elevator cars feature a uniquely shaped top that allows this door to open and close automatically.
Stage lifts
Stage lifts and orchestra lifts are specialized elevators, typically powered by hydraulics, that are used to raise and lower entire sections of a theater stage. For example, Radio City Music Hall has four such elevators: an orchestra lift that covers a large area of the stage, and three smaller lifts near the rear of the stage. In this case, the orchestra lift is powerful enough to raise an entire orchestra, or an entire cast of performers (including live elephants), up to stage level from below.
Vehicle elevators
Vehicular elevators are used within buildings or areas with limited space (in place of ramps), generally to move cars into the parking garage or manufacturer's storage. Geared hydraulic chains (not unlike bicycle chains) generate lift for the platform and there are no counterweights. To accommodate building designs and improve accessibility, the platform may rotate so that the driver only has to drive forward. Most vehicle elevators have a weight capacity of 2 tons.
Rare examples of extra-heavy elevators for 20-ton lorries, and even for railcars (like one that was used at Dnipro Station of the Kiev Metro) also occur.
Boat lift
In some smaller canals, boats and small ships can pass between different levels of a canal with a boat elevator rather than through a canal lock.
Aircraft elevators
thumb|An F/A-18C on an aircraft elevator of the USS Kitty Hawk
Elevators for aircraft
On aircraft carriers, elevators carry aircraft between the flight deck and the hangar deck for operations or repairs. These elevators are designed for much greater capacity than other elevators, up to of aircraft and equipment. Smaller elevators lift munitions to the flight deck from magazines deep inside the ship.
Elevators within aircraft
On some passenger double-deck aircraft such as the Boeing 747 or other widebody aircraft, elevators transport flight attendants and food and beverage trolleys from lower deck galleys to upper passenger carrying decks.
Limited use & limited application
The limited-use, limited-application (LU/LA) elevator is a special purpose passenger elevator used infrequently, and which is exempt from many commercial regulations and accommodations. For example, a LU/LA is primarily meant to be handicapped accessible, and there might only be room for a single wheelchair and a standing passenger.
Residential elevator
thumb|A residential elevator with integrated hoistway construction and machine-room-less design
A residential elevator is often permitted to be of lower cost and complexity than full commercial elevators. They may have unique design characteristics suited for home furnishings, such as hinged wooden shaft-access doors rather than the typical metal sliding doors of commercial elevators. Construction may be less robust than in commercial designs with shorter maintenance periods, but safety systems such as locks on shaft access doors, fall arrestors, and emergency phones must still be present in the event of malfunction.
The American Society of Mechanical Engineers (ASME) has a specific section of Safety Code (ASME A17.1 Section 5.3) which addresses Residential Elevators. This section allows for different parameters to alleviate design complexity based on the limited use of a residential elevator by a specific user or user group. Section 5.3 of the ASME A17.1 Safety Code is for Private Residence Elevators, which does not include multi-family dwellings.
Some types of residential elevators do not use a traditional elevator shaft, machine room, and elevator hoistway. This allows an elevator to be installed where a traditional elevator may not fit, and simplifies installation. The ASME board first approved machine-room-less systems in a revision of the ASME A17.1 in 2007. Machine-room-less elevators have been available commercially since the mid-1990s; however, cost and overall size prevented their adoption in the residential elevator market until around 2010.
Also, residential elevators are smaller than commercial elevators. The smallest passenger elevator is pneumatic, and it allows for only 1 person.http://www.vacuumelevators.com/pve30
The smallest traction elevator allows for just 2 persons.http://www.mh-he.co.jp/family_product/
Dumbwaiter
Dumbwaiters are small freight elevators that are intended to carry food, books or other small freight loads rather than passengers. They often connect kitchens to rooms on other floors. They usually do not have the same safety features found in passenger elevators, such as multiple ropes for redundancy. They have a lower capacity, and they can be up to tall. Control panels at every stop mimic those found in passenger elevators, allowing calling, door control and floor selection. In Cutthroat Kitchen, Alton Brown uses one to bring sabotages down to the kitchen to make it harder for chefs to prepare and cook dishes.
Paternoster
thumb|upright|left|A paternoster in Berlin, Germany
A special type of elevator is the paternoster, a constantly moving chain of boxes. A similar concept, called the manlift or humanlift, moves only a small platform, which the rider mounts while using a handhold; it is seen in multi-story industrial plants.
Scissor lift
right|thumb|A mobile scissor lift, extended to near its highest position
The scissor lift is yet another type of lift. These are usually mobile work platforms that can be easily moved to where they are needed, but can also be installed where space for counter-weights, machine room and so forth is limited. The mechanism that makes them go up and down is like that of a scissor jack.
Rack-and-pinion elevator
Rack-and-pinion elevators are powered by a motor driving a pinion gear. Because they can be installed on a building or structure's exterior and require no machine room or hoistway, they are the most used type of elevator for buildings under construction (to move materials and tools up and down).
Material handling belts and belt elevators
Material transport elevators generally consist of an inclined plane on which a conveyor belt runs. The conveyor often includes partitions to ensure that the material moves forward. These elevators are often used in industrial and agricultural applications. When such mechanisms (or spiral screws or pneumatic transport) are used to elevate grain for storage in large vertical silos, the entire structure is called a grain elevator. Belt elevators are often used in docks for loading loose materials such as coal, iron ore and grain into the holds of bulk carriers.
There have occasionally been belt lifts for humans; these typically have steps about every along the length of the belt, which moves vertically, so that the passenger can stand on one step and hold on to the one above. These belts are sometimes used, for example, to carry the employees of parking garages, but are considered too dangerous for public use.
Social impact
Before the widespread use of elevators, most residential buildings were limited to about seven stories. The wealthy lived on lower floors, while poorer residents, required to climb many flights of stairs, lived on the higher floors. The elevator reversed this social stratification, exemplified by the modern penthouse suite.
Early users of elevators sometimes reported nausea caused by abrupt stops while descending, and some users would use stairs to go down. In 1894, a Chicago physician documented "elevator sickness".
Elevators necessitated new social protocols. When Nicholas II of Russia visited the Hotel Adlon in Berlin, his courtiers panicked about who would enter the elevator first, and who would press the buttons. In Lifted: A Cultural History of the Elevator, author Andreas Bernard documents other social impacts caused by the modern elevator, including thriller movies about stuck elevators, casual encounters and sexual tension on elevators, the reduction of personal space and claustrophobia, and concerns about personal hygiene.
Elevator convenience features
thumb|LCD elevator floor indicator
thumb|A typical elevator indicator located in the Waldorf Astoria New York. This elevator was made by Otis.
Elevators may feature talking devices as an accessibility aid for the blind. In addition to floor arrival notifications, the computer announces the direction of travel and notifies passengers when the doors are about to close.
In addition to the call buttons, elevators usually have floor indicators (often illuminated by LED) and direction lanterns. The former are almost universal in cab interiors with more than two stops and may be found outside the elevators as well on one or more of the floors. Floor indicators can consist of a dial with a rotating needle, but the most common types are those with successively illuminated floor indications or LCDs. Likewise, a change of floors or an arrival at a floor is indicated by a sound, depending on the elevator.
Direction lanterns are also found both inside and outside elevator cars, but they should always be visible from outside because their primary purpose is to help people decide whether or not to get on the elevator. If somebody waiting for the elevator wants to go up, but a car comes first that indicates that it is going down, then the person may decide not to get on that car; if the person waits, a car travelling up will eventually stop for them. Direction indicators are sometimes etched with arrows or shaped like arrows and/or use the convention that one that lights up red means "down" and green means "up". Since the color convention is often undermined or overridden by systems that do not invoke it, it is usually used only in conjunction with other differentiating factors. An example of a place whose elevators use only the color convention to differentiate between directions is the Museum of Contemporary Art in Chicago, where a single circle can be made to light up green for "up" and red for "down". Sometimes directions must be inferred by the position of the indicators relative to one another.
In addition to lanterns, most elevators have a chime to indicate if the elevator is going up or down, sounded either before or after the doors open and usually in conjunction with the lanterns lighting up. By widespread convention, one chime is for up, two is for down, and none indicates an elevator that is 'free'.
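As a purely illustrative sketch of this signalling convention (a hypothetical controller routine, not any particular manufacturer's logic), the lantern and chime behaviour for a car answering a hall call might be expressed as:

def hall_signal(direction):
    # Hypothetical illustration of the convention described above: one chime
    # (and, where colour is used, a green/up lantern) for "up", two chimes
    # (red/down lantern) for "down", and no chime for an uncommitted car.
    if direction == "up":
        return {"lantern": "up", "colour": "green", "chimes": 1}
    if direction == "down":
        return {"lantern": "down", "colour": "red", "chimes": 2}
    return {"lantern": None, "colour": None, "chimes": 0}

print(hall_signal("down"))  # {'lantern': 'down', 'colour': 'red', 'chimes': 2}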
thumb|right|Elevator with a virtual window affording a view of the City of London
Observatory service elevators often convey other facts of interest, including elevator speed, a stopwatch, and current position (altitude), as is the case for Taipei 101's service elevators.
There are several technologies aimed at providing a better experience to passengers suffering from claustrophobia, anthropophobia or social anxiety. The Israeli startup DigiGage uses motion sensors to scroll pre-rendered, building- and floor-specific images and content on a screen embedded in the wall as the cab moves up and down. The British company LiftEye provides a virtual-window technology to turn a common elevator into a panoramic one. It creates a 3D video panorama using live feeds from cameras placed vertically along the facade and synchronizes it with the cab's movement. The video is projected on wall-sized screens, making it look like the walls are made of glass.
Elevator air conditioning
thumb|right|Elevator airflow diagram
Elevator air conditioning is becoming increasingly common around the world. The primary reason for installing an elevator air conditioner is the comfort it provides while traveling in the elevator: it stabilizes the condition of the air inside the elevator car. Some elevator air conditioners can be used in countries with cold climates if a thermostat is used to reverse the refrigeration cycle and warm the elevator car.
Heat generated from the cooling process is dissipated into the hoistway. The elevator cab (or car) is ordinarily not air-tight, and some of this heat may reenter the car and reduce the overall cooling effect.
Air from the lobby constantly leaks into the elevator shaft because of elevator movements and shaft ventilation requirements, so using this already-conditioned air in the elevator does not increase energy costs. Using an independent elevator air conditioner to achieve better temperature control inside the car, however, consumes additional energy.
Air conditioning poses a problem to elevators because of the condensation that occurs. The condensed water produced has to be disposed of; otherwise, it would create flooding in the elevator car and hoistway.
Methods of removing condensed water
There are at least four ways to remove condensed water from the air conditioner. However, each solution has its pros and cons.
Atomizing
Atomizing, also known as misting the condensed water, is one way to dispose of the condensed water. Spraying ultra-fine water droplets onto the hot coils of the air conditioner ensures that the condensed water evaporates quickly.
Though this is one of the best methods of disposing of the condensed water, it is also one of the costliest, because the nozzle that atomizes the water clogs easily. The majority of the cost goes to maintaining the atomizing system.
Boiling
The boiling method works by first collecting the condensed water and then heating it above its boiling point, so that it evaporates and is thereby disposed of.
Consumers are reluctant to employ this system because of the large amount of energy used just to dispose of the water.
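To give a rough sense of why the energy cost is high, the following back-of-the-envelope estimate assumes condensate collected at about 10 °C, a specific heat for water of roughly 4.19 kJ/(kg·K) and a latent heat of vaporization of roughly 2,257 kJ/kg; these are textbook approximations, not data from any elevator manufacturer:

SPECIFIC_HEAT_WATER = 4.19        # kJ per kg per kelvin (approximate)
LATENT_HEAT_VAPORIZATION = 2257   # kJ per kg at 100 deg C (approximate)

def boil_off_energy_kj(mass_kg, start_temp_c=10.0):
    # Energy (kJ) to heat the condensate from start_temp_c to 100 deg C
    # and then evaporate it completely.
    heating = SPECIFIC_HEAT_WATER * mass_kg * (100.0 - start_temp_c)
    evaporation = LATENT_HEAT_VAPORIZATION * mass_kg
    return heating + evaporation

energy = boil_off_energy_kj(1.0)   # about 2,634 kJ for one litre (~1 kg) of condensate
print(round(energy), "kJ =", round(energy / 3600, 2), "kWh per litre")

On these assumptions, boiling away a single litre of condensate takes roughly 0.7 kWh, which helps explain the reluctance to use this method.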
Cascading
The cascading method works by flowing the condensed water directly onto the hot coils of the air conditioner. This eventually evaporates the condensed water.
The downside of this method is that the coils have to be at an extremely high temperature for the condensed water to evaporate. There is a chance that the water might not evaporate entirely, which would cause water to overflow onto the exterior of the car.
Drainage system
A drainage system works by collecting the condensed water in a sump and using a pump to dispose of it through drainage piping.
It is an efficient method, but it comes at a heavy price because of the cost of building the sump, and keeping the pump in working order is expensive. Furthermore, the drainage pipes are unsightly on the exterior of the car, and the system cannot easily be implemented in a building that has already been completed.
ISO 22559
thumb|right|a symbol for elevator/lift
The mechanical and electrical design of elevators is dictated by various standards (also known as elevator codes), which may be international, national, state, regional or city based. Whereas once many standards were prescriptive, specifying exact criteria which must be complied with, there has recently been a shift towards performance-based standards, where the onus falls on the designer to ensure that the elevator meets or exceeds the standard.
National elevator standards:
Australia – AS1735
Canada – CAN/CSA B44
Europe – EN 81 series (EN 81-1, EN 81-2, EN 81-28, EN 81-70, EN 12015, EN 12016, EN 13015, etc.)
USA – ASME A17
These national standards have converged in the ISO 22559 series, "Safety requirements for lifts (elevators)":
Part 1: Global essential safety requirements (GESRs).
Part 2: Safety parameters meeting the global essential safety requirements (GESRs).
Part 3: Global conformity assessment procedures (GCAP) – Prerequisites for certification of conformity of lift systems, lift components and lift functions
Part 4: Global conformity assessment procedures (GCAP) – Certification and accreditation requirements
ISO/TC 178 is the Technical Committee on Lifts, escalators and moving walks.
Because an elevator is part of a building, it must also comply with building code standards relating to earthquake resilience, fire standards, electrical wiring rules and so forth.
The American National Elevator Standards Group (ANESG) sets a standard for elevator weight capacity.
Additional requirements relating to access by disabled persons may be mandated by laws or regulations such as the Americans with Disabilities Act. Elevators marked with a Star of Life are big enough for a stretcher.
U.S. and Canadian elevator standard specifics
thumb|A typical elevator style found in many modern residential and small commercial buildings
In most US and Canadian jurisdictions, passenger elevators are required to conform to the American Society of Mechanical Engineers' Standard A17.1, Safety Code for Elevators and Escalators. As of 2006, all states except Kansas, Mississippi, North Dakota, and South Dakota have adopted some version of ASME codes, though not necessarily the most recent. In Canada the document is the CAN/CSA B44 Safety Standard, which was harmonized with the US version in the 2000 edition. In addition, passenger elevators may be required to conform to the requirements of A17.3 for existing elevators where referenced by the local jurisdiction. Passenger elevators are tested using the ASME A17.2 Standard. The frequency of these tests is mandated by the local jurisdiction, which may be a town, city, state or provincial standard.
Passenger elevators must also conform to many ancillary building codes including the Local or State building code, National Fire Protection Association standards for Electrical, Fire Sprinklers and Fire Alarms, Plumbing codes, and HVAC codes. Also, passenger elevators are required to conform to the Americans with Disabilities Act and other State and Federal civil rights legislation regarding accessibility.
Residential elevators are required to conform to ASME A17.1. Platform and Wheelchair lifts are required to comply with ASME A18.1 in most US jurisdictions.
Most elevators have a location in which the permit for the building owner to operate the elevator is displayed. While some jurisdictions require the permit to be displayed in the elevator cab, other jurisdictions allow for the operating permit to be kept on file elsewhere – such as the maintenance office – and to be made available for inspection on demand. In such cases instead of the permit being displayed in the elevator cab, often a notice is posted in its place informing riders of where the actual permits are kept.
Unique elevator installations
World statistics
Country (number of elevators installed):
Italy: 900,000
United States: 900,000
China: 4,000,000
South Korea: 530,000
Russia: 520,000
Spain: 950,000
As of January 2008, Spain was the nation with the most elevators installed in the world, with 950,000 elevators (according to ANIE Federazione, the Federazione Nazionale Industrie Elettrotecniche ed Elettroniche) making more than one hundred million trips every day, followed by the United States with 700,000 elevators and China with 610,000 elevators installed since 1949. In Brazil, it is estimated that there are approximately 300,000 elevators currently in operation. The world's largest market for elevators is Italy, with more than 1,629 million euros in sales, of which 1,224 million euros are in the domestic market.
In Spain, elevator maintenance generates revenues of some €4,000 million a year, with a further €250 million in repairs and €300 million in exports (2012 figures).
Spain's stock of existing buildings numbers some 25 million, and Law 8/2013 of 26 June, on the rehabilitation, regeneration and renovation of existing buildings, is expected to drive the installation of 700,000 new elevators in existing buildings that lack one over the next few years.
In South Korea there are 530,000 elevators in operation, with 36,000 added in 2015. As of 2015, Hyundai Elevator has a 48% market share, Thyssen-Krupp Korea (formerly Dongyang) 17% and Otis Korea (formerly LG-Hitachi) 16%. The Korean annual elevator maintenance market is worth around US$1 billion.
Eiffel Tower
right|thumb|An elevator pulley in the Eiffel Tower
The Eiffel Tower has Otis double-deck elevators built into the legs of the tower, serving the ground level to the first and second levels. Even though the shaft runs diagonally upwards with the contour of the tower, both the upper and lower cars remain horizontally level. The offset distance of the two cars changes throughout the journey.
There are four elevator cars of the traditional design that run from the second level to the third level. The cars are connected to their opposite pairs (opposite in the elevator landing/hall) and use each other as the counterweight. As one car ascends from level 2, the other descends from level 3. The operations of these elevators are synchronized by a light signal in the car.
Taipei 101
left|thumb|The observation deck elevator floor indicator in the Taipei 101
Double deck elevators are used in the Taipei 101 office tower. Tenants of even-numbered floors first take an escalator (or an elevator from the parking garage) to the 2nd level, where they will enter the upper deck and arrive at their floors. The lower deck is turned off during low-volume hours, and the upper deck can act as a single-level elevator stopping at all adjacent floors. For example, the 85th floor restaurants can be accessed from the 60th floor sky-lobby. Restaurant customers must clear their reservations at the reception counter on the 2nd floor. A bank of express elevators stop only on the sky lobby levels (36 and 60, upper-deck car), where tenants can transfer to "local" elevators.
The high-speed observation deck elevators accelerate to a world-record certified speed within 16 seconds, then slow down for arrival with only subtle air-pressure sensations. The doors open 37 seconds after departure from the 5th floor. Special features include an aerodynamic car and counterweights, and cabin pressure control to help passengers adapt smoothly to pressure changes. The downward journey is completed at a reduced speed of 600 meters per minute, with the doors opening 52 seconds after departure.
Gateway Arch
thumb|The interior of one of the Gateway Arch tramway cars
The Gateway Arch in St. Louis, Missouri, United States, has a unique Montgomery elevator system which carries passengers from the visitors' center underneath the Arch to the observation deck at the top of the structure.
Called a tram or tramway, it is entered much as one would enter an ordinary elevator, through double doors. Passing through the doors, passengers in small groups enter a horizontal cylindrical compartment containing seats on each side and a flat floor. A number of these compartments are linked to form a train. Each compartment individually retains an appropriate level orientation by tilting while the entire train follows curved tracks up one leg of the arch.
There are two tramways within the Arch, one at the north end, and the other at the south end. The entry doors have windows, so people traveling within the Arch are able to see the interior structure of the Arch during the ride to and from the observation deck. At the beginning of the trip the cars hang from the drive cables, but as the angle of the shaft changes, they end up beside and then on top of the cables.
thumb|left|View up the shaft of the elevator at the New City Hall, Hannover, Germany
New City Hall, Hanover, Germany
thumb|100px|Elevator in the new city hall, Hannover, Germany, showing the cabin at the bottom and the top
The elevator in the New City Hall in Hanover, Germany, is a technical rarity, and unique in Europe, as the elevator starts straight up but then changes its angle by 15 degrees to follow the contour of the dome of the hall. The cabin therefore tilts 15 degrees during the ride. The elevator travels a height of 43 meters. The new city hall was built in 1913. The elevator was destroyed in 1943 and rebuilt in 1954.
Luxor incline elevator
The Luxor Hotel in Las Vegas, Nevada, United States, has inclined elevators. The hotel and casino is shaped as a pyramid, so the elevators travel up the side of the pyramid at a 39-degree angle. Other locations with inclined elevators include the Cityplace Station in Dallas, Texas, the Huntington Metro Station in Huntington, Virginia, and the San Diego Convention Center in San Diego, California.
Germany
At the Radisson Blu hotel in Berlin, Germany, the main elevator is surrounded by an aquarium. Standing 82 feet tall, the aquarium contains more than a thousand fish and offers striking views to people using the elevator.
The Twilight Zone Tower of Terror
The Twilight Zone Tower of Terror is the common name for a series of elevator attractions at the Disney's Hollywood Studios park in Orlando, the Disney California Adventure park in Anaheim, the Walt Disney Studios Park in Paris and the Tokyo DisneySea park in Tokyo. The central element of this attraction is a simulated free-fall achieved through the use of a high-speed elevator system. For safety reasons, passengers are seated and secured in their seats rather than standing. Unlike most traction elevators, the elevator car and counterweight are joined using a rail system in a continuous loop running through both the top and the bottom of the drop shaft. This allows the drive motor to pull down on the elevator car from underneath, resulting in downward acceleration greater than that of normal gravity. The high-speed drive motor is used to rapidly lift the elevator as well.
The passenger cabs are mechanically separated from the lift mechanism, thus allowing the elevator shafts to be used continuously while passengers board and disembark from the cabs, as well as move through show scenes on various floors. The passenger cabs, which are automated guided vehicles or AGVs, move into the vertical motion shaft and lock themselves in place before the elevator starts moving vertically. Multiple elevator shafts are used to further improve passenger throughput. The doorways of the top few "floors" of the attraction are open to the outdoor environment, thus allowing passengers to look out from the top of the structure.
"Top of the Rock" elevators
Guests ascending to the 67th, 69th, and 70th level observation decks (dubbed "Top of the Rock") atop the GE Building at Rockefeller Center in New York City ride a high-speed glass-top elevator. On entering, the cab appears to be an ordinary elevator. However, once the cab begins moving, the interior lights turn off and a special blue light above the cab turns on. This lights the entire shaft, so riders can see the moving cab through its glass ceiling as it rises and descends through the shaft. Music plays and various animations are displayed on the ceiling. The entire ride takes about 60 seconds.
The Haunted Mansion
Part of the Haunted Mansion attraction at Disneyland in Anaheim, California, and at Disneyland Paris in France, takes place on an elevator. The "stretching room" on the ride is actually an elevator that travels downwards, giving access to a short underground tunnel which leads to the rest of the attraction. The elevator has no ceiling, and its shaft is decorated to look like the walls of a mansion. Because there is no roof, passengers are able to see the walls of the shaft by looking up, which gives the illusion of the room stretching.
Elevators for urban transport
thumb|Elevador de Santa Justa, in Lisbon, Portugal
In some towns where terrain is difficult to navigate, elevators are used as part of urban transport systems.
thumb|Elevador Lacerda in Salvador, Brazil.
thumb|Shanklin Cliff lift in Shanklin, Isle of Wight
Examples:
Alexandria, Virginia, USA — public incline elevators connect to Huntington station
Almada, Portugal: Elevador da Boca do Vento
Asansor, Izmir, Turkey
Bad Schandau Elevator in Bad Schandau, Germany
Barcelona, Spain — Elevator and cableway line connecting the port terminal to Montjuic hill
Bilbao — Casco Viejo Bilbao Metro station (fare-paying elevator connecting upper and lower neighborhoods, as well as the station)
Brussels — Marolles, Belgium: "Ascenseur des Marolles", links the upper part of the city to the lower one, from Place Poelaert to Breughel square.
Coimbra, Portugal: Elevador do Mercado
Durie Hill Elevator in Whanganui, New Zealand; originally built by the subdividers of the suburb
East Hill Cliff Railway, Hastings, UK
Genoa, Italy — eleven public elevators
Hammetschwand Elevator in Bürgenstock, Switzerland
Helgoland in Schleswig-Holstein, Germany — Connects upper and lower parts of the island.
Jersey City, New Jersey elevator at Hudson–Bergen Light Rail station at 9th Street and Palisade Avenue.
Katarina Elevator in Stockholm, Sweden
Knoxville, Tennessee, United States — Outdoor public elevator at World's Fair Park
Lisbon, Portugal: Elevador de Santa Justa, Castelo (planned), Chiado (closed), Município/Biblioteca (demolished)
Luxembourg
Lynchburg, Virginia, United States — Outdoor public elevator that connects Church Street on the lower level and Court Street on the upper level
Malta: a lift takes people from the Barrakka Gardens (on top of the fortifications) in the City of Valletta down to harbor level.
Marburg, Germany — some parts of the historic city core built on higher ground (Uppertown, "Oberstadt" in German) are accessible from the lower street level by elevators. These elevators are unique in servicing also various buildings partially embedded in the steep-sloping terrain
Monaco: seven elevators
Naples, Italy — three public elevators
New York City, USA — the 190th Street (IND Eighth Avenue Line) subway station has a bank of elevators that can be used by pedestrians without paying a fare. Also, the 34th Street (IRT Flushing Line) station has an incline elevator that, when open, can be used without paying a fare. Both stations are very deep.
Oporto, Portugal: Elevador da Ribeira
Oregon City Municipal Elevator in Oregon City, Oregon, United States.
Salvador, Bahia, Brazil: Elevador Lacerda
City of San Marino, San Marino — Connects several levels of the town.
Savannah, Georgia: Public elevators with access to River Street.
Shanklin Cliff Lift in Shanklin, Isle of Wight — Fare was £1 as of 2013.
Skyway in Nagasaki, Japan
Val Thorens, France — Public elevators linking upper town with lower town.
Wah Fu Estate, Hong Kong — Public elevators connecting Wah Fu to Wah Kwai Estate
Yalta, Ukraine
Chongqing, People's Republic of China: public elevator at Kaixuan Road.
Some cities have short two-station unenclosed inclined railway lines that serve the same function. These are called funiculars.
World's fastest elevators
The Shanghai Tower holds the current record for the world's fastest elevator cars. The elevator, installed on 7 July 2016, was manufactured by Mitsubishi Electric.
See also
Building transportation systems
Central–Mid-Levels escalator and walkway system (Hong Kong)
Double-deck elevator
Dumbwaiter
Elevator consultant
Elevator mechanic
Elevator operator
Elevator paradox
Elevator surfing
Escalator
Funicular
Global urbanization
Grain elevator
Home lift
Incline elevator
List of elevator manufacturers
Moving walkway
Paternoster
People mover
Schmid Peoplemover, an elevator capable of crossing a road
Shopping cart conveyor
Space elevator
Stairlift
Wheelchair lift
References
Notes
Bibliography
Bernard, Andreas. Lifted: A Cultural History of the Elevator (New York University Press; 2014) 309 pages; scholarly architectural and technological history; also examines literary and cinematic representations.
Traffic Performance of Elevators with Destination Control
Manavalan, Theresa (30 October 2005). "Don't let them ride alone". New Straits Times, p. F2.
Further reading
External links
The Lifting Operations and Lifting Equipment Regulations 1998 (LOLER) Guidance
Timeline of the elevator
A collection of elevator control panels
ACE3 Opportunities for Elevator Energy Efficiency Improvements
Nick Paumgarten, The New Yorker, 21 April 2008, Up And Then Down: The lives of elevators
Why do we behave so oddly in lifts? BBC News Online (2012-10-08)
Comparisons of different types of Elevators
General and Historic Information on MRL Elevators
A 3D Elevator Simulator
Record Breaking Elevators of the Modern World: interesting facts and statistics on seven engineering feats.
Category:Vertical transport devices
Frédéric Chopin
thumb|Photograph of Chopin by Bisson,
center|150px|alt=Chopin's signature
Frédéric François Chopin (1 March 1810 – 17 October 1849), born Fryderyk Franciszek Chopin, was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for the solo piano. He gained and has maintained renown worldwide as a leading musician of his era, whose "poetic genius was based on a professional technique that was without equal in his generation."Rosen (1995), p. 284. Chopin was born in what was then the Duchy of Warsaw and grew up in Warsaw, which in 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the November 1830 Uprising.
At 21 he settled in Paris. Thereafter, during the last 18 years of his life, he gave only some 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself by selling his compositions and by teaching piano, for which he was in high demand. Chopin formed a friendship with Franz Liszt and was admired by many of his musical contemporaries, including Robert Schumann. In 1835 he obtained French citizenship. After a failed engagement to Maria Wodzińska, from 1837 to 1847 he maintained an often troubled relationship with the French woman writer George Sand. A brief and unhappy visit to Majorca with Sand in 1838–39 was one of his most productive periods of composition. In his last years, he was financially supported by his admirer Jane Stirling, who also arranged for him to visit Scotland in 1848. Through most of his life, Chopin suffered from poor health. He died in Paris in 1849, at the age of 39, probably of tuberculosis.
All of Chopin's compositions include the piano. Most are for solo piano, though he also wrote two piano concertos, a few chamber pieces, and some songs to Polish lyrics. His keyboard style is highly individual and often technically demanding; his own performances were noted for their nuance and sensitivity. Chopin invented the concept of the instrumental ballade. His major piano works also include mazurkas, waltzes, nocturnes, polonaises, études, impromptus, scherzos, preludes and sonatas, some published only after his death. Influences on his composition style include Polish folk music, the classical tradition of J. S. Bach, Mozart and Schubert, the music of all of whom he admired, as well as the Paris salons where he was a frequent guest. His innovations in style, musical form, and harmony, and his association of music with nationalism, were influential throughout and after the late Romantic period.
Chopin's music, his status as one of music's earliest superstars, his association (if only indirect) with political insurrection, his love life and his early death have made him a leading symbol of the Romantic era in the public consciousness. His works remain popular, and he has been the subject of numerous films and biographies of varying degrees of historical accuracy.
Life
Childhood
thumb|upright=0.8|Chopin's father, Nicolas Chopin, by Mieroszewski, 1829
Fryderyk Chopin was born in Żelazowa Wola,Zamoyski (2010), pp. 4–5 (locs. 115–130). 46 kilometres west of Warsaw, in what was then the Duchy of Warsaw, a Polish state established by Napoleon. The parish baptismal record gives his birthday as 22 February 1810, and cites his given names in the Latin form Fridericus Franciscus (in Polish, he was Fryderyk Franciszek).Hedley (1980), p. 292. However, the composer and his family used the birthdate 1 March, which is now generally accepted as the correct date.Rose Cholmondeley, "The Mystery of Chopin's Birthday", Chopin Society UK website, accessed 21 December 2013.
Fryderyk's father, Nicolas Chopin, was a Frenchman from Lorraine who had emigrated to Poland in 1787 at the age of sixteen.Zamoyski (2010), p. 3 (loc. 100). Nicolas tutored children of the Polish aristocracy, and in 1806 married Justyna Krzyżanowska, a poor relative of the Skarbeks, one of the families for whom he worked.Michałowski and Samson (n.d), §1, para. 1. Fryderyk was baptized on Easter Sunday, 23 April 1810, in the same church where his parents had married, in Brochów. His eighteen-year-old godfather, for whom he was named, was Fryderyk Skarbek, a pupil of Nicolas Chopin. Fryderyk was the couple's second child and only son; he had an elder sister, Ludwika (1807–55), and two younger sisters, Izabela (1811–81) and Emilia (1812–27).Zamoyski (2010) p. 7 (loc. 158). Nicolas was devoted to his adopted homeland, and insisted on the use of the Polish language in the household.
In October 1810, six months after Fryderyk's birth, the family moved to Warsaw, where his father acquired a post teaching French at the Warsaw Lyceum, then housed in the Saxon Palace. Fryderyk lived with his family in the Palace grounds. The father played the flute and violin;Zamoyski (2010), pp. 5–6 (locs. 130–144). the mother played the piano and gave lessons to boys in the boarding house that the Chopins kept. Chopin was of slight build, and even in early childhood was prone to illnesses.Zamoyski (2010), 6 (loc. 144).
thumb|left|upright=1.5|Chopin's birthplace: outbuilding of nonexistent Skarbek Palace at Żelazowa Wola
Fryderyk may have had some piano instruction from his mother, but his first professional music tutor, from 1816 to 1821, was the Czech pianist Wojciech Żywny.Michałowski and Samson (n.d), §1, para. 3. His elder sister Ludwika also took lessons from Żywny, and occasionally played duets with her brother.Samson (1996), p. 8. It quickly became apparent that he was a child prodigy. By the age of seven Fryderyk had begun giving public concerts, and in 1817 he composed two polonaises, in G minor and B-flat major."The Complete Keyboard Works", Chopin Project website, accessed 21 December 2013. His next work, a polonaise in A-flat major of 1821, dedicated to Żywny, is his earliest surviving musical manuscript.
In 1817 the Saxon Palace was requisitioned by Warsaw's Russian governor for military use, and the Warsaw Lyceum was reestablished in the Kazimierz Palace (today the rectorate of Warsaw University). Fryderyk and his family moved to a building, which still survives, adjacent to the Kazimierz Palace. During this period, Fryderyk was sometimes invited to the Belweder Palace as playmate to the son of the ruler of Russian Poland, Grand Duke Constantine; he played the piano for the Duke and composed a march for him. Julian Ursyn Niemcewicz, in his dramatic eclogue, "Nasze Przebiegi" ("Our Discourses", 1818), attested to "little Chopin's" popularity.Zamoyski (2010), pp. 11–12 (locs. 231–248).
Education
thumb|upright=0.8|Józef Elsner after 1853
From September 1823 to 1826, Chopin attended the Warsaw Lyceum, where he received organ lessons from the Czech musician Wilhelm Würfel during his first year. In the autumn of 1826 he began a three-year course under the Silesian composer Józef Elsner at the Warsaw Conservatory, studying music theory, figured bass and composition.Michałowski and Samson (n.d.), §1, para. 5. Throughout this period he continued to compose and to give recitals in concerts and salons in Warsaw. He was engaged by the inventors of a mechanical organ, the "eolomelodicon", and on this instrument in May 1825 he performed his own improvisation and part of a concerto by Moscheles. The success of this concert led to an invitation to give a similar recital on the instrument before Tsar Alexander I, who was visiting Warsaw; the Tsar presented him with a diamond ring. At a subsequent eolomelodicon concert on 10 June 1825, Chopin performed his Rondo Op. 1. This was the first of his works to be commercially published and earned him his first mention in the foreign press, when the Leipzig Allgemeine Musikalische Zeitung praised his "wealth of musical ideas".Zamoyski (2010), pp. 21–2 (locs. 365–387).
During 1824–28 Chopin spent his vacations away from Warsaw, at a number of locales. In 1824 and 1825, at Szafarnia, he was a guest of Dominik Dziewanowski, the father of a schoolmate. Here for the first time he encountered Polish rural folk music.Michałowski and Samson (n.d.), §1 para. 2. His letters home from Szafarnia (to which he gave the title "The Szafarnia Courier"), written in a very modern and lively Polish, amused his family with their spoofing of the Warsaw newspapers and demonstrated the youngster's literary gift.Zamoyski (2010), pp. 19–20 (locs. 334–352).
In 1827, soon after the death of Chopin's youngest sister Emilia, the family moved from the Warsaw University building, adjacent to the Kazimierz Palace, to lodgings just across the street from the university, in the south annex of the Krasiński Palace on Krakowskie Przedmieście, where Chopin lived until he left Warsaw in 1830. Here his parents continued running their boarding house for male students; the Chopin Family Parlour (Salonik Chopinów) became a museum in the 20th century. In 1829 the artist Ambroży Mieroszewski executed a set of portraits of Chopin family members, including the first known portrait of the composer.
Four boarders at his parents' apartments became Chopin's intimates: Tytus Woyciechowski, Jan Nepomucen Białobłocki, Jan Matuszyński and Julian Fontana; the latter two would become part of his Paris milieu. He was friendly with members of Warsaw's young artistic and intellectual world, including Fontana, Józef Bohdan Zaleski and Stefan Witwicki.Zamoyski (2010), p. 43 (loc. 696). He was also attracted to the singing student Konstancja Gładkowska. In letters to Woyciechowski, he indicated which of his works, and even which of their passages, were influenced by his fascination with her; his letter of 15 May 1830 revealed that the slow movement (Larghetto) of his Piano Concerto No. 1 (in E minor) was secretly dedicated to her – "It should be like dreaming in beautiful springtime – by moonlight."Zamoyski (2010), pp. 50–52 (locs. 801–838). His final Conservatory report (July 1829) read: "Chopin F., third-year student, exceptional talent, musical genius."
Travel and domestic success
thumb|left|upright=1.1|Chopin plays for the Radziwiłłs, 1829 (painting by Henryk Siemiradzki, 1887)
In September 1828 Chopin, while still a student, visited Berlin with a family friend, zoologist Feliks Jarocki, enjoying operas directed by Gaspare Spontini and attending concerts by Carl Friedrich Zelter, Felix Mendelssohn and other celebrities. On an 1829 return trip to Berlin, he was a guest of Prince Antoni Radziwiłł, governor of the Grand Duchy of Posen—himself an accomplished composer and aspiring cellist. For the prince and his pianist daughter Wanda, he composed his Introduction and Polonaise brillante in C major for cello and piano, Op. 3.Zamoyski (2010), p. 45 (loc. 731).
Back in Warsaw that year, Chopin heard Niccolò Paganini play the violin, and composed a set of variations, Souvenir de Paganini. It may have been this experience which encouraged him to commence writing his first Études, (1829–32), exploring the capacities of his own instrument.Zamoyski (2010), p. 35 (loc. 569). On 11 August, three weeks after completing his studies at the Warsaw Conservatory, he made his debut in Vienna. He gave two piano concerts and received many favourable reviews—in addition to some commenting (in Chopin's own words) that he was "too delicate for those accustomed to the piano-bashing of local artists". In one of these concerts, he premiered his Variations on Là ci darem la mano, Op. 2 (variations on an aria from Mozart's opera Don Giovanni) for piano and orchestra.Zamoyski (2010), pp. 37–39 (locs. 599-632). He returned to Warsaw in September 1829,Zamoyski (2010), p. 43 (loc. 689). where he premiered his Piano Concerto No. 2 in F minor, Op. 21 on 17 March 1830.
Chopin's successes as a composer and performer opened the door to western Europe for him, and on 2 November 1830, he set out, in the words of Zdzisław Jachimecki, "into the wide world, with no very clearly defined aim, forever."Jachimecki (1937), p. 422. With Woyciechowski, he headed for Austria again, intending to go on to Italy. Later that month, in Warsaw, the November 1830 Uprising broke out, and Woyciechowski returned to Poland to enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, "I curse the moment of my departure."Michałowski and Samson (n.d), §2, para. 1. When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: "Oh God! ... You are there, and yet you do not take vengeance!"Michałowski and Samson (n.d), §2, para. 3. The journal is now in the National Library of Poland. Jachimecki ascribes to these events the composer's maturing "into an inspired national bard who intuited the past, present and future of his native Poland."
Paris
thumb|upright=0.8|Chopin at 25, by his fiancée Maria Wodzińska, 1835
Chopin arrived in Paris in late September 1831; he would never return to Poland,Michałowski and Samson (n.d), §1, para. 6. thus becoming one of many expatriates of the Polish Great Emigration. In France he used the French versions of his given names, and after receiving French citizenship in 1835, he travelled on a French passport.A French passport used by Chopin is shown at Emmanuel Langavant, Passeport français de Chopin, Chopin – musicien français website, accessed 13 August 2014. However, Chopin remained close to his fellow Poles in exile as friends and confidants and he never felt fully comfortable speaking French. Chopin's biographer Adam Zamoyski writes that he never considered himself to be French, despite his father's French origins, and always saw himself as a Pole.Zamoyski (2010), p. 128 (loc. 2027).
In Paris, Chopin encountered artists and other distinguished figures, and found many opportunities to exercise his talents and achieve celebrity. During his years in Paris he was to become acquainted with, among many others, Hector Berlioz, Franz Liszt, Ferdinand Hiller, Heinrich Heine, Eugène Delacroix, and Alfred de Vigny.Zamoyski (2010), p. 106 (loc. 1678). Chopin was also acquainted with the poet Adam Mickiewicz, principal of the Polish Literary Society, some of whose verses he set as songs.Zamoyski (2010), p. 137 (loc. 2164).
Two Polish friends in Paris were also to play important roles in Chopin's life there. His fellow student at the Warsaw Conservatory, Julian Fontana, had originally tried unsuccessfully to establish himself in England; Albert Grzymała, who in Paris became a wealthy financier and society figure, often acted as Chopin's adviser and "gradually began to fill the role of elder brother in [his] life."Zamoyski (2010), pp. 106–107 (locs. 1678–1696). Fontana was to become, in the words of Michałowski and Samson, Chopin's "general factotum and copyist".Michałowski and Samson (n.d), §3, para. 2.
At the end of 1831, Chopin received the first major endorsement from an outstanding contemporary when Robert Schumann, reviewing the Op. 2 Variations in the Allgemeine musikalische Zeitung (his first published article on music), declared: "Hats off, gentlemen! A genius."Schumann (1988), pp. 15–17. On 26 February 1832 Chopin gave a debut Paris concert at the Salle Pleyel which drew universal admiration. The critic François-Joseph Fétis wrote in the Revue et gazette musicale: "Here is a young man who ... taking no model, has found, if not a complete renewal of piano music, ... an abundance of original ideas of a kind to be found nowhere else ..."cited in Zamoyski (2010), p. 88 (loc. 1384). After this concert, Chopin realized that his essentially intimate keyboard technique was not optimal for large concert spaces. Later that year he was introduced to the wealthy Rothschild banking family, whose patronage also opened doors for him to other private salons (social gatherings of the aristocracy and artistic and literary elite). By the end of 1832 Chopin had established himself among the Parisian musical elite, and had earned the respect of his peers such as Hiller, Liszt, and Berlioz. He no longer depended financially upon his father, and in the winter of 1832 he began earning a handsome income from publishing his works and teaching piano to affluent students from all over Europe.Michałowski and Samson (n.d), §2, paras. 4–5. This freed him from the strains of public concert-giving, which he disliked.
thumb|left|upright=0.7||Maria Wodzińska, self-portrait
Chopin seldom performed publicly in Paris. In later years he generally gave a single annual concert at the Salle Pleyel, a venue that seated three hundred. He played more frequently at salons, but preferred playing at his own Paris apartment for small groups of friends. The musicologist Arthur Hedley has observed that "As a pianist Chopin was unique in acquiring a reputation of the highest order on the basis of a minimum of public appearances—few more than thirty in the course of his lifetime."Hedley (2005), pp. 263-4. The list of musicians who took part in some of his concerts provides an indication of the richness of Parisian artistic life during this period. Examples include a concert on 23 March 1833, in which Chopin, Liszt and Hiller performed (on pianos) a concerto by J.S. Bach for three keyboards; and, on 3 March 1838, a concert in which Chopin, his pupil Adolphe Gutmann, Charles-Valentin Alkan, and Alkan's teacher Joseph Zimmermann performed Alkan's arrangement, for eight hands, of two movements from Beethoven's 7th symphony.Conway (2012), p. 226 and n. 9. Chopin was also involved in the composition of Liszt's Hexameron; he wrote the sixth (and final) variation on Bellini's theme. Chopin's music soon found success with publishers, and in 1833 he contracted with Maurice Schlesinger, who arranged for it to be published not only in France but, through his family connections, also in Germany and England.Michałowski and Samson (n.d), §2, para. 5. For Schlesinger's international network see Conway (2012), pp. 185–7 and pp.238–9.
In the spring of 1834, Chopin attended the Lower Rhenish Music Festival in Aix-la-Chapelle with Hiller, and it was there that Chopin met Felix Mendelssohn. After the festival, the three visited Düsseldorf, where Mendelssohn had been appointed musical director. They spent what Mendelssohn described as "a very agreeable day", playing and discussing music at his piano, and met Friedrich Wilhelm Schadow, director of the Academy of Art, and some of his eminent pupils such as Lessing, Bendemann, Hildebrandt and Sohn.Niecks (1980), p. 313. In 1835 Chopin went to Carlsbad, where he spent time with his parents; it was the last time he would see them. On his way back to Paris, he met old friends from Warsaw, the Wodzińskis. He had made the acquaintance of their daughter Maria in Poland five years earlier, when she was eleven. This meeting prompted him to stay for two weeks in Dresden, when he had previously intended to return to Paris via Leipzig.Zamoyski (2010), pp. 118–9 (locs. 1861–1878). The sixteen-year-old girl's portrait of the composer is considered, along with Delacroix's, as among Chopin's best likenesses. In October he finally reached Leipzig, where he met Schumann, Clara Wieck and Felix Mendelssohn, who organised for him a performance of his own oratorio St. Paul, and who considered him "a perfect musician".Zamoyski (2010), pp. 119–20 (locs. 1878–1896). In July 1836 Chopin travelled to Marienbad and Dresden to be with the Wodziński family, and in September he proposed to Maria, whose mother Countess Wodzińska approved in principle. Chopin went on to Leipzig, where he presented Schumann with his G minor Ballade.Zamoyski (2010), pp. 126–7 (locs. 1983–2001). At the end of 1836 he sent Maria an album in which his sister Ludwika had inscribed seven of his songs, and his 1835 Nocturne in C-sharp minor, Op. 27, No. 1.Jachimecki (1937), p. 423. The anodyne thanks he received from Maria proved to be the last letter he was to have from her.Chopin (1962), p. 144.
Franz Liszt
thumb|right|upright=0.9|Franz Liszt in 1838, engraving by Josef Kriehuber
Although it is not known exactly when Chopin first met Liszt after arriving in Paris, on 12 December 1831 he mentioned in a letter to his friend Woyciechowski that "I have met Rossini, Cherubini, Baillot, etc.—also Kalkbrenner. You would not believe how curious I was about Herz, Liszt, Hiller, etc."Hall-Swadley (2011), p. 31. Liszt was in attendance at Chopin's Parisian debut on 26 February 1832 at the Salle Pleyel, which led him to remark: "The most vigorous applause seemed not to suffice to our enthusiasm in the presence of this talented musician, who revealed a new phase of poetic sentiment combined with such happy innovation in the form of his art."
The two became friends, and for many years lived in close proximity in Paris, Chopin at 38 Rue de la Chaussée-d'Antin, and Liszt at the Hôtel de France on the Rue Lafitte, a few blocks away. They performed together on seven occasions between 1833 and 1841. The first, on 2 April 1833, was at a benefit concert organized by Hector Berlioz for his bankrupt Shakespearean actress wife Harriet Smithson, during which they played George Onslow's Sonata in F minor for piano duet.Hall-Swadley (2011), p. 32. Later joint appearances included a benefit concert for the Benevolent Association of Polish Ladies in Paris. Their last appearance together in public was for a charity concert conducted for the Beethoven Memorial in Bonn, held at the Salle Pleyel and the Paris Conservatory on 25 and 26 April 1841.
Although the two displayed great respect and admiration for each other, their friendship was uneasy and had some qualities of a love-hate relationship. Harold C. Schonberg believes that Chopin displayed a "tinge of jealousy and spite" towards Liszt's virtuosity on the piano,Schonberg (1987), p. 151. and others have also argued that he had become enchanted with Liszt's theatricality, showmanship and success.Hall-Swadley (2011), p. 33. Liszt was the dedicatee of Chopin's Op. 10 Études, and his performance of them prompted the composer to write to Hiller, "I should like to rob him of the way he plays my studies."Walker (1988), p. 184. However, Chopin expressed annoyance in 1843 when Liszt performed one of his nocturnes with the addition of numerous intricate embellishments, at which Chopin remarked that he should play the music as written or not play it at all, forcing an apology. Most biographers of Chopin state that after this the two had little to do with each other, although in his letters dated as late as 1848 he still referred to him as "my friend Liszt". Some commentators point to events in the two men's romantic lives which led to a rift between them; there are claims that Liszt had displayed jealousy of his mistress Marie d'Agoult's obsession with Chopin, while others believe that Chopin had become concerned about Liszt's growing relationship with George Sand.
George Sand
thumb|upright=0.8|left|Chopin at 28, from Delacroix's joint portrait of Chopin and Sand
In 1836, at a party hosted by Marie d'Agoult, Chopin met the French author George Sand (born [Amantine] Aurore [Lucile] Dupin). Short (under five feet, or 152 cm), dark, big-eyed and a cigar smoker,Schonberg (1987), p. 152. she initially repelled Chopin, who remarked, "What an unattractive person la Sand is. Is she really a woman?"Michałowski and Samson (n.d.) §3, para. 3. However, by early 1837 Maria Wodzińska's mother had made it clear to Chopin in correspondence that a marriage with her daughter was unlikely to proceed.Chopin (1962), p. 141. It is thought that she was influenced by his poor health and possibly also by rumours about his associations with women such as d'Agoult and Sand.Zamoyski (2010), pp. 137–8 (locs. 2169–2186). Chopin finally placed the letters from Maria and her mother in a package on which he wrote, in Polish, "My tragedy".Zamoyski (2010), p. 147 (loc. 2318). Sand, in a letter to Grzymała of June 1838, admitted strong feelings for the composer and debated whether to abandon a current affair in order to begin a relationship with Chopin; she asked Grzymała to assess Chopin's relationship with Maria Wodzińska, without realising that the affair, at least from Maria's side, was over.Chopin (1962), pp. 151–161.
In June 1837 Chopin visited London incognito in the company of the piano manufacturer Camille Pleyel where he played at a musical soirée at the house of English piano maker James Broadwood.Załuski (1992), p. 226. On his return to Paris, his association with Sand began in earnest, and by the end of June 1838 they had become lovers.Michałowski and Samson (n.d.) §3, para. 4. Sand, who was six years older than the composer, and who had had a series of lovers, wrote at this time: "I must say I was confused and amazed at the effect this little creature had on me ... I have still not recovered from my astonishment, and if I were a proud person I should be feeling humiliated at having been carried away ..."Cited in Zamoyski (2010), p. 154 (loc. 2417). The two spent a miserable winter on Majorca (8 November 1838 to 13 February 1839), where, together with Sand's two children, they had journeyed in the hope of improving the health of Chopin and that of Sand's 15-year-old son Maurice, and also to escape the threats of Sand's former lover Félicien Mallefille.Zamoyski (2010), p. 159 (loc. 2514). After discovering that the couple were not married, the deeply traditional Catholic people of Majorca became inhospitable,Zamoyski (2010), pp. 161–162 (locs. 2544–2560). making accommodation difficult to find. This compelled the group to take lodgings in a former Carthusian monastery in Valldemossa, which gave little shelter from the cold winter weather.
On 3 December, Chopin complained about his bad health and the incompetence of the doctors in Majorca: "Three doctors have visited me ... The first said I was dead; the second said I was dying; and the third said I was about to die."cited in Zamoyski (2010), p. 162 (loc. 2560). He also had problems having his Pleyel piano sent to him. It finally arrived from Paris in December. Chopin wrote to Pleyel in January 1839: "I am sending you my Preludes [(Op. 28)]. I finished them on your little piano, which arrived in the best possible condition in spite of the sea, the bad weather and the Palma customs." Chopin was also able to undertake work on his Ballade No. 2, Op. 38; two Polonaises, Op. 40; and the Scherzo No. 3, Op. 39.Zamoyski (2010), p. 168 (loc. 2646).
thumb|upright|Chopin in 1838 by Charles Louis Gratia
Although this period had been productive, the bad weather had such a detrimental effect on Chopin's health that Sand determined to leave the island. To avoid further customs duties, Sand sold the piano to a local French couple, the Canuts.Zamoyski (2010), p. 168 (loc. 2654). The group traveled first to Barcelona, then to Marseilles, where they stayed for a few months while Chopin convalesced.Michałowski and Samson (n.d.) §3, para. 5. In May 1839 they headed for the summer to Sand's estate at Nohant, where they spent most summers until 1846. In autumn they returned to Paris, where Chopin's apartment at 5 rue Tronchet was close to Sand's rented accommodation at the rue Pigalle. He frequently visited Sand in the evenings, but both retained some independence.Michałowski and Samson (n.d.) §4, para. 1. In 1842 he and Sand moved to the Square d'Orléans, living in adjacent buildings.Michałowski and Samson (n.d.) §4, para. 4.
At the funeral of the tenor Adolphe Nourrit in Paris in 1839, Chopin made a rare appearance at the organ, playing a transcription of Franz Schubert's lied Die Gestirne. On 26 July 1840 Chopin and Sand were present at the dress rehearsal of Berlioz's Grande symphonie funèbre et triomphale, composed to commemorate the tenth anniversary of the July Revolution. Chopin was reportedly unimpressed with the composition.Goldberg (2004), p. 8.
During the summers at Nohant, particularly in the years 1839–43, Chopin found quiet, productive days during which he composed many works, including his Polonaise in A-flat major, Op. 53. Among the visitors to Nohant were Delacroix and the mezzo-soprano Pauline Viardot, whom Chopin had advised on piano technique and composition.Zamoyski (2010), p. 197 (loc. 3100). Delacroix gives an account of staying at Nohant in a letter of 7 June 1842:
The hosts could not be more pleasant in entertaining me. When we are not all together at dinner, lunch, playing billiards, or walking, each of us stays in his room, reading or lounging around on a couch. Sometimes, through the window which opens on the garden, a gust of music wafts up from Chopin at work. All this mingles with the songs of nightingales and the fragrance of roses.Cited in Atwood (1999), p. 315.
Decline
thumb|right|upright=0.7|George Sand sewing, from Delacroix's joint portrait of Chopin and Sand (1838)
From 1842 onwards, Chopin showed signs of serious illness. After a solo recital in Paris on 21 February 1842, he wrote to Grzymała: "I have to lie in bed all day long, my mouth and tonsils are aching so much."Zamoyski (2010) p. 212 (loc. 3331). He was forced by illness to decline a written invitation from Alkan to participate in a repeat performance of the Beethoven Seventh Symphony arrangement at Erard's on 1 March 1843.Eddie (2013), p. 8. Late in 1844, Charles Hallé visited Chopin and found him "hardly able to move, bent like a half-opened penknife and evidently in great pain", although his spirits returned when he started to play the piano for his visitor.Zamoyski (2010), p. 227 (loc. 3571). Chopin's health continued to deteriorate, particularly from this time onwards. Modern research suggests that apart from any other illnesses, he may also have suffered from temporal lobe epilepsy.Sara Reardon, "Chopin's hallucinations may have been caused by epilepsy", The Washington Post, 31 January 2011, accessed 10 January 2014.
Chopin's relations with Sand were soured in 1846 by problems involving her daughter Solange and Solange's fiancé, the young fortune-hunting sculptor Auguste Clésinger.Michałowski and Samson (n.d.), §5, para. 2. The composer frequently took Solange's side in quarrels with her mother; he also faced jealousy from Sand's son Maurice.Samson (1996), p. 194. Chopin was utterly indifferent to Sand's radical political pursuits, while Sand looked on his society friends with disdain.Chen (2009), p. 32. As the composer's illness progressed, Sand had become less of a lover and more of a nurse to Chopin, whom she called her "third child". In letters to third parties, she vented her impatience, referring to him as a "child," a "little angel", a "sufferer" and a "beloved little corpse." In 1847 Sand published her novel Lucrezia Floriani, whose main characters—a rich actress and a prince in weak health—could be interpreted as Sand and Chopin; the story was uncomplimentary to Chopin, who could not have missed the allusions as he helped Sand correct the printer's galleys. In 1847 he did not visit Nohant, and he quietly ended their ten-year relationship following an angry correspondence which, in Sand's words, made "a strange conclusion to nine years of exclusive friendship." The two would never meet again.Chen (2009), p. 34.
Chopin's output as a composer throughout this period declined in quantity year by year. Whereas in 1841 he had written a dozen works, only six were written in 1842 and six shorter pieces in 1843. In 1844 he wrote only the Op. 58 sonata. 1845 saw the completion of three mazurkas (Op. 59). Although these works were more refined than many of his earlier compositions, Zamoyski concludes that "his powers of concentration were failing and his inspiration was beset by anguish, both emotional and intellectual."Zamoyski (2010), p. 233 (loc. 3668).
Tour of England and Scotland
Chopin's public popularity as a virtuoso began to wane, as did the number of his pupils, and this, together with the political strife and instability of the time, caused him to struggle financially. In February 1848, with the cellist Auguste Franchomme, he gave his last Paris concert, which included three movements of the Cello Sonata Op. 65.
thumb|left|upright=0.6|Jane Stirling, by Devéria, c. 1830
In April, during the Revolution of 1848 in Paris, he left for London, where he performed at several concerts and at numerous receptions in great houses. This tour was suggested to him by his Scottish pupil Jane Stirling and her elder sister. Stirling also made all the logistical arrangements and provided much of the necessary funding.Michałowski and Samson (n.d.), §5, para. 3.
In London Chopin took lodgings at Dover Street, where the firm of Broadwood provided him with a grand piano. At his first engagement, on 15 May at Stafford House, the audience included Queen Victoria and Prince Albert. The Prince, who was himself a talented musician, moved close to the keyboard to view Chopin's technique. Broadwood also arranged concerts for him; among those attending were Thackeray and the singer Jenny Lind. Chopin was also sought after for piano lessons, for which he charged the high fee of one guinea (£1.05 in present British currency) per hour, and for private recitals for which the fee was 20 guineas. At a concert on 7 July he shared the platform with Viardot, who sang arrangements of some of his mazurkas to Spanish texts.Załuski (1992), pp. 227–9. On 28 August, he played at a concert in Manchester's Concert Hall, sharing the stage with Marietta Alboni and Lorenzo Salvi."Review: Frédéric Chopin and Marietta Alboni perform in Manchester", The Manchester Guardian, 30 August 1848; also singing was Amalia Colbari; the conductor was Charles Seymour, who was later first violinist in The Hallé orchestra. The Manchester Concert Hall is now the site of the Midland Hotel.
In late summer he was invited by Jane Stirling to visit Scotland, where he stayed at Calder House near Edinburgh and at Johnstone Castle in Renfrewshire, both owned by members of Stirling's family. She clearly had a notion of going beyond mere friendship, and Chopin was obliged to make it clear to her that this could not be so. He wrote at this time to Grzymała "My Scottish ladies are kind, but such bores", and responding to a rumour about his involvement, answered that he was "closer to the grave than the nuptial bed."Zamoyski (2010), p. 279 (loc. 4385). Letter of 30 October 1848. He gave a public concert in Glasgow on 27 September,Zamoyski (2010), pp. 276–8 (locs. 4340–4357). and another in Edinburgh, at the Hopetoun Rooms on Queen Street (now Erskine House) on 4 October. In late October 1848, while staying at 10 Warriston Crescent in Edinburgh with the Polish physician Adam Łyszczyński, he wrote out his last will and testament—"a kind of disposition to be made of my stuff in the future, if I should drop dead somewhere", he wrote to Grzymała.
[Image: London's Guildhall, where Chopin gave his last public performance.]
Chopin made his last public appearance on a concert platform at London's Guildhall on 16 November 1848, when, in a final patriotic gesture, he played for the benefit of Polish refugees. By this time he was very seriously ill, weighing under 99 pounds (i.e. less than 45 kg), and his doctors were aware that his sickness was at a terminal stage.Michałowski and Samson (n.d.), §5, para. 4.
At the end of November, Chopin returned to Paris. He passed the winter in unremitting illness, but gave occasional lessons and was visited by friends, including Delacroix and Franchomme. Occasionally he played, or accompanied the singing of Delfina Potocka, for his friends. During the summer of 1849, his friends found him an apartment in Chaillot, out of the centre of the city, for which the rent was secretly subsidised by an admirer, Princess Obreskoff. Here in June 1849 he was visited by Jenny Lind.Zamoyski (2010), pp. 283–6 (locs. 4446–4487).
Death and funeral
[Image: Chopin on His Deathbed, by Teofil Kwiatkowski, 1849, commissioned by Jane Stirling. Chopin is in the presence of (from left) Aleksander Jełowicki, Chopin's sister Ludwika, Princess Marcelina Czartoryska, Wojciech Grzymała, Kwiatkowski.]
With his health further deteriorating, Chopin desired to have a family member with him. In June 1849 his sister Ludwika came to Paris with her husband and daughter, and in September, supported by a loan from Jane Stirling, he took an apartment at Place Vendôme 12.Zamoyski (2010) p. 288 (loc. 4512). After 15 October, when his condition took a marked turn for the worse, only a handful of his closest friends remained with him, although Viardot remarked sardonically that "all the grand Parisian ladies considered it de rigueur to faint in his room."
Some of his friends provided music at his request; among them, Potocka sang and Franchomme played the cello. Chopin requested that his body be opened after death (for fear of being buried alive) and his heart returned to Warsaw where it rests at the Church of the Holy Cross. He also bequeathed his unfinished notes on a piano tuition method, Projet de méthode, to Alkan for completion.Zamoyski (2010), 291–3 (locs. 4566–4591). On 17 October, after midnight, the physician leaned over him and asked whether he was suffering greatly. "No longer", he replied. He died a few minutes before two o'clock in the morning. Those present at the deathbed appear to have included his sister Ludwika, Princess Marcelina Czartoryska, Sand's daughter Solange, and his close friend Thomas Albrecht. Later that morning, Solange's husband Clésinger made Chopin's death mask and a cast of his left hand.Zamoyski (2010), p. 293 (locs. 4591–4601).
Chopin's disease and the cause of his death have since been a matter of discussion. His death certificate gave the cause as tuberculosis, and his physician, Jean Cruveilhier, was then the leading French authority on this disease.Zamoyski (2010), p. 286 (loc. 4479). Other possibilities have been advanced including cystic fibrosis,Majka et al. (2003), p. 77. cirrhosis and alpha 1-antitrypsin deficiency.Kuzemko (1994), p. 771. See also Kubba and Young (1998), passim. However, the attribution of tuberculosis as principal cause of death has not been disproved.Young et al. (2014), p. 529. Permission for DNA testing, which could put the matter to rest, has been denied by the Polish government.Robin McKie, "Row over plan to DNA test Chopin's heart". The Guardian, 27 July 2008. Retrieved 3 November 2014
The funeral, held at the Church of the Madeleine in Paris, was delayed almost two weeks, until 30 October.Zamoyski (2010), pp. 293–4 (locs. 4601–4616). Entrance was restricted to ticket holdersZamoyski (2010), p. 1 (loc. 70). as many people were expected to attend. Over 3,000 people arrived without invitations, from as far as London, Berlin and Vienna, and were excluded.
Mozart's Requiem was sung at the funeral; the soloists were the soprano Jeanne-Anais Castellan, the mezzo-soprano Pauline Viardot, the tenor Alexis Dupont, and the bass Luigi Lablache; Chopin's Preludes No. 4 in E minor and No. 6 in B minor were also played. The organist at the funeral was Louis Lefébure-Wély."Funeral of Frédéric Chopin", in Revue et Gazette Musicale, 4 November 1849, printed in translation in Atwood (1999), pp. 410–11. The funeral procession to Père Lachaise Cemetery, which included Chopin's sister Ludwika, was led by the aged Prince Adam Czartoryski. The pallbearers included Delacroix, Franchomme, and Camille Pleyel. At the graveside, the Funeral March from Chopin's Piano Sonata No. 2 was played, in Reber's instrumentation."Funeral of Frédéric Chopin", in Revue et Gazette Musicale, 4 November 1849, printed in translation in Atwood (1999), pp. 412–13.
Chopin's tombstone, featuring the muse of music, Euterpe, weeping over a broken lyre, was designed and sculpted by Clésinger. The expenses of the funeral and monument, amounting to 5,000 francs, were covered by Jane Stirling, who also paid for the return of the composer's sister Ludwika to Warsaw. Ludwika took Chopin's heart in an urn, preserved in alcohol, back to Poland in 1850.Samson (1996), p. 193. She also took a collection of two hundred letters from Sand to Chopin; after 1851 these were returned to Sand, who seems to have destroyed them.
Music
Overview
[Image: Autographed musical quotation from the Polonaise Op. 53, signed by Chopin on 25 May 1845]
Over 230 works of Chopin survive; some compositions from early childhood have been lost. All his known works involve the piano, and only a few range beyond solo piano music, as either piano concertos, songs or chamber music.Hedley (1980), p. 298.
Chopin was educated in the tradition of Beethoven, Haydn, Mozart and Clementi; he used Clementi's piano method with his own students. He was also influenced by Hummel's development of virtuoso, yet Mozartian, piano technique. He cited Bach and Mozart as the two most important composers in shaping his musical outlook.Michałowski and Samson (n.d.), §6 para 7. Chopin's early works are in the style of the "brilliant" keyboard pieces of his era as exemplified by the works of Ignaz Moscheles, Friedrich Kalkbrenner, and others. Less direct in the earlier period are the influences of Polish folk music and of Italian opera. Much of what became his typical style of ornamentation (for example, his fioriture) is taken from singing. His melodic lines were increasingly reminiscent of the modes and features of the music of his native country, such as drones.Michałowski and Samson (n.d.). §6, paras 1–4.
Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. He essentially established a new genre with his own set of free-standing preludes (Op. 28, published 1839). He exploited the poetic potential of the concept of the concert étude, already being developed in the 1820s and 1830s by Liszt, Clementi and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837).Ferguson (1980), pp. 304–5.
Chopin also endowed popular dance forms with a greater range of melody and expression. Chopin's mazurkas, while originating in the traditional Polish dance (the mazurek), differed from the traditional variety in that they were written for the concert hall rather than the dance hall; "it was Chopin who put the mazurka on the European musical map."Jones (1998b), p. 177. The series of seven polonaises published in his lifetime (another nine were published posthumously), beginning with the Op. 26 pair (published 1836), set a new standard for music in the form. His waltzes were also written specifically for the salon recital rather than the ballroom and are frequently at rather faster tempos than their dance-floor equivalents.Jones (1998a), p. 162.
Titles, opus numbers and editions
Some of Chopin's well-known pieces have acquired descriptive titles, such as the Revolutionary Étude (Op. 10, No. 12), and the Minute Waltz (Op. 64, No. 1). However, with the exception of his Funeral March, the composer never named an instrumental work beyond genre and number, leaving all potential extramusical associations to the listener; the names by which many of his pieces are known were invented by others.Hedley (2005), p. 264; Kennedy (1980), p. 130, Chopin, Fryderyk. There is no evidence to suggest that the Revolutionary Étude was written with the failed Polish uprising against Russia in mind; it merely appeared at that time.Hedley and Brown (1980), p. 294. The Funeral March, the third movement of his Sonata No. 2 (Op. 35), the one case where he did give a title, was written before the rest of the sonata, but no specific event or death is known to have inspired it.Kallberg (2001), pp. 4–8.
The last opus number that Chopin himself used was 65, allocated to the Cello Sonata in G minor. He expressed a deathbed wish that all his unpublished manuscripts be destroyed. At the request of the composer's mother and sisters, however, his musical executor Julian Fontana selected 23 unpublished piano pieces and grouped them into eight further opus numbers (Opp. 66–73), published in 1855. In 1857, 17 Polish songs that Chopin wrote at various stages of his life were collected and published as Op. 74, though their order within the opus did not reflect the order of composition.
Works published since 1857 have received alternative catalogue designations instead of opus numbers. The present standard musicological reference for Chopin's works is the Kobylańska Catalogue (usually represented by the initials 'KK'), named for its compiler, the Polish musicologist Krystyna Kobylańska."What does the "KK" Mean?", The Chopin Project Website, accessed 21 December 2013.
Chopin's original publishers included Maurice Schlesinger and Camille Pleyel.Atwood (1999), pp. 166–7. His works soon began to appear in popular 19th-century piano anthologies.de Val (1998), p. 127. The first collected edition was by Breitkopf & Härtel (1878–1902).de Val (1998), p. 129. Among modern scholarly editions of Chopin's works are the version under the name of Paderewski published between 1937 and 1966 and the more recent Polish "National Edition", edited by Jan Ekier, both of which contain detailed explanations and discussions regarding choices and sources.Temperley (1980), p. 306.Jan Ekier, "Foundation for the National Edition of the Works of Fryderyk Chopin" on the website of the Fryderyk Chopin Institute, (accessed 4 August 2014).
Because of the copyright laws of the time, Chopin published his music in France, England and the German states, so there are often three different kinds of ‘first editions’ of a work. Each edition differs from the others, as Chopin edited them separately and at times revised the music in the process. Furthermore, Chopin provided his publishers with varying sources, including autographs, annotated proofsheets and scribal copies. Only recently have these differences gained greater recognition.
Form and harmony
[Image: Chopin's last (Pleyel) piano, on which he composed in 1848–49. Fryderyk Chopin Museum, Warsaw]
Improvisation stands at the centre of Chopin's creative processes. However, this does not imply impulsive rambling: Nicholas Temperley writes that "improvisation is designed for an audience, and its starting-point is that audience's expectations, which include the current conventions of musical form."Temperley (1980), p. 298. The works for piano and orchestra, including the two concertos, are held by Temperley to be "merely vehicles for brilliant piano playing ... formally longwinded and extremely conservative".Temperley (1980), p. 305. After the piano concertos (which are both early, dating from 1830), Chopin made no attempts at large-scale multi-movement forms, save for his late sonatas for piano and for cello; "instead he achieved near-perfection in pieces of simple general design but subtle and complex cell-structure."Hutchings (1968), p. 137. Rosen suggests that an important aspect of Chopin's individuality is his flexible handling of the four-bar phrase as a structural unit.Rosen (1995), pp. 262–278.
J. Barrie Jones suggests that "amongst the works that Chopin intended for concert use, the four ballades and four scherzos stand supreme", and adds that "the Barcarolle Op. 60 stands apart as an example of Chopin's rich harmonic palette coupled with an Italianate warmth of melody."Jones (1998a), pp. 161–2. Temperley opines that these works, which contain "immense variety of mood, thematic material and structural detail", are based on an extended "departure and return" form; "the more the middle section is extended, and the further it departs in key, mood and theme, from the opening idea, the more important and dramatic is the reprise when it at last comes."Temperley (1980), p. 304.
Chopin's mazurkas and waltzes are all in straightforward ternary or episodic form, sometimes with a coda.Jones (1998b), p. 177; Temperley (1980), p. 304. The mazurkas often show more folk features than many of his other works, sometimes including modal scales and harmonies and the use of drone basses. However, some also show unusual sophistication, for example Op. 63 No. 3, which includes a canon at one beat's distance, a great rarity in music.Jones (1998b), pp. 177–9.
Chopin's polonaises show a marked advance on those of his Polish predecessors in the form (who included his teachers Zywny and Elsner). As with the traditional polonaise, Chopin's works are in triple time and typically display a martial rhythm in their melodies, accompaniments and cadences. Unlike most of their precursors, they also require a formidable playing technique.Reiss (1980), p. 51.
The 21 nocturnes are more structured, and of greater emotional depth, than those of Field (whom Chopin met in 1833). Many of the Chopin nocturnes have middle sections marked by agitated expression (and often making very difficult demands on the performer) which heightens their dramatic character.Brown (1980), p. 258.
Chopin's études are largely in straightforward ternary form.Jones (1998a), p. 160. He used them to teach his own technique of piano playing—for instance playing double thirds (Op. 25, No. 6), playing in octaves (Op. 25, No. 10), and playing repeated notes (Op. 10, No. 7).Jones (1998a), pp. 160–161.
The preludes, many of which are very brief (some consisting of simple statements and developments of a single theme or figure), were described by Schumann as "the beginnings of studies".Jones (1998a), p. 161. Inspired by J.S. Bach's The Well-Tempered Clavier, Chopin's preludes move up the circle of fifths (rather than Bach's chromatic scale sequence) to create a prelude in each major and minor tonality.Rosen (1995), p. 83. The preludes were perhaps not intended to be played as a group, and may even have been used by him and later pianists as generic preludes to others of his pieces, or even to music by other composers, as Kenneth Hamilton suggests: he has noted a recording by Ferruccio Busoni of 1922, in which the Prelude Op. 28 No. 7 is followed by the Étude Op. 10 No. 5.Hamilton (2008), pp. 101–2.
The two mature piano sonatas (No. 2, Op. 35, written in 1839 and No. 3, Op. 58, written in 1844) are in four movements. In Op. 35, Chopin was able to combine within a formal large musical structure many elements of his virtuosic piano technique—"a kind of dialogue between the public pianism of the brilliant style and the German sonata principle".Michałowski and Samson (n.d.), §9 para. 2. The last movement, a brief (75-bar) perpetuum mobile in which the hands play in unmodified octave unison throughout, was found shocking and unmusical by contemporaries, including Schumann.Rosen (1995), pp. 294–7. The Op. 58 sonata is closer to the German tradition, including many passages of complex counterpoint, "worthy of Brahms" according to the music historians Kornel Michałowski and Jim Samson.
Chopin's harmonic innovations may have arisen partly from his keyboard improvisation technique. Temperley says that in his works "novel harmonic effects frequently result from the combination of ordinary appoggiaturas or passing notes with melodic figures of accompaniment", and cadences are delayed by the use of chords outside the home key (neapolitan sixths and diminished sevenths), or by sudden shifts to remote keys. Chord progressions sometimes anticipate the shifting tonality of later composers such as Claude Debussy, as does Chopin's use of modal harmony.Temperley (1980), pp. 302–3.
Technique and performance style
[Image: Extract from Chopin Nocturne Op. 62 no. 1 (1846, composer's manuscript)]
[Image: The same passage (1881 Schirmer edition). The examples show typical use by Chopin of trills, grace notes and detailed pedalling and tempo instructions.]
In 1841, Léon Escudier wrote of a recital given by Chopin that year, "One may say that Chopin is the creator of a school of piano and a school of composition. In truth, nothing equals the lightness, the sweetness with which the composer preludes on the piano; moreover nothing may be compared to his works full of originality, distinction and grace."Samson (1994), p. 136. Chopin refused to conform to a standard method of playing and believed that there was no set technique for playing well. His style was based extensively on his use of very independent finger technique. In his Projet de méthode he wrote: "Everything is a matter of knowing good fingering ... we need no less to use the rest of the hand, the wrist, the forearm and the upper arm."Cited in Eigeldinger (1988), p. 18. He further stated: "One needs only to study a certain position of the hand in relation to the keys to obtain with ease the most beautiful quality of sound, to know how to play short notes and long notes, and [to attain] unlimited dexterity."Cited in Eigeldinger (1988), p. 23. The consequences of this approach to technique in Chopin's music include the frequent use of the entire range of the keyboard, passages in double octaves and other chord groupings, swiftly repeated notes, the use of grace notes, and the use of contrasting rhythms (four against three, for example) between the hands.Eigeldinger (1988), pp. 18–20.
Jonathan Bellman writes that modern concert performance style—set in the "conservatory" tradition of late 19th- and 20th-century music schools, and suitable for large auditoria or recordings—militates against what is known of Chopin's more intimate performance technique.Bellman (2000), pp. 149–50. The composer himself said to a pupil that "concerts are never real music, you have to give up the idea of hearing in them all the most beautiful things of art."Cited in Bellman (2000), p. 150; the pupil was Emilie von Gretsch. Contemporary accounts indicate that in performance, Chopin avoided rigid procedures sometimes incorrectly attributed to him, such as "always crescendo to a high note", but that he was concerned with expressive phrasing, rhythmic consistency and sensitive colouring.Bellman (2000), pp. 153–4. Berlioz wrote in 1853 that Chopin "has created a kind of chromatic embroidery ... whose effect is so strange and piquant as to be impossible to describe ... virtually nobody but Chopin himself can play this music and give it this unusual turn".Cited in Eigeldinger (1988), p. 272. Hiller wrote that "What in the hands of others was elegant embellishment, in his hands became a colourful wreath of flowers."Cited in Bellman (2000), p. 154.
Chopin's music is frequently played with rubato, "the practice in performance of disregarding strict time, 'robbing' some note-values for expressive effect".Latham (n.d.). There are differing opinions as to how much, and what type, of rubato is appropriate for his works. Charles Rosen comments that "most of the written-out indications of rubato in Chopin are to be found in his mazurkas ... It is probable that Chopin used the older form of rubato so important to Mozart ... [where] the melody note in the right hand is delayed until after the note in the bass ... An allied form of this rubato is the arpeggiation of the chords thereby delaying the melody note; according to Chopin's pupil, Karol Mikuli, Chopin was firmly opposed to this practice."Rosen (1995), p. 413.
Friederike Müller, a pupil of Chopin, wrote: "[His] playing was always noble and beautiful; his tones sang, whether in full forte or softest piano. He took infinite pains to teach his pupils this legato, cantabile style of playing. His most severe criticism was 'He—or she—does not know how to join two notes together.' He also demanded the strictest adherence to rhythm. He hated all lingering and dragging, misplaced rubatos, as well as exaggerated ritardandos ... and it is precisely in this respect that people make such terrible errors in playing his works."
Polish heritage
The "Polish character" of Chopin's work is unquestionable; not because he also wrote polonaises and mazurkas ... which forms ... were often stuffed with alien ideological and literary contents from the outside. ... As an artist he looked for forms that stood apart from the literary-dramatic character of music which was a feature of Romanticism, as a Pole he reflected in his work the very essence of the tragic break in the history of the people and instinctively aspired to give the deepest expression of his nation ... For he understood that he could invest his music with the most enduring and truly Polish qualities only by liberating art from the confines of dramatic and historical contents. This attitude toward the question of "national music" – an inspired solution to his art – was the reason why Chopin's works have come to be understood everywhere outside of Poland ... Therein lies the strange riddle of his eternal vigour.Karol Szymanowski, 1923Cited from Szymanowski's 1923 essay, "Fryderyk Chopin", in Downes (2001), p. 63 and n. 58.With his mazurkas and polonaises, Chopin has been credited with introducing to music a new sense of nationalism. Schumann, in his 1836 review of the piano concertos, highlighted the composer's strong feelings for his native Poland, writing that "Now that the Poles are in deep mourning [after the failure of the November 1830 rising], their appeal to us artists is even stronger ... If the mighty autocrat in the north [i.e. Nicholas I of Russia] could know that in Chopin's works, in the simple strains of his mazurkas, there lurks a dangerous enemy, he would place a ban on his music. Chopin's works are cannon buried in flowers!"Schumann (1988), p. 114. The biography of Chopin published in 1863 under the name of Franz Liszt (but probably written by Carolyne zu Sayn-Wittgenstein)Cooke (1966), pp. 856–61. claims that Chopin "must be ranked first among the first musicians ... individualizing in themselves the poetic sense of an entire nation."Liszt (1880), loc. 1503.
Some modern commentators have argued against exaggerating Chopin's primacy as a "nationalist" or "patriotic" composer. George Golos refers to earlier "nationalist" composers in Central Europe, including Poland's Michał Kleofas Ogiński and Franciszek Lessel, who utilised polonaise and mazurka forms.Golos (1960), pp. 439–42. Barbara Milewski suggests that Chopin's experience of Polish music came more from "urbanised" Warsaw versions than from folk music, and that attempts (by Jachimecki and others) to demonstrate genuine folk music in his works are without basis.Milewski (1999), pp. 113–21. Richard Taruskin impugns Schumann's attitude toward Chopin's works as patronizingTaruskin (2010), pp. 344–45. and comments that Chopin "felt his Polish patriotism deeply and sincerely" but consciously modelled his works on the tradition of Bach, Beethoven, Schubert and Field.Taruskin (2010), p. 346; see also Rosen (1995), pp. 361–63.
A reconciliation of these views is suggested by William Atwood: "Undoubtedly [Chopin's] use of traditional musical forms like the polonaise and mazurka roused nationalistic sentiments and a sense of cohesiveness amongst those Poles scattered across Europe and the New World ... While some sought solace in [them], others found them a source of strength in their continuing struggle for freedom. Although Chopin's music undoubtedly came to him intuitively rather than through any conscious patriotic design, it served all the same to symbolize the will of the Polish people ..."Atwood (1999), p. 57.
Reception and influence
[Image: Funerary monument on a pillar in Holy Cross Church, Warsaw, enclosing Chopin's heart]
Jones comments that "Chopin's unique position as a composer, despite the fact that virtually everything he wrote was for the piano, has rarely been questioned." He also notes that Chopin was fortunate to arrive in Paris in 1831—"the artistic environment, the publishers who were willing to print his music, the wealthy and aristocratic who paid what Chopin asked for their lessons"—and these factors, as well as his musical genius, also fuelled his contemporary and later reputation. While his illness and his love-affairs conform to some of the stereotypes of romanticism, the rarity of his public recitals (as opposed to performances at fashionable Paris soirées) led Arthur Hutchings to suggest that "his lack of Byronic flamboyance [and] his aristocratic reclusiveness make him exceptional" among his romantic contemporaries, such as Liszt and Henri Herz.
Chopin's qualities as a pianist and composer were recognized by many of his fellow musicians. Schumann named a piece for him in his suite Carnaval, and Chopin later dedicated his Ballade No. 2 in F major to Schumann. Elements of Chopin's music can be traced in many of Liszt's later works. Liszt later transcribed for piano six of Chopin's Polish songs. A less fraught friendship was with Alkan, with whom he discussed elements of folk music, and who was deeply affected by Chopin's death.Conway (2012), pp. 229–30.
Two of Chopin's long-standing pupils, Karol Mikuli (1821–1897) and Georges Mathias, were themselves piano teachers and passed on details of his playing to their own students, some of whom (such as Raoul Koczalski) were to make recordings of his music. Other pianists and composers influenced by Chopin's style include Louis Moreau Gottschalk, Édouard Wolff (1816–1880) and Pierre Zimmermann.Bellman (2000), pp. 150–51. Debussy dedicated his own 1915 piano Études to the memory of Chopin; he frequently played Chopin's music during his studies at the Paris Conservatoire, and undertook the editing of Chopin's piano music for the publisher Jacques Durand.Wheeldon (2009), pp. 55, 62.
[Image: Chopin statue, Łazienki Park, Warsaw]
Polish composers of the following generation included virtuosi such as Moritz Moszkowski, but, in the opinion of J. Barrie Jones, his "one worthy successor" among his compatriots was Karol Szymanowski (1882–1937).Jones (1998b), p. 180. Edvard Grieg, Antonín Dvořák, Isaac Albéniz, Pyotr Ilyich Tchaikovsky and Sergei Rachmaninoff, among others, are regarded by critics as having been influenced by Chopin's use of national modes and idioms.Temperley (1980), p. 307. Alexander Scriabin was devoted to the music of Chopin, and his early published works include nineteen mazurkas, as well as numerous études and preludes; his teacher Nikolai Zverev drilled him in Chopin's works to improve his virtuosity as a performer.Bowers (1996), p. 134. In the 20th century, composers who paid homage to (or in some cases parodied) the music of Chopin included George Crumb, Bohuslav Martinů, Darius Milhaud, Igor StravinskyMariola Wojtkiewicz, tr. Jerzy Ossowski, "The Impact of Chopin's Music on the Work of 19th and 20th Century Composers", in chopin.pl website, accessed 4 January 2014. and Heitor Villa-Lobos.Hommage á Chopin on IMSLP website, accessed 27 October 2014.
Chopin's music was used in the 1909 ballet Chopiniana, choreographed by Michel Fokine and orchestrated by Alexander Glazunov. Sergei Diaghilev commissioned additional orchestrations—from Stravinsky, Anatoly Lyadov, Sergei Taneyev and Nikolai Tcherepnin—for later productions, which used the title Les Sylphides.Taruskin (1996), pp. 546–7.
Chopin's music remains very popular and is regularly performed, recorded and broadcast worldwide. The world's oldest monographic music competition, the International Chopin Piano Competition, founded in 1927, is held every five years in Warsaw."About Competition", International Chopin Competition website, accessed 12 January 2014. The Fryderyk Chopin Institute of Poland lists on its website over eighty societies worldwide devoted to the composer and his music."Institutions related to Chopin – Associations", Fryderyk Chopin Institute website, accessed 5 January 2014. The Institute site also lists nearly 1,500 performances of Chopin works on YouTube as of January 2014."Chopin on YouTube", Fryderyk Chopin Institute website, accessed 5 January 2014.
Recordings
The British Library notes that "Chopin's works have been recorded by all the great pianists of the recording era." The earliest recording was an 1895 performance by Paul Pabst of the Nocturne in E major Op. 62 No. 2. The British Library site makes available a number of historic recordings, including some by Alfred Cortot, Ignaz Friedman, Vladimir Horowitz, Benno Moiseiwitsch, Ignacy Jan Paderewski, Arthur Rubinstein, Xaver Scharwenka and many others."Chopin", British Library website, accessed 22 December 2013. Recordings accessible free online throughout the European Union. A select discography of recordings of Chopin works by pianists representing the various pedagogic traditions stemming from Chopin is given by Methuen-Campbell in his work tracing the lineage and character of those traditions.Methuen-Campbell (1981), pp. 241-67.
Numerous recordings of Chopin's works are available. On the occasion of the composer's bicentenary, the critics of The New York Times recommended performances by the following contemporary pianists (among many others):Anthony Tommasini et al., "1 Composer, 2 Centuries, Many Picks", The New York Times, 27 May 2010, accessed 28 December 2013. Martha Argerich, Vladimir Ashkenazy, Emanuel Ax, Evgeny Kissin, Murray Perahia, Maurizio Pollini and Krystian Zimerman. The Warsaw Chopin Society organizes the Grand prix du disque de F. Chopin for notable Chopin recordings, held every five years.Grand Prix du Disque Frédéric Chopin website, accessed 2 January 2014.
In literature, stage, film and television
[Image: Chopin's grave at Père-Lachaise cemetery, Paris]
Chopin has figured extensively in Polish literature, both in serious critical studies of his life and music and in fictional treatments. The earliest manifestation was probably an 1830 sonnet on Chopin by Leon Ulrich. French writers on Chopin (apart from Sand) have included Marcel Proust and André Gide; and he has also featured in works of Gottfried Benn and Boris Pasternak.Andrzej Hejmej, tr. Philip Stoeckle, "Chopin and his music in literature", in chopin.pl website, accessed 4 January 2014. There are numerous biographies of Chopin in English (see bibliography for some of these).
Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events. Chopin was written by Giacomo Orefice and produced in Milan in 1901. All the music is derived from that of Chopin.Ashbrooke (n.d.); Lanza (n.d.).
Chopin's life and his relations with George Sand have been fictionalized in numerous films. The 1945 biographical film A Song to Remember earned Cornel Wilde an Academy Award nomination as Best Actor for his portrayal of the composer. Other film treatments have included: La valse de l'adieu (France, 1928) by Henry Roussel, with Pierre Blanchar as Chopin; Impromptu (1991), starring Hugh Grant as Chopin; La note bleue (1991); and Chopin: Desire for Love (2002).Iwona Sowińska, tr. Philip Stoeckle, "Chopin goes to the movies", in chopin.pl website, accessed 4 January 2014. The site gives details of numerous other films featuring Chopin.
Chopin's life was covered in a BBC TV documentary Chopin – The Women Behind The Music (2010), and in a 2010 documentary realised by Angelo Bozzolini and Roberto Prosseda for Italian television.Film poster (in Italian), media.wix website, accessed 25 August 2013.
References
Notes
Citations
Bibliography
Ashbrooke, William (n.d.). "Chopin", in The New Grove Dictionary of Opera online, accessed 4 August 2014.
Atwood, William G. (1999). The Parisian Worlds of Frédéric Chopin. New Haven and London: Yale University Press. ISBN 978-0-300-07773-5.
Bellman, Jonathan (2000). "Chopin and His Imitators: Notated Emulations of the "True Style" of Performance", in 19th-Century Music, vol. 24, no. 2 (Autumn, 2000), pp. 149–160.
Bowers, Faubion (1996). Scriabin: A Biography. Mineola, NY: Dover Publications. ISBN 0-486-28897-8.
Brown, Maurice (1980). "Nocturne", in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie (20 vols.). London: Macmillan Publishers. ISBN 0-333-23111-2. Vol.13, pp. 258–9.
Chopin, Fryderyk (1962). Selected Correspondence of Fryderyk Chopin, coll. B. Sydow, tr. Arthur Hedley. London: Heinemann.
Conway, David (2012). Jewry in Music: Entry to the Profession from the Enlightenment to Richard Wagner. Cambridge: Cambridge University Press. ISBN 978-1-107-01538-8
Cooke, Charles (1966). "Chopin and Liszt with a Ghostly Twist" in Notes, Second Series, vol. 22, no. 2 (Winter, 1965 – Winter, 1966), pp. 855–61
De Val, Dorothy, and Cyril Ehrlich (1998). "Repertory and Canon", in David Rowland (ed.), The Cambridge Companion to the Piano, pp. 176–191. Cambridge: Cambridge University Press. ISBN 978-0-521-47986-8.
Downes, Stephen (2001). "Eros and PanEuropeanism", in Harry White and Michael Murphy (eds.), Musical Constructions of Nationalism: Essays on the History and Ideology of European Musical Culture 1800–1945, Cork: Cork University Press, pp. 51–71. ISBN 1-85918-322-0.
Ferguson, Howard (1980). "Study", in Stanley Sadie (ed.), The New Grove Dictionary of Music and Musicians, London: Macmillan, vol. 18, pp. 304–5.
Golos, George S. (1960). "Some Slavic Predecessors of Chopin" in The Musical Quarterly vol. 46 no. 4, pp. 437–47.
Hamilton, Kenneth (2008). After the Golden Age: Romantic Pianism and Modern Performance. Oxford: Oxford University Press. ISBN 978-0-19-517826-5
Hedley, Arthur et al. (2005). "Chopin, Frédéric (François)," Encyclopædia Britannica, 15th ed., vol. 3, pp. 263–64.
Hedley, Arthur and Maurice Brown (1980). "Chopin, Fryderyk Franciszek [Frédéric François]", sections 1–6 in S. Sadie (ed.) The New Grove Dictionary of Music and Musicians, London: Macmillan, vol. 4, pp. 292–8.
Hutchings, A. G. B. (1968). "The Romantic Era", in Alec Robertson and Denis Stevens (eds.), The Pelican History of Music 3: Classical and Romantic, Harmondsworth: Penguin Books, pp. 99–139.
Jones, J. Barrie (1998a). "Piano music for concert hall and salon c. 1830–1900", in David Rowland (ed.), The Cambridge Companion to the Piano, pp. 151–175. Cambridge: Cambridge University Press. ISBN 978-0-521-47986-8.
Jones, J. Barrie (1998b). "Nationalism", in David Rowland (ed.), The Cambridge Companion to the Piano, pp. 176–191. Cambridge: Cambridge University Press. ISBN 978-0-521-47986-8.
Kallberg, Jeffrey (2001). "Chopin's March, Chopin's Death", in 19th-Century Music, Vol. 25, No. 1 (Summer 2001), pp. 3–26.
Kennedy, Michael (1980). The Concise Oxford Dictionary of Music. Oxford: Oxford University Press. ISBN 978-0-19-311315-2.
Kubba, Adam and Madeleine Young (1998). "The Long Suffering of Frederic Chopin", in Chest, vol. 113 (1998), pp. 210–6. accessed 16 August 2014.
Kuhnke, Monika (2010). "Oryginalne kopie, czyli historia portretów rodziny Chopinów", in Cenne Bezcenne Utracone, no. 62 (2010 no. 1), pp. 8–12. In Polish. (English summary). Article and summary accessed 28 December 2013.
Kuzemko, J. A. (1994). "Chopin's Illnesses" in Journal of the Royal Society of Medicine volume 87 (December 1994) pp. 769–772, accessed 16 August 2014.
Lanza, Andrea (n.d.). "Orefice, Giacomo" in Oxford Companion to Music, Oxford Music Online, accessed 8 August 2014.
Latham, Alison (n.d.). "Rubato" in Oxford Companion to Music, Oxford Music Online, accessed 15 July 2014.
Liszt, Franz, tr. M. W. Cook (1880). Life of Chopin (4th edition). E-text in Kindle version at Project Gutenberg accessed 27 December 2013.
Majka, Lucyna, Joanna Gozdzik and Michał Witt (2003). "Cystic fibrosis – a probable cause of Frédéric Chopin's suffering and death" in Journal of Applied Genetics, vol 44(1), pp. 77–84, accessed 16 August 2014.
Methuen-Campbell, James (1981). Chopin Playing from the Composer to the Present Day. London: Victor Gollancz Ltd.
Michałowski, Kornel, and Jim Samson (n.d.), "Chopin, Fryderyk Franciszek", Grove Music Online (accessed 25 July 2013).
Milewski, Barbara (1999). "Chopin's Mazurkas and the Myth of the Folk", in 19th-Century Music, vol. 23, no. 2 (Autumn 1999), pp. 113–35.
Müller-Streicher, Friederike (1949). "Aus dem Tagebuch einer Wiener Chopin-Schülerin (1839–1841, 1844–1845)" in Chopin Almanach, Potsdam, pp. 134–42. In German.
Niecks, Frederick (1902). Frederick Chopin as a Man and Musician, 3rd edition. E-text in Kindle version at Project Gutenberg accessed 4 January 2014.
Reiss, Jozef and Maurice Brown (1980). "Polonaise", in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie (20 vols.). London: Macmillan Publishers. ISBN 0-333-23111-2. Vol.15, pp. 49–52.
Rosen, Charles (1995). The Romantic Generation. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-77933-4.
Rottermund, Krzysztof (2008). "Chopin and Hesse: New Facts about Their Artistic Acquaintance", in American Organist Magazine, vol. 42, issue 3, p. 82.
Samson, Jim (1996). Chopin. Oxford: Oxford University Press. ISBN 978-0-19-816703-7
Scholes, Percy (1938). The Oxford Companion to Music. Oxford: Oxford University Press.
Schumann, Robert (1988), tr. and ed. Henry Pleasants. Schumann on Music: A Selection from the Writings. New York: Dover Publications. ISBN 978-0-486-25748-8.
Taruskin, Richard (1996). Stravinsky and the Russian Traditions. Oxford: Oxford University Press, ISBN 0-19-816250-2.
Taruskin, Richard (2010). Music in the Nineteenth Century. Oxford: Oxford University Press. ISBN 978-0-19-538483-3.
Temperley, Nicholas (1980). "Chopin, Fryderyk Franciszek [Frédéric François]", sections 1–7 in S. Sadie (ed.), The New Grove Dictionary of Music and Musicians, vol. 4. London: Macmillan, pp. 298–307.
Turnbull, Michael T. R. B. (1989). Monuments and Statues of Edinburgh. Edinburgh: Chambers. ISBN 0-550-20050-9.
Wheeldon, Marianne (2009). Debussy's Late Style. Bloomington: Indiana University Press. ISBN 978-0-253-35239-2.
Young, Pablo et al. (2014) "Federico Chopin (1810–1849) y su enfermedad", Revista médica de Chile, vol. 142, no.4, pp. 529–535. In Spanish (summary in English). Accessed 16 August 2014.
Załuski, Iwo and Pamela (1992). "Chopin in London", in The Musical Times, vol. 133, no. 1791 (May 1992), pp. 226–230.
Zamoyski, Adam (2010). Chopin: Prince of the Romantics. London: HarperCollins. ISBN 978-0-00-735182-4 (e-book edition).
External links
Biography on official site of Fryderyk Chopin Institute
Chopin's last piano (Pleyel 14810)
Music scores
Chopin Early Editions, a collection of over 400 first and early printed editions of musical compositions by Frédéric Chopin published before 1881
Chopin's First Editions Online features an interface that allows three navigable scores to be open simultaneously in frames to facilitate comparison.
Category:Frédéric Chopin
Category:1810 births
Category:Polish people of French descent
Category:People from Sochaczew County
Category:People of Lorrainian descent
Category:Musicians from Warsaw
Category:19th-century classical composers
Category:19th-century classical pianists
Category:Child classical musicians
Category:19th-century Polish people
Category:University of Warsaw alumni
Category:Composers for piano
Category:Polish classical pianists
Category:Polish Romantic composers
Category:Polish male classical composers
Category:Polish music educators
Category:Great Emigration
Category:Polish emigrants to France
Category:19th-century French musicians
Category:19th-century French people
Category:French classical pianists
Category:French male classical composers
Category:French classical composers
Category:French music educators
Category:French Romantic composers
Category:Infectious disease deaths in France
Category:19th-century deaths from tuberculosis
Category:1849 deaths
Category:Burials at Père Lachaise Cemetery
Category:19th-century Polish musicians | 10,823 | 2017-01 |
Group (mathematics) | thumb|right|The manipulations of this Rubik's Cube form the Rubik's Cube group.
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation, but the abstract formalization of the group axioms, detached as it is from the concrete nature of any particular group and its operation, applies much more widely. It allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in numerous areas within and outside mathematics makes them a central organizing principle of contemporary mathematics.: "The idea of a group is one which pervades the whole of mathematics both pure and applied."
Groups share a fundamental kinship with the notion of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists of the set of transformations that leave the object unchanged and the operation of combining two such transformations by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics; Poincaré groups, which are also Lie groups, can express the physical symmetry underlying special relativity; and point groups are used to help understand symmetry phenomena in molecular chemistry.
The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory.
Definition and illustration
First example: the integers
One of the most familiar groups is the set of integers Z which consists of the numbers
..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ..., together with addition.
The following properties of integer addition serve as a model for the abstract group axioms given in the definition below.
For any two integers a and b, the sum a + b is also an integer. That is, addition of integers always yields an integer. This property is known as closure under addition.
For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c gives the same final result as adding a to the sum of b and c, a property known as associativity.
If a is any integer, then 0 + a = a + 0 = a. Zero is called the identity element of addition because adding it to any integer returns the same integer.
For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a.
The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed.
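These properties can be spot-checked mechanically on any finite sample of integers. The following is a minimal Python sketch (the variable names and sample values are our own, chosen purely for illustration); it tests closure, associativity, identity and inverses on a handful of cases, which of course illustrates rather than proves the general statements above.

# Spot-check the four properties of integer addition on a few sample values.
# A finite check like this only illustrates the axioms; it is not a proof.
samples = [-4, -1, 0, 2, 7]

for a in samples:
    for b in samples:
        assert isinstance(a + b, int)          # closure: the sum is again an integer
        for c in samples:
            assert (a + b) + c == a + (b + c)  # associativity

for a in samples:
    assert 0 + a == a + 0 == a                 # 0 is the identity element
    assert a + (-a) == (-a) + a == 0           # -a is the inverse of a

print("All sampled instances of the four properties hold.")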
Definition
A group is a set, G, together with an operation • (called the group law of G) that combines any two elements a and b to form another element, denoted a • b or ab. To qualify as a group, the set and operation, (G, •), must satisfy four requirements known as the group axioms (a small computational check of these axioms for a finite set is sketched after the list):
Closure For all a, b in G, the result of the operation, a • b, is also in G.
Associativity For all a, b and c in G, (a • b) • c = a • (b • c).
Identity element There exists an element e in G such that, for every element a in G, the equation e • a = a • e = a holds. Such an element is unique (see below), and thus one speaks of the identity element.
Inverse element For each a in G, there exists an element b in G, commonly denoted a−1 (or −a, if the operation is denoted "+"), such that a • b = b • a = e, where e is the identity element.
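For a finite set the four axioms can be verified exhaustively from the operation alone. Below is a minimal sketch of such a check in Python; the helper name is_group is our own invention (not a library routine), and the operation is assumed to be given as an ordinary two-argument function.

def is_group(elements, op):
    """Exhaustively check the four group axioms on a finite carrier set."""
    elems = list(elements)
    # Closure: op(a, b) must again lie in the set.
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False
    # Associativity: (a • b) • c must equal a • (b • c) for every triple.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elems for b in elems for c in elems):
        return False
    # Identity: some e with e • a == a • e == a for every a.
    identities = [e for e in elems
                  if all(op(e, a) == a and op(a, e) == a for a in elems)]
    if len(identities) != 1:
        return False
    e = identities[0]
    # Inverses: every a needs some b with a • b == b • a == e.
    return all(any(op(a, b) == e and op(b, a) == e for b in elems)
               for a in elems)

print(is_group(range(5), lambda a, b: (a + b) % 5))   # True: addition modulo 5
print(is_group(range(5), lambda a, b: (a * b) % 5))   # False

The second call fails only at the inverse axiom: under multiplication modulo 5 the element 0 cannot be undone.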
The result of an operation may depend on the order of the operands. In other words, the result of combining element a with element b need not yield the same result as combining element b with element a; the equation
a • b = b • a
may not always be true. This equation always holds in the group of integers under addition, because a + b = b + a for any two integers (commutativity of addition). Groups for which the commutativity equation always holds are called abelian groups (in honor of Niels Henrik Abel). The symmetry group described in the following section is an example of a group that is not abelian.
The identity element of a group G is often written as 1 or 1G, a notation inherited from the multiplicative identity. If a group is abelian, then one may choose to denote the group operation by + and the identity element by 0; in that case, the group is called an additive group. The identity element can also be written as id.
The set G is called the underlying set of the group . Often the group's underlying set G is used as a short name for the group . Along the same lines, shorthand expressions such as "a subset of the group G" or "an element of group G" are used when what is actually meant is "a subset of the underlying set G of the group " or "an element of the underlying set G of the group ". Usually, it is clear from the context whether a symbol like G refers to a group or to an underlying set.
Second example: a symmetry group
Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are:
[Image: The elements of the symmetry group of the square (D4), with vertices identified by color or number: id (keeping it as it is); r1 (rotation by 90° clockwise); r2 (rotation by 180° clockwise); r3 (rotation by 270° clockwise); fv (vertical reflection); fh (horizontal reflection); fd (diagonal reflection); fc (counter-diagonal reflection).]
the identity operation leaving everything unchanged, denoted id;
rotations of the square around its center by 90° clockwise, 180° clockwise, and 270° clockwise, denoted by r1, r2 and r3, respectively;
reflections about the vertical and horizontal middle lines (fv and fh), or through the two diagonals (fd and fc).
These symmetries are represented by functions. Each of these functions sends a point in the square to the corresponding point under the symmetry. For example, r1 sends a point to its rotation 90° clockwise around the square's center, and fh sends a point to its reflection across the square's vertical middle line. Composing two of these symmetry functions gives another symmetry function. These symmetries determine a group called the dihedral group of degree 4 and denoted D4. The underlying set of the group is the above set of symmetry functions, and the group operation is function composition. Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first a and then b is written symbolically from right to left as
("apply the symmetry b after performing the symmetry a").
The right-to-left notation is the same notation that is used for composition of functions.
The group table below lists the results of all such compositions possible. For example, rotating by 270° clockwise (r3) and then reflecting horizontally (fh) is the same as performing a reflection along the diagonal (fd). Using the above symbols (this is the entry in row fh, column r3 of the group table):
fh • r3 = fd.
Group table of D4 (the entry in row x and column y is the composition x • y):

•    id   r1   r2   r3   fv   fh   fd   fc
id   id   r1   r2   r3   fv   fh   fd   fc
r1   r1   r2   r3   id   fc   fd   fv   fh
r2   r2   r3   id   r1   fh   fv   fc   fd
r3   r3   id   r1   r2   fd   fc   fh   fv
fv   fv   fd   fh   fc   id   r2   r1   r3
fh   fh   fc   fv   fd   r2   id   r3   r1
fd   fd   fh   fc   fv   r3   r1   id   r2
fc   fc   fv   fd   fh   r1   r3   r2   id

The elements id, r1, r2, and r3 form a subgroup (the upper-left region of the table). The first four entries of the last row form a left coset of this subgroup, and the first four entries of the last column form a right coset.
Given this set of symmetries and the described operation, the group axioms can be understood as follows: composing any two symmetries always yields another symmetry (closure); composition of functions is associative; the identity element is the symmetry id that leaves everything unchanged; and every symmetry has an inverse that undoes it, each reflection being its own inverse and each rotation being undone by the complementary rotation (r3 for r1, r2 for itself, r1 for r3).
In contrast to the group of integers above, where the order of the operation is irrelevant, it does matter in D4: for example, fh • r1 = fc but r1 • fh = fd, as the group table shows. In other words, D4 is not abelian, which makes its group structure more complicated than that of the integers introduced first.
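The group D4 can also be realised concretely in code as permutations of the four corner positions. In the sketch below the corners are numbered 0 to 3 clockwise from the top left; this numbering, and the variable names, are our own conventions chosen for illustration rather than a standard encoding. The sketch reproduces the worked example fh • r3 = fd and exhibits the failure of commutativity just mentioned.

# Each symmetry is a tuple p with p[i] = the position to which corner i is sent.
def compose(b, a):
    """b • a: apply symmetry a first, then symmetry b."""
    return tuple(b[a[i]] for i in range(4))

id_ = (0, 1, 2, 3)   # identity
r1  = (1, 2, 3, 0)   # rotation by 90° clockwise
r2  = (2, 3, 0, 1)   # rotation by 180°
r3  = (3, 0, 1, 2)   # rotation by 270° clockwise
fv  = (1, 0, 3, 2)   # vertical reflection (swap left and right)
fh  = (3, 2, 1, 0)   # horizontal reflection (swap top and bottom)
fd  = (0, 3, 2, 1)   # reflection in the top-left/bottom-right diagonal
fc  = (2, 1, 0, 3)   # reflection in the other diagonal

print(compose(fh, r3) == fd)                         # True: fh • r3 = fd
print(compose(r1, r3) == id_)                        # True: r1 undoes r3
print(compose(fh, r1) == fc, compose(r1, fh) == fd)  # True True: the two orders differ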
History
The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θn = 1 (1854) gives the first abstract definition of a finite group.
Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884.
The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers.
The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits.
The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still a highly active mathematical branch, impacting many other fields.
Elementary consequences of the group axioms
Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of
a • b • c = (a • b) • c = a • (b • c)
generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted.
The axioms may be weakened to assert only the existence of a left identity and left inverses. Both can be shown to be actually two-sided, so the resulting definition is equivalent to the one given above.
Uniqueness of identity element and inverses
Two important consequences of the group axioms are the uniqueness of the identity element and the uniqueness of inverse elements. There can be only one identity element in a group, and each element in a group has exactly one inverse element. Thus, it is customary to speak of the identity, and the inverse of an element.
To prove the uniqueness of an inverse element of a, suppose that a has two inverses, denoted b and c, in a group (G, •). Then
b   = b • e           as e is the identity element
    = b • (a • c)     because c is an inverse of a, so e = a • c
    = (b • a) • c     by associativity, which allows rearranging the parentheses
    = e • c           since b is an inverse of a, i.e. b • a = e
    = c               for e is the identity element
The term b on the first line above and the c on the last are equal, since they are connected by a chain of equalities. In other words, there is only one inverse element of a. Similarly, to prove that the identity element of a group is unique, assume G is a group with two identity elements e and f. Then e = e • f = f, hence e and f are equal.
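The same uniqueness can be confirmed by brute force for any concrete finite group. A minimal sketch, using addition modulo 6 as the example group (the choice of modulus is arbitrary and the code is our own, not a library routine):

n = 6
elements = range(n)
op = lambda a, b: (a + b) % n     # the operation of the group of integers modulo 6
e = 0                             # its identity element

for a in elements:
    inverses = [b for b in elements if op(a, b) == e and op(b, a) == e]
    assert len(inverses) == 1     # exactly one inverse for each element, as proved above
print("every element has exactly one inverse")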
Division
In groups, the existence of inverse elements implies that division is possible: given elements a and b of the group G, there is exactly one solution x in G to the equation , namely . In fact, we have
Uniqueness results by multiplying the two sides of the equation by . The element , often denoted , is called the right quotient of b by a, or the result of the right division of b by a.
Similarly there is exactly one solution y in G to the equation , namely . This solution is the left quotient of b by a, and is sometimes denoted .
In general and may be different, but, if the group operation is commutative (that is, if the group is abelian), they are equal. In this case, the group operation is often denoted as an addition, and one talks of subtraction and difference instead of division and quotient.
A consequence of this is that multiplication by a group element g is a bijection. Specifically, if g is an element of the group G, the function from G to itself that maps h to g • h is a bijection. This function is called the left translation by g. Similarly, the right translation by g is the bijection from G to itself that maps h to h • g. If G is abelian, the left and the right translation by a group element are the same.
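As a small illustration of division and translations (a minimal Python sketch using addition modulo 12, an arbitrary choice, as the group operation), left translation by g hits every group element exactly once, so the equation g + x = b always has exactly one solution.

```python
# Minimal sketch: left translation by g is a bijection of Z/12Z, so g + x = b
# always has a unique solution x. The modulus and sample values are illustrative.

n = 12
G = list(range(n))
g, b = 9, 1

left_translation = {h: (g + h) % n for h in G}
assert sorted(left_translation.values()) == G   # every element is hit exactly once

x = (n - g + b) % n                             # x = (inverse of g) + b
assert (g + x) % n == b
print(x)                                        # 4, since 9 + 4 = 1 modulo 12
```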
Basic concepts
The following sections use mathematical symbols such as X = {x, y, z} to denote a set X containing elements x, y, and z, or alternatively x ∈ X to restate that x is an element of X. The notation f : X → Y means f is a function assigning to every element of X an element of Y.
To understand groups beyond the level of mere symbolic manipulations as above, more structural concepts have to be employed. There is a conceptual principle underlying all of the following notions: to take advantage of the structure offered by groups (which sets, being "structureless", do not have), constructions related to groups have to be compatible with the group operation. This compatibility manifests itself in the following notions in various ways. For example, groups can be related to each other via functions called group homomorphisms. By the mentioned principle, they are required to respect the group structures in a precise sense. The structure of groups can also be understood by breaking them into pieces called subgroups and quotient groups. The principle of "preserving structures"—a recurring topic in mathematics throughout—is an instance of working in a category, in this case the category of groups.
Group homomorphisms
Group homomorphisms are functions that preserve group structure. A function a : G → H between two groups (G, •) and (H, ∗) is called a homomorphism if the equation
a(g • k) = a(g) ∗ a(k)
holds for all elements g, k in G. In other words, the result is the same when performing the group operation after or before applying the map a. This requirement ensures that a maps the identity element of G to the identity element of H, and also that a(g−1) = a(g)−1 for all g in G. Thus a group homomorphism respects all the structure of G provided by the group axioms.
Two groups G and H are called isomorphic if there exist group homomorphisms a : G → H and b : H → G, such that applying the two functions one after another in each of the two possible orders gives the identity functions of G and H. That is, a(b(h)) = h and b(a(g)) = g for any g in G and h in H. From an abstract point of view, isomorphic groups carry the same information. For example, proving that g • g = e for some element g of G is equivalent to proving that a(g) ∗ a(g) is the identity element of H, because applying a to the first equality yields the second, and applying b to the second gives back the first.
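As a small illustration (a minimal Python sketch; the modulus and the sampled integers are arbitrary choices), the reduction-mod-n map from the integers to Z/nZ respects the group operation and therefore is a group homomorphism.

```python
# Minimal sketch: the map a(g) = g mod n from (Z, +) to (Z/nZ, + mod n)
# satisfies a(g + k) = a(g) + a(k), i.e. it is a group homomorphism.

n = 6

def a(g):
    return g % n

for g in range(-20, 20):
    for k in range(-20, 20):
        assert a(g + k) == (a(g) + a(k)) % n

assert a(0) == 0                      # the identity is sent to the identity
assert (a(7) + a(-7)) % n == 0        # inverses are sent to inverses
print("homomorphism properties hold on the sampled values")
```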
Subgroups
Informally, a subgroup is a group H contained within a bigger one, G. Concretely, the identity element of G is contained in H, and whenever h1 and h2 are in H, then so are h1 • h2 and h1−1, so the elements of H, equipped with the group operation on G restricted to H, indeed form a group.
In the example above, the identity and the rotations constitute a subgroup R = {id, r1, r2, r3}, highlighted in red in the group table above: any two rotations composed are still a rotation, and a rotation can be undone by (i.e. is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270° (note that rotation in the opposite direction is not defined). The subgroup test is a necessary and sufficient condition for a nonempty subset H of a group G to be a subgroup: it is sufficient to check that g−1 • h ∈ H for all elements g, h ∈ H. Knowing the subgroups is important in understanding the group as a whole.
Given any subset S of a group G, the subgroup generated by S consists of products of elements of S and their inverses. It is the smallest subgroup of G containing S. In the introductory example above, the subgroup generated by r2 and fv consists of these two elements, the identity element id, and the element fh = fv • r2. Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup.
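A subgroup generated by a set can be computed mechanically by closing the set under the operation and under inverses. The following minimal Python sketch does this for Z/12Z under addition modulo 12; the group and the generating sets are arbitrary illustrative choices.

```python
# Minimal sketch: the subgroup of Z/12Z (addition mod 12) generated by a set S,
# computed by repeatedly adding sums and inverses until nothing new appears.

n = 12

def generated_subgroup(S):
    H = {0}                      # the identity is always included
    frontier = set(S)
    while frontier:
        H |= frontier
        new = set()
        for a in H:
            for b in H:
                new.add((a + b) % n)   # close under the group operation
            new.add((-a) % n)          # close under inverses
        frontier = new - H
    return sorted(H)

print(generated_subgroup({8}))      # [0, 4, 8]
print(generated_subgroup({4, 6}))   # [0, 2, 4, 6, 8, 10]
```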
Cosets
In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in D4 above, once a reflection is performed, the square never gets back to the r2 configuration by just applying the rotation operations (and no further reflections), i.e. the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup H defines left and right cosets, which can be thought of as translations of H by an arbitrary group element g. In symbolic terms, the left and right cosets of H containing g are
gH = {g • h : h ∈ H} and Hg = {h • g : h ∈ H}, respectively.
The left cosets of any subgroup H form a partition of G; that is, the union of all left cosets is equal to G and two left cosets are either equal or have an empty intersection. Two cosets g1H and g2H are equal precisely when g1−1 • g2 ∈ H, i.e. when the two elements differ by an element of H. Similar considerations apply to the right cosets of H. The left and right cosets of H may or may not be equal. If they are, i.e. if gH = Hg for all g in G, then H is said to be a normal subgroup.
In D4, the introductory symmetry group, the left cosets gR of the subgroup R consisting of the rotations are either equal to R, if g is an element of R itself, or otherwise equal to U = fcR = {fc, fv, fd, fh} (highlighted in green). The subgroup R is also normal, because fcR = U = Rfc and similarly for any element other than fc. (In fact, in the case of D4, observe that all such cosets are equal, such that fcR = fvR = fdR = fhR = U.)
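The partition property of cosets, and the fact that left and right cosets can differ, can also be checked directly on a small non-abelian group. The following minimal Python sketch uses the symmetric group S3 (represented by permutation tuples) and one of its two-element subgroups; this choice of example is illustrative and differs from the D4 example in the text.

```python
# Minimal sketch: left cosets of a subgroup of S3 partition the group, and the
# left and right cosets differ because this particular subgroup is not normal.

from itertools import permutations

G = list(permutations(range(3)))          # the 6 elements of S3 as tuples

def compose(p, q):                        # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

H = [(0, 1, 2), (1, 0, 2)]                # identity and one transposition

left_cosets = {tuple(sorted(compose(g, h) for h in H)) for g in G}
right_cosets = {tuple(sorted(compose(h, g) for h in H)) for g in G}

print(len(left_cosets))                   # 3 cosets of size 2: a partition of S3
print(left_cosets == right_cosets)        # False: H is not a normal subgroup
```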
Quotient groups
In some situations the set of cosets of a subgroup can be endowed with a group law, giving a quotient group or factor group. For this to be possible, the subgroup has to be normal. Given any normal subgroup N, the quotient group is defined by
G / N = {gN, g ∈ G}, "G modulo N".
This set inherits a group operation (sometimes called coset multiplication, or coset addition) from the original group G: (gN) • (hN) = (gh)N for all g and h in G. This definition is motivated by the idea (itself an instance of general structural considerations outlined above) that the map that associates to any element g its coset gN be a group homomorphism, or by general abstract considerations called universal properties. The coset eN = N serves as the identity in this group, and the inverse of gN in the quotient group is (gN)−1 = (g−1)N.
Group table of the quotient group D4 / R:
 •   R   U
 R   R   U
 U   U   R
The elements of the quotient group are R itself, which represents the identity, and U = fvR. The group operation on the quotient is shown in the table above. For example, U • U = R. Both the subgroup R = {id, r1, r2, r3} and the corresponding quotient are abelian, whereas D4 is not abelian. Building bigger groups from smaller ones, such as D4 from its subgroup R and the quotient D4 / R, is abstracted by a notion called semidirect product.
Quotient groups and subgroups together form a way of describing every group by its presentation: any group is the quotient of the free group over the generators of the group, quotiented by the subgroup of relations. The dihedral group D4, for example, can be generated by two elements r and f (for example, r = r1, the right rotation and f = fv the vertical (or any other) reflection), which means that every symmetry of the square is a finite composition of these two symmetries or their inverses. Together with the relations
r4 = f2 = (r • f)2 = 1,
the group is completely described. A presentation of a group can also be used to construct the Cayley graph, a device used to graphically capture discrete groups.
Sub- and quotient groups are related in the following way: a subgroup H of G can be seen as an injective map H → G, i.e. any element of the target has at most one element that maps to it. The counterpart to injective maps are surjective maps (every element of the target is mapped onto), such as the canonical map G → G / N. Interpreting subgroups and quotients in light of these homomorphisms emphasizes the structural concept inherent to these definitions alluded to in the introduction. In general, homomorphisms are neither injective nor surjective. The kernel and image of group homomorphisms and the first isomorphism theorem address this phenomenon.
Examples and applications
Examples and applications of groups abound. A starting point is the group Z of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra.
Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. For example, elements of the fundamental group are represented by loops. The second image at the right shows some loops in a plane minus a point. The blue loop is considered null-homotopic (and thus irrelevant), because it can be continuously shrunk to a point. The presence of the hole prevents the orange loop from being shrunk to a point. The fundamental group of the plane with a point deleted turns out to be infinite cyclic, generated by the orange loop (or any other loop winding once around the hole). This way, the fundamental group detects the hole.
In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory (for example, class groups and Picard groups).
In addition to the above theoretical applications, many practical applications of groups exist. Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept.
Numbers
Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups.
Integers
The group of integers Z under addition, denoted (Z, +), has been described above. The integers, with the operation of multiplication instead of addition, (Z, ·), do not form a group. The closure, associativity and identity axioms are satisfied, but inverses do not exist: for example, a = 2 is an integer, but the only solution to the equation a · b = 1 in this case is b = 1/2, which is a rational number, but not an integer. Hence not every element of Z has a (multiplicative) inverse.
Rationals
The desire for the existence of multiplicative inverses suggests considering fractions a/b.
Fractions of integers (with b nonzero) are known as rational numbers. The set of all such fractions is commonly denoted Q. There is still a minor obstacle for (Q, ·), the rationals with multiplication, to be a group: because the rational number 0 does not have a multiplicative inverse (i.e., there is no x such that x · 0 = 1), (Q, ·) is still not a group.
However, the set of all nonzero rational numbers does form an abelian group under multiplication, denoted (Q \ {0}, ·). Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of a/b is b/a, therefore the axiom of the inverse element is satisfied.
The rational numbers (including 0) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and—if division is possible, such as in Q—fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities.
Modular arithmetic
thumb|right|The hours on a clock form a group that uses addition modulo 12. Here 9 + 4 = 1.
In modular arithmetic, two integers are added and then the sum is divided by a positive integer called the modulus. The result of modular addition is the remainder of that division. For any modulus n, the set of integers from 0 to n − 1 forms a group under modular addition: the inverse of an element a is n − a (taken modulo n), and 0 is the identity element. This is familiar from the addition of hours on the face of a clock: if the hour hand is on 9 and is advanced 4 hours, it ends up on 1, as shown at the right. This is expressed by saying that 9 + 4 equals 1 "modulo 12" or, in symbols,
9 + 4 ≡ 1 modulo 12.
The group of integers modulo n is written Zn or Z/nZ.
For any prime number p, there is also the multiplicative group of integers modulo p. Its elements are the integers 1 to p − 1. The group operation is multiplication modulo p. That is, the usual product is divided by p and the remainder of this division is the result of modular multiplication. For example, if p = 5, there are four group elements 1, 2, 3, 4. In this group, 4 · 4 = 1, because the usual product 16 divided by 5 yields a remainder of 1; that is, 5 divides 16 − 1 = 15, denoted
16 ≡ 1 (mod 5).
The primality of p ensures that the product of two integers neither of which is divisible by p is not divisible by p either, hence the indicated set of classes is closed under multiplication. The identity element is 1, as usual for a multiplicative group, and the associativity follows from the corresponding property of integers. Finally, the inverse element axiom requires that given an integer a not divisible by p, there exists an integer b such that
a · b ≡ 1 (mod p), i.e. p divides the difference a · b − 1.
The inverse b can be found by using Bézout's identity and the fact that the greatest common divisor gcd(a, p) equals 1. In the case p = 5 above, the inverse of 4 is 4, and the inverse of 3 is 2, as 3 · 2 = 6 ≡ 1 (mod 5). Hence all group axioms are fulfilled. Actually, this example is similar to (Q \ {0}, ·) above: it consists of exactly those elements in Z/pZ that have a multiplicative inverse. These groups are denoted Fp×. They are crucial to public-key cryptography.
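The computation of such an inverse via Bézout's identity can be sketched in a few lines of Python; the extended Euclidean algorithm below is a standard method, and p = 5 matches the example above.

```python
# Minimal sketch: multiplicative inverse modulo a prime p via the extended
# Euclidean algorithm (Bézout's identity).

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, p):
    g, x, _ = extended_gcd(a, p)
    assert g == 1, "a must not be divisible by p"
    return x % p

p = 5
print(inverse_mod(4, p))   # 4, since 4 · 4 = 16 ≡ 1 (mod 5)
print(inverse_mod(3, p))   # 2, since 3 · 2 = 6 ≡ 1 (mod 5)
```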
Cyclic groups
right|thumb|upright|The 6th complex roots of unity form a cyclic group. z is a primitive element, but z2 is not, because the odd powers of z are not a power of z2.
A cyclic group is a group all of whose elements are powers of a particular element a. In multiplicative notation, the elements of the group are:
..., a−3, a−2, a−1, a0 = e, a, a2, a3, ...,
where a2 means a • a, and a−3 stands for a−1 • a−1 • a−1 = (a • a • a)−1 etc. Such an element a is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as
..., −a−a, −a, 0, a, a+a, ...
In the groups Z/nZ introduced above, the element 1 is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are 1. Any cyclic group with n elements is isomorphic to this group. A second example for cyclic groups is the group of n-th complex roots of unity, given by complex numbers z satisfying zn = 1. These numbers can be visualized as the vertices of a regular n-gon, as shown in blue at the right for n = 6. The group operation is multiplication of complex numbers. In the picture, multiplying with z corresponds to a counter-clockwise rotation by 60°. Using some field theory, the group Fp× can be shown to be cyclic: for example, if p = 5, 3 is a generator since its successive powers 3, 9 ≡ 4, 27 ≡ 2 and 81 ≡ 1 run through all the elements of the group.
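Whether an element generates Fp× can be tested by listing its successive powers. The following minimal Python sketch does this for p = 5, the same small example used above.

```python
# Minimal sketch: list the distinct powers of each element modulo p and mark
# the elements whose powers run through the whole multiplicative group.

def powers(a, p):
    seen, x = [], a % p
    while x not in seen:
        seen.append(x)
        x = (x * a) % p
    return seen

p = 5
for a in range(1, p):
    ps = powers(a, p)
    print(a, ps, "generator" if len(ps) == p - 1 else "")
# 3 is a generator: its powers 3, 4, 2, 1 exhaust the group.
```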
Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element a, all the powers of a are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to (Z, +), the group of integers under addition introduced above. As these two prototypes are both abelian, so is any cyclic group.
The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian.
Symmetry groups
Symmetry groups are groups consisting of symmetries of given mathematical objects—be they of geometric nature, such as the introductory symmetry group of the square, or of algebraic nature, such as polynomial equations and their solutions. Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act on another mathematical object X if every group element performs some operation on X compatibly with the group law. In the rightmost example below, an element of order 7 of the (2,3,7) triangle group acts on the tiling by permuting the highlighted warped triangles (and the other ones, too). By a group action, the group pattern is connected to the structure of the object being acted on.
right|thumb|200px|Rotations and reflections form the symmetry group of a great icosahedron.
In chemical fields, such as crystallography, space groups and point groups describe molecular symmetries and crystal symmetries. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of the quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved.
Not only are groups useful to assess the implications of symmetries in molecules, but surprisingly they also predict that molecules sometimes can change symmetry. The Jahn-Teller effect is a distortion of a molecule of high symmetry when it adopts a particular ground state of lower symmetry from a set of possible ground states that are related to each other by the symmetry operations of the molecule.
Likewise, group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition.
Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons.
Buckminsterfullerene displays icosahedral symmetry, though the double bonds reduce this to pyritohedral symmetry. Ammonia, NH3, has a symmetry group of order 6, generated by a 120° rotation and a reflection. Cubane C8H8 features octahedral symmetry. The hexaaquacopper(II) complex ion, [Cu(OH2)6]2+, is vertically dilated by about 22% compared to a perfectly symmetrical shape (Jahn-Teller effect). The (2,3,7) triangle group, a hyperbolic group, acts on a tiling of the hyperbolic plane.
Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory.
General linear group and representation theory
right|thumb|250px|Two vectors (the left illustration) multiplied by matrices (the middle and right illustrations). The middle illustration represents a clockwise rotation by 90°, while the right-most one stretches the x-coordinate by factor 2.
Matrix groups consist of matrices together with matrix multiplication. The general linear group consists of all invertible n-by-n matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group SO(n). It describes all possible rotations in n dimensions. Via Euler angles, rotation matrices are used in computer graphics.
Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations, i.e. the group is acting on a vector space, such as the three-dimensional Euclidean space R3. A representation of G on an n-dimensional real vector space is simply a group homomorphism
ρ: G → GL(n, R)
from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices making it accessible to explicit computations.
Given a group action, this gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups.
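As a small illustration of a linear representation (a minimal Python sketch assuming the NumPy library is available; the group and the matrices are arbitrary textbook-style choices), the cyclic group Z/4Z can be represented on the plane by sending k to the rotation matrix for k·90°.

```python
# Minimal sketch: a representation rho of Z/4Z on R^2 by 90°-rotation matrices.
# rho is a homomorphism: rho((j + k) mod 4) equals the matrix product rho(j) · rho(k).

import numpy as np

def rho(k):
    r = np.array([[0, -1],
                  [1,  0]])                 # rotation by 90°
    return np.linalg.matrix_power(r, k % 4)

for j in range(4):
    for k in range(4):
        assert np.array_equal(rho((j + k) % 4), rho(j) @ rho(k))

print(rho(1) @ np.array([1, 0]))            # [0 1]: the x-axis vector rotated by 90°
```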
Galois groups
Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation ax2 + bx + c = 0 are given by
x = (−b ± √(b2 − 4ac)) / (2a).
Exchanging "+" and "−" in the expression, i.e. permuting the two solutions of the equation can be viewed as a (very simple) group operation. Similar formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. (see in particular p. 273 for concrete examples) Abstract properties of Galois groups associated with polynomials (in particular their solvability) give a criterion for polynomials that have all their solutions expressible by radicals, i.e. solutions expressible using solely addition, multiplication, and roots similar to the formula above.
The problem can be dealt with by shifting to field theory and considering the splitting field of a polynomial. Modern Galois theory generalizes the above type of Galois groups to field extensions and establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics.
Finite groups
A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups SN, the groups of permutations of N letters. For example, the symmetric group on 3 letters S3 is the group consisting of all possible orderings of the three letters ABC, i.e. contains the elements ABC, ACB, BAC, BCA, CAB, CBA, in total 6 (factorial of 3) elements. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group SN for a suitable integer N, according to Cayley's theorem. Parallel to the group of symmetries of the square above, S3 can also be interpreted as the group of symmetries of an equilateral triangle.
The order of an element a in a group G is the least positive integer n such that a n = e, where a n represents
a • a • ⋯ • a (n factors),
i.e. application of the operation • to n copies of a. (If • represents multiplication, then a n corresponds to the nth power of a.) In infinite groups, such an n may not exist, in which case the order of a is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element.
More sophisticated counting techniques, for example counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group G the order of any finite subgroup H divides the order of G. The Sylow theorems give a partial converse.
The dihedral group D4 discussed above is a finite group of order 8. The order of r1 is 4, as is the order of the subgroup R that it generates (see above). The order of the reflection elements fv etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups Fp× above have order p − 1.
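Element orders and Lagrange's theorem can be checked directly on a small group. The following minimal Python sketch uses Z/8Z under addition modulo 8, an illustrative stand-in for the dihedral example in the text.

```python
# Minimal sketch: the order of each element of Z/8Z, and the check that every
# such order divides the group order 8 (Lagrange's theorem).

n = 8

def order(a):
    k, x = 1, a % n
    while x != 0:            # 0 is the identity element
        x = (x + a) % n
        k += 1
    return k

for a in range(n):
    print(a, order(a))       # orders 1, 8, 4, 8, 2, 8, 4, 8
    assert n % order(a) == 0
```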
Classification of finite simple groups
Mathematicians often strive for a complete classification (or list) of a mathematical notion. In the context of finite groups, this aim leads to difficult mathematics. According to Lagrange's theorem, finite groups of order p, a prime number, are necessarily cyclic (abelian) groups Zp. Groups of order p2 can also be shown to be abelian, a statement which does not generalize to order p3, as the non-abelian group D4 of order 8 = 23 above shows. Computer algebra systems can be used to list small groups, but there is no classification of all finite groups. An intermediate step is the classification of finite simple groups. A nontrivial group is called simple if its only normal subgroups are the trivial group and the group itself. The Jordan–Hölder theorem exhibits finite simple groups as the building blocks for all finite groups. Listing all finite simple groups was a major achievement in contemporary group theory. 1998 Fields Medal winner Richard Borcherds succeeded in proving the monstrous moonshine conjectures, a surprising and deep relation between the largest finite simple sporadic group—the "monster group"—and certain modular functions, a piece of classical complex analysis, and string theory, a theory supposed to unify the description of many physical phenomena.
Groups with additional structure
Many groups are simultaneously groups and examples of other mathematical structures. In the language of category theory, they are group objects in a category, meaning that they are objects (that is, examples of another mathematical structure) which come with transformations (called morphisms) that mimic the group axioms. For example, every group (as defined above) is also a set, so a group is a group object in the category of sets.
Topological groups
right|thumb|The unit circle in the complex plane under complex multiplication is a Lie group and, therefore, a topological group. It is topological since complex multiplication and division are continuous. It is a manifold and thus a Lie group, because every small piece, such as the red arc in the figure, looks like a part of the real line (shown at the bottom).
Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions, that is, g • h and g−1 must not vary wildly if g and h vary only little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the reals R under addition, the nonzero reals under multiplication, and similarly any other topological field such as the complex numbers or p-adic numbers. All of these groups are locally compact, so they have Haar measures and can be studied via harmonic analysis. The former offer an abstract formalism of invariant integrals. Invariance means, in the case of real numbers for example:
∫ f(x) dx = ∫ f(x + c) dx
for any constant c. Matrix groups over these fields fall under this regime, as do adele rings and adelic algebraic groups, which are basic to number theory. Galois groups of infinite field extensions such as the absolute Galois group can also be equipped with a topology, the so-called Krull topology, which in turn is central to generalize the above sketched connection of fields and groups to infinite field extensions. An advanced generalization of this idea, adapted to the needs of algebraic geometry, is the étale fundamental group.
Lie groups
Lie groups (in honor of Sophus Lie) are groups which also have a manifold structure, i.e. they are spaces looking locally like some Euclidean space of the appropriate dimension. Again, the additional structure, here the manifold structure, has to be compatible, i.e. the maps corresponding to multiplication and the inverse have to be smooth.
A standard example is the general linear group introduced above: it is an open subset of the space of all n-by-n matrices, because it is given by the inequality
det (A) ≠ 0,
where A denotes an n-by-n matrix.
Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotation, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Other examples are the Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e. including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, in quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory.
Generalizations
In abstract algebra, more general structures are defined by relaxing some of the axioms defining a group. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N (including 0) under addition form a monoid, as do the nonzero integers under multiplication (see above). There is a general method to formally add inverses to the elements of any (abelian) monoid, much the same way as (Q \ {0}, ·) is derived from (Z \ {0}, ·); it is known as the Grothendieck group.
Groupoids are similar to groups except that the composition a • b need not be defined for all a and b. They arise in the study of more complicated forms of symmetry, often in topological and analytical structures, such as the fundamental groupoid or stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an arbitrary n-ary one (i.e. an operation taking n arguments). With the proper generalization of the group axioms, this gives rise to an n-ary group.
See also
Abelian group
Cyclic group
Euclidean group
Finitely presented group
Free group
Fundamental group
Grothendieck group
Group algebra
Group ring
Heap (mathematics)
List of small groups
Nilpotent group
Non-abelian group
Quantum group
Reductive group
Solvable group
Symmetry in physics
Computational group theory
Glacier | thumb|The Baltoro Glacier in the Karakoram, Baltistan. At in length, it is one of the longest alpine glaciers on earth.
thumb|Ice calving from the terminus of the Perito Moreno Glacier in western Patagonia, Argentina
thumb|The Aletsch Glacier, the largest glacier of the Alps, in Switzerland
thumb|The Quelccaya Ice Cap is the largest glaciated area in the tropics, in Peru
A glacier is a persistent body of dense ice that is constantly moving under its own weight; it forms where the accumulation of snow exceeds its ablation (melting and sublimation) over many years, often centuries. Glaciers slowly deform and flow due to stresses induced by their weight, creating crevasses, seracs, and other distinguishing features. They also abrade rock and debris from their substrate to create landforms such as cirques and moraines. Glaciers form only on land and are distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water.
On Earth, 99% of glacial ice is contained within vast ice sheets in the polar regions, but glaciers may be found in mountain ranges on every continent except Australia, and on a few high-latitude oceanic islands. Between 35°N and 35°S, glaciers occur only in the Himalayas, Andes, Rocky Mountains, a few high mountains in East Africa, Mexico, New Guinea and on Zard Kuh in Iran. Glaciers cover about 10 percent of Earth's land surface. Continental glaciers cover about 98 percent of Antarctica, with an average ice thickness of roughly two kilometres. Greenland and Patagonia also have huge expanses of continental glaciers.National Geographic Almanac of Geography, 2005, ISBN 0-7922-3877-X, page 149.
Glacial ice is the largest reservoir of fresh water on Earth. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. Within high altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater.
Because glacial mass is affected by long-term climatic changes, e.g., precipitation, mean temperature, and cloud cover, glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level.
A large piece of compressed ice, or a glacier, appears blue, as large quantities of water do. This is because water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles: air bubbles, which give a white color to ice, are squeezed out by pressure, increasing the density of the ice.
Etymology and related terms
The word glacier is a loanword from French and goes back, via Franco-Provençal, to a Vulgar Latin form derived ultimately from the Latin glaciēs, meaning "ice". The processes and features caused by or related to glaciers are referred to as glacial. The process of glacier establishment, growth and flow is called glaciation. The corresponding area of study is called glaciology. Glaciers are important components of the global cryosphere.
Types
Classification by size, shape, and behavior
thumb|Mouth of the Schlatenkees Glacier near Innergschlöß, Austria
Glaciers are categorized by their morphology, thermal characteristics, and behavior. Alpine glaciers, also known as mountain glaciers or cirque glaciers, form on the crests and slopes of mountains. An alpine glacier that fills a valley is sometimes called a valley glacier. A large body of glacial ice astride a mountain, mountain range, or volcano is termed an ice cap or ice field. Ice caps have an area less than 50,000 km2 by definition.
Glacial bodies larger than 50,000 km2 are called ice sheets or continental glaciers. Several kilometers deep, they obscure the underlying topography. Only nunataks protrude from their surfaces. The only extant ice sheets are the two that cover most of Antarctica and Greenland. They contain vast quantities of fresh water, enough that if both melted, global sea levels would rise by tens of meters. Portions of an ice sheet or cap that extend into water are called ice shelves; they tend to be thin with limited slopes and reduced velocities. Narrow, fast-moving sections of an ice sheet are called ice streams. In Antarctica, many ice streams drain into large ice shelves. Some drain directly into the sea, often with an ice tongue, like Mertz Glacier.
thumb|left|Sightseeing boat in front of a tidewater glacier, Kenai Fjords National Park, Alaska
Tidewater glaciers are glaciers that terminate in the sea, including most glaciers flowing from Greenland, Antarctica, Baffin and Ellesmere Islands in Canada, Southeast Alaska, and the Northern and Southern Patagonian Ice Fields. As the ice reaches the sea, pieces break off, or calve, forming icebergs. Most tidewater glaciers calve above sea level, which often results in a tremendous impact as the iceberg strikes the water. Tidewater glaciers undergo centuries-long cycles of advance and retreat that are much less affected by the climate change than those of other glaciers.
Classification by thermal state
Thermally, a temperate glacier is at melting point throughout the year, from its surface to its base. The ice of a polar glacier is always below the freezing point from the surface to its base, although the surface snowpack may experience seasonal melting. A sub-polar glacier includes both temperate and polar ice, depending on depth beneath the surface and position along the length of the glacier. In a similar way, the thermal regime of a glacier is often described by the temperature at its base alone. A cold-based glacier is below freezing at the ice-ground interface, and is thus frozen to the underlying substrate. A warm-based glacier is above or at freezing at the interface, and is able to slide at this contact.http://link.springer.com/referenceworkentry/10.1007%2F978-90-481-2642-2_72/fulltext.html This contrast is thought to a large extent to govern the ability of a glacier to effectively erode its bed, as sliding ice promotes plucking at rock from the surface below.Boulton, G.S. [1974] "Processes and patterns of glacial erosion", (In Coates, D.R. ed., Glacial Geomorphology. A Proceedings Volume of the fifth Annual Geomorphology Symposia series, held at Binghamton, New York, September 26–28, 1974. Binghamton, N.Y., State University of New York, p. 41-87. (Publications in Geomorphology)) Glaciers which are partly cold-based and partly warm-based are known as polythermal.
Formation
thumb|Gorner Glacier in Switzerland
Glaciers form where the accumulation of snow and ice exceeds ablation. The area in which a glacier forms is called a cirque (corrie or cwm) - a typically armchair-shaped geological feature (such as a depression between mountains enclosed by arêtes) - which collects and compresses through gravity the snow which falls into it. This snow collects and is compacted by the weight of the snow falling above it forming névé. Further crushing of the individual snowflakes and squeezing the air from the snow turns it into 'glacial ice'. This glacial ice will fill the cirque until it 'overflows' through a geological weakness or vacancy, such as the gap between two mountains. When the mass of snow and ice is sufficiently thick, it begins to move due to a combination of surface slope, gravity and pressure. On steeper slopes, this can occur with as little as 15 m (50 ft) of snow-ice.
thumb|left|A packrafter passes a wall of freshly-exposed blue ice on Spencer Glacier, in Alaska. Glacial ice acts like a filter on light, and the more time light can spend traveling through ice the bluer it becomes.
In temperate glaciers, snow repeatedly freezes and thaws, changing into granular ice called firn. Under the pressure of the layers of ice and snow above it, this granular ice fuses into denser and denser firn. Over a period of years, layers of firn undergo further compaction and become glacial ice. Glacier ice is slightly less dense than ice formed from frozen water because it contains tiny trapped air bubbles.
Glacial ice has a distinctive blue tint because it absorbs some red light due to an overtone of the infrared OH stretching mode of the water molecule. Liquid water is blue for the same reason. The blue of glacier ice is sometimes misattributed to Rayleigh scattering due to bubbles in the ice.
thumb|A glacier cave located on the Perito Moreno Glacier in Argentina.
Structure
A glacier originates at a location called its glacier head and terminates at its glacier foot, snout, or terminus.
Glaciers are broken into zones based on surface snowpack and melt conditions.Benson, C.S., 1961, "Stratigraphic studies in the snow and firn of the Greenland Ice Sheet", Res. Rep. 70, U.S. Army Snow, Ice and Permafrost Res Establ., Corps of Eng., 120 pp The ablation zone is the region where there is a net loss in glacier mass. The equilibrium line separates the ablation zone and the accumulation zone; it is the altitude where the amount of new snow gained by accumulation is equal to the amount of ice lost through ablation. The upper part of a glacier, where accumulation exceeds ablation, is called the accumulation zone. In general, the accumulation zone accounts for 60–70% of the glacier's surface area, more if the glacier calves icebergs. Ice in the accumulation zone is deep enough to exert a downward force that erodes underlying rock. After a glacier melts, it often leaves behind a bowl- or amphitheater-shaped depression that ranges in size from large basins like the Great Lakes to smaller mountain depressions known as cirques.
The accumulation zone can be subdivided based on its melt conditions.
The dry snow zone is a region where no melt occurs, even in the summer, and the snowpack remains dry.
The percolation zone is an area with some surface melt, causing meltwater to percolate into the snowpack. This zone is often marked by refrozen ice lenses, glands, and layers. The snowpack also never reaches melting point.
Near the equilibrium line on some glaciers, a superimposed ice zone develops. This zone is where meltwater refreezes as a cold layer in the glacier, forming a continuous mass of ice.
The wet snow zone is the region where all of the snow deposited since the end of the previous summer has been raised to 0 °C.
The health of a glacier is usually assessed by determining the glacier mass balance or observing terminus behavior. Healthy glaciers have large accumulation zones, more than 60% of their area snowcovered at the end of the melt season, and a terminus with vigorous flow.
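In quantitative terms, the annual net mass balance is simply accumulation minus ablation, and the share of a glacier's area lying in the accumulation zone is a common health indicator. The following minimal Python sketch uses made-up illustrative numbers, not measurements from any real glacier.

```python
# Minimal sketch with illustrative numbers only: net mass balance and the
# accumulation-area ratio as simple indicators of glacier health.

accumulation_m_we = 1.8        # metres of water equivalent gained over the year
ablation_m_we = 2.3            # metres of water equivalent lost over the year
accumulation_area_km2 = 45.0
total_area_km2 = 80.0

mass_balance = accumulation_m_we - ablation_m_we          # negative: mass loss
accumulation_area_ratio = accumulation_area_km2 / total_area_km2

print(f"net balance: {mass_balance:+.1f} m w.e.")                  # -0.5 m w.e.
print(f"accumulation area ratio: {accumulation_area_ratio:.0%}")   # 56%, below the ~60% guideline
```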
Following the Little Ice Age's end around 1850, glaciers around the Earth have retreated substantially. A slight cooling led to the advance of many alpine glaciers between 1950–1985, but since 1985 glacier retreat and mass loss has become larger and increasingly ubiquitous.
Motion
thumb|Shear or herring-bone crevasses on Emmons Glacier (Mount Rainier); such crevasses often form near the edge of a glacier where interactions with underlying or marginal rock impede flow. In this case, the impediment appears to be some distance from the near margin of the glacier.
Glaciers move, or flow, downhill due to gravity and the internal deformation of ice. Ice behaves like a brittle solid until its thickness exceeds about 50 m (160 ft). The pressure on ice deeper than 50 m causes plastic flow. At the molecular level, ice consists of stacked layers of molecules with relatively weak bonds between layers. When the stress on the layer above exceeds the inter-layer binding strength, it moves faster than the layer below.W.S.B. Paterson, Physics of ice
Glaciers also move through basal sliding. In this process, a glacier slides over the terrain on which it sits, lubricated by the presence of liquid water. The water is created from ice that melts under high pressure from frictional heating. Basal sliding is dominant in temperate, or warm-based glaciers.
Fracture zone and cracks
thumb|Ice cracks in the Titlis Glacier
The top 50 m or so of a glacier are rigid because they are under low pressure. This upper section is known as the fracture zone and moves mostly as a single unit over the plastically flowing lower section. When a glacier moves through irregular terrain, cracks called crevasses develop in the fracture zone. Crevasses form due to differences in glacier velocity. If two rigid sections of a glacier move at different speeds and directions, shear forces cause them to break apart, opening a crevasse. Crevasses are seldom more than about 50 m deep, but in some cases they can be much deeper. Beneath this point, the plasticity of the ice is too great for cracks to form. Intersecting crevasses can create isolated peaks in the ice, called seracs.
Crevasses can form in several different ways. Transverse crevasses are transverse to flow and form where steeper slopes cause a glacier to accelerate. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally. Marginal crevasses form from the edge of the glacier, due to the reduction in speed caused by friction of the valley walls. Marginal crevasses are usually largely transverse to flow. Moving glacier ice can sometimes separate from stagnant ice above, forming a bergschrund. Bergschrunds resemble crevasses but are singular features at a glacier's margins.
Crevasses make travel over glaciers hazardous, especially when they are hidden by fragile snow bridges.
thumb|upright|Crossing a crevasse on the Easton Glacier, Mount Baker, in the North Cascades, United States
Below the equilibrium line, glacial meltwater is concentrated in stream channels. Meltwater can pool in proglacial lakes on top of a glacier or descend into the depths of a glacier via moulins. Streams within or beneath a glacier flow in englacial or sub-glacial tunnels. These tunnels sometimes reemerge at the glacier's surface.
Speed
The speed of glacial displacement is partly determined by friction. Friction makes the ice at the bottom of the glacier move more slowly than ice at the top. In alpine glaciers, friction is also generated at the valley's side walls, which slows the edges relative to the center.
Mean speeds vary greatly from glacier to glacier.Glacier properties Hunter College CUNY lectures There may be no motion in stagnant areas; for example, in parts of Alaska, trees can establish themselves on surface sediment deposits. In other cases, glaciers can move tens of meters per day, such as Greenland's Jakobshavn Isbræ. Velocity increases with increasing slope, increasing thickness, increasing snowfall, increasing longitudinal confinement, increasing basal temperature, increasing meltwater production and reduced bed hardness.
A few glaciers have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they accelerate, then return to their previous state. During these surges, the glacier may reach velocities far greater than normal speed.T. Strozzi et al.: The Evolution of a Glacier Surge Observed with the ERS Satellites (pdf, 1.3 Mb) These surges may be caused by failure of the underlying bedrock, the pooling of meltwater at the base of the glacier — perhaps delivered from a supraglacial lake — or the simple accumulation of mass beyond a critical "tipping point".Meier & Post (1969) Exceptionally high temporary rates of advance have occurred when increased temperature or overlying pressure caused bottom ice to melt and water to accumulate beneath a glacier.
In glaciated areas where the glacier moves faster than one km per year, glacial earthquakes occur. These are large scale temblors that have seismic magnitudes as high as 6.1."Seasonality and Increasing Frequency of Greenland Glacial Earthquakes", Ekström, G., M. Nettles, and V. C. Tsai (2006) Science, 311, 5768, 1756-1758, "Analysis of Glacial Earthquakes" Tsai, V. C. and G. Ekström (2007).
J. Geophys. Res., 112, F03S22, The number of glacial earthquakes in Greenland peaks every year in July, August and September and is increasing over time. In a study using data from January 1993 through October 2005, more events were detected every year since 2002, and twice as many events were recorded in 2005 as there were in any other year. This increase in the numbers of glacial earthquakes in Greenland may be a response to global warming.
Ogives
Ogives are alternating wave crests and valleys that appear as dark and light bands of ice on glacier surfaces. They are linked to seasonal motion of glaciers; the width of one dark and one light band generally equals the annual movement of the glacier. Ogives are formed when ice from an icefall is severely broken up, increasing ablation surface area during summer. This creates a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives consist only of undulations or color bands and are described as wave ogives or band ogives.
Geography
thumb|Black ice glacier near Aconcagua, Argentina
Glaciers are present on every continent and in approximately fifty countries, excluding those (Australia, South Africa) that have glaciers only on distant subantarctic island territories. Extensive glaciers are found in Antarctica, Chile, Canada, Alaska, Greenland and Iceland. Mountain glaciers are widespread, especially in the Andes, the Himalayas, the Rocky Mountains, the Caucasus, and the Alps. Mainland Australia currently contains no glaciers, although a small glacier on Mount Kosciuszko was present in the last glacial period. In New Guinea, small, rapidly diminishing glaciers are located on its highest summit massif of Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya and in the Rwenzori Mountains. Oceanic islands with glaciers occur on Iceland, Svalbard, New Zealand, Jan Mayen and the subantarctic islands of Marion, Heard, Grande Terre (Kerguelen) and Bouvet. During glacial periods of the Quaternary, Taiwan, Hawaii on Mauna Kea and Tenerife also had large alpine glaciers, while the Faroe and Crozet Islands were completely glaciated.
The permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land, amount of snowfall and the winds. Glaciers can be found in all latitudes except from 20° to 27° north and south of the equator, where the presence of the descending limb of the Hadley circulation lowers precipitation so much that, combined with high insolation, snow lines lie extremely high. Between 19°N and 19°S, however, precipitation is higher and sufficiently high mountains usually have permanent snow.
Even at high latitudes, glacier formation is not inevitable. Areas of the Arctic, such as Banks Island, and the McMurdo Dry Valleys in Antarctica are considered polar deserts where glaciers cannot form because they receive little snowfall despite the bitter cold. Cold air, unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria, lowland Siberia,Collins, Henry Hill; Europe and the USSR; p. 263. and central and northern Alaska, though extraordinarily cold, had such light snowfall that glaciers could not form.Earth History 2001 (page 15)
In addition to the dry, unglaciated polar regions, some mountains and volcanoes in Bolivia, Chile and Argentina are high and cold, but the relative lack of precipitation prevents snow from accumulating into glaciers. This is because these peaks are located near or in the hyperarid Atacama Desert.
Glacial geology
thumb|Diagram of glacial plucking and abrasion
thumb|right|Glacially plucked granitic bedrock near Mariehamn, Åland Islands
Glaciers erode terrain through two principal processes: abrasion and plucking.
As glaciers flow over bedrock, they soften and lift blocks of rock into the ice. This process, called plucking, is caused by subglacial water that penetrates fractures in the bedrock and subsequently freezes and expands. This expansion causes the ice to act as a lever that loosens the rock by lifting it. Thus, sediments of all sizes become part of the glacier's load. If a retreating glacier gains enough debris, it may become a rock glacier, like the Timpanogos Glacier in Utah.
Abrasion occurs when the ice and its load of rock fragments slide over bedrock and function as sandpaper, smoothing and polishing the bedrock below. The pulverized rock this process produces is called rock flour and is made up of rock grains between 0.002 and 0.00625 mm in size. Abrasion leads to steeper valley walls and mountain slopes in alpine settings, which can cause avalanches and rock slides, which add even more material to the glacier.
Glacial abrasion is commonly characterized by glacial striations. Glaciers produce these when they contain large boulders that carve long scratches in the bedrock. By mapping the direction of the striations, researchers can determine the direction of the glacier's movement. Similar to striations are chatter marks, lines of crescent-shape depressions in the rock underlying a glacier. They are formed by abrasion when boulders in the glacier are repeatedly caught and released as they are dragged along the bedrock.
The rate of glacier erosion varies. Six factors control erosion rate:
Velocity of glacial movement
Thickness of the ice
Shape, abundance and hardness of rock fragments contained in the ice at the bottom of the glacier
Relative ease of erosion of the surface under the glacier
Thermal conditions at the glacier base
Permeability and water pressure at the glacier base
Material that becomes incorporated in a glacier is typically carried as far as the zone of ablation before being deposited. Glacial deposits are of two distinct types:
Glacial till: material directly deposited from glacial ice. Till includes a mixture of undifferentiated material ranging from clay size to boulders, the usual composition of a moraine.
Fluvial and outwash sediments: sediments deposited by water. These deposits are stratified by size.
Larger pieces of rock that are encrusted in till or deposited on the surface are called "glacial erratics". They range in size from pebbles to boulders, but as they are often moved great distances, they may be drastically different from the material upon which they are found. Patterns of glacial erratics hint at past glacial motions.
Moraines
thumb|Glacial moraines above Lake Louise, Alberta, Canada
Glacial moraines are formed by the deposition of material from a glacier and are exposed after the glacier has retreated. They usually appear as linear mounds of till, a non-sorted mixture of rock, gravel and boulders within a matrix of a fine powdery material. Terminal or end moraines are formed at the foot or terminal end of a glacier. Lateral moraines are formed on the sides of the glacier. Medial moraines are formed when two different glaciers merge and the lateral moraines of each coalesce to form a moraine in the middle of the combined glacier. Less apparent are ground moraines, also called glacial drift, which often blankets the surface underneath the glacier downslope from the equilibrium line.
The term moraine is of French origin. It was coined by peasants to describe alluvial embankments and rims found near the margins of glaciers in the French Alps. In modern geology, the term is used more broadly, and is applied to a series of formations, all of which are composed of till. Moraines can also create moraine dammed lakes.
Drumlins
frame|right|A drumlin field forms after a glacier has modified the landscape. The teardrop-shaped formations denote the direction of the ice flow.
Drumlins are asymmetrical, canoe shaped hills made mainly of till. Their heights vary from 15 to 50 meters and they can reach a kilometer in length. The steepest side of the hill faces the direction from which the ice advanced (stoss), while a longer slope is left in the ice's direction of movement (lee).
Drumlins are found in groups called drumlin fields or drumlin camps. One of these fields is found east of Rochester, New York; it is estimated to contain about 10,000 drumlins.
Although the process that forms drumlins is not fully understood, their shape implies that they are products of the plastic deformation zone of ancient glaciers. It is believed that many drumlins were formed when glaciers advanced over and altered the deposits of earlier glaciers.
Glacial valleys, cirques, arêtes, and pyramidal peaks
right|thumb|Features of a glacial landscape
Before glaciation, mountain valleys have a characteristic "V" shape, produced by eroding water. During glaciation, these valleys are often widened, deepened and smoothed to form a "U"-shaped glacial valley. The erosion that creates glacial valleys truncates any spurs of rock or earth that may have earlier extended across the valley, creating broadly triangular-shaped cliffs called truncated spurs. Within glacial valleys, depressions created by plucking and abrasion can be filled by lakes, called paternoster lakes. If a glacial valley runs into a large body of water, it forms a fjord.
Typically glaciers deepen their valleys more than their smaller tributaries. Therefore, when glaciers recede, the valleys of the tributary glaciers remain above the main glacier's depression and are called hanging valleys.
At the start of a classic valley glacier is a bowl-shaped cirque, which has escarped walls on three sides but is open on the side that descends into the valley. Cirques are where ice begins to accumulate in a glacier. Two glacial cirques may form back to back and erode their backwalls until only a narrow ridge, called an arête, is left. This structure may result in a mountain pass. If multiple cirques encircle a single mountain, they create pointed pyramidal peaks; particularly steep examples are called horns.
Roches moutonnées
Passage of glacial ice over an area of bedrock may cause the rock to be sculpted into a knoll called a roche moutonnée, or "sheepback" rock. Roches moutonnées may be elongated, rounded and asymmetrical in shape. They range in length from less than a meter to several hundred meters long (Douglas Benn and David Evans, Glaciers & Glaciation, Arnold, London, 1998, pp. 324–326). Roches moutonnées have a gentle slope on their up-glacier sides and a steep to vertical face on their down-glacier sides. The glacier abrades the smooth slope on the upstream side as it flows along, but tears rock fragments loose and carries them away from the downstream side via plucking.
Alluvial stratification
As meltwater from the ablation zone flows away from the glacier, it carries fine eroded sediments with it. As the speed of the water decreases, so does its capacity to carry objects in suspension. The water thus gradually deposits the sediment as it runs, creating an alluvial plain. When this phenomenon occurs in a valley, it is called a valley train. When the deposition is in an estuary, the sediments are known as bay mud.
Outwash plains and valley trains are usually accompanied by basins known as "kettles". These are small lakes formed when large ice blocks that are trapped in alluvium melt and produce water-filled depressions. Kettle diameters range from 5 m to 13 km, with depths of up to 45 meters. Most are circular in shape because the blocks of ice that formed them were rounded as they melted.
Glacial deposits
thumb|Landscape produced by a receding glacier
When a glacier's size shrinks below a critical point, its flow stops and it becomes stationary. Meanwhile, meltwater within and beneath the ice leaves stratified alluvial deposits. These deposits, in the forms of columns, terraces and clusters, remain after the glacier melts and are known as "glacial deposits".
Glacial deposits that take the shape of hills or mounds are called kames. Some kames form when meltwater deposits sediments through openings in the interior of the ice. Others are produced by fans or deltas created by meltwater. When the glacial ice occupies a valley, it can form terraces or kames along the sides of the valley.
Long, sinuous glacial deposits are called eskers. Eskers are composed of sand and gravel deposited by meltwater streams that flowed through ice tunnels within or beneath a glacier. They remain after the ice melts, with heights exceeding 100 meters and lengths of up to 100 km.
Loess deposits
Very fine glacial sediment, or rock flour, is often picked up by wind blowing over the bare surface and may be deposited great distances from the original fluvial deposition site. These eolian loess deposits may be very deep, even hundreds of meters, as in areas of China and the Midwestern United States of America. Katabatic winds can be important in this process.
Isostatic rebound
right|frame|Isostatic pressure by a glacier on the Earth's crust
Large masses, such as ice sheets or glaciers, can depress the crust of the Earth into the mantle. The depression usually totals a third of the ice sheet or glacier's thickness. After the ice sheet or glacier melts, the mantle begins to flow back to its original position, pushing the crust back up. This post-glacial rebound, which proceeds very slowly after the melting of the ice sheet or glacier, is currently occurring in measurable amounts in Scandinavia and the Great Lakes region of North America.
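The "about a third" figure can be illustrated with a simple Airy-type isostasy balance, given here as a minimal sketch using typical textbook density values (assumed for illustration, not taken from this article): the crust sinks until the weight of the displaced mantle material equals the weight of the overlying ice.

d \approx \frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{mantle}}}\, h \approx \frac{917\ \mathrm{kg/m^3}}{3300\ \mathrm{kg/m^3}}\, h \approx 0.28\, h

Under these assumed densities, an ice sheet 3,000 meters thick would depress the crust by roughly 850 meters, consistent with the one-third relationship described above.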
A geomorphological feature created by the same process on a smaller scale is known as dilation-faulting. It occurs where previously compressed rock is allowed to return to its original shape more rapidly than can be maintained without faulting. This leads to an effect similar to what would be seen if the rock were hit by a large hammer. Dilation faulting can be observed in recently de-glaciated parts of Iceland and Cumbria.
On Mars
thumb|left|Northern polar ice cap on Mars
The polar ice caps of Mars show geologic evidence of glacial deposits. The south polar cap is especially comparable to glaciers on Earth. Topographical features and computer models indicate the existence of more glaciers in Mars' past.
At mid-latitudes, between 35° and 65° north or south, Martian glaciers are affected by the thin Martian atmosphere. Because of the low atmospheric pressure, ablation near the surface is solely due to sublimation, not melting. As on Earth, many glaciers are covered with a layer of rocks which insulates the ice. A radar instrument on board the Mars Reconnaissance Orbiter found ice under a thin layer of rocks in formations called lobate debris aprons (LDAs). (Head, J. et al. 2005. "Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars." Nature 434: 346–350; Plaut, J. et al. 2008. "Radar Evidence for Ice in Lobate Debris Aprons in the Mid-Northern Latitudes of Mars." Lunar and Planetary Science XXXIX, 2290; Holt, J. et al. 2008. "Radar Sounding Evidence for Ice within Lobate Debris Aprons near Hellas Basin, Mid-Southern Latitudes of Mars." Lunar and Planetary Science XXXIX, 2441.)
The pictures below illustrate how landscape features on Mars closely resemble those on the Earth.
See also
Glacier morphology
Ice dam
Glacial motion
Retreat of glaciers since 1850
Glacier growing
Glaciology
Glacial landform
Notes
References
External links
A report in the Global Environment Outlook (GEO) series.
Glacial structures - photo atlas
Glaciers of the Pyrenees
NOW on PBS "On Thin Ice"
Photo project tracks changes in Himalayan glaciers since 1921
Short radio episode "California Glaciers" from The Mountains of California by John Muir, 1894. California Legacy Project
Dynamics of Glaciers
Category:Glaciology
Category:Bodies of ice
Category:Glacial landforms
Category:Montane ecology | 12,463 | 2017-01 |
Gamal Abdel Nasser | Gamal Abdel Nasser Hussein (15 January 1918 – 28 September 1970) was the second President of Egypt, serving from 1956 until his death. Nasser led the 1952 overthrow of the monarchy and introduced far-reaching land reforms the following year. Following a 1954 attempt on his life by a Muslim Brotherhood member, he cracked down on the organization, put President Muhammad Naguib under house arrest, and assumed executive office, officially becoming president in June 1956.
Nasser's popularity in Egypt and the Arab world skyrocketed after his nationalization of the Suez Canal and his political victory in the subsequent Suez Crisis. Calls for pan-Arab unity under his leadership increased, culminating with the formation of the United Arab Republic with Syria (1958–1961). In 1962, Nasser began a series of major socialist measures and modernization reforms in Egypt. Despite setbacks to his pan-Arabist cause, by 1963 Nasser's supporters gained power in several Arab countries, but he became embroiled in the North Yemen Civil War. He began his second presidential term in March 1965 after his political opponents were banned from running. Following Egypt's defeat by Israel in the 1967 Six-Day War, Nasser resigned, but he returned to office after popular demonstrations called for his reinstatement. By 1968, Nasser had appointed himself prime minister, launched the War of Attrition to regain lost territory, began a process of depoliticizing the military, and issued a set of political liberalization reforms. After the conclusion of the 1970 Arab League summit, Nasser suffered a heart attack and died. His funeral in Cairo drew five million mourners and an outpouring of grief across the Arab world.
Nasser remains an iconic figure in the Arab world, particularly for his strides towards social justice and Arab unity, modernization policies, and anti-imperialist efforts. His presidency also encouraged and coincided with an Egyptian cultural boom, and launched large industrial projects, including the Aswan Dam and Helwan City. Nasser's detractors criticize his authoritarianism, his government's human rights violations, his populist relationship with the citizenry, and his failure to establish civil institutions, blaming his legacy for future dictatorial governance in Egypt.
Early life
thumb|left|upright|Nasser, 1931|alt=A boy wearing a jacket, a white shirt with a black tie and a fez on his head
Gamal Abdel Nasser was born on 15 January 1918 in Bakos, Alexandria, the first son of Fahima and Abdel Nasser Hussein. Nasser's father was a postal worker born in Beni Mur in Upper Egypt and raised in Alexandria, and his mother's family came from Mallawi, el-Minya. His parents married in 1917, and later had two more boys, Izz al-Arab and al-Leithi. Nasser's biographers Robert Stephens and Said Aburish wrote that Nasser's family believed strongly in the "Arab notion of glory", since the name of Nasser's brother, Izz al-Arab, translates to "Glory of the Arabs"—a rare name in Egypt.
Nasser's family traveled frequently due to his father's work. In 1921, they moved to Asyut and, in 1923, to Khatatba, where Nasser's father ran a post office. Nasser attended a primary school for the children of railway employees until 1924, when he was sent to live with his paternal uncle in Cairo, and to attend the Nahhasin elementary school.
Nasser exchanged letters with his mother and visited her on holidays. Her letters stopped at the end of April 1926. Upon returning to Khatatba, he learned that his mother had died after giving birth to his third brother, Shawki, and that his family had kept the news from him. Nasser later stated that "losing her this way was a shock so deep that time failed to remedy". He adored his mother, and the pain of her death deepened when his father remarried before the year's end.
In 1928, Nasser went to Alexandria to live with his maternal grandfather and attend the city's Attarin elementary school. He left in 1929 for a private boarding school in Helwan, and later returned to Alexandria to enter the Ras el-Tin secondary school and to join his father, who was working for the city's postal service. It was in Alexandria that Nasser became involved in political activism. After witnessing clashes between protesters and police in Manshia Square, he joined the demonstration without being aware of its purpose. The protest, organized by the ultranationalist Young Egypt Society, called for the end of colonialism in Egypt in the wake of the 1923 Egyptian constitution's annulment by Prime Minister Isma'il Sidqi. Nasser was arrested and detained for a night before his father bailed him out.
thumb|upright|Nasser's name circled in Al-Gihad
When his father was transferred to Cairo in 1933, Nasser joined him and attended al-Nahda al-Masria school. He took up acting in school plays for a brief period and wrote articles for the school's paper, including a piece on French philosopher Voltaire titled "Voltaire, the Man of Freedom". On 13 November 1935, Nasser led a student demonstration against British rule, protesting against a statement made four days prior by UK foreign minister Samuel Hoare that rejected prospects for the 1923 Constitution's restoration. Two protesters were killed and Nasser received a graze to the head from a policeman's bullet. The incident garnered his first mention in the press: the nationalist newspaper Al Gihad reported that Nasser led the protest and was among the wounded. On 12 December, the new king, Farouk, issued a decree restoring the constitution.
Nasser's involvement in political activity increased throughout his school years, such that he only attended 45 days of classes during his last year of secondary school. Despite it having the almost unanimous backing of Egypt's political forces, Nasser strongly objected to the 1936 Anglo-Egyptian Treaty because it stipulated the continued presence of British military bases in the country. Nonetheless, political unrest in Egypt declined significantly and Nasser resumed his studies at al-Nahda, where he received his leaving certificate later that year.
Early influences
Aburish asserts that Nasser was not distressed by his frequent relocations, which broadened his horizons and showed him Egyptian society's class divisions. His own social status was well below the wealthy Egyptian elite, and his discontent with those born into wealth and power grew throughout his lifetime. Nasser spent most of his spare time reading, particularly in 1933 when he lived near the National Library of Egypt. He read the Qur'an, the sayings of Muhammad, the lives of the Sahaba (Muhammad's companions), the biographies of nationalist leaders Napoleon, Atatürk, Otto von Bismarck and Garibaldi, and the autobiography of Winston Churchill.
Nasser was greatly influenced by Egyptian nationalism, as espoused by politician Mustafa Kamel, poet Ahmed Shawqi, and his anti-colonialist instructor at the Royal Military Academy, Aziz al-Masri, to whom Nasser expressed his gratitude in a 1961 newspaper interview. He was especially influenced by Egyptian writer Tawfiq al-Hakim's novel Return of the Spirit, in which al-Hakim wrote that the Egyptian people were only in need of a "man in whom all their feelings and desires will be represented, and who will be for them a symbol of their objective". Nasser later credited the novel as his inspiration to launch the 1952 revolution.
Military career
thumb|upright|Portrait of Nasser at law school in 1937|alt=A man wearing a tweed, pinstriped jacket and a tie. His hair is raised and black and he has a thin mustache.
In 1937, Nasser applied to the Royal Military Academy for army officer training, but his police record of anti-government protest initially blocked his entry. Disappointed, he enrolled in the law school at King Fuad University, but quit after one semester to reapply to the Military Academy. From his readings, Nasser, who frequently spoke of "dignity, glory, and freedom" in his youth, became enchanted with the stories of national liberators and heroic conquerors; a military career became his chief priority.
Convinced that he needed a wasta, or an influential intermediary to promote his application above the others, Nasser managed to secure a meeting with Under-Secretary of War Ibrahim Khairy Pasha, the person responsible for the academy's selection board, and requested his help. Khairy Pasha agreed and sponsored Nasser's second application, which was accepted in late 1937. Nasser focused on his military career from then on, and had little contact with his family. At the academy, he met Abdel Hakim Amer and Anwar Sadat, both of whom became important aides during his presidency. After graduating from the academy in July 1938, he was commissioned a second lieutenant in the infantry, and posted to Mankabad. It was here that Nasser and his closest comrades, including Sadat and Amer, first discussed their dissatisfaction at widespread corruption in the country and their desire to topple the monarchy. Sadat would later write that because of his "energy, clear-thinking, and balanced judgement", Nasser emerged as the group's natural leader.
thumb|upright|left|alt=Two seated men in military uniform and wearing fez hats|Nasser (right) with army comrades, 1940
In 1941, Nasser was posted to Khartoum, Sudan, which was part of Egypt at the time. Nasser returned to Sudan in September 1942 after a brief stay in Egypt, then secured a position as an instructor in the Cairo Royal Military Academy in May 1943. In 1942, the British Ambassador Miles Lampson marched into King Farouk's palace and ordered him to dismiss Prime Minister Hussein Sirri Pasha for having pro-Axis sympathies. Nasser saw the incident as a blatant violation of Egyptian sovereignty and wrote, "I am ashamed that our army has not reacted against this attack", and wished for "calamity" to overtake the British. Nasser was accepted into the General Staff College later that year. He began to form a group of young military officers with strong nationalist sentiments who supported some form of revolution. Nasser stayed in touch with the group's members primarily through Amer, who continued to seek out interested officers within the Egyptian Armed Forces' various branches and presented Nasser with a complete file on each of them.
1948 Arab–Israeli War
thumb|right|alt=Eight men in dressed in military fatigues standing before an organized assembly of weapons, mostly rifles and mortar. The first man from the left is not wearing a hat, while the remaining seven are wearing hats.|Nasser (first from left) with his unit in the Faluja pocket, displaying weapons captured from the Israeli Army during the 1948 war.
Nasser's first battlefield experience was in Palestine during the 1948 Arab–Israeli War. He initially volunteered to serve with the Arab Higher Committee (AHC) led by Mohammad Amin al-Husayni. Nasser met with and impressed al-Husayni, but was ultimately refused entry to the AHC's forces by the Egyptian government for reasons that were unclear.
In May 1948, following the British withdrawal, King Farouk sent the Egyptian army into Palestine, with Nasser serving in the 6th Infantry Battalion. During the war, he wrote of the Egyptian army's unpreparedness, saying "our soldiers were dashed against fortifications". Nasser was deputy commander of the Egyptian forces that secured the Faluja pocket. On 12 July, he was lightly wounded in the fighting. By August, his brigade was surrounded by the Israeli Army. Appeals for help from Jordan's Arab Legion went unheeded, but the brigade refused to surrender. Negotiations between Israel and Egypt finally resulted in the ceding of Faluja to Israel. According to veteran journalist Eric Margolis, the defenders of Faluja, "including young army officer Gamal Abdel Nasser, became national heroes" for enduring Israeli bombardment while isolated from their command.
The Egyptian singer Umm Kulthum hosted a public celebration for the officers' return despite reservations from the royal government, which had been pressured by the British to prevent the reception. The apparent difference in attitude between the government and the general public increased Nasser's determination to topple the monarchy. Nasser had also felt bitter that his brigade had not been relieved despite the resilience it displayed. He started writing his book Philosophy of the Revolution during the siege.
After the war, Nasser returned to his role as an instructor at the Royal Military Academy. He sent emissaries to forge an alliance with the Muslim Brotherhood in October 1948, but soon concluded that the religious agenda of the Brotherhood was not compatible with his nationalism. From then on, Nasser prevented the Brotherhood's influence over his cadres' activities without severing ties with the organization. Nasser was sent as a member of the Egyptian delegation to Rhodes in February 1949 to negotiate a formal armistice with Israel, and reportedly considered the terms to be humiliating, particularly because the Israelis were able to easily occupy the Eilat region while negotiating with the Arabs in March.
Revolution
Free Officers
thumb|right|alt=Eight men in dressed in military uniform, posing in a room around a rectangular table. All the men, except for third and fifth persons from the left are seated. The third and fifth person from the left are standing.|The Free Officers after the coup, 1953. Counterclockwise: Zakaria Mohieddin, Abdel Latif Boghdadi, Kamel el-Din Hussein (standing), Nasser (seated), Abdel Hakim Amer, Muhammad Naguib, Youssef Seddik, and Ahmad Shawki.
Nasser's return to Egypt coincided with Husni al-Za'im's Syrian coup d'état. Its success and evident popular support among the Syrian people encouraged Nasser's revolutionary pursuits. Soon after his return, he was summoned and interrogated by Prime Minister Ibrahim Abdel Hadi regarding suspicions that he was forming a secret group of dissenting officers. According to secondhand reports, Nasser convincingly denied the allegations. Abdel Hadi was also hesitant to take drastic measures against the army, especially in front of its chief of staff, who was present during the interrogation, and subsequently released Nasser. The interrogation pushed Nasser to speed up his group's activities.
After 1949, the group adopted the name "Association of Free Officers" and advocated "little else but freedom and the restoration of their country’s dignity". Nasser organized the Free Officers' founding committee, which eventually comprised fourteen men from different social and political backgrounds, including representation from Young Egypt, the Muslim Brotherhood, the Egyptian Communist Party, and the aristocracy. Nasser was unanimously elected chairman of the organization.
In the 1950 parliamentary elections, the Wafd Party of el-Nahhas gained a victory—mostly due to the absence of the Muslim Brotherhood, which boycotted the elections—and was perceived as a threat by the Free Officers as the Wafd had campaigned on demands similar to their own. Accusations of corruption against Wafd politicians began to surface, however, breeding an atmosphere of rumor and suspicion that consequently brought the Free Officers to the forefront of Egyptian politics. By then, the organization had expanded to around ninety members; according to Khaled Mohieddin, "nobody knew all of them and where they belonged in the hierarchy except Nasser". Nasser felt that the Free Officers were not ready to move against the government and, for nearly two years, he did little beyond officer recruitment and underground news bulletins.
On 11 October 1951, the Wafd government abrogated the 1936 Anglo-Egyptian Treaty, which had given the British control over the Suez Canal until 1956. The popularity of this move, as well as that of government-sponsored guerrilla attacks against the British, put pressure on Nasser to act. According to Sadat, Nasser decided to wage "a large scale assassination campaign". In January 1952, he and Hassan Ibrahim attempted to kill the royalist general Hussein Sirri Amer by firing their submachine guns at his car as he drove through the streets of Cairo. Instead of killing the general, the attackers wounded an innocent female passerby. Nasser recalled that her wails "haunted" him and firmly dissuaded him from undertaking similar actions in the future.
Sirri Amer was close to King Farouk, and was nominated for the presidency of the Officer's Club—normally a ceremonial office—with the king's backing. Nasser was determined to establish the independence of the army from the monarchy, and with Amer as the intercessor, resolved to field a nominee for the Free Officers. They selected Muhammad Naguib, a popular general who had offered his resignation to Farouk in 1942 over British high-handedness and was wounded three times in the Palestine War. Naguib won overwhelmingly and the Free Officers, through their connection with a leading Egyptian daily, al-Misri, publicized his victory while praising the nationalistic spirit of the army.
Revolution of 1952
thumb|right|alt=Three men seated and observing an event. The first man from the left is wearing a suit and fez, the second man is wearing a military uniform, and the third man is wearing military uniform with a cap. Behind them are three men standing, all dressed in military uniform. In the background is ab audience seated in bleachers|Leaders of Egypt following the ouster of King Farouk, November 1952. Seated, left to right: Sulayman Hafez, Muhammad Naguib and Nasser
On 25 January 1952, a confrontation between British forces and police at Ismailia resulted in the deaths of 40 Egyptian policemen, provoking riots in Cairo the next day which left 76 people dead. Afterwards, Nasser published a simple six-point program in Rose al-Yūsuf to dismantle feudalism and British influence in Egypt. In May, Nasser received word that Farouk knew the names of the Free Officers and intended to arrest them; he immediately entrusted Free Officer Zakaria Mohieddin with the task of planning the government takeover by army units loyal to the association.
The Free Officers' intention was not to install themselves in government, but to re-establish a parliamentary democracy. Nasser did not believe that a low-ranking officer like himself (a lieutenant colonel) would be accepted by the Egyptian people, and so selected General Naguib to be his "boss" and lead the coup in name. The revolution they had long sought was launched on 22 July and was declared a success the next day. The Free Officers seized control of all government buildings, radio stations, and police stations, as well as army headquarters in Cairo. While many of the rebel officers were leading their units, Nasser donned civilian clothing to avoid detection by royalists and moved around Cairo monitoring the situation. In a move to stave off foreign intervention two days before the revolution, Nasser had notified the American and British governments of his intentions, and both had agreed not to aid Farouk. Under pressure from the Americans, Nasser had agreed to exile the deposed king with an honorary ceremony.
On 18 June 1953, the monarchy was abolished and the Republic of Egypt declared, with Naguib as its first president. According to Aburish, after assuming power, Nasser and the Free Officers expected to become the "guardians of the people's interests" against the monarchy and the pasha class while leaving the day-to-day tasks of government to civilians. They asked former prime minister Ali Maher to accept reappointment to his previous position, and to form an all-civilian cabinet. The Free Officers then governed as the Revolutionary Command Council (RCC) with Naguib as chairman and Nasser as vice-chairman. Relations between the RCC and Maher grew tense, however, as the latter viewed many of Nasser's schemes—agrarian reform, abolition of the monarchy, reorganization of political parties—as too radical, culminating in Maher's resignation on 7 September. Naguib assumed the additional role of prime minister, and Nasser that of deputy prime minister. In September, the Agrarian Reform Law was put into effect. In Nasser's eyes, this law gave the RCC its own identity and transformed the coup into a revolution.
Preceding the reform law, in August 1952, communist-led riots broke out at textile factories in Kafr el-Dawwar, leading to a clash with the army that left nine people dead. While most of the RCC insisted on executing the riot's two ringleaders, Nasser opposed this. Nonetheless, the sentences were carried out. The Muslim Brotherhood supported the RCC, and after Naguib's assumption of power, demanded four ministerial portfolios in the new cabinet. Nasser turned down their demands and instead hoped to co-opt the Brotherhood by giving two of its members, who were willing to serve officially as independents, minor ministerial posts.
Road to presidency
Disputes with Naguib
thumb|right|alt=Two smiling men in military uniform seated in an open-top automobile. The first man on the left is pointing his hand in a gesture. Behind the automobile are men in uniform walking away from the vehicle|Nasser (right) and Muhammad Naguib (left) during celebrations marking the second anniversary of the 1952 revolution, July 1954
thumb|right|Gamal Abdel Nasser laughing at the Muslim Brotherhood for suggesting in 1953 that women should be required to wear the hijab and that Islamic law should be enforced across the country.
In January 1953, Nasser overcame opposition from Naguib and banned all political parties, creating a one-party system under the Liberation Rally, a loosely structured movement whose chief task was to organize pro-RCC rallies and lectures, with Nasser its secretary-general. Despite the dissolution order, Nasser was the only RCC member who still favored holding parliamentary elections, according to his fellow officer Abdel Latif Boghdadi. Although outvoted, he still advocated holding elections by 1956. In March 1953, Nasser led the Egyptian delegation negotiating a British withdrawal from the Suez Canal.
When Naguib began showing signs of independence from Nasser by distancing himself from the RCC's land reform decrees and drawing closer to Egypt's established political forces, namely the Wafd and the Brotherhood, Nasser resolved to depose him. In June, Nasser took control of the interior ministry post from Naguib loyalist Sulayman Hafez, and pressured Naguib to conclude the abolition of the monarchy.
On 25 February 1954, Naguib announced his resignation after the RCC held an official meeting without his presence two days prior. On 26 February, Nasser accepted the resignation, put Naguib under house arrest, and the RCC proclaimed Nasser as both RCC chairman and prime minister. As Naguib intended, a mutiny immediately followed, demanding Naguib's reinstatement and the RCC's dissolution. While visiting the striking officers at Military Headquarters (GHQ) to call for the mutiny's end, Nasser was initially intimidated into accepting their demands. However, on 27 February, Nasser's supporters in the army launched a raid on the GHQ, ending the mutiny. Later that day, hundreds of thousands of protesters, mainly belonging to the Brotherhood, called for Naguib's return and Nasser's imprisonment. In response, a sizable group within the RCC, led by Khaled Mohieddin, demanded Naguib's release and return to the presidency. Nasser acquiesced, but delayed Naguib's reinstatement until 4 March, allowing him to promote Amer to Commander of the Armed Forces—a position formerly occupied by Naguib.
On 5 March, Nasser's security coterie arrested thousands of participants in the uprising. As a ruse to rally opposition against a return to the pre-1952 order, the RCC decreed an end to restrictions on monarchy-era parties and the Free Officers' withdrawal from politics. The RCC succeeded in provoking the beneficiaries of the revolution, namely the workers, peasants, and petty bourgeois, to oppose the decrees, with one million transport workers launching a strike and thousands of peasants entering Cairo in protest in late March. Naguib sought to crack down on the protesters, but his requests were rebuffed by the heads of the security forces. On 29 March, Nasser announced the decrees' revocation in response to the "impulse of the street". Between April and June, hundreds of Naguib's supporters in the military were either arrested or dismissed, and Mohieddin was informally exiled to Switzerland to represent the RCC abroad. King Saud of Saudi Arabia attempted to mend relations between Nasser and Naguib, but to no avail.
Assuming chairmanship of RCC
thumb|Sound recording of 1954 assassination attempt on Nasser while he was addressing a crowd in Manshia, Alexandria.
On 26 October 1954, Muslim Brotherhood member Mohammed Abdel Latif attempted to assassinate Nasser while he was delivering a speech in Alexandria to celebrate the British military withdrawal. The speech was broadcast to the Arab world via radio. The gunman fired eight shots at Nasser, but all missed. Panic broke out in the mass audience, but Nasser maintained his posture and raised his voice to appeal for calm. With great emotion he exclaimed the following:
My countrymen, my blood spills for you and for Egypt. I will live for your sake and die for the sake of your freedom and honor. Let them kill me; it does not concern me so long as I have instilled pride, honor, and freedom in you. If Gamal Abdel Nasser should die, each of you shall be Gamal Abdel Nasser ... Gamal Abdel Nasser is of you and from you and he is willing to sacrifice his life for the nation.
thumb|right|alt=A man standing in an open-top vehicle and waving to a crowd of people surrounding the vehicle. There are several men seated in the vehicle and in another trailing vehicle, all dressed in military uniform|Nasser greeted by crowds in Alexandria one day after his announcement of the British withdrawal and the assassination attempt against him, 27 October 1954.
The crowd roared in approval and Arab audiences were electrified. The assassination attempt backfired, quickly playing into Nasser's hands. Upon returning to Cairo, he ordered one of the largest political crackdowns in the modern history of Egypt, with the arrests of thousands of dissenters, mostly members of the Brotherhood, but also communists, and the dismissal of 140 officers loyal to Naguib. Eight Brotherhood leaders were sentenced to death, although the sentence of its chief ideologue, Sayyid Qutb, was commuted to a 15-year imprisonment. Naguib was removed from the presidency and put under house arrest, but was never tried or sentenced, and no one in the army rose to defend him. With his rivals neutralized, Nasser became the undisputed leader of Egypt.
Nasser's street following was still too small to sustain his plans for reform and to secure him in office. To promote himself and the Liberation Rally, he gave speeches in a cross-country tour, and imposed controls over the country's press by decreeing that all publications had to be approved by the party to prevent "sedition". Both Umm Kulthum and Abdel Halim Hafez, the leading Arab singers of the era, performed songs praising Nasser's nationalism. Others produced plays denigrating his political opponents. According to his associates, Nasser orchestrated the campaign himself. Arab nationalist terms such as "Arab homeland" and "Arab nation" frequently began appearing in his speeches in 1954–55, whereas previously he had referred to the Arab "peoples" or the "Arab region". In January 1955, the RCC appointed him as their president, pending national elections.
Nasser made secret contacts with Israel in 1954–55, but determined that peace with Israel would be impossible, considering it an "expansionist state that viewed the Arabs with disdain". On 28 February 1955, Israeli troops attacked the Egyptian-held Gaza Strip with the stated aim of suppressing Palestinian fedayeen raids. Nasser did not feel that the Egyptian Army was ready for a confrontation and did not retaliate militarily. His failure to respond to Israeli military action demonstrated the ineffectiveness of his armed forces and constituted a blow to his growing popularity. Nasser subsequently ordered the tightening of the blockade on Israeli shipping through the Straits of Tiran and restricted the use of airspace over the Gulf of Aqaba by Israeli aircraft in early September. The Israelis re-militarized the al-Auja Demilitarized Zone on the Egyptian border on 21 September.
Simultaneous with Israel's February raid, the Baghdad Pact was formed between some regional allies of the UK. Nasser considered the Baghdad Pact a threat to his efforts to eliminate British military influence in the Middle East, and a mechanism to undermine the Arab League and "perpetuate [Arab] subservience to Zionism and [Western] imperialism". Nasser felt that if he was to maintain Egypt's regional leadership position he needed to acquire modern weaponry to arm his military. When it became apparent to him that Western countries would not supply Egypt under acceptable financial and military terms, Nasser turned to the Eastern Bloc and concluded an armaments agreement with Czechoslovakia on 27 September. Through the Czechoslovakian arms deal, the balance of power between Egypt and Israel was more or less equalized and Nasser's role as the Arab leader defying the West was enhanced.
Adoption of neutralism
thumb|upright|alt=Six men seated on a rug. The first two men from the left are dressed in white robes and white headdresses, the third and fourth men are dressed in military uniform, and the last two are wearing robes and headdresses|Nasser and Imam Ahmad of North Yemen facing the camera, Prince Faisal of Saudi Arabia in white robes in the background, Amin al-Husayni of the All-Palestine Government in the foreground at the Bandung Conference, April 1955
At the Bandung Conference in Indonesia in late April 1955, Nasser was treated as the leading representative of the Arab countries and was one of the most popular figures at the summit. He had paid earlier visits to Pakistan (April 9), India (April 14), Burma, and Afghanistan on the way to Bandung, and previously cemented a treaty of friendship with India in Cairo on 6 April, strengthening Egyptian–Indian relations on the international policy and economic development fronts.
Nasser mediated discussions between the pro-Western, pro-Soviet, and neutralist conference factions over the composition of the "Final Communique" addressing colonialism in Africa and Asia and the fostering of global peace amid the Cold War between the West and the Soviet Union. At Bandung Nasser sought a proclamation for the avoidance of international defense alliances, support for the independence of Tunisia, Algeria, and Morocco from French rule, support for the Palestinian right of return, and the implementation of UN resolutions regarding the Arab–Israeli conflict. He succeeded in lobbying the attendees to pass resolutions on each of these issues, notably securing the strong support of China and India.
Following Bandung, Nasser officially adopted the "positive neutralism" of Yugoslavian president Josip Broz Tito and Indian Prime Minister Jawaharlal Nehru as a principal theme of Egyptian foreign policy regarding the Cold War. Nasser was welcomed by large crowds of people lining the streets of Cairo on his return to Egypt on 2 May and was widely heralded in the press for his achievements and leadership in the conference. Consequently, Nasser's prestige was greatly boosted as was his self-confidence and image.
1956 constitution and presidency
thumb|upright|left|alt=A man wearing a suit inserting a piece of paper into a box. He is being photographed by cameramen|Nasser submitting his vote for the referendum of the proposed constitution, 23 June 1956
With his domestic position considerably strengthened, Nasser was able to secure primacy over his RCC colleagues and gained relatively unchallenged decision-making authority, particularly over foreign policy.
In January 1956, the new Constitution of Egypt was drafted, entailing the establishment of a single-party system under the National Union (NU), a movement Nasser described as the "cadre through which we will realize our revolution". The NU was a reconfiguration of the Liberation Rally, which Nasser determined had failed in generating mass public participation. In the new movement, Nasser attempted to incorporate more citizens, approved by local-level party committees, in order to solidify popular backing for his government. The NU would select a nominee for the presidential election whose name would be provided for public approval.
Nasser's nomination for the post and the new constitution were put to public referendum on 23 June and each was approved by an overwhelming majority. A 350-member National Assembly was established, elections for which were held in July 1957. Nasser had ultimate approval over all the candidates. The constitution granted women's suffrage, prohibited gender-based discrimination, and entailed special protection for women in the workplace. Coinciding with the new constitution and Nasser's presidency, the RCC dissolved itself and its members resigned their military commissions as part of the transition to civilian rule. During the deliberations surrounding the establishment of a new government, Nasser began a process of sidelining his rivals among the original Free Officers, while elevating his closest allies to high-ranking positions in the cabinet.
Nationalization of the Suez Canal
thumb|right|alt=A man in military uniform raising a flag up a pole. Behind him are other uniformed men and others wearing traditional, civilian dress|Nasser raising the Egyptian flag over the Suez Canal city of Port Said to celebrate the final British military withdrawal from the country, June 1956
After the three-year transition period ended with Nasser's official assumption of power, his domestic and independent foreign policies increasingly collided with the regional interests of the UK and France. The latter condemned his strong support for Algerian independence, and the UK's Eden government was agitated by Nasser's campaign against the Baghdad Pact. In addition, Nasser's adherence to neutralism regarding the Cold War, recognition of communist China, and arms deal with the Eastern bloc alienated the United States. On 19 July 1956, the US and UK abruptly withdrew their offer to finance construction of the Aswan Dam, citing concerns that Egypt's economy would be overwhelmed by the project.
Nasser was informed of the British–American withdrawal via a news statement while aboard a plane returning to Cairo from Belgrade, and took great offense. Although ideas for nationalizing the Suez Canal were in the offing after the UK agreed to withdraw its military from Egypt in 1954 (the last British troops left on 13 June 1956), journalist Mohamed Hassanein Heikal asserts that Nasser made the final decision to nationalize the waterway between 19 and 20 July. Nasser himself would later state that he decided on 23 July, after studying the issue and deliberating with some of his advisers from the dissolved RCC, namely Boghdadi and technical specialist Mahmoud Younis, beginning on 21 July. The rest of the RCC's former members were informed of the decision on 24 July, while the bulk of the cabinet was unaware of the nationalization scheme until hours before Nasser publicly announced it. According to Ramadan, Nasser's decision to nationalize the canal was a solitary decision, taken without consultation.
On 26 July 1956, Nasser gave a speech in Alexandria announcing the nationalization of the Suez Canal Company as a means to fund the Aswan Dam project in light of the British–American withdrawal. In the speech, he denounced British imperialism in Egypt and British control over the canal company's profits, and maintained that the Egyptian people had a right to sovereignty over the waterway, especially since "120,000 Egyptians had died (sic)" building it. The motion was technically in breach of the international agreement he had signed with the UK on 19 October 1954, although he ensured that all existing stockholders would be paid off.
The nationalization announcement was greeted very emotionally by the audience and, throughout the Arab world, thousands entered the streets shouting slogans of support. US ambassador Henry A. Byroade stated, "I cannot overemphasize [the] popularity of the Canal Company nationalization within Egypt, even among Nasser's enemies." Egyptian political scientist Mahmoud Hamad wrote that, prior to 1956, Nasser had consolidated control over Egypt's military and civilian bureaucracies, but it was only after the canal's nationalization that he gained near-total popular legitimacy and firmly established himself as the "charismatic leader" and "spokesman for the masses not only in Egypt, but all over the Third World". According to Aburish, this was Nasser's largest pan-Arab triumph at the time and "soon his pictures were to be found in the tents of Yemen, the souks of Marrakesh, and the posh villas of Syria". The official reason given for the nationalization was that funds from the canal would be used for the construction of the dam in Aswan. That same day, Egypt closed the canal to Israeli shipping.
Suez Crisis
thumb|right|thumbtime=2:16|Movietone newsreels reporting Nasser's nationalization of the Suez Canal and both domestic and Western reactions
France and the UK, the largest shareholders in the Suez Canal Company, saw its nationalization as yet another hostile measure aimed at them by the Egyptian government. Nasser was aware that the canal's nationalization would instigate an international crisis and believed the prospect of military intervention by the two countries was 80 per cent likely. He believed, however, that the UK would not be able to intervene militarily for at least two months after the announcement, and dismissed Israeli action as "impossible". In early October, the UN Security Council met on the matter of the canal's nationalization and adopted a resolution recognizing Egypt's right to control the canal as long as it continued to allow passage through it for foreign ships. According to Heikal, after this agreement, "Nasser estimated that the danger of invasion had dropped to 10 per cent". Shortly thereafter, however, the UK, France, and Israel made a secret agreement to take over the Suez Canal, occupy the Suez Canal zone, and topple Nasser.
On 29 October 1956, Israeli forces crossed the Sinai Peninsula, overwhelmed Egyptian army posts, and quickly advanced to their objectives. Two days later, British and French planes bombarded Egyptian airfields in the canal zone. Nasser ordered the military's high command to withdraw the Egyptian Army from Sinai to bolster the canal's defenses. Moreover, he feared that if the armored corps was dispatched to confront the Israeli invading force and the British and French subsequently landed in the canal city of Port Said, Egyptian armor in the Sinai would be cut off from the canal and destroyed by the combined tripartite forces. Amer strongly disagreed, insisting that Egyptian tanks meet the Israelis in battle. The two had a heated exchange on 3 November, and Amer conceded. Nasser also ordered blockage of the canal by sinking or otherwise disabling forty-nine ships at its entrance.
Despite the commanded withdrawal of Egyptian troops, about 2,000 Egyptian soldiers were killed during engagement with Israeli forces, and some 5,000 Egyptian soldiers were captured by the Israeli Army. Amer and Salah Salem proposed requesting a ceasefire, with Salem further recommending that Nasser surrender himself to British forces. Nasser berated Amer and Salem, and vowed, "Nobody is going to surrender." Nasser assumed military command. Despite the relative ease in which Sinai was occupied, Nasser's prestige at home and among Arabs was undamaged. To counterbalance the Egyptian Army's dismal performance, Nasser authorized the distribution of about 400,000 rifles to civilian volunteers and hundreds of militias were formed throughout Egypt, many led by Nasser's political opponents.
It was at Port Said that Nasser saw a confrontation with the invading forces as being the strategic and psychological focal point of Egypt's defense. A third infantry battalion and hundreds of national guardsmen were sent to the city as reinforcements, while two regular companies were dispatched to organize popular resistance. Nasser and Boghdadi traveled to the canal zone to boost the morale of the armed volunteers. According to Boghdadi's memoirs, Nasser described the Egyptian Army as "shattered" as he saw the wreckage of Egyptian military equipment en route. When British and French forces landed in Port Said on 5–6 November, its local militia put up a stiff resistance, resulting in street-to-street fighting. The Egyptian Army commander in the city was preparing to request terms for a ceasefire, but Nasser ordered him to desist. The British-French forces managed to largely secure the city by 7 November. Between 750 and 1,000 Egyptians were killed in the battle for Port Said.
The US Eisenhower administration condemned the tripartite invasion, and supported UN resolutions demanding withdrawal and a United Nations Emergency Force (UNEF) to be stationed in Sinai. Nasser commended Eisenhower, stating he played the "greatest and most decisive role" in stopping the "tripartite conspiracy". By the end of December, British and French forces had totally withdrawn from Egyptian territory, while Israel completed its withdrawal in March 1957 and released all Egyptian prisoners of war. As a result of the Suez Crisis, Nasser brought in a set of regulations imposing rigorous requirements for residency and citizenship as well as forced expulsions, mostly affecting British and French nationals and Jews with foreign nationality, as well as many Egyptian Jews. Some 25,000 Jews, almost half of the Jewish community, left in 1956, mainly for Israel, Europe, the United States and South America (Jewish Refugees from Arab Countries, Jewishvirtuallibrary.org).
After the fighting ended, Amer accused Nasser of provoking an unnecessary war and then blaming the military for the result. On 8 April, the canal was reopened, and Nasser's political position was enormously enhanced by the widely perceived failure of the invasion and attempt to topple him. British diplomat Anthony Nutting claimed the crisis "established Nasser finally and completely" as the rayyes (president) of Egypt.
Pan-Arabism and socialism
thumb|right|alt=Five men standing side-by-side behind a table with documents on it. All the men are wearing suits and ties, with the exception of the man in the middle, who is wearing a traditional robe and headdress. There are three men standing behind them.|The signing of the regional defense pact between Egypt, Saudi Arabia, Syria and Jordan, January 1957. At the forefront, from left to right: Prime Minister Sulayman al-Nabulsi of Jordan, King Hussein of Jordan, King Saud of Saudi Arabia, Nasser, Prime Minister Sabri al-Asali of Syria
By 1957, pan-Arabism had become the dominant ideology in the Arab world, and the average Arab citizen considered Nasser his undisputed leader. Historian Adeed Dawisha credited Nasser's status to his "charisma, bolstered by his perceived victory in the Suez Crisis". The Cairo-based Voice of the Arabs radio station spread Nasser's ideas of united Arab action throughout the Arabic-speaking world, so much so that historian Eugene Rogan wrote, "Nasser conquered the Arab world by radio." Lebanese sympathizers of Nasser and the Egyptian embassy in Beirut—the press center of the Arab world—bought out Lebanese media outlets to further disseminate Nasser's ideals. Egypt also expanded its policy of secondment, dispatching thousands of high-skilled Egyptian professionals (usually politically-active teachers) across the region. Nasser also enjoyed the support of Arab nationalist civilian and paramilitary organizations throughout the region. His followers were numerous and well-funded, but lacked any permanent structure and organization. They called themselves "Nasserites", despite Nasser's objection to the label (he preferred the term "Arab nationalists").
In January 1957, the US adopted the Eisenhower Doctrine and pledged to prevent the spread of communism and its perceived agents in the Middle East. Although Nasser was an opponent of communism in the region, his promotion of pan-Arabism was viewed as a threat by pro-Western states in the region. Eisenhower tried to isolate Nasser and reduce his regional influence by attempting to transform King Saud into a counterweight. Also in January, the elected Jordanian prime minister and Nasser supporter Sulayman al-Nabulsi brought Jordan into a military pact with Egypt, Syria, and Saudi Arabia.
Relations between Nasser and King Hussein deteriorated in April when Hussein implicated Nasser in two coup attempts against him—although Nasser's involvement was never established—and dissolved al-Nabulsi's cabinet. Nasser subsequently slammed Hussein on Cairo radio as being "a tool of the imperialists". Relations with King Saud also became antagonistic as the latter began to fear that Nasser's increasing popularity in Saudi Arabia was a genuine threat to the royal family's survival. Despite opposition from the governments of Jordan, Saudi Arabia, Iraq, and Lebanon, Nasser maintained his prestige among their citizens and those of other Arab countries.
By the end of 1957, Nasser nationalized all remaining British and French assets in Egypt, including the tobacco, cement, pharmaceutical, and phosphate industries. When efforts to offer tax incentives and attract outside investments yielded no tangible results, he nationalized more companies and made them a part of his economic development organization. He stopped short of total government control: two-thirds of the economy was still in private hands. This effort achieved a measure of success, with increased agricultural production and investment in industrialization. Nasser initiated the Helwan steelworks, which subsequently became Egypt's largest enterprise, providing the country with steel and tens of thousands of jobs. Nasser also decided to cooperate with the Soviet Union in the construction of the Aswan Dam to replace the withdrawal of US funds.
United Arab Republic
thumb|right|Nasser's announcement of the United Arab Republic, 23 February 1958
thumb|right|Newsreel clip about Nasser and Quwatli's establishment of United Arab Republic
Despite his popularity with the people of the Arab world, by mid-1957 his only regional ally was Syria. In September, Turkish troops massed along the Syrian border, giving credence to rumors that the Baghdad Pact countries were attempting to topple Syria's leftist government. Nasser sent a contingent force to Syria as a symbolic display of solidarity, further elevating his prestige in the Arab world, and particularly among Syrians.
As political instability grew in Syria, delegations from the country were sent to Nasser demanding immediate unification with Egypt. Nasser initially turned down the request, citing the two countries' incompatible political and economic systems, lack of contiguity, the Syrian military's record of intervention in politics, and the deep factionalism among Syria's political forces. However, in January 1958, a second Syrian delegation managed to convince Nasser of an impending communist takeover and a consequent slide to civil strife. Nasser subsequently opted for union, albeit on the condition that it would be a total political merger with him as its president, to which the delegates and Syrian president Shukri al-Quwatli agreed. On 1 February, the United Arab Republic (UAR) was proclaimed and, according to Dawisha, the Arab world reacted in "stunned amazement, which quickly turned into uncontrolled euphoria." Nasser ordered a crackdown against Syrian communists, dismissing many of them from their governmental posts.
thumb|left|Nasser seated alongside Crown Prince Muhammad al-Badr of North Yemen (center) and Shukri al-Quwatli (right), February 1958. North Yemen joined the UAR to form the United Arab States, a loose confederation.|alt=Three important fellows on a couch, two in suits
On a surprise visit to Damascus to celebrate the union on 24 February, Nasser was welcomed by crowds in the hundreds of thousands. Crown Prince Imam Badr of North Yemen was dispatched to Damascus with proposals to include his country in the new republic. Nasser agreed to establish a loose federal union with Yemen—the United Arab States—in place of total integration. While Nasser was in Syria, King Saud planned to have him assassinated on his return flight to Cairo. On 4 March, Nasser addressed the masses in Damascus and waved before them the Saudi check given to the Syrian security chief Abdel Hamid Sarraj, who, unbeknownst to the Saudis, was an ardent Nasser supporter, as payment to shoot down Nasser's plane. As a consequence of Saud's plot, he was forced by senior members of the Saudi royal family to informally cede most of his powers to his brother, King Faisal, a major Nasser opponent who advocated pan-Islamic unity over pan-Arabism.
A day after announcing the attempt on his life, Nasser established a new provisional constitution proclaiming a 600-member National Assembly (400 from Egypt and 200 from Syria) and the dissolution of all political parties. Nasser gave each of the provinces two vice-presidents: Boghdadi and Amer in Egypt, and Sabri al-Asali and Akram al-Hawrani in Syria. Nasser then left for Moscow to meet with Nikita Khrushchev. At the meeting, Khrushchev pressed Nasser to lift the ban on the Communist Party, but Nasser refused, stating it was an internal matter which was not a subject of discussion with outside powers. Khrushchev was reportedly taken aback and denied he had meant to interfere in the UAR's affairs. The matter was settled as both leaders sought to prevent a rift between their two countries.
Influence on the Arab world
In Lebanon, clashes between pro-Nasser factions and supporters of then-President Camille Chamoun, a staunch Nasser opponent, culminated in civil strife by May. The former sought to unite with the UAR, while the latter sought Lebanon's continued independence. Nasser delegated oversight of the issue to Sarraj, who provided limited aid to Nasser's Lebanese supporters through money, light arms, and officer training—short of the large-scale support that Chamoun alleged. Nasser did not covet Lebanon, seeing it as a "special case", but sought to prevent Chamoun from securing a second presidential term.
thumb|left|alt=Two men standing side-by-side in the forefront, wearing overcoats. Behind them are several men in military uniform or suits and ties standing and saluting or making no gestures.|Nasser (right) and Lebanese president Fuad Chehab (to Nasser's right) at the Syrian–Lebanese border during talks to end the crisis in Lebanon. Akram al-Hawrani stands third to Nasser's left, and Abdel Hamid Sarraj stands to Chehab's right, March 1959.
On 14 July, Iraqi army officers Abdel Karim Qasim and Abdel Salam Aref overthrew the Iraqi monarchy and, the next day, Iraqi prime minister and Nasser's chief Arab antagonist, Nuri al-Said, was killed. Nasser recognized the new government and stated that "any attack on Iraq was tantamount to an attack on the UAR". On 15 July, US marines landed in Lebanon, and British special forces in Jordan, upon the request of those countries' governments to prevent them from falling to pro-Nasser forces. Nasser felt that the revolution in Iraq left the road for pan-Arab unity unblocked. On 19 July, for the first time, he declared that he was opting for full Arab union, although he had no plan to merge Iraq with the UAR. While most members of the Iraqi Revolutionary Command Council (RCC) favored Iraqi-UAR unity, Qasim sought to keep Iraq independent and resented Nasser's large popular base in the country.
In the fall of 1958, Nasser formed a tripartite committee consisting of Zakaria Mohieddin, al-Hawrani, and Salah Bitar to oversee developments in Syria. By moving the latter two, who were Ba'athists, to Cairo, he neutralized important political figures who had their own ideas about how Syria should be run. He put Syria under Sarraj, who effectively reduced the province to a police state by imprisoning and exiling landholders who objected to the introduction of Egyptian agricultural reform in Syria, as well as communists. Following the Lebanese election of Fuad Chehab in September 1958, relations between Lebanon and the UAR improved considerably. On 25 March 1959, Chehab and Nasser met at the Lebanese–Syrian border and compromised on an end to the Lebanese crisis.
thumb|right|alt=The back of a man waving to the throng below|Nasser waving to crowds in Damascus, Syria, October 1960
Relations between Nasser and Qasim grew increasingly bitter on 9 March, after Qasim's forces suppressed a rebellion in Mosul, launched a day earlier by a pro-Nasser Iraqi RCC officer backed by UAR authorities. Nasser had considered dispatching troops to aid his Iraqi sympathizers, but decided against it. He clamped down on Egyptian communist activity due to the key backing Iraqi communists provided Qasim. Several influential communists were arrested, including Nasser's old comrade Khaled Mohieddin, who had been allowed to re-enter Egypt in 1956.
By December, the political situation in Syria was faltering and Nasser responded by appointing Amer as governor-general alongside Sarraj. Syria's leaders opposed the appointment and many resigned from their government posts. Nasser later met with the opposition leaders and in a heated moment, exclaimed that he was the "elected" president of the UAR and those who did not accept his authority could "walk away".
Collapse of the union and aftermath
Opposition to the union mounted among some of Syria's key elements, namely the socioeconomic, political, and military elites. In response to Syria's worsening economy, which Nasser attributed to its control by the bourgeoisie, in July 1961, Nasser decreed socialist measures that nationalized wide-ranging sectors of the Syrian economy. He also dismissed Sarraj in September to curb the growing political crisis. Aburish states that Nasser was not fully capable of addressing Syrian problems because they were "foreign to him". In Egypt, the economic situation was more positive, with a GNP growth of 4.5 percent and a rapid growth of industry. In 1960, Nasser nationalized the Egyptian press, which had already been cooperating with his government, in order to steer coverage towards the country's socioeconomic issues and galvanize public support for his socialist measures.
On 28 September 1961, secessionist army units launched a coup in Damascus, declaring Syria's secession from the UAR. In response, pro-union army units in northern Syria revolted and pro-Nasser protests occurred in major Syrian cities. Nasser sent Egyptian special forces to Latakia to bolster his allies, but withdrew them two days later, citing a refusal to allow inter-Arab fighting. Addressing the UAR's breakup on 5 October, Nasser accepted personal responsibility and declared that Egypt would recognize an elected Syrian government. He privately blamed interference by hostile Arab governments. According to Heikal, Nasser suffered something resembling a nervous breakdown after the dissolution of the union; he began to smoke more heavily and his health began to deteriorate.
Revival on regional stage
thumb|right|alt=Three important men walking alongside each other.|Nasser (center) receiving Algerian president Ahmed Ben Bella (right) and Iraqi president Abdel Salam Aref (left) for the Arab League summit in Alexandria, September 1964. Ben Bella and Aref were close allies of Nasser.
Nasser's regional position changed unexpectedly when Yemeni officers led by Nasser supporter Abdullah al-Sallal overthrew Imam Badr of North Yemen on 27 September 1962. Al-Badr and his tribal partisans began receiving increasing support from Saudi Arabia to help reinstate the kingdom, while Nasser subsequently accepted a request by Sallal to militarily aid the new government on 30 September. Consequently, Egypt became increasingly embroiled in the drawn-out civil war until it withdrew its forces in 1967. Most of Nasser's old colleagues had questioned the wisdom of continuing the war, but Amer reassured Nasser of their coming victory. Nasser later remarked in 1968 that intervention in Yemen was a "miscalculation".
In July 1962, Algeria became independent of France. As a staunch political and financial supporter of the Algerian independence movement, Nasser considered the country's independence to be a personal victory. Amid these developments, a pro-Nasser clique in the Saudi royal family led by Prince Talal defected to Egypt, along with the Jordanian chief of staff, in early 1963.
thumb|left|alt=Several men in different clothing standing before a crowd of people.|Nasser before Yemeni crowds on his arrival to Sana'a, April 1964. In front of Nasser and giving a salute is Yemeni President Abdullah al-Sallal
On 8 February 1963, a military coup in Iraq led by a Ba'athist–Nasserist alliance toppled Qasim, who was subsequently shot dead. Abdel Salam Aref, a Nasserist, was chosen to be the new president. A similar alliance toppled the Syrian government on 8 March. On 14 March, the new Iraqi and Syrian governments sent Nasser delegations to push for a new Arab union. At the meeting, Nasser lambasted the Ba'athists for "facilitating" Syria's split from the UAR, and asserted that he was the "leader of the Arabs". A transitional unity agreement stipulating a federal system was signed by the parties on 17 April and the new union was set to be established in May 1965. However, the agreement fell apart weeks later when Syria's Ba'athists purged Nasser's supporters from the officers corps. A failed counter-coup by a Nasserist colonel followed, after which Nasser condemned the Ba'athists as "fascists".
In January 1964, Nasser called for an Arab League summit in Cairo to establish a unified Arab response against Israel's plans to divert the Jordan River's waters for economic purposes, which Syria and Jordan deemed an act of war. Nasser blamed Arab divisions for what he deemed "the disastrous situation". He discouraged Syria and Palestinian guerrillas from provoking the Israelis, conceding that he had no plans for war with Israel. During the summit, Nasser developed cordial relations with King Hussein, and ties were mended with the rulers of Saudi Arabia, Syria, and Morocco. In May, Nasser moved to formally share his leadership position over the Palestine issue by initiating the creation of the Palestine Liberation Organization (PLO). In practice, Nasser used the PLO to wield control over the Palestinian fedayeen. Its head was to be Ahmad Shukeiri, Nasser's personal nominee.
After years of foreign policy coordination and developing ties, Nasser, President Sukarno of Indonesia, President Tito of Yugoslavia, and Prime Minister Nehru of India founded the Non-Aligned Movement (NAM) in 1961. Its declared purpose was to solidify international non-alignment and promote world peace amid the Cold War, end colonization, and increase economic cooperation among developing countries. In 1964, Nasser was made president of the NAM and held the second conference of the organization in Cairo.
Nasser played a significant part in the strengthening of African solidarity in the late 1950s and early 1960s, although his continental leadership role had increasingly passed to Algeria since 1962. During this period, Nasser made Egypt a refuge for anti-colonial leaders from several African countries and allowed the broadcast of anti-colonial propaganda from Cairo. Beginning in 1958, Nasser had a key role in the discussions among African leaders that led to the establishment of the Organisation of African Unity (OAU) in 1963.
Modernization efforts and internal dissent
thumb|right|alt=Several men walking forward, side-by-side. There are five men in the forefront, all wearing suits and ties. In the background is an ornate building with two minarets and a dome.|Government officials attending Friday prayers at al-Azhar Mosque, 1959. From left to right; Interior Minister Zakaria Mohieddin, Nasser, Social Affairs Minister Hussein el-Shafei and National Union Secretary Anwar Sadat
Al-Azhar
In 1961, Nasser sought to firmly establish Egypt as the leader of the Arab world and to promote a second revolution in Egypt with the purpose of merging Islamic and socialist thinking. To achieve this, he initiated several reforms to modernize al-Azhar, which serves as the de facto leading authority in Sunni Islam, and to ensure its prominence over the Muslim Brotherhood and the more conservative Wahhabism promoted by Saudi Arabia. Nasser had used al-Azhar's most willing ulema (scholars) as a counterweight to the Brotherhood's Islamic influence, starting in 1953.
Nasser instructed al-Azhar to make changes to its syllabus that trickled down to the lower levels of Egyptian education, consequently allowing the establishment of coeducational schools and the introduction of evolution into the school curriculum. The reforms also included the merger of religious and civil courts. Moreover, Nasser forced al-Azhar to issue a fatwā admitting Shia Muslims, Alawites, and Druze into mainstream Islam; for centuries prior, al-Azhar had deemed them to be "heretics".
Rivalry with Amer
Following Syria's secession, Nasser grew concerned with Amer's inability to train and modernize the army, and with the state within a state that Amer had created in the military command and intelligence apparatus. In late 1961, Nasser established the Presidential Council and granted it the authority to approve all senior military appointments, instead of leaving this responsibility solely to Amer. Moreover, he instructed that the primary criterion for promotion should be merit and not personal loyalties. Nasser retracted the initiative after Amer's allies in the officers corps threatened to mobilize against him.
In early 1962 Nasser again attempted to wrest control of the military command from Amer. Amer responded by directly confronting Nasser for the first time and secretly rallying his loyalist officers. Nasser ultimately backed down, wary of a possible violent confrontation between the military and his civilian government. According to Boghdadi, the stress caused by the UAR's collapse and Amer's increasing autonomy forced Nasser, who already had diabetes, to practically live on painkillers from then on.
National Charter and second term
thumb|right|Nasser being sworn in for a second term as Egypt's president, 25 March 1965|alt=Two men on a stage, with a flag hung behind them. One is reading from a paper, while the other is looking at the audience. Cameras are shooting the event, while most of the audience is looking at the stage.
In October 1961, Nasser embarked on a major nationalization program for Egypt, believing the total adoption of socialism was the answer to his country's problems and would have prevented Syria's secession. In order to organize and solidify his popular base with Egypt's citizens and counter the army's influence, Nasser introduced the National Charter in 1962 and a new constitution. The charter called for universal health care, affordable housing, vocational schools, greater women's rights and a family planning program, as well as widening the Suez Canal.
Nasser also attempted to maintain oversight of the country's civil service to prevent it from inflating and consequently becoming a burden to the state. New laws provided workers with a minimum wage, profit shares, free education, free health care, reduced working hours, and encouragement to participate in management. Land reforms guaranteed the security of tenant farmers, promoted agricultural growth, and reduced rural poverty. As a result of the 1962 measures, government ownership of Egyptian business reached 51 percent, and the National Union was renamed the Arab Socialist Union (ASU). With these measures came more domestic repression, as thousands of Islamists were imprisoned, including dozens of military officers. Nasser's tilt toward a Soviet-style system led his aides Boghdadi and Hussein el-Shafei to submit their resignations in protest.
In a presidential referendum, Nasser was re-elected to a second term as UAR president and took his oath on 25 March 1965. He was the only candidate for the position, with virtually all of his political opponents forbidden by law from running for office, and his fellow party members reduced to mere followers. That same year, Nasser had the Muslim Brotherhood's chief ideologue Sayyed Qutb imprisoned. Qutb was charged and found guilty by the court of plotting to assassinate Nasser, and was executed in 1966. Beginning in 1966, as Egypt's economy slowed and government debt became increasingly burdensome, Nasser began to ease state control over the private sector, encouraging state-owned bank loans to private business and introducing incentives to increase exports. During the 1960s, the Egyptian economy went from sluggishness to the verge of collapse, society became less free, and Nasser's appeal waned considerably.
Six-Day War
thumb|right|alt=Three important men walking in a hall, the first and the third are in military garb, the second is in a suit and tie. Behind them are three other men|Nasser (center), King Hussein of Jordan (left) and Egyptian Army Chief of Staff Abdel Hakim Amer (right) at the Supreme Command of the Armed Forces headquarters in Cairo before signing a mutual defense pact, 30 May 1967
In mid-May 1967, the Soviet Union issued warnings to Nasser of an impending Israeli attack on Syria, although Chief of Staff Mohamed Fawzi considered the warnings to be "baseless". According to Kandil, without Nasser's authorization, Amer used the Soviet warnings as a pretext to dispatch troops to Sinai on 14 May, and Nasser subsequently demanded the withdrawal of the United Nations Emergency Force (UNEF). Earlier that day, Nasser received a warning from King Hussein of Israeli-American collusion to drag Egypt into war. The message had originally been received by Amer on 2 May, but was withheld from Nasser until the Sinai deployment on 14 May. Although in the preceding months Hussein and Nasser had accused each other of avoiding a fight with Israel, Hussein was nonetheless wary that an Egyptian-Israeli war would risk the West Bank's occupation by Israel. Nasser still felt that the US would restrain Israel from attacking due to assurances that he received from the US and Soviet Union. In turn, he also reassured both powers that Egypt would only act defensively.
On 21 May, Amer asked Nasser to order the Straits of Tiran blockaded, a move Nasser believed Israel would use as a casus belli. Amer reassured him that the army was prepared for confrontation, but Nasser doubted Amer's assessment of the military's readiness. According to Nasser's vice president Zakaria Mohieddin, although "Amer had absolute authority over the armed forces, Nasser had his ways of knowing what was really going on". Moreover, Amer anticipated an impending Israeli attack and advocated a preemptive strike. Nasser refused the call, having determined that the air force lacked pilots and that Amer's handpicked officers were incompetent. Still, Nasser concluded that if Israel attacked, Egypt's quantitative advantage in manpower and arms could stave off Israeli forces for at least two weeks, allowing for diplomacy towards a ceasefire. Towards the end of May, under increasing pressure to act from both the general Arab populace and various Arab governments, Nasser gradually abandoned his posture of deterrence and resigned himself to the inevitability of war. On 26 May Nasser declared, "our basic objective will be to destroy Israel". On 30 May, King Hussein committed Jordan to an alliance with Egypt and Syria.
On the morning of 5 June, the Israeli Air Force struck Egyptian air fields, destroying much of the Egyptian Air Force. Before the day ended, Israeli armored units had cut through Egyptian defense lines and captured the town of el-Arish. The next day, Amer ordered the immediate withdrawal of Egyptian troops from Sinai—causing the majority of Egyptian casualties during the war. Israel quickly captured Sinai and the Gaza Strip from Egypt, the West Bank from Jordan, and the Golan Heights from Syria.
According to Sadat, it was only when the Israelis cut off the Egyptian garrison at Sharm el-Sheikh that Nasser became aware of the situation's gravity. After hearing of the attack, he rushed to army headquarters to inquire about the military situation. The simmering conflict between Nasser and Amer subsequently came to the fore, and officers present reported the pair burst into "a nonstop shouting match". The Supreme Executive Committee, set up by Nasser to oversee the conduct of the war, attributed the repeated Egyptian defeats to the Nasser–Amer rivalry and Amer's overall incompetence. According to Egyptian diplomat Ismail Fahmi, who became foreign minister during Sadat's presidency, the Israeli invasion and Egypt's consequent defeat was a result of Nasser's dismissal of all rational analysis of the situation and his undertaking of a series of irrational decisions.
Resignation and aftermath
During the first four days of the war, the general population of the Arab world believed Arab radio station fabrications of imminent Arab victory. On 9 June, Nasser appeared on television to inform Egypt's citizens of their country's defeat. He announced his resignation on television later that day, and ceded all presidential powers to his then-Vice President Zakaria Mohieddin, who had no prior information of this decision and refused to accept the post. Hundreds of thousands of sympathizers poured into the streets in mass demonstrations throughout Egypt and across the Arab world rejecting his resignation, chanting, "We are your soldiers, Gamal!" Nasser retracted his decision the next day.
thumb|upright|left|alt=A crowd of people, many waving. One person is holding up a portrait of a man|Egyptian demonstrators protesting Nasser's resignation, 1967
On 11 July, Nasser replaced Amer with Mohamed Fawzi as general commander, over the protestations of Amer's loyalists in the military, 600 of whom marched on army headquarters and demanded Amer's reinstatement. After Nasser sacked thirty of the loyalists in response, Amer and his allies devised a plan to topple him on 27 August. Nasser was tipped off about their activities and, after several invitations, he convinced Amer to meet him at his home on 24 August. Nasser confronted Amer about the coup plot, which he denied before being arrested by Mohieddin. Amer committed suicide on 14 September. Despite his souring relationship with Amer, Nasser spoke of losing "the person closest to [him]". Thereafter, Nasser began a process of depoliticizing the armed forces, arresting dozens of leading military and intelligence figures loyal to Amer.
At the 29 August Arab League summit in Khartoum, Nasser's usual commanding position had receded as the attending heads of state expected Saudi King Faisal to lead. A ceasefire in the Yemen War was declared and the summit concluded with the Khartoum Resolution. The Soviet Union soon resupplied the Egyptian military with about half of its former arsenals and broke diplomatic relations with Israel. Nasser cut relations with the US following the war, and, according to Aburish, his policy of "playing the superpowers against each other" ended. In November, Nasser accepted UN Resolution 242, which called for Israel's withdrawal from territories acquired in the war. His supporters claimed Nasser's move was meant to buy time to prepare for another confrontation with Israel, while his detractors believed his acceptance of the resolution signaled a waning interest in Palestinian independence.
Final years of presidency
thumb|right|alt=A man wearing suit peering out across a body of water with binoculars from an opening in dirt mound. Behind him are three men in military uniform|Nasser observing the Suez front with Egyptian officers during the 1968 War of Attrition. General Commander Mohamed Fawzi is directly behind Nasser, and to their left is Chief of Staff Abdel Moneim Riad.
Domestic reforms and governmental changes
Nasser appointed himself the additional roles of prime minister and supreme commander of the armed forces on 19 June 1967. Angry at the military court's perceived leniency with air force officers charged with negligence during the 1967 war, workers and students launched protests calling for major political reforms in late February 1968. Nasser responded to the demonstrations, the most significant public challenge to his rule since workers' protests in March 1954, by removing most military figures from his cabinet and appointing eight civilians in place of several high-ranking members of the Arab Socialist Union (ASU). By 3 March, Nasser directed Egypt's intelligence apparatus to focus on external rather than domestic espionage, and declared the "fall of the mukhabarat state".
On 30 March, Nasser proclaimed a manifesto stipulating the restoration of civil liberties, greater parliamentary independence from the executive, major structural changes to the ASU, and a campaign to rid the government of corrupt elements. A public referendum approved the proposed measures in May, and elections were subsequently held for the Supreme Executive Committee, the ASU's highest decision-making body. Observers noted that the declaration signaled an important shift from political repression to liberalization, although its promises would largely go unfulfilled.
Nasser appointed Sadat and Hussein el-Shafei as his vice presidents in December 1969. By then, relations with his other original military comrades, namely Khaled and Zakaria Mohieddin and former vice president Sabri, had become strained. By mid-1970, Nasser pondered replacing Sadat with Boghdadi after reconciling with the latter.
War of Attrition and regional diplomatic initiatives
thumb|right|alt=Three important seated men conferring. The first man from the left is wearing a checkered headdress, sunglasses and jodhpurs, the second man is wearing a suit and tie, and the third is wearing military uniform. Standing behind them are suited men.|Nasser brokering a ceasefire between Yasser Arafat of the PLO (left) and King Hussein of Jordan (right) at the emergency Arab League summit in Cairo on 27 September 1970, one day before Nasser's death
Meanwhile, in January 1968, Nasser commenced the War of Attrition to reclaim territory captured by Israel, ordering attacks against Israeli positions east of the then-blockaded Suez Canal. In March, Nasser offered Yasser Arafat's Fatah movement arms and funds after their performance against Israeli forces in the Battle of Karameh that month. He also advised Arafat to think of peace with Israel and the establishment of a Palestinian state comprising the West Bank and the Gaza Strip. Nasser effectively ceded his leadership of the "Palestine issue" to Arafat.
Israel retaliated against Egyptian shelling with commando raids, artillery shelling and air strikes. This resulted in an exodus of civilians from Egyptian cities along the Suez Canal's western bank. Nasser ceased all military activities and began a program to build a network of internal defenses, while receiving the financial backing of various Arab states. The war resumed in March 1969. In November, Nasser brokered an agreement between the PLO and the Lebanese military that granted Palestinian guerrillas the right to use Lebanese territory to attack Israel.
In June 1970, Nasser accepted the US-sponsored Rogers Plan, which called for an end to hostilities and an Israeli withdrawal from Egyptian territory, but it was rejected by Israel, the PLO, and most Arab states except Jordan. Nasser had initially rejected the plan, but conceded under pressure from the Soviet Union, which feared that escalating regional conflict could drag it into a war with the US. He also determined that a ceasefire could serve as a tactical step toward the strategic goal of recapturing the Suez Canal. Nasser forestalled any movement toward direct negotiations with Israel. In dozens of speeches and statements, Nasser posited the equation that any direct peace talks with Israel were tantamount to surrender.
Following Nasser's acceptance, Israel agreed to a ceasefire and Nasser used the lull in fighting to move SAM missiles towards the canal zone.
Meanwhile, tensions in Jordan between an increasingly autonomous PLO and King Hussein's government had been simmering; following the Dawson's Field hijackings, a military campaign was launched to root out PLO forces. The offensive raised the risk of a regional war and prompted Nasser to hold an emergency Arab League summit on 27 September in Cairo, where he forged a ceasefire.
Death and funeral
thumb|right|alt=Throngs of people marching in a thoroughfare that is adjacent to a body of water|Nasser's funeral procession attended by five million mourners in Cairo, 1 October 1970
As the summit closed on 28 September 1970, hours after escorting the last Arab leader to leave, Nasser suffered a heart attack. He was immediately transported to his house, where his physicians tended to him. Nasser died several hours later, around 6:00 p.m. Heikal, Sadat, and Nasser's wife Tahia were at his deathbed. According to his doctor, al-Sawi Habibi, Nasser's likely cause of death was arteriosclerosis, varicose veins, and complications from long-standing diabetes. Nasser was a heavy smoker with a family history of heart disease—two of his brothers died in their fifties from the same condition. The state of Nasser's health was not known to the public prior to his death. He had previously suffered heart attacks in 1966 and September 1969.
Following the announcement of Nasser's death, Egypt and the Arab world were in a state of shock. Nasser's funeral procession through Cairo on 1 October was attended by at least five million mourners. The procession to his burial site began at the old RCC headquarters with a flyover by MiG-21 jets. His flag-draped coffin was attached to a gun carriage pulled by six horses and led by a column of cavalrymen. All Arab heads of state attended, with the exception of Saudi King Faisal. King Hussein and Arafat cried openly, and Muammar Gaddafi of Libya fainted from emotional distress twice. A few major non-Arab dignitaries were present, including Soviet Premier Alexei Kosygin and French Prime Minister Jacques Chaban-Delmas.
thumb|left|Abdel Nasser Mosque in Cairo, the site of his burial|alt=The front side of a mosque with only one minaret containing a clock.
Almost immediately after the procession began, mourners engulfed Nasser's coffin chanting, "There is no God but Allah, and Nasser is God's beloved… Each of us is Nasser." Police unsuccessfully attempted to quell the crowds and, as a result, most of the foreign dignitaries were evacuated. The final destination was the Nasr Mosque, which was afterwards renamed Abdel Nasser Mosque, where Nasser was buried.
Because of his ability to motivate nationalistic passions, "men, women, and children wept and wailed in the streets" after hearing of his death, according to Nutting. The general Arab reaction was one of mourning, with thousands of people pouring onto the streets of major cities throughout the Arab world. Over a dozen people were killed in Beirut as a result of the chaos, and in Jerusalem, roughly 75,000 Arabs marched through the Old City chanting, "Nasser will never die." As a testament to his unchallenged leadership of the Arab people, following his death, the headline of the Lebanese Le Jour read, "One hundred million human beings—the Arabs—are orphans." Sherif Hetata, a former political prisoner and later a member of Nasser's ASU, said that "Nasser's greatest achievement was his funeral. The world will never again see five million people crying together."
Legacy
thumb|right|alt=Two men conferring with each other, both are wearing suits and the man on the left is also wearing sunglasses. Three men are standing around them, with one holding a number of objects in his hand|Nasser presenting prominent writer Taha Hussein (standing in front of Nasser with sunglasses) with a national honors prize for literature, 1959
Nasser made Egypt fully independent of British influence, and the country became a major power in the developing world under his leadership. One of Nasser's main domestic efforts was to establish social justice, which he deemed a prerequisite to liberal democracy. During his presidency, ordinary citizens enjoyed unprecedented access to housing, education, jobs, health services and nourishment, as well as other forms of social welfare, while feudalistic influence waned. By the end of his presidency, employment and working conditions improved considerably, although poverty was still high in the country and substantial resources allocated for social welfare had been diverted to the war effort.
The national economy grew significantly through agrarian reform, major modernization projects such as the Helwan steel works and the Aswan Dam, and nationalization schemes such as that of the Suez Canal. However, the marked economic growth of the early 1960s took a downturn for the remainder of the decade, only recovering in 1970. Egypt experienced a "golden age" of culture during Nasser's presidency, according to historian Joel Gordon, particularly in film, television, theater, radio, literature, fine arts, comedy, poetry, and music. Egypt under Nasser dominated the Arab world in these fields, producing cultural icons.
During Mubarak's presidency, Nasserist political parties began to emerge in Egypt, the first being the Arab Democratic Nasserist Party (ADNP). The party carried minor political influence, and splits between its members beginning in 1995 resulted in the gradual establishment of splinter parties, including Hamdeen Sabahi's 1997 founding of Al-Karama. Sabahi came in third place during the 2012 presidential election. Nasserist activists were among the founders of Kefaya, a major opposition force during Mubarak's rule. On 19 September 2012, four Nasserist parties (the ADNP, Karama, the National Conciliation Party, and the Popular Nasserist Congress Party) merged to form the United Nasserist Party.
Image
thumb|upright|alt=A man on his knees looking up to a man sitting and holding his hand and wearing sun glasses, has his right hand on his shoulder and is talking to him. In the background there are men in military uniform all looking on the kneeling man.|Nasser speaking to a homeless Egyptian man and offering him a job, after the man was found sleeping below the stage where Nasser was seated, 1959
Nasser was known for his intimate relationship with ordinary Egyptians. His availability to the public, despite assassination attempts against him, was unparalleled among his successors. A skilled orator, Nasser gave 1,359 speeches between 1953 and 1970, a record for any Egyptian head of state. Historian Elie Podeh wrote that a constant theme of Nasser's image was "his ability to represent Egyptian authenticity, in triumph or defeat". The national press also helped to foster his popularity and profile—more so after the nationalization of state media. Historian Tarek Osman wrote:
The interplay in the Nasser 'phenomenon' between genuine expression of popular feeling and state-sponsored propaganda may sometimes be hard to disentangle. But behind it lies a vital historical fact: that Gamal Abdel Nasser signifies the only truly Egyptian developmental project in the country's history since the fall of the Pharaonic state. There had been other projects ... But this was different—in origin, meaning and impact. For Nasser was a man of the Egyptian soil who had overthrown the Middle East's most established and sophisticated monarchy in a swift and bloodless move—to the acclaim of millions of poor, oppressed Egyptians—and ushered in a programme of 'social justice', 'progress and development', and 'dignity'.
thumb|left|alt=A man wearing a suit and tie with his upper body jutting out, waving his hand to crowds of people, many dressed in traditional clothing and holding posters of the man or three-striped, two-star flags|Nasser waving to crowds in Mansoura, 1960
While Nasser was increasingly criticized by Egyptian intellectuals following the Six-Day War and his death in 1970, the general public was persistently sympathetic both during and after Nasser's life. According to political scientist Mahmoud Hamad, writing in 2008, "nostalgia for Nasser is easily sensed in Egypt and all Arab countries today". General malaise in Egyptian society, particularly during the Mubarak era, augmented nostalgia for Nasser's presidency, which increasingly became associated with the ideals of national purpose, hope, social cohesion, and vibrant culture.
To the present day, Nasser remains an iconic figure throughout the Arab world, a symbol of Arab unity and dignity, and a towering figure in modern Middle Eastern history. He is also considered a champion of social justice in Egypt. Time writes that despite his mistakes and shortcomings, Nasser "imparted a sense of personal worth and national pride that [Egypt and the Arabs] had not known for 400 years. This alone may have been enough to balance his flaws and failures."
Historian Steven A. Cook wrote in July 2013, "Nasser's heyday still represents, for many, the last time that Egypt felt united under leaders whose espoused principles met the needs of ordinary Egyptians." During the Arab Spring, which resulted in a revolution in Egypt, photographs of Nasser were raised in Cairo and Arab capitals during anti-government demonstrations. According to journalist Lamis Andoni, Nasser had become a "symbol of Arab dignity" during the mass demonstrations.
Criticism
thumb|right|alt=Two men in suits seated next to each other with their arms resting on a table|Anwar Sadat (left) and Nasser in the National Assembly, 1964. Sadat succeeded Nasser as president in 1970 and significantly departed from Nasser's policies throughout his rule.
Sadat declared his intention to "continue the path of Nasser" in his 7 October 1970 presidential inauguration speech, but began to depart from Nasserist policies as his domestic position improved following the 1973 October War. President Sadat's Infitah policy sought to open Egypt's economy for private investment. According to Heikal, ensuing anti-Nasser developments until the present day led to an Egypt "[half] at war with Abdel-Nasser, half [at war] with Anwar El-Sadat".
Nasser's Egyptian detractors considered him a dictator who thwarted democratic progress, imprisoned thousands of dissidents, and led a repressive administration responsible for numerous human rights violations. Islamists in Egypt, particularly members of the politically persecuted Brotherhood, viewed Nasser as oppressive, tyrannical, and demonic. Liberal writer Tawfiq al-Hakim described Nasser as a "confused Sultan" who employed stirring rhetoric, but had no actual plan to achieve his stated goals.
Some of Nasser's liberal and Islamist critics in Egypt, including the founding members of the New Wafd Party and writer Jamal Badawi, dismissed Nasser's popular appeal with the Egyptian masses during his presidency as being the product of successful manipulation and demagoguery. Egyptian political scientist Alaa al-Din Desouki blamed the 1952 revolution's shortcomings on Nasser's concentration of power, and Egypt's lack of democracy on Nasser's political style and his government's limitations on freedom of expression and political participation.
American political scientist Mark Cooper asserted that Nasser's charisma and his direct relationship with the Egyptian people "rendered intermediaries (organizations and individuals) unnecessary". He opined that Nasser's legacy was a "guarantee of instability" due to Nasser's reliance on personal power and the absence of strong political institutions under his rule. Historian Abd al-Azim Ramadan wrote that Nasser was an irrational and irresponsible leader, blaming his inclination to solitary decision-making for Egypt's losses during the Suez War, among other events. Miles Copeland, Jr., a Central Intelligence Agency officer known for his close personal relationship with Nasser, said that the barriers between Nasser and the outside world had grown so thick that all but the information that attested to his infallibility, indispensability, and immortality had been filtered out.
Zakaria Mohieddin, who was Nasser's vice president, said that Nasser gradually changed during his reign. He ceased consulting his colleagues and made more and more of the decisions himself. Although Nasser had repeatedly said that a war with Israel would start at a time of his own, or the Arabs', choosing, in 1967 he started a bluffing game, "but a successful bluff means your opponent must not know which cards you are holding. In this case Nasser's opponent could see his hand in the mirror and knew he was only holding a pair of deuces", even though Nasser knew that his army was not yet prepared. "All of this was out of character ... His tendencies in this regard may have been accentuated by diabetes ... That was the only rational explanation for his actions in 1967".
Nasser told an East German newspaper in 1964 that "no person, not even the most simple one, takes seriously the lie of the six million Jews that were murdered [in the Holocaust]." However he is not known to have ever again publicly called the figure of six million into question, perhaps because his advisors and East German contacts had advised him on the subject.
Regional leadership
thumb|right|alt=Three men walking side-by-side. The man in the middle is wearing a suit, while the two to his side are wearing military uniforms and hats. There are a few other men in uniform walking behind them|Gaafar Nimeiry of Sudan (left), Nasser, and Muammar Gaddafi of Libya (right) at the Tripoli Airport, 1969. Nimeiry and Gaddafi were influenced by Nasser's pan-Arabist ideas and the latter sought to succeed him as "leader of the Arabs".
Through his actions and speeches, and because he was able to symbolize the popular Arab will, Nasser inspired several nationalist revolutions in the Arab world. He defined the politics of his generation and communicated directly with the public masses of the Arab world, bypassing the various heads of states of those countries—an accomplishment not repeated by other Arab leaders. The extent of Nasser's centrality in the region made it a priority for incoming Arab nationalist heads of state to seek good relations with Egypt, in order to gain popular legitimacy from their own citizens.
To varying degrees, Nasser's statist system of government was continued in Egypt and emulated by virtually all Arab republics, namely Algeria, Syria, Iraq, Tunisia, Yemen, Sudan, and Libya. Ahmed Ben Bella, Algeria's first president, was a staunch Nasserist. Abdullah al-Sallal drove out the king of North Yemen in the name of Nasser's pan-Arabism. Other coups influenced by Nasser included those that occurred in Iraq in July 1958 and Syria in 1963. Muammar Gaddafi, who overthrew the Libyan monarchy in 1969, considered Nasser his hero and sought to succeed him as "leader of the Arabs". Also in 1969, Colonel Gaafar Nimeiry, a supporter of Nasser, took power in Sudan. The Arab Nationalist Movement (ANM) helped spread Nasser's pan-Arabist ideas throughout the Arab world, particularly among the Palestinians, Syrians, and Lebanese, and in South Yemen, the Persian Gulf, and Iraq. While many regional heads of state tried to emulate Nasser, Podeh opined that the "parochialism" of successive Arab leaders "transformed imitation [of Nasser] into parody".
Portrayal in film
In 1963, Egyptian director Youssef Chahine produced the film El Nasser Salah El Dine ("Saladin The Victorious"), which intentionally drew parallels between Saladin, considered a hero in the Arab world, and Nasser and his pan-Arabist policies. Nasser is played by Ahmed Zaki in Mohamed Fadel's 1996 Nasser 56. The film set the Egyptian box office record at the time, and focused on Nasser during the Suez Crisis. It is also considered a milestone in Egyptian and Arab cinema as the first film to dramatize the role of a modern-day Arab leader. Together with the 1999 Syrian biopic Gamal Abdel Nasser, the films marked the first biographical movies about contemporary public figures produced in the Arab world. He is portrayed by Amir Boutrous in the Netflix television series The Crown.
Personal life
thumb|right|alt=A group of related people posing outdoors. From left to right, there are three women dressed in shirts and long skirts, three boys dressed in suits and ties and a man in a suit and tie|Nasser and his family in Manshiyat al-Bakri, 1963. From left to right, his daughter Mona, his wife Tahia Kazem, daughter Hoda, son Abdel Hakim, son Khaled, son Abdel Hamid, and Nasser.
In 1944, Nasser married Tahia Kazem, the 22-year-old daughter of a wealthy Iranian father and an Egyptian mother, both of whom died when she was young. She was introduced to Nasser through her brother, Abdel Hamid Kazim, a merchant friend of Nasser's, in 1943. After their wedding, the couple moved into a house in Manshiyat al-Bakri, a suburb of Cairo, where they would live for the rest of their lives. Nasser's entry into the officer corps in 1937 secured him relatively well-paid employment in a society where most people lived in poverty.
Nasser and Tahia would sometimes discuss politics at home, but for the most part, Nasser kept his career separate from his family life. He preferred to spend most of his free time with his children. Nasser and Tahia had two daughters and three sons: Hoda, Mona, Khaled, Abdel Hamid, and Abdel Hakim.
Although he was a proponent of secular politics, Nasser was an observant Muslim who made the Hajj pilgrimage to Mecca in 1954 and 1965. He was known to be personally incorruptible, a characteristic which further enhanced his reputation among the citizens of Egypt and the Arab world. Nasser's personal hobbies included playing chess, watching American films, reading Arabic, English, and French magazines, and listening to classical music.
Nasser had few personal vices other than chain smoking. He maintained 18-hour workdays and rarely took time off for vacations. The combination of smoking and working long hours contributed to his poor health. He was diagnosed with diabetes in the early 1960s and by the time of his death in 1970, he also had arteriosclerosis, heart disease, and high blood pressure. He suffered two major heart attacks (in 1966 and 1969), and was on bed rest for six weeks after the second episode. State media reported that Nasser's absence from the public view at that time was a result of influenza.
Writings
Nasser wrote the following books, published during his lifetime:
Memoirs of the First Palestine War (1955; Akher Sa'a)
"Memoirs of the First Palestine War", Journal of Palestine Studies 2, no. 2 (Winter 1973): 3–32 (first English translation)
Egypt's Liberation: The Philosophy of the Revolution (1955; Dar al-Maaref)
Towards Freedom (1959; Cairo-Arabian Company)
Honour
Foreign honour
Malaysia: Honorary Recipient of the Order of the Crown of the Realm (1965)
See also
List of Presidents of Egypt
List of Prime Ministers of Egypt
References
Notes
Bibliography
External links
Site for President Gamal Abdel Nasser. Bibliotheca Alexandrina and the Gamal Abdel Nasser Foundation. 2012-10-08. An archive of speeches, photos and documents related to Nasser.
Category:1918 births
Category:1970 deaths
Category:African Union chairpersons
Category:African revolutionaries
Category:Arab nationalists
Category:Arab Socialist Union (Egypt) politicians
Category:Articles containing video clips
Category:Bandung Conference attendees
Category:Cairo University alumni
Category:Egyptian Arab nationalists
Category:Egyptian colonels
Category:Egyptian Military Academy alumni
Category:Egyptian Muslims
Category:Egyptian revolutionaries
Category:Egyptian socialists
Category:Foreign Heroes of the Soviet Union
Category:Free Officers Movement
Category:Leaders who took power by coup
Category:People from Alexandria
Category:People from Asyut Governorate
Category:People of the 1948 Arab–Israeli War
Category:People of the Suez Crisis
Category:Political party founders
Category:Presidents of Egypt
Category:Presidents of Syria
Category:Prime Ministers of Egypt
Category:Recipients of the Order of the Companions of O. R. Tambo
Category:Secularists
Category:Secretaries-General of the Non-Aligned Movement
Category:Honorary Recipients of the Order of the Crown of the Realm | 51,879 | 2017-01 |
Incandescent light bulb | thumb|upright|A 230-volt incandescent light bulb, with a 'medium' sized E27 (Edison 27 mm) male screw base. The filament is visible as the horizontal line between the vertical supply wires.
thumb|SEM image of the tungsten filament of an incandescent light bulb.
An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a wire filament heated to such a high temperature that it glows with visible light (incandescence). The filament, heated by passing an electric current through it, is protected from oxidation with a glass or quartz bulb that is filled with inert gas or evacuated. In a halogen lamp, filament evaporation is prevented by a chemical process that redeposits metal vapor onto the filament, extending its life. The light bulb is supplied with electric current by feed-through terminals or wires embedded in the glass. Most bulbs are used in a socket which provides mechanical support and electrical connections.
Incandescent bulbs are manufactured in a wide range of sizes, light output, and voltage ratings, from 1.5 volts to about 300 volts. They require no external regulating equipment, have low manufacturing costs, and work equally well on either alternating current or direct current. As a result, the incandescent lamp is widely used in household and commercial lighting, for portable lighting such as table lamps, car headlamps, and flashlights, and for decorative and advertising lighting.
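The electrical behaviour implied by these ratings can be illustrated with a short calculation. The following Python sketch is not part of the original article; it assumes a hypothetical 60 W, 120 V general-service bulb and uses Ohm's law to estimate the operating current and hot filament resistance, along with a rough, commonly cited factor for how much lower the cold filament's resistance is.

```python
# Illustrative sketch (assumed example rating, not from the article):
# estimate operating values of an incandescent filament from its nameplate.

rated_power_w = 60.0      # assumed nameplate power
rated_voltage_v = 120.0   # assumed supply voltage

current_a = rated_power_w / rated_voltage_v        # I = P / V
hot_resistance_ohm = rated_voltage_v / current_a   # R = V / I at operating temperature

# Tungsten's resistivity rises steeply with temperature, so the cold filament
# has much lower resistance; a factor on the order of 10 to 15 is commonly
# cited, which is why a brief inrush current flows at switch-on.
approx_cold_resistance_ohm = hot_resistance_ohm / 12  # rough illustrative assumption

print(f"Operating current: {current_a:.2f} A")
print(f"Hot resistance:    {hot_resistance_ohm:.0f} ohms")
print(f"Cold resistance:   ~{approx_cold_resistance_ohm:.0f} ohms (approximate)")
```

For the assumed 60 W, 120 V rating this gives about 0.5 A and 240 ohms at operating temperature; the exact figures vary with the bulb's design and supply voltage.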
Incandescent bulbs are much less efficient than most other types of electric lighting; incandescent bulbs convert less than 5% of the energy they use into visible light, with standard light bulbs averaging about 2.2%.Nicola Armaroli, Vincenzo Balzani, Towards an electricity-powered world. In: Energy and Environmental Science 4, (2011), 3193-3222. The remaining energy is converted into heat. The luminous efficacy of a typical incandescent bulb is 16 lumens per watt, compared with 60 lm/W for a compact fluorescent bulb or 150 lm/W for some white LED lamps.Vincenzo Balzani, Giacomo Bergamini, Paola Ceroni, Light: A Very Peculiar Reactant and Product. In: Angewandte Chemie International Edition 54, Issue 39, (2015), 11320–11337. Some applications of the incandescent bulb deliberately use the heat generated by the filament. Such applications include incubators, brooding boxes for poultry,"Storey's guide to raising chickens" Damerow, Gail. Storey Publishing, LLC; 2nd edition (12 January 1995), ISBN 978-1-58017-325-4. page 221. Retrieved 10 November 2009. heat lights for reptile tanks,"277 Secrets Your Snake and Lizard Wants you to Know Unusual and useful Information for Snake Owners & Snake Lovers" Cooper,Paulette. Ten Speed Press (1 March 2004), ISBN 978-1-58008-035-4. Page 161. Retrieved 10 November 2009. infrared heating for industrial heating and drying processes, lava lamps, and the Easy-Bake Oven toy. Incandescent bulbs typically have short lifetimes compared with other types of lighting; around 1,000 hours for home light bulbs versus typically 10,000 hours for compact fluorescents and 30,000 hours for lighting LEDs.
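As a rough illustration of the efficacy figures quoted above, the following Python sketch compares the power and yearly energy needed to produce the same amount of light with each lamp type. The lumen-per-watt values are those given in this paragraph; the target light output, yearly operating hours, and electricity price are arbitrary assumptions chosen only for the example.

```python
# Illustrative comparison based on the efficacy figures quoted above.
# Efficacies (lm/W) are from the text; the other inputs are assumed values.

efficacy_lm_per_w = {"incandescent": 16, "compact fluorescent": 60, "white LED": 150}

target_lumens = 800    # assumed: roughly the output of a traditional 60 W incandescent
hours_per_year = 1000  # assumed: about three hours of use per day
price_per_kwh = 0.15   # assumed electricity price per kilowatt-hour

for lamp, efficacy in efficacy_lm_per_w.items():
    watts = target_lumens / efficacy               # power needed for the same light output
    kwh_per_year = watts * hours_per_year / 1000   # annual energy consumption
    cost_per_year = kwh_per_year * price_per_kwh
    print(f"{lamp:>20}: {watts:5.1f} W, {kwh_per_year:6.1f} kWh/yr, {cost_per_year:5.2f} per year")
```

Under these assumptions the incandescent bulb needs roughly 50 W to deliver the target output, against about 13 W for the compact fluorescent and 5 W for the LED, which is consistent with the efficiency gap described above.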
Incandescent bulbs have been replaced in many applications by other types of electric light, such as fluorescent lamps, compact fluorescent lamps (CFL), cold cathode fluorescent lamps (CCFL), high-intensity discharge lamps, and light-emitting diode lamps (LED). Some jurisdictions, such as the European Union, China, Canada and United States, are in the process of phasing out the use of incandescent light bulbs while others, including Colombia, Mexico, Cuba, Argentina, Brazil and Australia, have prohibited them already.
History
In addressing the question of who invented the incandescent lamp, historians Robert Friedel and Paul Israel (Edison's Electric Light: Biography of an Invention, New Brunswick, New Jersey: Rutgers University Press, 1986, pages 115–117) list 22 inventors of incandescent lamps prior to Joseph Swan and Thomas Edison. They conclude that Edison's version was able to outstrip the others because of a combination of three factors: an effective incandescent material, a higher vacuum than others were able to achieve (by use of the Sprengel pump) and a high resistance that made power distribution from a centralized source economically viable.
Historian Thomas Hughes has attributed Edison's success to his development of an entire, integrated system of electric lighting.
Timeline of the early evolution of the light bulb
Early pre-commercial research
thumb|upright|Original carbon-filament bulb from Thomas Edison's shop in Menlo Park
In 1761 Ebenezer Kinnersley demonstrated heating a wire to incandescence.
In 1802, Humphry Davy used what he described as "a battery of immense size", consisting of 2,000 cells housed in the basement of the Royal Institution of Great Britain, to create an incandescent light by passing the current through a thin strip of platinum, chosen because the metal had an extremely high melting point. It was not bright enough nor did it last long enough to be practical, but it was the precedent behind the efforts of scores of experimenters over the next 75 years.Davis, L.J. "Fleet Fire." Arcade Publishing, New York, 2003. ISBN 1-55970-655-4
Over the first three-quarters of the 19th century many experimenters worked with various combinations of platinum or iridium wires, carbon rods, and evacuated or semi-evacuated enclosures. Many of these devices were demonstrated and some were patented.Houston and Kennely 1896, chapter 2
In 1835, James Bowman Lindsay demonstrated a constant electric light at a public meeting in Dundee, Scotland. He stated that he could "read a book at a distance of one and a half feet". However, having perfected the device to his own satisfaction, he turned to the problem of wireless telegraphy and did not develop the electric light any further. His claims are not well documented, although he is credited in Challoner et al. with being the inventor of the "Incandescent Light Bulb".
In 1838, Belgian lithographer Marcellin Jobard invented an incandescent light bulb with a vacuum atmosphere using a carbon filament.Friedel, Robert, and Paul Israel. 1986. Edison's electric light: biography of an invention. New Brunswick, New Jersey: Rutgers University Press. page 91
In 1840, British scientist Warren de la Rue enclosed a coiled platinum filament in a vacuum tube and passed an electric current through it. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although a workable design, the cost of the platinum made it impractical for commercial use.
In 1841, Frederick de Moleyns of England was granted the first patent for an incandescent lamp, with a design using platinum wires contained within a vacuum bulb. He also used carbon.Houston and Kennely 1896, page 24
In 1845, American John W. Starr acquired a patent for his incandescent light bulb involving the use of carbon filaments.Charles D. Wrege J.W. Starr: Cincinnati's Forgotten Genius, Cincinnati Historical Society Bulletin 34 (Summer 1976): 102–120. Retrieved 2010 February 16. He died shortly after obtaining the patent, and his invention was never produced commercially. Little else is known about him."John Wellington Starr". Retrieved 2010 February 16.
In 1851, Jean Eugène Robert-Houdin publicly demonstrated incandescent light bulbs on his estate in Blois, France. His light bulbs are on display in the museum of the Château de Blois.Many of the above lamps are illustrated and described in Edwin J. Houston and A. E. Kennely "Electric Incandescent Lighting", The W. J. Johnston Company, New York, 1896 pages 18–42. Available from the Internet Archive.
In 1872, Russian Alexander Lodygin invented an incandescent light bulb and obtained a Russian patent in 1874. He used as a burner two carbon rods of diminished section in a glass receiver, hermetically sealed, and filled with nitrogen, electrically arranged so that the current could be passed to the second carbon when the first had been consumed.Edison Electric Light Co. vs. United States Electric Lighting Co., Federal Reporter, F1, Vol. 47, 1891, p. 457. Later he lived in the US, changed his name to Alexander de Lodyguine and applied and obtained patents for incandescent lamps having chromium, iridium, rhodium, ruthenium, osmium, molybdenum and tungsten filaments, and a bulb using a molybdenum filament was demonstrated at the world fair of 1900 in Paris.
In 1893, Heinrich Göbel claimed he had designed the first incandescent light bulb in 1854, with a thin carbonized bamboo filament of high resistance, platinum lead-in wires in an all-glass envelope, and a high vacuum. Judges of four courts raised doubts about the alleged Göbel anticipation, but there was never a decision in a final hearing owing to the expiry of Edison's patent. A research work published in 2007 concluded that the story of the Göbel lamps in the 1850s is a legend.Hans-Christian Rohde: Die Göbel-Legende – Der Kampf um die Erfindung der Glühlampe. Zu Klampen, Springe 2007, ISBN 978-3-86674-006-8 (German-language dissertation)
On 24 July 1874, a Canadian patent was filed by Henry Woodward and Mathew Evans for a lamp consisting of carbon rods mounted in a nitrogen-filled glass cylinder. They were unsuccessful at commercializing their lamp, and sold rights to their patent () to Thomas Edison in 1879.
Commercialization
Dominance of carbon filament and vacuum
thumb|left|Carbon filament lamps, showing darkening of bulb
thumb|upright|Sir Joseph Wilson Swan
Joseph Swan (1828–1914) was a British physicist and chemist. In 1850, he began working with carbonized paper filaments in an evacuated glass bulb. By 1860, he was able to demonstrate a working device but the lack of a good vacuum and an adequate supply of electricity resulted in a short lifetime for the bulb and an inefficient source of light. By the mid-1870s better pumps became available, and Swan returned to his experiments.
thumb|Historical plaque at Underhill, the first house to be lit by electric lights
With the help of Charles Stearn, an expert on vacuum pumps, in 1878, Swan developed a method of processing that avoided the early bulb blackening. This received a British Patent in 1880.Swan K R Sir Joseph Swan and the Invention of the Incandescent Electric Lamp. 1946 Longmans, Green and Co. Pp 21–25. On 18 December 1878, a lamp using a slender carbon rod was shown at a meeting of the Newcastle Chemical Society, and Swan gave a working demonstration at their meeting on 17 January 1879. It was also shown to 700 people who attended a meeting of the Literary and Philosophical Society of Newcastle upon Tyne on 3 February 1879. These lamps used a carbon rod from an arc lamp rather than a slender filament. Thus they had low resistance and required very large conductors to supply the necessary current, so they were not commercially practical, although they did furnish a demonstration of the possibilities of incandescent lighting with relatively high vacuum, a carbon conductor, and platinum lead-in wires. This bulb lasted about 40 hours.https://www.wired.com/2009/12/1218joseph-swan-electric-bulb/ Swan then turned his attention to producing a better carbon filament and the means of attaching its ends. He devised a method of treating cotton to produce 'parchmentised thread' in the early 1880s and obtained British Patent 4933 for the process. From then on he began installing light bulbs in homes and landmarks in England. His house, Underhill, Low Fell, Gateshead, was the first in the world to be lit by a lightbulb and also the first house in the world to be lit by hydroelectric power. In 1878 the home of Lord Armstrong at Cragside was also among the first houses to be lit by electricity. In the early 1880s he had started his company.R.C. Chirnside. Sir Joseph Wilson Swan FRS – The Literary and Philosophical Society of Newcastle upon Tyne 1979. In 1881, the Savoy Theatre in the City of Westminster, London was lit by Swan incandescent lightbulbs, which was the first theatre, and the first public building in the world, to be lit entirely by electricity."The Savoy Theatre", The Times, 3 October 1881 The first street in the world to be lit by an incandescent lightbulb was Mosley Street, Newcastle upon Tyne, United Kingdom. It was lit by Joseph Swan's incandescent lamp on 3 February 1879.Blue plaque at the Literary and Philosophical Society of Newcastle, 23 Westgate Road, Newcastle upon Tyne Quote: "Nearby Mosley Street was the first street in the world to be lit by such electric bulbs."
thumb|left|Edison carbon filament lamps, early 1880s
thumb|upright|Thomas Alva Edison
Thomas Edison began serious research into developing a practical incandescent lamp in 1878. Edison filed his first patent application for "Improvement In Electric Lights" on 14 October 1878. After many experiments, first with carbon and then with platinum and other metals, in the end Edison returned to a carbon filament. The first successful test was on 22 October 1879,Paul Israel, Edison: a Life of Invention, Wiley (1998), page 186. and lasted 13.5 hours. Edison continued to improve this design and by 4 November 1879, filed for a US patent for an electric lamp using "a carbon filament or strip coiled and connected ... to platina contact wires." granted 27 January 1880 Although the patent described several ways of creating the carbon filament including using "cotton and linen thread, wood splints, papers coiled in various ways," Edison and his team later discovered that a carbonized bamboo filament could last more than 1200 hours. In 1880, the Oregon Railroad and Navigation Company steamer, Columbia, became the first application for Edison's incandescent electric lamps (it was also the first ship to use a dynamo).Belyk, Robert C. Great Shipwrecks of the Pacific Coast. New York: Wiley, 2001. ISBN 0-471-38420-8; Jehl, Francis, Menlo Park reminiscences: written in Edison's restored Menlo Park laboratory, Henry Ford Museum and Greenfield Village, Whitefish, Mass, Kessinger Publishing, 1 July 2002, page 564; Dalton, Anthony, A long, dangerous coastline: shipwreck tales from Alaska to California, Heritage House Publishing Company, 1 Feb 2011, 128 pages
Albon Man, a New York lawyer, started Electro-Dynamic Light Company in 1878 to exploit his patents and those of William Sawyer.p. 72 p. 9 Weeks later the United States Electric Lighting Company was organized. p. 36 This company didn't make their first commercial installation of incandescent lamps until the fall of 1880 at the Mercantile Safe Deposit Company in New York City, about six months after the Edison incandescent lamps had been installed on the Columbia. Hiram S. Maxim was the chief engineer at the United States Electric Lighting Company.The National Cyclopedia of American Biography, Vol VI 1896, p. 34
Lewis Latimer, employed at the time by Edison, developed an improved method of heat-treating carbon filaments which reduced breakage and allowed them to be molded into novel shapes, such as the characteristic "M" shape of Maxim filaments. On 17 January 1882, Latimer received a patent for the "Process of Manufacturing Carbons", an improved method for the production of light bulb filaments, which was purchased by the United States Electric Light Company. Latimer patented other improvements such as a better way of attaching filaments to their wire supports.Fouché, Rayvon, Black Inventors in the Age of Segregation: Granville T. Woods, Lewis H. Latimer, and Shelby J. Davidson.) (Johns Hopkins University Press, Baltimore & London, 2003, pp. 115–116. ISBN 0-8018-7319-3
In Britain, the Edison and Swan companies merged into the Edison and Swan United Electric Company (later known as Ediswan, and ultimately incorporated into Thorn Lighting Ltd). Edison was initially against this combination, but after Swan sued him and won, Edison was eventually forced to cooperate, and the merger was made. Eventually, Edison acquired all of Swan's interest in the company. Swan sold his US patent rights to the Brush Electric Company in June 1882.
thumb|left|upright| by Thomas Edison for an improved electric lamp, 27 January 1880
The United States Patent Office gave a ruling 8 October 1883, that Edison's patents were based on the prior art of William Sawyer and were invalid. Litigation continued for a number of years. Eventually on 6 October 1889, a judge ruled that Edison's electric light improvement claim for "a filament of carbon of high resistance" was valid.Consol. Elec. Light Co v. McKeesport Light Co, 40 F. 21 (C.C.W.D. Pa. 1889) aff'd, 159 U.S. 465, 16 S. Ct. 75, 40 L. Ed. 221 (1895).
In 1897, German physicist and chemist Walther Nernst developed the Nernst lamp, a form of incandescent lamp that used a ceramic globar and did not require enclosure in a vacuum or inert gas. Twice as efficient as carbon filament lamps, Nernst lamps were briefly popular until overtaken by lamps using metal filaments.
Revolution of tungsten filament and inert gas
thumb|upright|Hanaman (left) and Dr. Just (right), the inventors of the tungsten bulbs
thumb|upright|Hungarian advertising of the Tungsram-bulb from 1906. This was the first light bulb that used a filament made from tungsten instead of carbon. The inscription reads: wire lamp with a drawn wire – indestructible.
On 13 December 1904, Hungarian Sándor Just and Croatian Franjo Hanaman were granted a Hungarian patent (No. 34541) for a tungsten filament lamp that lasted longer and gave brighter light than the carbon filament. Tungsten filament lamps were first marketed by the Hungarian company Tungsram in 1904, and bulbs of this type are often called Tungsram-bulbs in many European countries. Filling a bulb with an inert gas such as argon or nitrogen retards the evaporation of the tungsten filament compared to operating it in a vacuum. This allows for greater temperatures and therefore greater efficacy with less reduction in filament life.
In 1906, the General Electric Company patented a method of making filaments from sintered tungsten and in 1911, used ductile tungsten wire for incandescent light bulbs.
In 1913, Irving Langmuir found that filling a lamp with inert gas instead of a vacuum resulted in twice the luminous efficacy and reduction of bulb blackening. In 1924, Marvin Pipkin, an American chemist, patented a process for frosting the inside of lamp bulbs without weakening them, and in 1947, he patented a process for coating the inside of lamps with silica.
Between 1924 and the outbreak of the Second World War, the Phoebus cartel attempted to fix prices and sales quotas for bulb manufacturers outside of North America.
In 1930, Hungarian Imre Bródy filled lamps with krypton gas rather than argon, and designed a process to obtain krypton from air. Production of krypton filled lamps based on his invention started at Ajka in 1937, in a factory co-designed by Polányi and Hungarian-born physicist Egon Orowan.
By 1964, improvements in efficiency and production of incandescent lamps had reduced the cost of providing a given quantity of light by a factor of thirty, compared with the cost at introduction of Edison's lighting system.Incandescent Lamps, Publication Number TP-110, General Electric Company, Nela Park, Cleveland, OH (1964) pg. 3
Consumption of incandescent light bulbs grew rapidly in the US. In 1885, an estimated 300,000 general lighting service lamps were sold, all with carbon filaments. When tungsten filaments were introduced, about 50 million lamp sockets existed in the US. In 1914, 88.5 million lamps were used (only 15% with carbon filaments), and by 1945, annual sales of lamps were 795 million (more than 5 lamps per person per year).Raymond Kane, Heinz Sell Revolution in lamps: a chronicle of 50 years of progress (2nd ed.), The Fairmont Press, Inc. 2001 ISBN 0-88173-378-4 page 37, table 2-1
Efficacy, efficiency, and environmental impact
thumb|upright|140px|Xenon halogen lamp with an E27 base, which can replace a non-halogen bulb
Of the power consumed by typical incandescent light bulbs, 95% or more is converted into heat rather than visible light. Other electrical light sources are more efficient.
Luminous efficacy of a light source may be defined in two ways. The radiant luminous efficacy (LER) is the ratio of the visible light flux emitted (the luminous flux) to the total power radiated over all wavelengths. The source luminous efficacy (LES) is the ratio of the visible light flux emitted (the luminous flux) to the total power input to the source, such as a lamp.IEEE Std. 100 definition of "luminous efficacy" pg. 647 Visible light is measured in lumens, a unit which is defined in part by the differing sensitivity of the human eye to different wavelengths of light. Not all wavelengths of visible electromagnetic energy are equally effective at stimulating the human eye; the luminous efficacy of radiant energy (LER) is a measure of how well the distribution of energy matches the perception of the eye. The units of luminous efficacy are "lumens per watt" (lpw). The maximum LER possible is 683 lm/W for monochromatic green light at 555 nanometers wavelength, the peak sensitivity of the human eye.
The luminous efficiency is defined as the ratio of the luminous efficacy to the theoretical maximum luminous efficacy of 683 lpw, and, as for luminous efficacy, is of two types, radiant luminous efficiency (LFR) and source luminous efficiency (LFS).
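As a worked illustration of these definitions, the efficiency figures in the chart below follow directly from dividing each efficacy by the 683 lm/W maximum. A minimal Python sketch (the function name is ours; the efficacy values are the tungsten figures from the chart):

    # Luminous efficiency as a fraction of the 683 lm/W theoretical maximum.
    MAX_EFFICACY = 683.0  # lm/W, ideal monochromatic 555 nm source

    def luminous_efficiency(efficacy_lm_per_w):
        """Return luminous efficiency (0..1) for a given luminous efficacy."""
        return efficacy_lm_per_w / MAX_EFFICACY

    for watts, efficacy in [(40, 12.6), (60, 14.5), (100, 17.5)]:
        print(f"{watts} W tungsten: {efficacy} lm/W -> {luminous_efficiency(efficacy):.1%}")
    # Prints about 1.8%, 2.1% and 2.6%; the chart's figures agree to within rounding.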
The chart below lists values of overall luminous efficacy and efficiency for several types of general service, 120-volt, 1000-hour lifespan incandescent bulb, and several idealized light sources. The values for the incandescent bulbs are source efficiencies and efficacies. The values for the ideal sources are radiant efficiencies and efficacies. A similar chart in the article on luminous efficacy compares a broader array of light sources to one another.
Type | Overall luminous efficiency | Overall luminous efficacy (lm/W)
40 W tungsten incandescent | 1.9% | 12.6
60 W tungsten incandescent | 2.1% | 14.5
100 W tungsten incandescent | 2.6% | 17.5
glass halogen | 2.3% | 16
quartz halogen | 3.5% | 24
photographic and projection lamps with very high filament temperatures and short lifetimes | 5.1% | 35
ideal black-body radiator at 4000 K (or a class K star like Arcturus) | 7.0% | 47.5
ideal black-body radiator at 7000 K (or a class F star like Procyon) | 14% | 95
ideal monochromatic 555 nm (green) source | 100% | 683
See luminosity function.
The spectrum emitted by a blackbody radiator at temperatures of incandescent bulbs does not match the sensitivity characteristics of the human eye; the light emitted does not appear white, and most is not in the range of wavelengths at which the eye is most sensitive. Tungsten filaments radiate mostly infrared radiation at temperatures where they remain solid – below . Donald L. Klipstein explains it this way: "An ideal thermal radiator produces visible light most efficiently at temperatures around . Even at this high temperature, a lot of the radiation is either infrared or ultraviolet, and the theoretical luminous efficacy (LER) is 95 lumens per watt." No known material can be used as a filament at this ideal temperature, which is hotter than the sun's surface. An upper limit for incandescent lamp luminous efficacy (LER) is around 52 lumens per watt, the theoretical value emitted by tungsten at its melting point.
Although inefficient, incandescent light bulbs have an advantage in applications where accurate color reproduction is important, since the continuous blackbody spectrum emitted from an incandescent light-bulb filament yields near-perfect color rendition, with a color rendering index of 100 (the best possible). White-balancing is still required to avoid too "warm" or "cool" colors, but this is a simple process that requires only the color temperature in Kelvin as input for modern, digital visual reproduction equipment such as video or still cameras unless it is completely automated. The color-rendering performance of incandescent lights cannot be matched by LEDs or fluorescent lights, although they can offer satisfactory performance for non-critical applications such as home lighting. White-balancing such lights is therefore more complicated, requiring additional adjustments to reduce for example green-magenta color casts, and even when properly white-balanced, the color reproduction will not be perfect.
thumb|Thermal image of an incandescent bulb. 71–347 °F = 22–175 °C.
thumb|Spectral power distribution of a 25 W incandescent light bulb.
For a given quantity of light, an incandescent light bulb produces more heat (and thus consumes more power) than a fluorescent lamp. In buildings where air conditioning is used, incandescent lamps' heat output increases load on the air conditioning system.Prof. Peter Lund, Helsinki University of Technology, on p. C5 in Helsingin Sanomat 23 Oct. 2007. While heat from lights will reduce the need for running a building's heating system, in general a heating system can provide the same amount of heat at a lower cost than incandescent lights.
Halogen incandescent lamps have higher efficacy, which will allow a halogen light to use less power to produce the same amount of light compared to a non-halogen incandescent light. The expected life span of halogen lights is also generally longer compared to non-halogen incandescent lights, and halogen lights produce a more constant light-output over time, without much dimming.
There are many non-incandescent light sources, such as the fluorescent lamp, high-intensity discharge lamps and LED lamps, which have higher luminous efficiency, and some have been designed to be retrofitted in fixtures for incandescent lights. These devices produce light by luminescence. These lamps produce discrete spectral lines and do not have the broad "tail" of invisible infrared emissions. By careful selection of which electron energy level transitions are used, and fluorescent coatings which modify the spectral distribution, the spectrum emitted can be tuned to mimic the appearance of incandescent sources, or other different color temperatures of white light. Due to the discrete spectral lines rather than a continuous spectrum, the light is not ideal for applications such as photography and cinematography.
Cost of lighting
The initial cost of an incandescent bulb is small compared to the cost of the energy it uses over its lifetime. Incandescent bulbs have a shorter life than most other lighting, an important factor if replacement is inconvenient or expensive. Some types of lamp, including incandescent and fluorescent, emit less light as they age; this may be an inconvenience, or may reduce effective lifetime due to lamp replacement before total failure. A comparison of incandescent lamp operating cost with other light sources must include illumination requirements, the cost of the lamp, the labor cost to replace lamps (taking into account effective lamp lifetime), the cost of electricity used, and the effect of lamp operation on heating and air conditioning systems. When used for lighting in houses and commercial buildings, the energy lost to heat can significantly increase the energy required by a building's air conditioning system. During the heating season heat produced by the bulbs is not wasted, although in most cases it is more cost effective to obtain heat from the heating system. Regardless, over the course of a year a more efficient lighting system saves energy in nearly all climates.http://www.cmhc.ca/odpub/pdf/65830.pdf
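As a rough illustration of such a comparison (not taken from the cited sources), the sketch below totals energy and replacement costs over a period of use; the prices, efficacies, lifetimes and electricity tariff are illustrative assumptions only, and heating and air-conditioning interactions are ignored:

    # Illustrative lamp operating-cost comparison; all numeric inputs are assumptions.
    def lighting_cost(lumens, efficacy_lm_per_w, lamp_price, lamp_life_h,
                      hours, tariff_per_kwh):
        """Total cost of producing `lumens` of light for `hours` hours."""
        power_w = lumens / efficacy_lm_per_w
        energy_cost = power_w * hours / 1000.0 * tariff_per_kwh
        replacement_cost = (hours / lamp_life_h) * lamp_price
        return energy_cost + replacement_cost

    HOURS, TARIFF, LUMENS = 10_000, 0.15, 800   # assumed hours, $/kWh, light level
    incandescent = lighting_cost(LUMENS, 14, 1.0, 1_000, HOURS, TARIFF)
    led = lighting_cost(LUMENS, 90, 4.0, 15_000, HOURS, TARIFF)
    print(f"incandescent ~ ${incandescent:.0f}, LED ~ ${led:.0f} over {HOURS:,} h")
    # The lamp's purchase price is dwarfed by its energy cost, which is the point made above.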
Measures to ban use
Since incandescent light bulbs use more energy than alternatives such as CFLs and LED lamps, many governments have introduced measures to ban their use, by setting minimum efficacy standards higher than can be achieved by incandescent lamps. Measures to ban incandescent light bulbs have been implemented in the European Union, the United States, Russia, Brazil, Argentina, Canada and Australia, among others. In Europe, the European Commission has calculated that the ban contributes 5 to 10 billion euros to the economy and saves 40 TWh of electricity every year, translating into CO2 emission reductions of 15 million tonnes.Nicholas A. A.Howarth, Jan Rosenow: Banning the bulb: Institutional evolution and the phased ban of incandescent lighting in Germany. In: Energy Policy 67, (2014), 737–746, .
In the US, federal law has scheduled the most common incandescent light bulbs to be phased out by 2014, to be replaced with more energy-efficient light bulbs."It's lights out for traditional light bulbs". USA Today. 16 December 2007. Traditional incandescent light bulbs were phased out in Australia in November 2009.
Objections to banning the use of incandescent light bulbs include the higher initial cost of alternatives and the lower quality of light of fluorescent lamps. Some people have concerns about the health effects of fluorescent lamps. However, even though they contain mercury, the environmental performance of CFLs is much better than that of incandescent light bulbs, mostly because they consume much less energy and therefore strongly reduce the environmental impact of power production.Welz et al, Environmental impacts of lighting technologies — Life cycle assessment and sensitivity analysis. In: Environmental Impact Assessment Review 31, (2011), 334–343, . LED lamps are even more efficient, and are free of mercury. They are regarded as the best solution in terms of cost effectiveness and robustness.Calderon et al, LED bulbs technical specification and testing procedure for solar home systems. In: Renewable and Sustainable Energy Reviews 41, (2015), 506–520, .
Efforts to improve efficiency
Some research has been carried out to improve the efficacy of commercial incandescent lamps. In 2007, the consumer lighting division of General Electric announced a "high efficiency incandescent" (HEI) lamp project, which they claimed would ultimately be as much as four times more efficient than current incandescents, although their initial production goal was to be approximately twice as efficient. The HEI program was terminated in 2008 due to slow progress.
US Department of Energy research at Sandia National Laboratories initially indicated the potential for dramatically improved efficiency from a photonic lattice filament. However, later work indicated that initially promising results were in error.
Prompted by legislation in various countries mandating increased bulb efficiency, new "hybrid" incandescent bulbs have been introduced by Philips. The "Halogena Energy Saver" incandescents can produce about 23 lm/W, about 30 percent more efficient than traditional incandescents, by using a reflective capsule to reflect formerly wasted infrared radiation back to the filament from which it can be re-emitted as visible light. This concept was pioneered by Duro-Test in 1980 with a commercial product that produced 29.8 lm/W. More advanced reflectors based on interference filters or photonic crystals can theoretically result in higher efficiency, up to a limit of about 270 lm/W (40% of the maximum efficacy possible). Laboratory proof-of-concept experiments have produced as much as 45 lm/W, approaching the efficacy of compact fluorescent bulbs.New development could lead to more effective lightbulbs, BBC News, 12 January 2016, Matt McGrath
Construction
Incandescent light bulbs consist of an air-tight glass enclosure (the envelope, or bulb) with a filament of tungsten wire inside the bulb, through which an electric current is passed. Contact wires and a base with two (or more) conductors provide electrical connections to the filament. Incandescent light bulbs usually contain a stem or glass mount anchored to the bulb's base that allows the electrical contacts to run through the envelope without air or gas leaks. Small wires embedded in the stem in turn support the filament and its lead wires.
An electric current heats the filament to typically , well below tungsten's melting point of . Filament temperatures depend on the filament type, shape, size, and amount of current drawn. The heated filament emits light that approximates a continuous spectrum. The useful part of the emitted energy is visible light, but most energy is given off as heat in the near-infrared wavelengths.
Three-way light bulbs have two filaments and three conducting contacts in their bases. The filaments share a common ground, and can be lit separately or together. Common wattages include 30–70–100, 50–100–150, and 100–200–300, with the first two numbers referring to the individual filaments, and the third giving the combined wattage.
Most light bulbs have either clear or coated glass. The coated glass bulbs have a white powdery substance on the inside called kaolin. Kaolin, or kaolinite, is a white, chalky clay in a very fine powder form that is blown in and electrostatically deposited on the interior of the bulb. It diffuses the light emitted from the filament, producing a more gentle and evenly distributed light. Manufacturers may add pigments to the kaolin to adjust the characteristics of the final light emitted from the bulb. Kaolin-diffused bulbs are used extensively in interior lighting because of their comparatively gentle light. Other kinds of colored bulbs are also made, including the various colors used for "party bulbs", Christmas tree lights and other decorative lighting. These are created by coloring the glass with a dopant, which is often a metal like cobalt (blue) or chromium (green). Neodymium-containing glass is sometimes used to provide a more natural-appearing light.
thumb|center|Outline of a glass bulb, with its principal parts listed below
Low pressure inert gas (argon, nitrogen, krypton, xenon)
Tungsten filament
Contact wire (goes out of stem)
Contact wire (goes into stem)
Support wires (one end embedded in stem; conduct no current)
Stem (glass mount)
Contact wire (goes out of stem)
Cap (sleeve)
Insulation (vitrite)
Electrical contact
Many arrangements of electrical contacts are used. Large lamps may have a screw base (one or more contacts at the tip, one at the shell) or a bayonet base (one or more contacts on the base, shell used as a contact or used only as a mechanical support). Some tubular lamps have an electrical contact at either end. Miniature lamps may have a wedge base and wire contacts, and some automotive and special purpose lamps have screw terminals for connection to wires. Contacts in the lamp socket allow the electric current to pass through the base to the filament. Power ratings for incandescent light bulbs range from about 0.1 watt to about 10,000 watts.
The glass bulb of a general service lamp can reach temperatures between . Lamps intended for high power operation or used for heating purposes will have envelopes made of hard glass or fused quartz.
Gas fill
The bulb is filled with an inert gas at a pressure of about , to reduce evaporation of the filament and prevent its oxidation.uigi.com – Argon (Ar) Properties, Uses, Applications Argon Gas and Liquid Argon, 2007
The role of the gas is to prevent evaporation of the filament without introducing significant heat losses. For these purposes, chemical inertness and a high atomic or molecular weight are desirable. The presence of gas molecules knocks liberated tungsten atoms back onto the filament, reducing its evaporation and allowing it to be operated at a higher temperature without reducing its life (or, for operation at the same temperature, prolonging the filament life). However, the gas also introduces heat losses (and therefore efficiency losses) from the filament, through heat conduction and heat convection.
Early lamps, and some small modern lamps, used only a vacuum to protect the filament from oxygen. This increases evaporation of the filament, although it eliminates the gas-related heat losses.
The most common fills are:
Vacuum, used in small lamps. Provides best thermal insulation of the filament but does not protect against its evaporation. Used also in larger lamps where the outer bulb surface temperature has to be limited.
Argon (93%) and nitrogen (7%), where argon is used for its inertness, low thermal conductivity and low cost, and the nitrogen is added to increase the breakdown voltage and prevent arcing between parts of the filament
Nitrogen, used in some higher-power lamps, e.g. projection lamps, and where higher breakdown voltage is needed due to proximity of filament parts or lead-in wires
Krypton, which is more advantageous than argon due to its higher atomic weight and lower thermal conductivity (which also allows use of smaller bulbs), but its use is hindered by much higher cost, confining it mostly to smaller-size bulbs.
Krypton mixed with xenon, where xenon improves the gas properties further due to its higher atomic weight. Its use is however limited by its very high cost. The improvements by using xenon are modest in comparison to its cost.
Hydrogen, in special flashing lamps where rapid filament cooling is required; its high thermal conductivity is exploited here.
The gas fill must be free of traces of water. In the presence of the hot filament, water reacts with tungsten forming tungsten trioxide and atomic hydrogen. The oxide deposits on the bulb inner surface and reacts with hydrogen, decomposing to metallic tungsten and water. Water then cycles back to the filament. This greatly accelerates the bulb blackening, in comparison with evaporation-only.
The gas layer close to the filament (called the Langmuir layer) is stagnant, so heat transfer within it occurs only by conduction. Only at some distance does convection occur to carry heat to the bulb envelope.
The orientation of the filament influences efficiency. Gas flow parallel to the filament, e.g. a vertically oriented bulb with vertical (or axial) filament, reduces convective losses.
The efficiency of the lamp increases with a larger filament diameter. Thin-filament, low-power bulbs benefit less from a fill gas, so are often only evacuated. In special cases, when rapid cooling of a filament is needed (e.g. in flashing lights), hydrogen gas fill is used.
Early lightbulbs with carbon filaments also used carbon monoxide, nitrogen, or mercury vapor. However, carbon filaments operate at lower temperatures than tungsten ones, so the effect of the fill gas was not significant, as the heat losses offset any benefits.
Manufacturing
thumb|upright=0.7|Tantalum filament light bulb, 1908, the first metal filament bulb
Early lamps were laboriously assembled by hand. After automatic machinery was developed the cost of lamps fell.
In manufacturing the glass bulb, a type of "ribbon machine" is used. A continuous ribbon of glass is passed along a conveyor belt, heated in a furnace, and then blown by precisely aligned air nozzles through holes in the conveyor belt into molds. Thus the glass bulbs are created. After the bulbs are blown, and cooled, they are cut off the ribbon machine; a typical machine of this sort produces 50,000 bulbs per hour. The filament and its supports are assembled on a glass stem, which is fused to the bulb. The air is pumped out of the bulb, and the evacuation tube in the stem press is sealed by a flame. The bulb is then inserted into the lamp base, and the whole assembly tested.
Filament
The first successful light bulb filaments were made of carbon (from carbonized paper or bamboo). Early carbon filaments had a negative temperature coefficient of resistance — as they got hotter, their electrical resistance decreased. This made the lamp sensitive to fluctuations in the power supply, since a small increase of voltage would cause the filament to heat up, reducing its resistance and causing it to draw even more power and heat even further. In the "flashing" process, carbon filaments were heated by current passing through them while in an evacuated vessel containing hydrocarbon vapor (usually gasoline). The carbon deposited on the filament by this treatment improved the uniformity and strength of filaments as well as their efficiency. A metallized or "graphitized" filament was first heated in a high-temperature oven before flashing and lamp assembly. This transformed the carbon into graphite which further strengthened and smoothed the filament. This also changed the filament to have a positive temperature coefficient, like a metallic conductor, and helped stabilize the lamp's power consumption, temperature and light output against minor variations in supply voltage.
In 1902, the Siemens company developed a tantalum lamp filament. These lamps were more efficient than even graphitized carbon filaments and could operate at higher temperatures. Since tantalum metal has a lower resistivity than carbon, the tantalum lamp filament was quite long and required multiple internal supports. The metal filament had the property of gradually shortening in use; the filaments were installed with large loops that tightened in use. This made lamps in use for several hundred hours quite fragile.I. C. S. Reference Library Volume 4B, Scranton, International Textbook Company, 1908, no ISBN Metal filaments had the property of breaking and re-welding, though this would usually decrease resistance and shorten the life of the filament. General Electric bought the rights to use tantalum filaments and produced them in the US until 1913.
From 1898 to around 1905, osmium was also used as a lamp filament in Europe, and the metal was so expensive that used broken lamps could be returned for partial credit. It could not be made for 110 V or 220 V so several lamps were wired in series for use on standard voltage circuits.
thumb|How a tungsten filament is made
In 1906, the tungsten filament was introduced. Tungsten metal was initially not available in a form that allowed it to be drawn into fine wires. Filaments made from sintered tungsten powder were quite fragile. By 1910, a process was developed by William D. Coolidge at General Electric for production of a ductile form of tungsten. The process required pressing tungsten powder into bars, then several steps of sintering, swaging, and then wire drawing. It was found that very pure tungsten formed filaments that sagged in use, and that a very small "doping" treatment with potassium, silicon, and aluminium oxides at the level of a few hundred parts per million greatly improved the life and durability of the tungsten filaments.Chapter 2 The Potassium Secret Behind Tungsten Wire Production
Coiled coil filament
To improve the efficiency of the lamp, the filament usually consists of multiple coils of coiled fine wire, also known as a 'coiled coil'. For a 60-watt 120-volt lamp, the uncoiled length of the tungsten filament is usually , and the filament diameter is . The advantage of the coiled coil is that evaporation of the tungsten filament is at the rate of a tungsten cylinder having a diameter equal to that of the coiled coil. The coiled-coil filament evaporates more slowly than a straight filament of the same surface area and light-emitting power. As a result, the filament can then run hotter, which results in a more efficient light source, while reducing the evaporation so that the filament will last longer than a straight filament at the same temperature.
There are several different shapes of filament used in lamps, with differing characteristics. Manufacturers designate the types with codes such as C-6, CC-6, C-2V, CC-2V, C-8, CC-88, C-2F, CC-2F, C-Bar, C-Bar-6, C-8I, C-2R, CC-2R, and Axial.
Electrical filaments are also used in the hot cathodes of fluorescent lamps and vacuum tubes, either as a direct source of electrons or to heat a separate electron-emitting electrode.
Reducing filament evaporation
One of the problems of the standard electric light bulb is filament notching due to evaporation of the filament. Small variations in resistivity along the filament cause "hot spots" to form at points of higher resistivity; a variation of diameter of only 1% will cause a 25% reduction in service life. These hot spots evaporate faster than the rest of the filament, which increases the resistance at that point—this creates a positive feedback that ends in the familiar tiny gap in an otherwise healthy-looking filament. Irving Langmuir found that an inert gas, instead of vacuum, would retard evaporation. General service incandescent light bulbs over about 25 watts in rating are now filled with a mixture of mostly argon and some nitrogen,John Kaufman (ed.), IES Lighting Handbook 1981 Reference Volume, Illuminating Engineering Society of North America, New York, 1981 ISBN 0-87995-007-2 page 8-6 or sometimes krypton.Burgin. Lighting Research and Technology 1984 16.2 61–72 Lamps operated on direct current develop random stairstep irregularities on the filament surface which may cut lifespan in half compared to AC operation; different alloys of tungsten and rhenium can be used to counteract the effect.Toshiba Lighting Products Miniature Lamp Characteristics. Retrieved 23 March 2008. John Kaufman (ed.), IES Lighting Handbook 1981 Reference Volume, Illuminating Engineering Society of North America, New York, 1981 ISBN 0-87995-007-2 page 8-9
Since a filament breaking in a gas-filled bulb can form an electric arc, which may spread between the terminals and draw very heavy current, intentionally thin lead-in wires or more elaborate protection devices are often used as fuses built into the light bulb. More nitrogen is used in higher-voltage lamps to reduce the possibility of arcing.
While inert gas reduces filament evaporation, it also conducts heat from the filament, thereby cooling the filament and reducing efficiency. At constant pressure and temperature, the thermal conductivity of a gas depends upon its molecular weight and the cross-sectional area of its molecules. Gases of higher molecular weight have lower thermal conductivity, because both the molecular weight and the cross-sectional area are higher. Xenon gas improves efficiency because of its high molecular weight, but is also more expensive, so its use is limited to smaller lamps.
During ordinary operation, the tungsten of the filament evaporates; hotter, more-efficient filaments evaporate faster. Because of this, the lifetime of a filament lamp is a trade-off between efficiency and longevity. The trade-off is typically set to provide a lifetime of several hundred to 2,000 hours for lamps used for general illumination. Theatrical, photographic, and projection lamps may have a useful life of only a few hours, trading life expectancy for high output in a compact form. Long-life general service lamps have lower efficiency but are used where the cost of changing the lamp is high compared to the value of energy used.
If a light bulb envelope leaks, the hot tungsten filament reacts with air, yielding an aerosol of brown tungsten nitride, brown tungsten dioxide, violet-blue tungsten pentoxide, and yellow tungsten trioxide that then deposits on the nearby surfaces or the bulb interior.
Bulb blackening
In a conventional lamp, the evaporated tungsten eventually condenses on the inner surface of the glass envelope, darkening it. For bulbs that contain a vacuum, the darkening is uniform across the entire surface of the envelope. When a filling of inert gas is used, the evaporated tungsten is carried in the thermal convection currents of the gas, depositing preferentially on the uppermost part of the envelope and blackening just that portion of the envelope. An incandescent lamp that gives 93% or less of its initial light output at 75% of its rated life is regarded as unsatisfactory, when tested according to IEC Publication 60064. Light loss is due to filament evaporation and bulb blackening.IEC 60064 Tungsten filament lamps for domestic and similar general lighting purposes. Study of the problem of bulb blackening led to the discovery of the Edison effect, thermionic emission and invention of the vacuum tube.
A very small amount of water vapor inside a light bulb can significantly affect lamp darkening. Water vapor dissociates into hydrogen and oxygen at the hot filament. The oxygen attacks the tungsten metal, and the resulting tungsten oxide particles travel to cooler parts of the lamp. Hydrogen from water vapor reduces the oxide, reforming water vapor and continuing this water cycle. The equivalent of a drop of water distributed over 500,000 lamps will significantly increase darkening. Small amounts of substances such as zirconium are placed within the lamp as a getter to react with any oxygen that may bake out of the lamp components during operation.
Some old, high-powered lamps used in theater, projection, searchlight, and lighthouse service with heavy, sturdy filaments contained loose tungsten powder within the envelope. From time to time, the operator would remove the bulb and shake it, allowing the tungsten powder to scrub off most of the tungsten that had condensed on the interior of the envelope, removing the blackening and brightening the lamp again.John Kaufman (ed.), IES Lighting Handbook 1981 Reference Volume, Illuminating Engineering Society of North America, New York, 1981 ISBN 0-87995-007-2 page 8-10
Halogen lamps
thumb|Close-up of a tungsten filament inside a halogen lamp. The two ring-shaped structures left and right are filament supports.
The halogen lamp reduces uneven evaporation of the filament and eliminates darkening of the envelope by filling the lamp with a halogen gas at low pressure, rather than an inert gas. The halogen cycle increases the lifetime of the bulb and prevents its darkening by redepositing tungsten from the inside of the bulb back onto the filament. The halogen lamp can operate its filament at a higher temperature than a standard gas-filled lamp of similar power without loss of operating life. Such bulbs are much smaller than normal incandescent bulbs, and are widely used where intense illumination is needed in a limited space. Fiber-optic lamps for optical microscopy are one typical application.
Incandescent arc lamps
A variation of the incandescent lamp did not use a hot wire filament, but instead used an arc struck on a spherical bead electrode to produce heat. The electrode then became incandescent, with the arc contributing little to the light produced. Such lamps were used for projection or illumination for scientific instruments such as microscopes. These arc lamps ran on relatively low voltages and incorporated tungsten filaments to start ionization within the envelope. They provided the intense concentrated light of an arc lamp but were easier to operate. Developed around 1915, these lamps were displaced by mercury and xenon arc lamps.G. Arncliffe Percival, The Electric Lamp Industry, Sir Isaac Pitman and Sons, Ltd. London, 1920 pp. 73–74, available from the Internet ArchiveS. G. Starling, An Introduction to Technical Electricity, McMillan and Co., Ltd., London 1920, pp. 97–98, available at the Internet Archive, good schematic diagram of the Pointolite lamp
Electrical characteristics
Comparison of efficacy by power, for 120 volt and 230 volt lamps (230 V values are for a single coil, not a coiled coil)p30
Power (W) | Output (lm), 120 V | Efficacy (lm/W), 120 V | Output (lm), 230 V | Efficacy (lm/W), 230 V
5 | 25 | 5 | – | –
15 | 110 | 7.3 | – | –
25 | 200 | 8.0 | 206 | 8.24
40 | 500 | 12.5 | 330 | 8.25
60 | 850 | 14.2 | 584 | 9.73
75 | 1,200 | 16.0 | – | –
100 | 1,700 | 17.0 | 1,160 | 11.6
150 | 2,850 | 19.0 | – | –
200 | 3,900 | 19.5 | 2,725 | 13.62
300 | 6,200 | 20.7 | 4,430 | 14.77
500 | – | – | 7,930 | 15.86
Power
Incandescent lamps are nearly pure resistive loads with a power factor of 1. This means the actual power consumed (in watts) and the apparent power (in volt-amperes) are equal. Incandescent light bulbs are usually marketed according to the electrical power consumed. This is measured in watts and depends mainly on the resistance of the filament, which in turn depends mainly on the filament's length, thickness, and material. For two bulbs of the same voltage, type, color, and clarity, the higher-powered bulb gives more light.
The table shows the approximate typical output, in lumens, of standard incandescent light bulbs at various powers. Light output of a 230 V version is usually slightly less than that of a 120 V version. The lower current (higher voltage) filament is thinner and has to be operated at a slightly lower temperature for same life expectancy, and that reduces energy efficiency. The lumen values for "soft white" bulbs will generally be slightly lower than for clear bulbs at the same power.
Current and resistance
The actual resistance of the filament is temperature dependent. The cold resistance of tungsten-filament lamps is about 1/15 the hot-filament resistance when the lamp is operating. For example, a 100-watt, 120-volt lamp has a resistance of 144 ohms when lit, but the cold resistance is much lower (about 9.5 ohms).Edison's research team was aware of the large negative temperature coefficient of resistance of possible lamp filament materials and worked extensively during the period 1878–1879 on devising an automatic regulator or ballast to stabilize current. It wasn't until 1879 that it was realized a self-limiting lamp could be built. See Friedel and Israel Edison's Electric Light pages 29–31 Since incandescent lamps are resistive loads, simple phase-control TRIAC dimmers can be used to control brightness. Electrical contacts may carry a "T" rating symbol indicating that they are designed to control circuits with the high inrush current characteristic of tungsten lamps. For a 100-watt, 120-volt general-service lamp, the current stabilizes in about 0.10 seconds, and the lamp reaches 90% of its full brightness after about 0.13 seconds.page 23, 24
Carbon filament bulbs have the opposite characteristic: the resistance of a carbon filament is higher when it is cold than when it is operating. In the case of a 240-volt, 60-watt carbon filament bulb, the resistance of the filament at operating temperature is 960 ohms, but rises to around 1,500 ohms when cold.
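The resistance figures in the two paragraphs above follow directly from P = V²/R at operating temperature; a minimal sketch using only the values quoted (the function and variable names are ours):

    # Hot resistance from rated power and voltage, plus the implied inrush current.
    def hot_resistance(volts, watts):
        return volts ** 2 / watts

    # Tungsten: 100 W, 120 V general-service lamp
    r_hot = hot_resistance(120, 100)        # ~144 ohms, as stated above
    print(f"tungsten hot: {r_hot:.0f} ohm; cold inrush ~{120 / 9.5:.1f} A vs ~{120 / r_hot:.2f} A hot")

    # Carbon: 60 W, 240 V lamp shows the opposite behaviour
    print(f"carbon hot: {hot_resistance(240, 60):.0f} ohm; cold ~1500 ohm, so no inrush surge")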
Physical characteristics
Bulb shapes
thumb|480px|Incandescent light bulbs come in a range of shapes and sizes.
Incandescent light bulbs come in a range of shapes and sizes. The names of the shapes vary somewhat from region to region. Many of these shapes have a designation consisting of one or more letters followed by one or more numbers, e.g. A55 or PAR38. The letters represent the shape of the bulb. The numbers represent the maximum diameter, either in of an inch, or in millimeters, depending on the shape and the region. For example, 63 mm reflectors are designated R63, but in the US, they are known as R20 (2.5 in). However, in both regions, a PAR38 reflector is known as PAR38.
Examples
Description | Metric | Imperial | Details
"standard" lightbulb | A60 E26 | A19 E26 | ⌀60 mm (~⌀2.375") A series bulb, ⌀26 mm Edison screw
candle-flame bulb | CA35 E12 | CA11 E12 | ⌀35 mm (~⌀1.375") candle-flame shape, ⌀12 mm Edison screw
flood light | BR95 E26 | BR30 E26 | ⌀95 mm (~⌀3.75") flood light, ⌀26 mm Edison screw
halogen track-light bulb | MR50 GU5.3 | MR16 GU5.3 | ⌀50 mm (~⌀2") multifaceted reflector, 5.33 mm-spaced 12 V bi-pin connector
Common shapes:
General Service
Light emitted in (nearly) all directions. Available either clear or frosted.
Types: General (A), Mushroom, elliptical (E), sign (S), tubular (T)
120 V sizes: A17, 19 and 21
230 V sizes: A55 and 60
High Wattage General Service
Lamps greater than 200 watts.
Types: Pear-shaped (PS)
Decorative
lamps used in chandeliers, etc.
Types: candle (B), twisted candle, bent-tip candle (CA & BA), flame (F), globe (G), lantern chimney (H), fancy round (P)
230 V sizes: P45, G95
Reflector (R)
Reflective coating inside the bulb directs light forward. Flood types (FL) spread light. Spot types (SP) concentrate the light. Reflector (R) bulbs put approximately double the amount of light (foot-candles) on the front central area as General Service (A) of same wattage.
Types: Standard reflector (R), elliptical reflector (ER), crown-silvered
120 V sizes: R16, 20, 25 and 30
230 V sizes: R50, 63, 80 and 95
Parabolic aluminized reflector (PAR)
Parabolic aluminized reflector (PAR) bulbs control light more precisely. They produce about four times the concentrated light intensity of general service (A), and are used in recessed and track lighting. Weatherproof casings are available for outdoor spot and flood fixtures.
120 V sizes: PAR 16, 20, 30, 38, 56 and 64
230 V sizes: PAR 16, 20, 30, 38, 56 and 64
Available in numerous spot and flood beam spreads. As with other bulb designations, the number represents the diameter of the bulb in of an inch. Therefore, a PAR 16 is 2 in in diameter, a PAR 20 is 2.5 in in diameter, PAR 30 is 3.75 in and a PAR 38 is 4.75 in in diameter.
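The inch sizes quoted above imply a simple conversion rule, since PAR 16 corresponds to 2 inches and PAR 38 to 4.75 inches; a small sketch (the function names are ours):

    # Recovering bulb diameters from shape designations.
    def us_code_diameter_in(n):
        # One designation unit corresponds to an eighth of an inch (PAR16 -> 2 in).
        return n / 8.0

    def metric_code_diameter_mm(n):
        # Metric designations give the diameter directly in millimetres (R63 -> 63 mm).
        return n

    for n in (16, 20, 30, 38):
        print(f"PAR{n}: {us_code_diameter_in(n):.2f} in")
    print(f"R63: {metric_code_diameter_mm(63)} mm (sold in the US as R20, i.e. 2.5 in)")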
thumb|A package of four 60 watt light bulbs
Multifaceted reflector (MR)
250px|thumb|Left to right: MR16 with GU10 base, MR16 with GU5.3 base, MR11 with GU4 or GZ4 base
HIR "HIR" is a GE designation for a lamp with an infrared reflective coating. Since less heat escapes, the filament burns hotter and more efficiently. The Osram designation for a similar coating is "IRC".Osram IRC Saver calculator
Lamp bases
right|thumb|40-watt light bulbs with standard E10, E14 and E27 Edison screw base
upright|thumb|The double-contact bayonet cap on an incandescent bulb
Very small lamps may have the filament support wires extended through the base of the lamp, and can be directly soldered to a printed circuit board for connections. Some reflector-type lamps include screw terminals for connection of wires. Most lamps have metal bases that fit in a socket to support the lamp and conduct current to the filament wires. In the late 19th century, manufacturers introduced a multitude of incompatible lamp bases. General Electric introduced standard base sizes for tungsten incandescent lamps under the Mazda trademark in 1909. This standard was soon adopted across the US, and the Mazda name was used by many manufacturers under license through 1945. Today most incandescent lamps for general lighting service use an Edison screw in candelabra, intermediate, standard, or mogul sizes, or a double-contact bayonet base. Technical standards for lamp bases include ANSI standard C81.67 and IEC standard 60061-1 for common commercial lamp sizes, to ensure interchangeability between different manufacturers' products. Bayonet base lamps are frequently used in automotive lamps to resist loosening due to vibration. A bipin base is often used for halogen or reflector lamps.
Lamp bases may be secured to the bulb with a cement, or by mechanical crimping to indentations molded into the glass bulb.
Miniature lamps used for some automotive lamps or decorative lamps have wedge bases that have a partial plastic or even completely glass base. In this case, the wires wrap around to the outside of the bulb, where they press against the contacts in the socket. Miniature Christmas bulbs use a plastic wedge base as well.
Lamps intended for use in optical systems such as film projectors, microscope illuminators, or stage lighting instruments have bases with alignment features so that the filament is positioned accurately within the optical system. A screw-base lamp may have a random orientation of the filament when the lamp is installed in the socket.
Light output and lifetime
thumb|right|upright|The Centennial Light is the longest-lasting light bulb in the world.
thumb|Various lighting spectra as viewed in a diffraction grating. Upper left: fluorescent lamp, upper right: incandescent bulb, lower left: white LED, lower right: candle flame.
Incandescent lamps are very sensitive to changes in the supply voltage. These characteristics are of great practical and economic importance.
For a supply voltage V near the rated voltage of the lamp:
Light output is approximately proportional to V^3.4
Power consumption is approximately proportional to V^1.6
Lifetime is approximately proportional to V^−16
Color temperature is approximately proportional to V^0.42Donald G. Fink and H. Wayne Beaty, Standard Handbook for Electrical Engineers, Eleventh Edition, McGraw-Hill, New York, 1978, ISBN 0-07-020974-X, pg 22–8
This means that a 5% reduction in operating voltage will more than double the life of the bulb, at the expense of reducing its light output by about 16%. This may be a very acceptable trade off for a light bulb that is in a difficult-to-access location (for example, traffic lights or fixtures hung from high ceilings). Long-life bulbs take advantage of this trade-off. Since the value of the electric power they consume is much more than the value of the lamp, general service lamps emphasize efficiency over long operating life. The objective is to minimize the cost of light, not the cost of lamps. Early bulbs had a life of up to 2500 hours, but in 1924 a cartel agreed to limit life to 1000 hours. When this was exposed in 1953, General Electric and other leading American manufacturers were banned from limiting the life.
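The figures in the preceding paragraph can be checked directly against the exponents listed above; a minimal sketch for the 5% undervoltage example:

    # Effect of running a lamp at 95% of rated voltage, using the exponents above.
    v = 0.95
    light = v ** 3.4      # ~0.84 -> roughly a 16% drop in light output
    power = v ** 1.6      # ~0.92 -> about 8% less power drawn
    life = v ** -16       # ~2.27 -> more than double the lifetime
    colour = v ** 0.42    # ~0.98 -> slightly lower colour temperature
    print(f"light {light:.2f}x, power {power:.2f}x, life {life:.2f}x, colour temp {colour:.2f}x")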
The relationships above are valid for only a few percent change of voltage around rated conditions, but they do indicate that a lamp operated at much lower than rated voltage could last for hundreds of times longer than at rated conditions, albeit with greatly reduced light output. The "Centennial Light" is a light bulb that is accepted by the Guinness Book of World Records as having been burning almost continuously at a fire station in Livermore, California, since 1901. However, the bulb emits the equivalent light of a four watt bulb. A similar story can be told of a 40-watt bulb in Texas that has been illuminated since 21 September 1908. It once resided in an opera house where notable celebrities stopped to take in its glow, and was moved to an area museum in 1977.
In flood lamps used for photographic lighting, the tradeoff is made in the other direction. Compared to general-service bulbs, for the same power, these bulbs produce far more light, and (more importantly) light at a higher color temperature, at the expense of greatly reduced life (which may be as short as two hours for a type P1 lamp). The upper temperature limit for the filament is the melting point of the metal. Tungsten is the metal with the highest melting point, . A 50-hour-life projection bulb, for instance, is designed to operate only below that melting point. Such a lamp may achieve up to 22 lumens per watt, compared with 17.5 for a 750-hour general service lamp.
Lamps designed for different voltages have different luminous efficacy. For example, a 100-watt, 120-volt lamp will produce about 17.1 lumens per watt. A lamp with the same rated lifetime but designed for 230 V would produce only around 12.8 lumens per watt, and a similar lamp designed for 30 volts (train lighting) would produce as much as 19.8 lumens per watt. Lower voltage lamps have a thicker filament, for the same power rating. They can run hotter for the same lifetime before the filament evaporates.
The wires used to support the filament make it mechanically stronger, but remove heat, creating another tradeoff between efficiency and long life. Many general-service 120-volt lamps use no additional support wires, but lamps designed for "rough service" or "vibration service" may have as many as five. Low-voltage lamps have filaments made of heavier wire and do not require additional support wires.
Very low voltages are inefficient since the lead wires would conduct too much heat away from the filament, so the practical lower limit for incandescent lamps is 1.5 volts. Very long filaments for high voltages are fragile, and lamp bases become more difficult to insulate, so lamps for illumination are not made with rated voltages over 300 volts. Some infrared heating elements are made for higher voltages, but these use tubular bulbs with widely separated terminals.
See also
References
External links
Light Source Spectra 60 W-100 W Incandescent light bulb spectra, from Cornell University Program of Computer Graphics
Slow-motion video of an incandescent lightbulb filament
Category:Discovery and invention controversies
Category:English inventions
Category:Thomas Edison
Category:Articles containing video clips
Category:1878 introductions | 47,139 | 2017-01 |
Old English | Old English () or Anglo-Saxon is the earliest historical form of the English language, spoken in England and southern and eastern Scotland in the early Middle Ages. (By the 16th century the term Anglo-Saxon had come to refer to all things of the early English period, including language, culture, and people. While it remains the normal term for the latter two aspects, the language began to be called Old English towards the end of the 19th century, as a result of the increasingly strong anti-Germanic nationalism in English society of the 1890s and early 1900s. However, many authors still also use the term Anglo-Saxon to refer to the language.) It was brought to Great Britain by Anglo-Saxon settlers probably in the mid 5th century, and the first Old English literary works date from the mid-7th century. After the Norman Conquest of 1066, English was replaced, for a time, as the language of the upper classes by Anglo-Norman, a relative of French, and Old English developed into the next historical form of English, known as Middle English.
Old English developed from a set of Anglo-Frisian or North Sea Germanic dialects originally spoken by Germanic tribes traditionally known as the Angles, Saxons, and Jutes. As the Anglo-Saxons became dominant in England, their language replaced the languages of Roman Britain: Common Brittonic, a Celtic language, and Latin, brought to Britain by Roman invasion. Old English had four main dialects, associated with particular Anglo-Saxon kingdoms: Mercian, Northumbrian, Kentish and West Saxon. It was West Saxon that formed the basis for the literary standard of the later Old English period, although the dominant forms of Middle and Modern English would develop mainly from Mercian. The speech of eastern and northern parts of England was subject to strong Old Norse influence due to Scandinavian rule and settlement beginning in the 9th century.
Old English is one of the West Germanic languages, and its closest relatives are Old Frisian and Old Saxon. Like other old Germanic languages, it is very different from Modern English and difficult for Modern English speakers to understand without study. Old English grammar is quite similar to that of modern German: nouns, adjectives, pronouns, and verbs have many inflectional endings and forms, and word order is much freer. The oldest Old English inscriptions were written using a runic system, but from about the 9th century this was replaced by a version of the Latin alphabet.
History
thumb|right|300px|The distribution of the primary Germanic dialect groups in Europe in around AD 1:
Old English was not static, and its usage covered a period of 700 years, from the Anglo-Saxon settlement of Britain in the 5th century to the late 11th century, some time after the Norman invasion. While indicating that the establishment of dates is an arbitrary process, Albert Baugh dates Old English from 450 to 1150, a period of full inflections during which English was a synthetic language. Perhaps around 85 per cent of Old English words are no longer in use, but those that survived are basic elements of Modern English vocabulary.
Old English is a West Germanic language, developing out of Ingvaeonic (also known as North Sea Germanic) dialects from the 5th century. It came to be spoken over most of the territory of the Anglo-Saxon kingdoms which became the Kingdom of England. This included most of present-day England, as well as part of what is now southeastern Scotland, which for several centuries belonged to the Anglo-Saxon kingdom of Northumbria. Other parts of the island – Wales and most of Scotland – continued to use Celtic languages, except in the areas of Scandinavian settlements where Old Norse was spoken. Celtic speech also remained established in certain parts of England: Medieval Cornish was spoken all over Cornwall and in adjacent parts of Devon, while Cumbric survived perhaps to the 12th century in parts of Cumbria, and Welsh may have been spoken on the English side of the Anglo-Welsh border. Norse was also widely spoken in the parts of England which fell under Danish law.
Anglo-Saxon literacy developed after Christianisation in the late 7th century. The oldest surviving text of Old English literature is Cædmon's Hymn, composed between 658 and 680. There is a limited corpus of runic inscriptions from the 5th to 7th centuries, but the oldest coherent runic texts (notably the Franks Casket) date to the 8th century. The Old English Latin alphabet was introduced around the 9th century.
With the unification of the Anglo-Saxon kingdoms (outside the Danelaw) by Alfred the Great in the later 9th century, the language of government and literature became standardised around the West Saxon dialect (Early West Saxon). Alfred advocated education in English alongside Latin, and had many works translated into the English language; some of them, such as Pope Gregory I's treatise Pastoral Care, appear to have been translated by Alfred himself. In Old English, as is typical of the development of a literature, poetry arose before prose, but it was King Alfred the Great (871 to 901) who chiefly inspired the growth of prose.
A later literary standard, dating from the later 10th century, arose under the influence of Bishop Æthelwold of Winchester, and was followed by such writers as the prolific Ælfric of Eynsham ("the Grammarian"). This form of the language is known as the "Winchester standard", or more commonly as Late West Saxon. It is considered to represent the "classical" form of Old English (Hogg 1992, p. 83). It retained its position of prestige until the time of the Norman Conquest, after which English ceased for a time to be of importance as a literary language.
The history of Old English can be subdivided into:
Prehistoric Old English (c. 450 to 650); for this period, Old English is mostly a reconstructed language as no literary witnesses survive (with the exception of limited epigraphic evidence). This language, or bloc of languages, spoken by the Angles, Saxons, and Jutes, and pre-dating documented Old English or Anglo-Saxon, has also been called Primitive Old English.
Early Old English (c. 650 to 900), the period of the oldest manuscript traditions, with authors such as Cædmon, Bede, Cynewulf and Aldhelm.
Late Old English (c. 900 to 1066), the final stage of the language leading up to the Norman conquest of England and the subsequent transition to Early Middle English.
The Old English period is followed by Middle English (12th to 15th century), Early Modern English (c. 1480 to 1650) and finally Modern English (after 1650).
Dialects
thumb|"Her swutelað seo gecwydrædnes ðe"Old English inscription over the arch of the south porticus in the 10th-century St Mary's parish church, Breamore, Hampshire
Old English should not be regarded as a single monolithic entity, just as Modern English is not monolithic. It emerged over time out of the many dialects and languages of the colonising tribes, and it is perhaps only towards the later Anglo-Saxon period that these can be considered to have constituted a single national language. Even then, Old English continued to exhibit much local and regional variation, remnants of which remain in Modern English dialects (William Thomas Shore, Origin of the Anglo-Saxon Race: A Study of the Settlement of England and the Tribal Origin of the Old English People, ed. T. W. and L. E. Shore, Elliot Stock, 1906, p. 3).
The four main dialectal forms of Old English were Mercian, Northumbrian, Kentish, and West Saxon. Mercian and Northumbrian are together referred to as Anglian. In terms of geography, the Northumbrian region lay north of the Humber River; the Mercian region lay north of the Thames and south of the Humber; West Saxon lay south and southwest of the Thames; and the smallest, the Kentish region, lay southeast of the Thames, in a small corner of England. The Kentish region, settled by the Jutes from Jutland, has the scantiest literary remains.
Each of these four dialects was associated with an independent kingdom on the island. Of these, Northumbria south of the Tyne, and most of Mercia, were overrun by the Vikings during the 9th century. The portion of Mercia that was successfully defended, and all of Kent, were then integrated into Wessex under Alfred the Great.
From that time on, the West Saxon dialect (then in the form now known as Early West Saxon) became standardised as the language of government, and as the basis for the many works of literature and religious materials produced or translated from Latin in that period.
The later literary standard known as Late West Saxon (see History, above), although centred in the same region of the country, appears not to have been directly descended from Alfred's Early West Saxon. For example, the former diphthong tended to become monophthongised to in EWS, but to in LWS (Hogg 1992, p. 117; for a different interpretation of this, see Old English diphthongs).
Due to the centralisation of power and the Viking invasions, there is relatively little written record of the non-Wessex dialects after Alfred's unification. Some Mercian texts continued to be written, however, and the influence of Mercian is apparent in some of the translations produced under Alfred's programme, many of which were produced by Mercian scholars (Magennis 2011, pp. 56–60). Other dialects certainly continued to be spoken, as is evidenced by the continued variation between their successors in Middle and Modern English. In fact, what would become the standard forms of Middle English and of Modern English are descended from Mercian rather than West Saxon, while Scots developed from the Northumbrian dialect. It was once claimed that, owing to its position at the heart of the Kingdom of Wessex, the relics of Anglo-Saxon accent, idiom and vocabulary were best preserved in the dialect of Somerset (Thomas Spencer Baynes, The Somersetshire Dialect: Its Pronunciation, two papers, 1861; first published 1855 and 1856).
For details of the sound differences between the dialects, see Phonological history of Old English (dialects).
Influence of other languages
The language of the Anglo-Saxon settlers appears not to have been significantly affected by the native British Celtic languages which it largely displaced. The number of Celtic loanwords introduced into the language is very small. However, various suggestions have been made concerning possible influence that Celtic may have had on developments in English syntax in the post-Old English period, such as the regular progressive construction and analytic word order, as well as the eventual development of the periphrastic auxiliary verb "do."
Old English contained a certain number of loanwords from Latin, which was the scholarly and diplomatic lingua franca of Western Europe. It is sometimes possible to give approximate dates for the borrowing of individual Latin words based on which patterns of sound change they have undergone. Some Latin words had already been borrowed into the Germanic languages before the ancestral Angles and Saxons left continental Europe for Britain. More entered the language when the Anglo-Saxons were converted to Christianity and Latin-speaking priests became influential. It was also through Irish Christian missionaries that the Latin alphabet was introduced and adapted for the writing of Old English, replacing the earlier runic system. Nonetheless, the largest transfer of Latin-based (mainly Old French) words into English occurred after the Norman Conquest of 1066, and thus in the Middle English rather than the Old English period.
Another source of loanwords was Old Norse, which came into contact with Old English via the Scandinavian rulers and settlers in the Danelaw from the late 9th century, and during the rule of Cnut and other Danish kings in the early 11th century. Many place-names in eastern and northern England are of Scandinavian origin. Norse borrowings are relatively rare in Old English literature, being mostly terms relating to government and administration. The literary standard, however, was based on the West Saxon dialect, away from the main area of Scandinavian influence; the impact of Norse may have been greater in the eastern and northern dialects. Certainly in Middle English texts, which are more often based on eastern dialects, a strong Norse influence becomes apparent. Modern English contains a great many, often everyday, words that were borrowed from Old Norse, and the grammatical simplification that occurred after the Old English period is also often attributed to Norse influence.
The influence of Old Norse certainly helped move English along the continuum from a synthetic language towards a more analytic word order, and Old Norse most likely made a greater impact on the English language than any other language. The eagerness of Vikings in the Danelaw to communicate with their southern Anglo-Saxon neighbours produced a friction that led to the erosion of the complicated inflectional word-endings. Simeon Potter notes: “No less far-reaching was the influence of Scandinavian upon the inflexional endings of English in hastening that wearing away and leveling of grammatical forms which gradually spread from north to south. It was, after all, a salutary influence. The gain was greater than the loss. There was a gain in directness, in clarity, and in strength.”
The strength of the Viking influence on Old English appears from the fact that the indispensable elements of the language (pronouns, modals, comparatives, pronominal adverbs like "hence" and "together", conjunctions and prepositions) show the most marked Danish influence; the best evidence of Scandinavian influence appears in the extensive word borrowings, for, as Jespersen indicates, no texts exist in either Scandinavia or northern England from this time to give certain evidence of an influence on syntax. The influence of Old Norse on Old English was substantive, pervasive, and of a democratic character. Old Norse and Old English resembled each other closely, like cousins, and with some words in common their speakers could roughly understand each other; in time the inflections melted away and the analytic pattern emerged. It is most “important to recognize that in many words the English and Scandinavian language differed chiefly in their inflectional elements. The body of the word was so nearly the same in the two languages that only the endings would put obstacles in the way of mutual understanding. In the mixed population which existed in the Danelaw these endings must have led to much confusion, tending gradually to become obscured and finally lost.” This blending of peoples and languages resulted in “simplifying English grammar.”
Phonology
The inventory of classical Old English (Late West Saxon) surface phones, as usually reconstructed, is as follows.
+ Consonants Labial Dental Alveolar Post-alveolar Palatal Velar Glottal Nasal () () Stop Affricate () Fricative () () () () ( ) Approximant () () Trill ()
The sounds enclosed in parentheses in the chart above are not considered to be phonemes:
[dʒ] is an allophone of /j/ occurring after /n/ and when geminated (doubled).
[ŋ] is an allophone of /n/ occurring before [k] and [ɡ].
[v, ð, z] are voiced allophones of /f, θ, s/ respectively, occurring between vowels or voiced consonants.
[ç, x] are allophones of /h/ occurring in coda position after front and back vowels respectively.
[ɣ] is an allophone of /ɡ/ occurring after a vowel, and, at an earlier stage of the language, in the syllable onset.
The voiceless sonorants [w̥, l̥, n̥, r̥] are analysed as realizing the sequences /hw, hl, hn, hr/.
The above system is largely similar to that of Modern English, except that (and for most speakers) have generally been lost, while the voiced affricate and fricatives (now also including ) have become independent phonemes, as has .
+ Vowels – monophthongs Front Back unrounded rounded unrounded rounded Close Mid () Open
The mid front rounded vowels had merged into unrounded before the Late West Saxon period. During the 11th century such vowels arose again, as monophthongisations of the diphthongs , but quickly merged again with in most dialects (Blake 1992, pp. 42–43).
+ Diphthongs Firstelement Short(monomoraic) Long(bimoraic) Close Mid Open
The exact pronunciation of the West Saxon close diphthongs, spelt , is disputed; it may have been . Other dialects may have had different systems of diphthongs; for example, Anglian dialects retained , which had merged with in West Saxon.
For more on dialectal differences, see Phonological history of Old English (dialects).
Sound changes
Some of the principal sound changes occurring in the pre-history and history of Old English were the following:
Fronting of to except when nasalised or followed by a nasal consonant ("Anglo-Frisian brightening"), partly reversed in certain positions by later "a-restoration" or retraction.
Monophthongisation of the diphthong , and modification of remaining diphthongs to the height-harmonic type.
Diphthongisation of long and short front vowels in certain positions ("breaking").
Palatalisation of velars to in certain front-vowel environments.
The process known as i-mutation (which for example led to modern mice as the plural of mouse).
Loss of certain weak vowels in word-final and medial positions, and of medial [(i)j]; reduction of remaining unstressed vowels.
Diphthongisation of certain vowels before certain consonants when preceding a back vowel ("back mutation").
Loss of /h/ between vowels or between a voiced consonant and a vowel, with lengthening of the preceding vowel.
Collapse of two consecutive vowels into a single vowel.
"Palatal umlaut", which has given forms such as six (compare German sechs).
For more details of these processes, see the main article, linked above. For sound changes before and after the Old English period, see Phonological history of English.
Grammar
Morphology
Unlike Modern English, Old English is a language rich in morphological diversity. It maintains several distinct cases: the nominative, accusative, genitive, dative and (vestigially) instrumental. The only remnants of this system in Modern English are in the forms of a few pronouns (such as I/me/mine, she/her, who/whom/whose) and in the possessive ending -'s, which derives from the old (masculine and neuter) genitive ending -es. In Old English, however, nouns and their modifying words take appropriate endings depending on their case.
The modern English plural ending -(e)s derives from the Old English -as, but the latter applied only to "strong" masculine nouns in the nominative and accusative cases; different plural endings were used in other instances. Besides singular and plural, the first- and second-person personal pronouns also retained dual forms, meaning "we (two)", "you (two)".
Old English nouns had grammatical gender, a feature absent in modern English, which uses only natural gender. For example, the words sunne ("sun"), mōna ("moon") and wīf ("woman/wife") were respectively feminine, masculine and neuter; this is reflected, among other things, in the form of the definite article used with these nouns: sēo sunne ("the sun"), se mōna ("the moon"), þæt wīf ("the woman/wife"). Pronoun usage could reflect either natural or grammatical gender when those conflicted (as in the case of wīf, a neuter noun referring to a female person).
The definite article se and its various forms could serve both as a definite article ("the") and a demonstrative adjective ("that"). Another demonstrative was þes ("this"). These words, like other adjectives, inflected for gender, number and case. Adjectives had both strong and weak sets of endings, the weak ones being used when a definite or possessive determiner was also present.
The form of the verb varies with person (first, second and third), number (singular and plural), tense (present and past), and mood (indicative, subjunctive and imperative). Old English also sometimes uses compound constructions to express other verbal aspects, the future and the passive voice; in these we see the beginnings of the compound tenses of Modern English. Old English verbs include strong verbs, which form the past tense by altering the root vowel, and weak verbs, which use a suffix such as -de. As in Modern English, and peculiar to the Germanic languages, the verbs formed two great classes: weak (regular) and strong (irregular). Like today, Old English had fewer strong verbs, and many of these have over time decayed into weak forms. Then, as now, dental suffixes indicated the past tense of the weak verbs, as in work and worked.
Syntax
Old English syntax was similar in many ways to that of modern English. However, there were some important differences. Some were simply consequences of the greater level of nominal and verbal inflection, which meant that word order was generally freer. In addition:
The default word order was more like modern German than modern English, with verb-second order in main clauses, and verb-final in subordinate clauses.
There was no do-support in questions and negatives. Questions were usually formed by inverting subject and finite verb, and negatives by placing ne before the finite verb, regardless of what the verb was.
Multiple negatives could stack up in a sentence, and intensified each other (negative concord).
Sentences with subordinate clauses of the type "when X, Y" (e.g. "When I got home, I ate dinner") did not use a wh-type conjunction, but rather used a th-type correlative conjunction such as þā, otherwise meaning "then" (e.g. þā X, þā Y in place of "when X, Y"). The wh-words were used only as interrogatives and as indefinite pronouns.
Similarly, wh- forms were not used as relative pronouns. Instead, the indeclinable word þe was used, often preceded by (or replaced by) the appropriate form of the article/demonstrative se.
Orthography
[Image: The runic alphabet used to write Old English before the introduction of the Latin alphabet.]
Old English was first written in runes, using the futhorc – a rune set derived from the Germanic 24-character elder futhark, extended by five more runes used to represent Anglo-Saxon vowel sounds, and sometimes by several more additional characters. From around the 9th century, the runic system came to be supplanted by a (minuscule) half-uncial script of the Latin alphabet introduced by Irish Christian missionaries. This was replaced by insular script, a cursive and pointed version of the half-uncial script. This was used until the end of the 12th century when continental Carolingian minuscule (also known as Caroline) replaced the insular.
The Latin alphabet of the time still lacked the letters ⟨j⟩ and ⟨w⟩, and there was no ⟨v⟩ as distinct from ⟨u⟩; moreover, native Old English spellings did not use ⟨k⟩, ⟨q⟩ or ⟨z⟩. The remaining 20 Latin letters were supplemented by four more: ⟨æ⟩ (æsc, modern ash) and ⟨ð⟩ (now called eth or edh), which were modified Latin letters, and thorn ⟨þ⟩ and wynn ⟨ƿ⟩, which are borrowings from the futhorc. A few letter pairs were used as digraphs, representing a single sound. Also used was the Tironian note (a character similar to the digit 7) for the conjunction and, and a thorn with a crossbar through the ascender for the pronoun þæt. Macrons over vowels were originally used not to mark long vowels (as in modern editions), but to indicate stress (C. M. Millward and Mary Hayes, A Biography of the English Language, Cengage Learning, 2011, p. 96), or as abbreviations for a following m or n (Stephen Pollington, First Steps in Old English, Anglo-Saxon Books, 1997, p. 138).
Modern editions of Old English manuscripts generally introduce some additional conventions. The modern forms of Latin letters are used, including ⟨g⟩ in place of the insular G, ⟨s⟩ for long S, and others which may differ considerably from the insular script, notably , and . Macrons are used to indicate long vowels, where usually no distinction was made between long and short vowels in the originals. (In some older editions an acute accent mark was used for consistency with Old Norse conventions.) Additionally, modern editions often distinguish between velar and palatal ⟨c⟩ and ⟨g⟩ by placing dots above the palatals: ⟨ċ⟩, ⟨ġ⟩. The letter wynn ⟨ƿ⟩ is usually replaced with ⟨w⟩, but ⟨æ⟩, eth and thorn are normally retained (except when eth is replaced by thorn).
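A minimal sketch of the editorial substitutions just described, assuming only the wynn-to-w and long-s-to-s replacements (macrons, palatal dots and insular letter forms are not modelled, and the sample words are merely illustrative):

```python
# Sketch of modern editorial normalisation as described above: wynn -> w, long s -> s,
# while ash, eth and thorn are retained. Assumptions: only these two substitutions are
# applied, and the sample strings are illustrative.

SUBSTITUTIONS = {
    "ƿ": "w",  # wynn
    "Ƿ": "W",
    "ſ": "s",  # long s
}

def normalise(text: str) -> str:
    """Apply the character-level substitutions, leaving all other letters unchanged."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)

print(normalise("ƿord"))      # -> "word"
print(normalise("þæt ƿæſ"))   # -> "þæt wæs"  (thorn and ash retained)
```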
In contrast with Modern English orthography, that of Old English was reasonably regular, with a mostly predictable correspondence between letters and phonemes. There were not usually any silent letters – in the word cniht, for example, both the ⟨c⟩ and ⟨h⟩ were pronounced, unlike the ⟨k⟩ and ⟨gh⟩ in the modern knight. The following table lists the Old English letters and digraphs together with the phonemes they represent, using the same notation as in the Phonology section above.
Character IPA transcription Description and notes , Spelling variations like ~ ("land") suggest the short vowel may have had a rounded allophone before in some cases. Used in modern editions to distinguish from short . , Formerly the digraph was used; became more common during the 8th century, and was standard after 800. In 9th-century Kentish manuscripts, a form of that was missing the upper hook of the part was used; it is not clear whether this represented or . See also ę. Used in modern editions to distinguish from short . (an allophone of /f/) Used in this way in early texts (before 800). For example, the word "sheaves" is spelled scēabas in an early text, but later (and more commonly) as scēafas. The pronunciation is sometimes written with a diacritic by modern editors: most commonly , sometimes or . Before a consonant letter the pronunciation is always ; word-finally after it is always . Otherwise, a knowledge of the history of the word is needed to predict the pronunciation. (For details, see .) See also the digraphs cg, sc. (the phonetic realization of geminate ) (occasionally) In the earliest texts it also represented (see þ). , including its allophone Called in Old English; now called eth or edh. Derived from the insular form of with the addition of a cross-bar. See also . , A modern editorial substitution for the modified Kentish form of (see æ). Compare e caudata, ę. Used in modern editions to distinguish from short . , Sometimes stands for , or after , (see palatal diphthongization). Used in modern editions to distinguish from short . Sometimes stands for after , . , Sometimes stands for after , (see palatal diphthongization). Used in modern editions, to distinguish from short . , including its allophone (but see b). , including its allophone ; or , including its allophone , which occurs after . In Old English manuscripts, this letter usually took its insular form (see also: yogh). The and pronunciations are sometimes written in modern editions. Before a consonant letter the pronunciation is always (word-initially) or (after a vowel). Word-finally after it is always . Otherwise a knowledge of the history of the word in question is needed to predict the pronunciation. (For details, see .) , including its allophones In the combinations , , , , the realization may have been a devoiced version of the second consonant. , Used in modern editions to distinguish from short . , , Only occurs sometimes in this sense and appears after , (see palatal diphthongization). Used in modern editions, to distinguish from short . Sometimes stands for after , . , Occurs in dialects that had such diphthongs. Not present in Late West Saxon. The long variant may be shown in modern editions as īo. Rarely used; this sound is normally represented by . Probably velarised (as in Modern English) when in coda position. , including its allophone (before /k/, /g/). , See also a. Used in modern editions, to distinguish from short . , (in dialects having that sound). Used in modern editions, to distinguish from short . A rare spelling of , which was usually written as ( in modern editions). The exact nature of Old English is not known; it may have been an alveolar approximant as in most modern English, an alveolar flap , or an alveolar trill . , including its allophone . or occasionally . Represented in the earliest texts (see þ). , including its allophone Called thorn and derived from the rune of the same name. 
In the earliest texts or was used for this phoneme, but these were later replaced in this function by eth and thorn . Eth was first attested (in definitely dated materials) in the 7th century, and thorn in the 8th. Eth was more common than thorn before Alfred's time. From then onward, thorn was used increasingly often at the start of words, while eth was normal in the middle and at the end of words, although usage varied in both cases. Some modern editions use only thorn. See also Pronunciation of English ⟨th⟩. , . Also sometimes (see ƿ, below). Sometimes used for (see , below). Used for in modern editions, to distinguish from short . A modern substitution for . Called wynn and derived from the rune of the same name. In earlier texts by continental scribes, and also later in the north, /w/ was represented by or . In modern editions, wynn is replaced by , to prevent confusion with . ( according to some authors). , . Used in modern editions to distinguish from short . A rare spelling for ; e.g. betst ("best") is occasionally spelt bezt.
Doubled consonants are geminated; the geminate fricatives ⟨ðð⟩/⟨þþ⟩, ⟨ff⟩ and ⟨ss⟩ cannot be voiced.
Literature
[Image: The first page of the Beowulf manuscript, with its opening: "Listen! We of the Spear-Danes from days of yore have heard of the glory of the folk-kings..."]
Old English literature, though more abundant than literature of the continent before AD 1000, is nonetheless scant. The pagan and Christian streams mingle in Old English, one of the richest and most significant bodies of literature preserved among the early Germanic peoples. In his supplementary article to the 1935 posthumous edition of Bright's Anglo-Saxon Reader, Dr. James Hulbert writes:
Some of the most important surviving works of Old English literature are Beowulf, an epic poem; the Anglo-Saxon Chronicle, a record of early English history; the Franks Casket, an inscribed early whalebone artefact; and Cædmon's Hymn, a Christian religious poem. There are also a number of extant prose works, such as sermons and saints' lives, biblical translations, and translated Latin works of the early Church Fathers, legal documents, such as laws and wills, and practical works on grammar, medicine, and geography. Still, poetry is considered the heart of Old English literature. Nearly all Anglo-Saxon authors are anonymous, with a few exceptions, such as Bede and Cædmon. Cædmon, the earliest English poet we know by name, served as a lay brother in the monastery at Whitby.
Beowulf
The first example is taken from the opening lines of the folk-epic Beowulf, a poem of some 3,000 lines and the single greatest work of Old English. This passage describes how Hrothgar's legendary ancestor Scyld was found as a baby, washed ashore, and adopted by a noble family. The translation is literal and represents the original poetic word order. As such, it is not typical of Old English prose. The modern cognates of original words have been used whenever practical to give a close approximation of the feel of the original poem.
The words in brackets are implied in the Old English by noun case and the bold words in brackets are explanations of words that have slightly different meanings in a modern context. Notice how what is used by the poet where a word like lo or behold would be expected. This usage is similar to what-ho!, both an expression of surprise and a call to attention.
English poetry is based on stress and alliteration. In alliteration, the first consonant in a word alliterates with the same consonant at the beginning of another word, as with . Vowels alliterate with any other vowel, as with and . In the text below, the letters that alliterate are bolded.
Translation (line numbers as in the original): 1 What! We of Gare-Danes (lit. Spear-Danes) in yore-days, of thede (nation/people)-kings, did thrum (glory) frayne (learn about by asking), how those athelings (noblemen) did ellen (fortitude/courage/zeal) freme (promote). Oft did Scyld Scefing of scather threats (troops), 5 of many maegths (clans; cf. Irish cognate Mac-), of mead-settees atee (deprive), [and] ugg (induce loathing in, terrify; related to "ugly") earls. Sith (since, as of when) erst (first) [he] worthed (became) [in] fewship (destitute) found, he of this frover (comfort) abode, [and] waxed under welkin (firmament/clouds), [and amid] worthmint (honour/worship) threed (throve/prospered) oth that (until that) him each of those umsitters (those "sitting" or dwelling roundabout) 10 over whale-road (kenning for "sea") hear should, [and] yeme (heed/obedience; related to "gormless") yield. That was [a] good king!
A semi-fluent translation in Modern English would be:
Lo! We have heard of majesty of the Spear-Danes, of those nation-kings in the days of yore, and how those noblemen promoted zeal. Scyld Scefing took away mead-benches from bands of enemies, from many tribes; he terrified earls. Since he was first found destitute (he gained consolation for that) he grew under the heavens, prospered in honours, until each of those who lived around him over the sea had to obey him, give him tribute. That was a good king!
The Lord's Prayer
[Audio: A recording of how the Lord's Prayer probably sounded in Old English, pronounced slowly.]
This text of the Lord's Prayer is presented in the standardised West Saxon literary dialect, with added macrons for vowel length, markings for probable palatalised consonants, modern punctuation, and the replacement of the letter wynn with w.
Translation (line numbers in brackets): [1] Father of ours, thou who art in heavens, [2] Be thy name hallowed. [3] Come thy riche (kingdom), [4] Worth (manifest) thy will, on earth as also in heaven. [5] Our daily loaf do sell (give) to us today, [6] And forgive us our guilts as also we forgive our guilters (lit. a participle: "guilting" or "[a person who is] sinning"; cf. Latin cognate -ant/-ent). [7] And do not lead thou us into temptation, but alese (release/deliver) us of (from) evil. [8] Soothly (Truly).
Charter of Cnut
This is a proclamation from King Cnut the Great to his earl Thorkell the Tall and the English people written in AD 1020. Unlike the previous two examples, this text is prose rather than poetry. For ease of reading, the passage has been divided into sentences while the pilcrows represent the original division.
Translation: ¶ Cnut, king, greets his archbishops and his lede (people's)-bishops and Thorkell, earl, and all his earls and all his peopleship, greater (having a 1200 shilling weregild) and lesser (200 shilling weregild), hooded (ordained to priesthood) and lewd (lay), in England friendly. And I kithe (make known/couth to) you, that I will be [a] hold (civilised) lord and unswiking (uncheating) to God's rights (laws) and to [the] rights (laws) worldly. ¶ I nam (took) me to mind the writs and the word that the Archbishop Lyfing me from the Pope brought of Rome, that I should ayewhere (everywhere) God's love (praise) uprear (promote), and unright (outlaw) lies, and full frith (peace) work (bring about) by the might that me God would (wished) [to] sell (give). ¶ Now, ne went (withdrew/changed) I not my shot (financial contribution, cf. Norse cognate in scot-free) the while that you stood (endured) unfrith (turmoil) on-hand: now I, mid (with) God's support, that [unfrith] totwemed (separated/dispelled) mid (with) my shot (financial contribution). Tho (then) [a] man kithed (made known/couth to) me that us more harm had found (come upon) than us well liked (equalled): and tho (then) fore (travelled) I, meself, mid (with) those men that mid (with) me fore (travelled), into Denmark that [to] you most harm came of (from): and that [harm] have [I], mid (with) God's support, afore (previously) forefangen (forestalled) that to you never henceforth thence none unfrith (breach of peace) ne come the while that ye me rightly hold (behold as king) and my life beeth.
Revivals
Like other historical languages, Old English has been used by scholars and enthusiasts of later periods to create texts either imitating Anglo-Saxon literature or deliberately transferring it to a different cultural context. Examples include Alistair Campbell and J. R. R. Tolkien. A number of websites devoted to Modern Paganism and historical reenactment offer reference material and forums promoting the active use of Old English. There is also an Old English version of Wikipedia. However, one investigation found that many Neo-Old English texts published online bear little resemblance to the historical language and have many basic grammatical mistakes (Christina Neuland and Florian Schleburg, "A New Old English? The Chances of an Anglo-Saxon Revival on the Internet", in S. Buschfeld et al. (eds.), The Evolution of Englishes: The Dynamic Model and Beyond, Amsterdam: John Benjamins, 2014, pp. 486–504).
See also
Exeter Book
Go (verb)
History of the Scots language
I-mutation
Ingvaeonic nasal spirant law (Anglo-Frisian nasal spirant law)
List of generic forms in place names in the United Kingdom and Ireland
List of Germanic and Latinate equivalents in English
Notes
Bibliography
Sources
General
Baugh, Albert C; & Cable, Thomas. (1993). A History of the English Language (4th ed.). London: Routledge.
Blake, Norman (1992). The Cambridge History of the English Language: Vol. 2. Cambridge: Cambridge University Press.
Campbell, A. (1959). Old English Grammar. Oxford: Clarendon Press.
(Reissue of one of 4 eds. 1877–1902)
Euler, Wolfram (2013). Das Westgermanische [rest of title missing] (West Germanic: from its Emergence in the 3rd up until its Dissolution in the 7th Century CE: Analyses and Reconstruction). 244 p., in German with English summary, London/Berlin 2013, ISBN 978-3-9812110-7-8.
Hogg, Richard M. (ed.). (1992). The Cambridge History of the English Language: (Vol 1): the Beginnings to 1066. Cambridge: Cambridge University Press.
Hogg, Richard; & Denison, David (eds.) (2006) A History of the English Language. Cambridge: Cambridge University Press.
Jespersen, Otto (1909–1949) A Modern English Grammar on Historical Principles. 7 vols. Heidelberg: C. Winter & Copenhagen: Ejnar Munksgaard
Lass, Roger (1987) The Shape of English: structure and history. London: J. M. Dent & Sons
Quirk, Randolph; & Wrenn, CL (1957). An Old English Grammar (2nd ed.) London: Methuen.
Ringe, Donald R and Taylor, Ann (2014). The Development of Old English - A Linguistic History of English, vol. II, 632p. ISBN 978-0199207848. Oxford.
Strang, Barbara M. H. (1970) A History of English. London: Methuen.
External history
Bremmer Jr, Rolf H. (2009). An Introduction to Old Frisian. History, Grammar, Reader, Glossary. Amsterdam and Philadelphia: John Benjamins.
Stenton, FM (1971). Anglo-Saxon England (3rd ed.). Oxford: Clarendon Press.
Orthography/Palaeography
Bourcier, Georges. (1978). L'orthographie de l'anglais: Histoire et situation actuelle. Paris: Presses Universitaires de France.
Elliott, Ralph WV (1959). Runes: An introduction. Manchester: Manchester University Press.
Keller, Wolfgang. (1906). Angelsächsische Paleographie, I: Einleitung. Berlin: Mayer & Müller.
Ker, NR (1957). A Catalogue of Manuscripts Containing Anglo-Saxon. Oxford: Clarendon Press.
Ker, NR (1957: 1990). A Catalogue of Manuscripts Containing Anglo-Saxon; with supplement prepared by Neil Ker originally published in Anglo-Saxon England; 5, 1957. Oxford: Clarendon Press ISBN 0-19-811251-3
Page, RI (1973). An Introduction to English Runes. London: Methuen.
Scragg, Donald G (1974). A History of English Spelling. Manchester: Manchester University Press.
Phonology
Anderson, John M; & Jones, Charles. (1977). Phonological structure and the history of English. North-Holland linguistics series (No. 33). Amsterdam: North-Holland.
Brunner, Karl. (1965). Altenglische Grammatik (nach der angelsächsischen Grammatik von Eduard Sievers neubearbeitet) (3rd ed.). Tübingen: Max Niemeyer.
Campbell, A. (1959). Old English Grammar. Oxford: Clarendon Press.
Cercignani, Fausto (1983). "The Development of */k/ and */sk/ in Old English". Journal of English and Germanic Philology, 82 (3): 313–323.
Girvan, Ritchie. (1931). Angelsaksisch Handboek; E. L. Deuschle (transl.). (Oudgermaansche Handboeken; No. 4). Haarlem: Tjeenk Willink.
Halle, Morris; & Keyser, Samuel J. (1971). English Stress: its form, its growth, and its role in verse. New York: Harper & Row.
Hogg, Richard M. (1992). A Grammar of Old English, I: Phonology. Oxford: Blackwell.
Kuhn, Sherman M. (1970). "On the consonantal phonemes of Old English". In: J. L. Rosier (ed.) Philological Essays: studies in Old and Middle English language and literature in honour of Herbert Dean Merritt (pp. 16–49). The Hague: Mouton.
Lass, Roger; & Anderson, John M. (1975). Old English Phonology. (Cambridge studies in linguistics; No. 14). Cambridge: Cambridge University Press.
Luick, Karl. (1914–1940). Historische Grammatik der englischen Sprache. Stuttgart: Bernhard Tauchnitz.
Moulton, WG (1972). "The Proto-Germanic non-syllabics (consonants)". In: F van Coetsem & HL Kufner (Eds.), Toward a Grammar of Proto-Germanic (pp. 141–173). Tübingen: Max Niemeyer.
Sievers, Eduard (1893). Altgermanische Metrik. Halle: Max Niemeyer.
Wagner, Karl Heinz (1969). Generative Grammatical Studies in the Old English language. Heidelberg: Julius Groos.
Morphology
Brunner, Karl. (1965). Altenglische Grammatik (nach der angelsächsischen Grammatik von Eduard Sievers neubearbeitet) (3rd ed.). Tübingen: Max Niemeyer.
Campbell, A. (1959). Old English grammar. Oxford: Clarendon Press.
Wagner, Karl Heinz. (1969). Generative grammatical studies in the Old English language. Heidelberg: Julius Groos.
Syntax
Brunner, Karl. (1962). Die englische Sprache: ihre geschichtliche Entwicklung (Vol. II). Tübingen: Max Niemeyer.
Kemenade, Ans van. (1982). Syntactic Case and Morphological Case in the History of English. Dordrecht: Foris.
MacLaughlin, John C. (1983). Old English Syntax: a handbook. Tübingen: Max Niemeyer.
Mitchell, Bruce. (1985). Old English Syntax (Vols. 1–2). Oxford: Clarendon Press (no more published)
Vol.1: Concord, the parts of speech and the sentence
Vol.2: Subordination, independent elements, and element order
Mitchell, Bruce. (1990) A Critical Bibliography of Old English Syntax to the end of 1984, including addenda and corrigenda to "Old English Syntax" . Oxford: Blackwell
Timofeeva, Olga. (2010) Non-finite Constructions in Old English, with Special Reference to Syntactic Borrowing from Latin, PhD dissertation, Mémoires de la Société Néophilologique de Helsinki, vol. LXXX, Helsinki: Société Néophilologique.
Traugott, Elizabeth Closs. (1972). A History of English Syntax: a transformational approach to the history of English sentence structure. New York: Holt, Rinehart & Winston.
Visser, F. Th. (1963–1973). An Historical Syntax of the English Language (Vols. 1–3). Leiden: E. J. Brill.
Lexicons
Bosworth, J; & Toller, T. Northcote. (1898). An Anglo-Saxon Dictionary. Oxford: Clarendon Press. (Based on Bosworth's 1838 dictionary, his papers & additions by Toller)
Toller, T. Northcote. (1921). An Anglo-Saxon Dictionary: Supplement. Oxford: Clarendon Press.
Campbell, A. (1972). An Anglo-Saxon Dictionary: Enlarged addenda and corrigenda. Oxford: Clarendon Press.
Clark Hall, J. R; & Merritt, H. D. (1969). A Concise Anglo-Saxon Dictionary (4th ed.). Cambridge: Cambridge University Press.
Cameron, Angus, et al. (ed.) (1983) Dictionary of Old English. Toronto: Published for the Dictionary of Old English Project, Centre for Medieval Studies, University of Toronto by the Pontifical Institute of Medieval Studies, 1983/1994. (Issued on microfiche and subsequently as a CD-ROM and on the World Wide Web.)
External links
Old English Lessons (free online through the Linguistics Research Center at UT Austin)
Old English/Modern English Translator
The Electronic Introduction to Old English
Learn Old English with Leofwin
Old English (Anglo-Saxon) alphabet
Bosworth and Toller, An Anglo-Saxon dictionary
Downloadable Bosworth and Toller, An Anglo-Saxon dictionary Application
Old English Made Easy
Old English – Modern English dictionary
Old English Glossary
Old English Letters
Shakespeare's English vs Old English
Downloadable Old English keyboard for Windows and Mac
Another downloadable keyboard for Windows computers
Guide to using Old English computer characters (Unicode, HTML entities, etc.)
The Germanic Lexicon Project
An overview of the grammar of Old English
The Lord's Prayer in Old English from the 11th century (video link)
Dictionary of Old English
| 22,667 | 2017-01
Antenna (radio) | In radio and electronics, an antenna (plural antennae or antennas), or aerial, is an electrical device which converts electric power into radio waves, and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter supplies an electric current oscillating at radio frequency (i.e. a high frequency alternating current (AC)) to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of an electromagnetic wave in order to produce a tiny voltage at its terminals, that is applied to a receiver to be amplified.
Antennas are essential components of all equipment that uses radio. They are used in systems such as radio broadcasting, broadcast television, two-way radio, communications receivers, radar, cell phones, and satellite communications, as well as other devices such as garage door openers, wireless microphones, Bluetooth-enabled devices, wireless computer networks, baby monitors, and RFID tags on merchandise.
Typically an antenna consists of an arrangement of metallic conductors (elements), electrically connected (often through a transmission line) to the receiver or transmitter. An oscillating current of electrons forced through the antenna by a transmitter will create an oscillating magnetic field around the antenna elements, while the charge of the electrons also creates an oscillating electric field along the elements. These time-varying fields radiate away from the antenna into space as a moving transverse electromagnetic field wave. Conversely, during reception, the oscillating electric and magnetic fields of an incoming radio wave exert force on the electrons in the antenna elements, causing them to move back and forth, creating oscillating currents in the antenna.
Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional or high gain antennas). In the latter case, an antenna may also include additional elements or surfaces with no electrical connection to the transmitter or receiver, such as parasitic elements, parabolic reflectors or horns, which serve to direct the radio waves into a beam or other desired radiation pattern.
The first antennas were built in 1888 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. He published his work in Annalen der Physik und Chemie (vol. 36, 1889).
[Animation: A half-wave dipole antenna transmitting radio waves, showing the electric field lines. The antenna in the center is two vertical metal rods, with an alternating current applied at its center from a radio transmitter (not shown). The voltage charges the two sides of the antenna alternately positive (+) and negative (−). Loops of electric field (black lines) leave the antenna and travel away at the speed of light; these are the radio waves.]
[Animation: A half-wave dipole antenna receiving energy from a radio wave. The antenna consists of two metal rods connected to a receiver R. The electric field (E, green arrows) of the incoming wave pushes the electrons in the rods back and forth, charging the ends alternately positive (+) and negative (−). Since the length of the antenna is one half the wavelength of the wave, the oscillating field induces standing waves of voltage (V, represented by the red band) and current in the rods. The oscillating currents (black arrows) flow down the transmission line and through the receiver (represented by the resistance R).]
Terminology
[Image: Electronic symbol for an antenna.]
The words antenna and aerial are used interchangeably. In US English the plural of antenna is "antennas", a usage established since about 1950 (or earlier), when the physicist and electrical engineer John D. Kraus of The Ohio State University published the cornerstone textbook Antennas and noted the form in a footnote on its first page; insects may have "antennae", but that form is not used in the context of electronics or physics. Both "antennas" and "antennae" are found in international English (see, for example, http://www.telegraph.co.uk/science/science-news/7810454/British-scientists-launch-major-radio-telescope.html and http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf09377.html). Occasionally the term "aerial" is used to mean specifically a wire antenna. Note, however, the title of the major international technical journal, the IEEE Transactions on Antennas and Propagation.
In the United Kingdom and other areas where British English is used, the term aerial is sometimes used although 'antenna' has been universal in professional use for many years.
The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials". Marconi discovered that by raising the "aerial" wire above the ground and connecting the other side of his transmitter to ground, the transmission range was increased (Marconi, "Wireless Telegraphic Communication", Nobel Lecture, 11 December 1909, in Nobel Lectures: Physics 1901–1921, Amsterdam: Elsevier Publishing Company, 1967, pp. 196–222, at p. 206). Soon he was able to transmit signals over a hill, a distance of approximately . In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then wireless radiating transmitting and receiving elements were known simply as aerials or terminals. Because of his prominence, Marconi's use of the word antenna spread among wireless researchers, and later to the general public.
In common usage, the word antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc. in addition to the actual functional components. Especially at microwave frequencies, a receiving antenna may include not only the actual electrical antenna but an integrated preamplifier or mixer.
An antenna, in converting radio waves to electrical signals or vice versa, is a form of transducer.
Overview
[Image: Antennas of the Atacama Large Millimeter/submillimeter Array.]
Antennas are required by any radio receiver or transmitter to couple its electrical connection to the electromagnetic field. Radio waves are electromagnetic waves which carry signals through the air (or through space) at the speed of light with almost no transmission loss. Radio transmitters and receivers are used to convey signals (information) in systems including broadcast (audio) radio, television, mobile telephones, Wi-Fi (WLAN) data networks, trunk lines and point-to-point communications links (telephone, data networks), satellite links, many remote controlled devices such as garage door openers, and wireless remote sensors, among many others. Radio waves are also used directly for measurements in technologies including radar, GPS, and radio astronomy. In each and every case, the transmitters and receivers involved require antennas, although these are sometimes hidden (such as the antenna inside an AM radio or inside a laptop computer equipped with Wi-Fi).
[Image: Whip antenna on a car, a common example of an omnidirectional antenna.]
Depending on their application and the technology available, antennas generally fall into one of two categories:
Omnidirectional or only weakly directional antennas which receive or radiate more or less in all directions. These are employed when the relative position of the other station is unknown or arbitrary. They are also used at lower frequencies where a directional antenna would be too large, or simply to cut costs in applications where a directional antenna isn't required.
Directional or beam antennas which are intended to preferentially radiate or receive in a particular direction or directional pattern.
In common usage "omnidirectional" usually refers to all horizontal directions, typically with reduced performance in the direction of the sky or the ground (a truly isotropic radiator is not even possible). A "directional" antenna usually is intended to maximize its coupling to the electromagnetic field in the direction of the other station, or sometimes to cover a particular sector such as a 120° horizontal fan pattern in the case of a panel antenna at a cell site.
One example of omnidirectional antennas is the very common vertical antenna or whip antenna consisting of a metal rod (often, but not always, a quarter of a wavelength long). A dipole antenna is similar but consists of two such conductors extending in opposite directions, with a total length that is often, but not always, a half of a wavelength long. Dipoles are typically oriented horizontally in which case they are weakly directional: signals are reasonably well radiated toward or received from all directions with the exception of the direction along the conductor itself; this region is called the antenna blind cone or null.
[Image: Half-wave dipole antenna.]
Both the vertical and dipole antennas are simple in construction and relatively inexpensive. The dipole antenna, which is the basis for most antenna designs, is a balanced component, with equal but opposite voltages and currents applied at its two terminals through a balanced transmission line (or to a coaxial transmission line through a so-called balun). The vertical antenna, on the other hand, is a monopole antenna. It is typically connected to the inner conductor of a coaxial transmission line (or a matching network); the shield of the transmission line is connected to ground. In this way, the ground (or any large conductive surface) plays the role of the second conductor of a dipole, thereby forming a complete circuit. Since monopole antennas rely on a conductive ground, a so-called grounding structure may be employed to provide a better ground contact to the earth or which itself acts as a ground plane to perform that function regardless of (or in absence of) an actual contact with the earth.
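As an illustration of the half-wave and quarter-wave dimensions mentioned above, the sketch below computes approximate element lengths; the 100 MHz example frequency and the roughly 5% end-effect shortening factor are assumptions for the example, not figures from the text.

```python
# Sketch: approximate lengths of a half-wave dipole and a quarter-wave monopole (whip).
# Assumptions: free-space wavelength lambda = c / f, and a ~5% shortening for end
# effects (a common rule of thumb, not a value stated in the text).
C = 299_792_458.0  # speed of light in m/s

def element_lengths(freq_hz: float, shortening: float = 0.95) -> tuple[float, float]:
    wavelength_m = C / freq_hz
    half_wave_dipole_m = 0.5 * wavelength_m * shortening
    quarter_wave_monopole_m = 0.25 * wavelength_m * shortening
    return half_wave_dipole_m, quarter_wave_monopole_m

dipole_m, monopole_m = element_lengths(100e6)  # 100 MHz, e.g. the FM broadcast band
print(f"half-wave dipole  ~ {dipole_m:.2f} m")
print(f"quarter-wave whip ~ {monopole_m:.2f} m")
```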
[Image: Diagram of the electric fields (blue) and magnetic fields (red) radiated by a dipole antenna (black rods) during transmission.]
Antennas more complex than the dipole or vertical designs are usually intended to increase the directivity and consequently the gain of the antenna. This can be accomplished in many different ways, leading to a plethora of antenna designs. The vast majority of designs are fed with a balanced line (unlike a monopole antenna) and are based on the dipole antenna with additional components (or elements) which increase its directionality. Antenna "gain" in this instance describes the concentration of radiated power into a particular solid angle of space, as opposed to the spherically uniform radiation of the ideal radiator. The increased power in the desired direction is at the expense of that in the undesired directions. Power is conserved, and there is no net power increase over that delivered from the power source (the transmitter).
For instance, a phased array consists of two or more simple antennas which are connected together through an electrical network. This often involves a number of parallel dipole antennas with a certain spacing. Depending on the relative phase introduced by the network, the same combination of dipole antennas can operate as a "broadside array" (directional normal to a line connecting the elements) or as an "end-fire array" (directional along the line connecting the elements). Antenna arrays may employ any basic (omnidirectional or weakly directional) antenna type, such as dipole, loop or slot antennas. These elements are often identical.
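The dependence on relative phase can be made concrete with a small sketch. The progressive-phase relation Δφ = 2π·d·sin(θ)/λ for a uniform linear array is a standard textbook result assumed here, and the half-wavelength spacing and steering angles are arbitrary example values.

```python
# Sketch: element-to-element phase shift that steers a uniform linear array.
# Assumption: the standard relation delta_phi = 2*pi*d*sin(theta)/lambda, with the
# spacing expressed in wavelengths and angles measured from broadside.
import math

def steering_phase_deg(spacing_wavelengths: float, steer_angle_deg: float) -> float:
    return 360.0 * spacing_wavelengths * math.sin(math.radians(steer_angle_deg))

for angle_deg in (0, 30, 90):
    phase = steering_phase_deg(0.5, angle_deg)  # elements half a wavelength apart
    print(f"steer {angle_deg:>2} deg from broadside -> {phase:6.1f} deg between elements")
# 0 deg (all elements fed in phase) gives a broadside array; 90 deg (adjacent elements
# 180 deg out of phase at half-wave spacing) gives an end-fire array, as described above.
```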
In contrast, a log-periodic dipole array consists of a number of dipole elements of different lengths in order to obtain a somewhat directional antenna having an extremely wide bandwidth: these are frequently used for television reception in fringe areas. The dipole antennas composing it are all considered "active elements" since they are all electrically connected together (and to the transmission line). On the other hand, a superficially similar dipole array, the Yagi–Uda antenna (or simply "Yagi"), has only one dipole element with an electrical connection; the other so-called parasitic elements interact with the electromagnetic field in order to realize a fairly directional antenna, but one which is limited to a rather narrow bandwidth. The Yagi antenna has similar-looking parasitic dipole elements, but these act differently due to their somewhat different lengths. There may be a number of so-called "directors" in front of the active element in the direction of propagation, and usually a single (but possibly more) "reflector" on the opposite side of the active element.
Greater directionality can be obtained using beam-forming techniques such as a parabolic reflector or a horn. Since high directivity in an antenna depends on it being large compared to the wavelength, narrow beams of this type are more easily achieved at UHF and microwave frequencies.
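A rough sketch of why narrow beams need apertures that are large compared with the wavelength; the aperture-gain approximation G ≈ η(πD/λ)² and the 60% efficiency are standard textbook assumptions, not values from the text, and the dish size and frequencies are arbitrary examples.

```python
# Sketch: gain of a parabolic dish versus frequency. Assumptions: the textbook
# aperture approximation G = efficiency * (pi * D / lambda)**2 with 60% efficiency;
# the 1 m dish diameter and the example frequencies are arbitrary.
import math

C = 299_792_458.0  # speed of light in m/s

def dish_gain_dbi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    wavelength_m = C / freq_hz
    gain_linear = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10.0 * math.log10(gain_linear)

for freq in (100e6, 1e9, 10e9):
    print(f"1 m dish at {freq / 1e9:5.2f} GHz -> about {dish_gain_dbi(1.0, freq):5.1f} dBi")
# The same reflector that is nearly useless at 100 MHz becomes highly directive at
# microwave frequencies, where it is many wavelengths across.
```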
At low frequencies (such as AM broadcast), arrays of vertical towers are used to achieve directionality (Carl Smith, Standard Broadcast Antenna Systems, Cleveland, Ohio: Smith Electronics, 1969, p. 2-1212), and they occupy large areas of land. For reception, a long Beverage antenna can have significant directivity. For non-directional portable use, a short vertical antenna or small loop antenna works well, with the main design challenge being that of impedance matching. With a vertical antenna, a loading coil at the base of the antenna may be employed to cancel the reactive component of impedance; small loop antennas are tuned with parallel capacitors for this purpose.
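A small sketch of the tuning just described: cancelling reactance by resonating the antenna with a capacitor (or coil). The resonance relation C = 1/((2πf)²·L) is the standard formula, and the 1 MHz frequency and 200 µH loop inductance are assumed example values.

```python
# Sketch: capacitance needed to resonate a small loop antenna of a given inductance.
# Assumptions: the standard resonance relation C = 1 / ((2*pi*f)**2 * L); the 1 MHz
# frequency and 200 microhenry inductance are arbitrary example values.
import math

def resonating_capacitance_f(freq_hz: float, inductance_h: float) -> float:
    return 1.0 / ((2.0 * math.pi * freq_hz) ** 2 * inductance_h)

cap_f = resonating_capacitance_f(1e6, 200e-6)
print(f"required parallel capacitance ~ {cap_f * 1e12:.0f} pF")  # about 127 pF
```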
An antenna lead-in is the transmission line (or feed line) which connects the antenna to a transmitter or receiver. The antenna feed may refer to all components connecting the antenna to the transmitter or receiver, such as an impedance matching network in addition to the transmission line. In a so-called aperture antenna, such as a horn or parabolic dish, the "feed" may also refer to a basic antenna inside the entire system (normally at the focus of the parabolic dish or at the throat of a horn) which could be considered the one active element in that antenna system. A microwave antenna may also be fed directly from a waveguide in place of a (conductive) transmission line.
[Image: Cell phone base station antennas.]
An antenna counterpoise or ground plane is a structure of conductive material which improves or substitutes for the ground. It may be connected to or insulated from the natural ground. In a monopole antenna, this aids in the function of the natural ground, particularly where variations (or limitations) of the characteristics of the natural ground interfere with its proper function. Such a structure is normally connected to the return connection of an unbalanced transmission line such as the shield of a coaxial cable.
An electromagnetic wave refractor in some aperture antennas is a component which due to its shape and position functions to selectively delay or advance portions of the electromagnetic wavefront passing through it. The refractor alters the spatial characteristics of the wave on one side relative to the other side. It can, for instance, bring the wave to a focus or alter the wave front in other ways, generally in order to maximize the directivity of the antenna system. This is the radio equivalent of an optical lens.
An antenna coupling network is a passive network (generally a combination of inductive and capacitive circuit elements) used for impedance matching in between the antenna and the transmitter or receiver. This may be used to improve the standing wave ratio in order to minimize losses in the transmission line and to present the transmitter or receiver with a standard resistive impedance that it expects to see for optimum operation.
Reciprocity
It is a fundamental property of antennas that the electrical characteristics of an antenna described in the next section, such as gain, radiation pattern, impedance, bandwidth, resonant frequency and polarization, are the same whether the antenna is transmitting or receiving. For example, the "receiving pattern" (sensitivity as a function of direction) of an antenna when used for reception is identical to the radiation pattern of the antenna when it is driven and functions as a radiator. This is a consequence of the reciprocity theorem of electromagnetics. Therefore, in discussions of antenna properties no distinction is usually made between receiving and transmitting terminology, and the antenna can be viewed as either transmitting or receiving, whichever is more convenient.
A necessary condition for the aforementioned reciprocity property is that the materials in the antenna and transmission medium are linear and reciprocal. Reciprocal (or bilateral) means that the material has the same response to an electric current or magnetic field in one direction, as it has to the field or current in the opposite direction. Most materials used in antennas meet these conditions, but some microwave antennas use high-tech components such as isolators and circulators, made of nonreciprocal materials such as ferrite. These can be used to give the antenna a different behavior on receiving than it has on transmitting, which can be useful in applications like radar.
Characteristics
Antennas are characterized by a number of performance measures which a user would be concerned with in selecting or designing an antenna for a particular application. Chief among these relate to the directional characteristics (as depicted in the antenna's radiation pattern) and the resulting gain. Even in omnidirectional (or weakly directional) antennas, the gain can often be increased by concentrating more of its power in the horizontal directions, sacrificing power radiated toward the sky and ground. The antenna's power gain (or simply "gain") also takes into account the antenna's efficiency, and is often the primary figure of merit.
Resonant antennas are expected to be used around a particular resonant frequency; an antenna must therefore be built or ordered to match the frequency range of the intended application. A particular antenna design will present a particular feedpoint impedance. While this may affect the choice of an antenna, an antenna's impedance can also be adapted to the desired impedance level of a system using a matching network while maintaining the other characteristics (except for a possible loss of efficiency).
Although these parameters can be measured in principle, such measurements are difficult and require very specialized equipment. Beyond tuning a transmitting antenna using an SWR meter, the typical user will depend on theoretical predictions based on the antenna design or on claims of a vendor.
An antenna transmits and receives radio waves with a particular polarization which can be reoriented by tilting the axis of the antenna in many (but not all) cases. The physical size of an antenna is often a practical issue, particularly at lower frequencies (longer wavelengths). Highly directional antennas need to be significantly larger than the wavelength. Resonant antennas usually use a linear conductor (or element), or pair of such elements, each of which is about a quarter of the wavelength in length (an odd multiple of quarter wavelengths will also be resonant). Antennas that are required to be small compared to the wavelength sacrifice efficiency and cannot be very directional. Fortunately at higher frequencies (UHF, microwaves) trading off performance to obtain a smaller physical size is usually not required.
Resonant antennas
thumb|upright=1.5|Standing waves on a half wave dipole driven at its resonant frequency. The waves are shown graphically by bars of color (red for voltage, V and blue for current, I) whose width is proportional to the amplitude of the quantity at that point on the antenna.
The majority of antenna designs are based on the resonance principle. This relies on the behaviour of moving electrons, which reflect off surfaces where the dielectric constant changes, in a fashion similar to the way light reflects when optical properties change. In these designs, the reflective surface is created by the end of a conductor, normally a thin metal wire or rod, which in the simplest case has a feed point at one end where it is connected to a transmission line. The conductor, or element, is aligned with the electrical field of the desired signal, normally meaning it is perpendicular to the line from the antenna to the source (or receiver in the case of a broadcast antenna).
The radio signal's electrical component induces a voltage in the conductor. This causes an electrical current to begin flowing in the direction of the signal's instantaneous field. When the resulting current reaches the end of the conductor, it reflects, which is equivalent to a 180 degree change in phase. If the conductor is ¼ of a wavelength long, current from the feed point will undergo a 90 degree phase change by the time it reaches the end of the conductor, reflect through 180 degrees, and then another 90 degrees as it travels back. That means it has undergone a total 360 degree phase change, returning it to the original signal. The current in the element thus adds to the current being created from the source at that instant. This process creates a standing wave in the conductor, with the maximum current at the feed.
The ordinary half-wave dipole is probably the most widely used antenna design. This consists of two ¼-wavelength elements arranged end-to-end, and lying along essentially the same axis (or collinear), each feeding one side of a two-conductor transmission wire. The physical arrangement of the two elements places them 180 degrees out of phase, which means that at any given instant one of the elements is driving current into the transmission line while the other is pulling it out. The monopole antenna is essentially one half of the half-wave dipole, a single ¼-wavelength element with the other side connected to ground or an equivalent ground plane (or counterpoise). Monopoles, which are one-half the size of a dipole, are common for long-wavelength radio signals where a dipole would be impractically large. Another common design is the folded dipole, which is essentially two dipoles placed side-by-side and connected at their ends to make a single one-wavelength antenna.
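As a minimal sketch of the sizes involved, the following Python snippet applies the common rule of thumb for a thin-wire half-wave dipole: a physical length about 5% shorter than a free-space half wavelength to allow for end effects. The exact shortening factor depends on the element diameter and is assumed here.

```python
C = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_length_m(freq_hz, shortening=0.95):
    """Approximate physical length of a thin-wire half-wave dipole.  The ~5%
    shortening factor accounts for end effects; the exact value depends on
    the element's length-to-diameter ratio."""
    wavelength = C / freq_hz
    return shortening * wavelength / 2

def quarter_wave_monopole_length_m(freq_hz, shortening=0.95):
    """A monopole over a ground plane is one-half the size of the dipole."""
    return half_wave_dipole_length_m(freq_hz, shortening) / 2

print(round(half_wave_dipole_length_m(100e6), 2), "m at 100 MHz")     # ~1.42 m
print(round(quarter_wave_monopole_length_m(27e6), 2), "m at 27 MHz")  # ~2.6 m
```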
The standing wave forms with this desired pattern at the design frequency, f0, and antennas are normally designed to be this size. However, feeding that element with 3f0 (whose wavelength is one third that of f0) will also lead to a standing wave pattern. Thus, an antenna element is also resonant when its length is ¾ of a wavelength. This is true for all odd multiples of ¼ wavelength. This allows some flexibility of design in terms of antenna lengths and feed points. Antennas used in such a fashion are said to be harmonically operated.
Current and voltage distribution
The quarter-wave elements imitate a series-resonant electrical element due to the standing wave present along the conductor. At the resonant frequency, the standing wave has a current peak and voltage node (minimum) at the feed. In electrical terms, this means the element has minimum reactance, generating the maximum current for minimum voltage. This is the ideal situation, because it produces the maximum output for the minimum input, producing the highest possible efficiency. Contrary to an ideal (lossless) series-resonant circuit, a finite resistance remains (corresponding to the relatively small voltage at the feed-point) due to the antenna's radiation resistance as well as any actual electrical losses.
Recall that a current will reflect when there are changes in the electrical properties of the material. In order to efficiently send the signal into the transmission line, it is important that the transmission line has the same impedance as the elements, otherwise some of the signal will be reflected back into the antenna. This leads to the concept of impedance matching, the design of the overall system of antenna and transmission line so the impedance is as close as possible, thereby reducing these losses. Impedance matching between antennas and transmission lines is commonly handled through the use of a balun, although other solutions are also used in certain roles. An important measure of this basic concept is the standing wave ratio, which measures the magnitude of the reflected signal.
Consider a half-wave dipole designed to work with signals 1 m wavelength, meaning the antenna would be approximately 50 cm across. If the element has a length-to-diameter ratio of 1000, it will have an inherent resistance of about 63 ohms. Using the appropriate transmission wire or balun, we match that resistance to ensure minimum signal loss. Feeding that antenna with a current of 1 ampere will require 63 volts of RF, and the antenna will radiate 63 watts (ignoring losses) of radio frequency power. Now consider the case when the antenna is fed a signal with a wavelength of 1.25 m; in this case the reflected current would arrive at the feed out-of-phase with the signal, causing the net current to drop while the voltage remains the same. Electrically this appears to be a very high impedance. The antenna and transmission line no longer have the same impedance, and the signal will be reflected back into the antenna, reducing output. This could be addressed by changing the matching system between the antenna and transmission line, but that solution only works well at the new design frequency.
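The arithmetic of the matched case above can be sketched as follows (a trivial Ohm's-law calculation, assuming a purely resistive feedpoint as in the example):

```python
def feedpoint_drive(current_a, resistance_ohm):
    """Voltage and power at the feed of a resonant antenna presenting a
    purely resistive feedpoint impedance (reactance cancelled)."""
    voltage = current_a * resistance_ohm      # Ohm's law, V = I * R
    power = current_a ** 2 * resistance_ohm   # P = I^2 * R
    return voltage, power

v, p = feedpoint_drive(1.0, 63.0)
print(f"{v:.0f} V drives {p:.0f} W into the antenna")  # 63 V, 63 W
```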
The end result is that the resonant antenna will efficiently feed a signal into the transmission line only when the source signal's frequency is close to that of the design frequency of the antenna, or one of the resonant multiples. This makes resonant antenna designs inherently narrowband, and they are most commonly used with a single target signal. They are particularly common on radar systems, where the same antenna is used for both broadcast and reception, or for radio and television broadcasts, where the antenna is working with a single frequency. They are less commonly used for reception where multiple channels are present, in which case additional modifications are used to increase the bandwidth, or entirely different antenna designs are used.
Electrically short antennas
It is possible to use simple impedance matching concepts to allow the use of monopole or dipole antennas substantially shorter than the ¼ or ½ wavelength, respectively, at which they are resonant. As these antennas are made shorter (for a given frequency) their impedance becomes dominated by a series capacitive (negative) reactance; by adding a series inductance with the opposite (positive) reactance – a so-called loading coil – the antenna's reactance may be cancelled leaving only a pure resistance. Sometimes the resulting (lower) electrical resonant frequency of such a system (antenna plus matching network) is described using the concept of electrical length, so an antenna used at a lower frequency than its resonant frequency is called an electrically short antenna.
For example, at 30 MHz (10 m wavelength) a true resonant ¼ wavelength monopole would be almost 2.5 meters long, and using an antenna only 1.5 meters tall would require the addition of a loading coil. Then it may be said that the coil has lengthened the antenna to achieve an electrical length of 2.5 meters. However, the resulting resistive impedance achieved will be quite a bit lower than that of a true ¼ wave (resonant) monopole, often requiring further impedance matching (a transformer) to the desired transmission line. For ever shorter antennas (requiring greater "electrical lengthening") the radiation resistance plummets (approximately according to the square of the antenna length), so that the mismatch due to a net reactance away from the electrical resonance worsens. Or one could as well say that the equivalent resonant circuit of the antenna system has a higher Q factor and thus a reduced bandwidth, which can even become inadequate for the transmitted signal's spectrum. Resistive losses due to the loading coil, relative to the decreased radiation resistance, entail a reduced electrical efficiency, which can be of great concern for a transmitting antenna, but bandwidth is the major factor that sets the size of antennas at 1 MHz and lower frequencies.
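A rough sketch of these relationships, using the commonly quoted small-monopole approximation for radiation resistance and an assumed (illustrative) value for the whip's capacitive reactance:

```python
import math

def loading_coil_inductance(capacitive_reactance_ohm, freq_hz):
    """Series inductance whose reactance cancels the (assumed) capacitive
    reactance of an electrically short monopole at the operating frequency."""
    return capacitive_reactance_ohm / (2 * math.pi * freq_hz)

def short_monopole_radiation_resistance(height_m, freq_hz):
    """Commonly quoted approximation Rr ~= 40 * pi^2 * (h / wavelength)^2 for a
    monopole much shorter than a quarter wavelength over a good ground plane
    (assumes a roughly triangular current distribution)."""
    wavelength = 299_792_458.0 / freq_hz
    return 40 * math.pi ** 2 * (height_m / wavelength) ** 2

f = 30e6  # 30 MHz, as in the example above
print(short_monopole_radiation_resistance(1.5, f))  # ~8.9 ohms vs ~36 for a full 1/4 wave
# Suppose the 1.5 m whip presents a capacitive reactance of 300 ohms (illustrative figure):
print(loading_coil_inductance(300.0, f) * 1e6, "microhenries")  # ~1.6 uH
```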
Arrays and reflectors
thumb|right|Rooftop television Yagi-Uda antennas like these are widely used at VHF and UHF frequencies.
The amount of signal received from a distant transmission source is essentially geometric in nature due to the inverse-square law, and this leads to the concept of effective area. This measures the performance of an antenna by comparing the amount of power it generates to the amount of power in the original signal, measured in terms of the signal's power density in watts per square metre. A half-wave dipole has an effective area of 0.13 λ². If more performance is needed, one cannot simply make the antenna larger. Although this would intercept more energy from the signal, due to the considerations above, it would decrease the output significantly due to it moving away from the resonant length. In roles where higher performance is needed, designers often use multiple elements combined together.
Returning to the basic concept of current flows in a conductor, consider what happens if a half-wave dipole is not connected to a feed point, but instead shorted out. Electrically this forms a single ½-wavelength element. But the overall current pattern is the same; the current will be zero at the two ends, and reach a maximum in the center. Thus signals near the design frequency will continue to create a standing wave pattern. Any varying electrical current, like the standing wave in the element, will radiate a signal. In this case, aside from resistive losses in the element, the rebroadcast signal will be significantly similar to the original signal in both magnitude and shape. If this element is placed so its signal reaches the main dipole in-phase, it will reinforce the original signal, and increase the current in the dipole. Elements used in this way are known as passive elements.
A Yagi-Uda array uses passive elements to greatly increase gain. It is built along a support boom that is pointed toward the signal, and thus sees no induced signal and does not contribute to the antenna's operation. The end closer to the source is referred to as the front. Near the rear is a single active element, typically a half-wave dipole or folded dipole. Passive elements are arranged in front (directors) and behind (reflectors) the active element along the boom. The Yagi has the inherent quality that it becomes increasingly directional, and thus has higher gain, as the number of elements increases. However, this also makes it increasingly sensitive to changes in frequency; if the signal frequency changes, not only does the active element receive less energy directly, but all of the passive elements adding to that signal also decrease their output as well and their signals no longer reach the active element in-phase.
It is also possible to use multiple active elements and combine them together with transmission lines to produce a similar system where the phases add up to reinforce the output. The antenna array and very similar reflective array antenna consist of multiple elements, often half-wave dipoles, spaced out on a plane and wired together with transmission lines with specific phase lengths to produce a single in-phase signal at the output. The log-periodic antenna is a more complex design that uses multiple in-line elements similar in appearance to the Yagi-Uda but using transmission lines between the elements to produce the output.
Reflection of the original signal also occurs when it hits an extended conductive surface, in a fashion similar to a mirror. This effect can also be used to increase signal through the use of a reflector, normally placed behind the active element and spaced so the reflected signal reaches the element in-phase. Generally the reflector will remain highly reflective even if it is not solid; gaps smaller than about 1/10 of a wavelength generally have little effect on the outcome. For this reason, reflectors often take the form of wire meshes or rows of passive elements, which makes them lighter and less subject to wind. The parabolic reflector is perhaps the best known example of a reflector-based antenna, which has an effective area far greater than the active element alone.
Bandwidth
Although a resonant antenna has a purely resistive feed-point impedance at a particular frequency, many (if not most) applications require using an antenna over a range of frequencies. The frequency range or bandwidth over which an antenna functions well can be very wide (as in a log-periodic antenna) or narrow (in a resonant antenna); outside this range the antenna impedance becomes a poor match to the transmission line and transmitter (or receiver). Also in the case of the Yagi-Uda and other end-fire arrays, use of the antenna well away from its design frequency affects its radiation pattern, reducing its directive gain; the usable bandwidth is then limited regardless of impedance matching.
Except for the latter concern, the resonant frequency of an antenna system can always be altered by adjusting a suitable matching network. This is most efficiently accomplished using a matching network at the site of the antenna, since simply adjusting a matching network at the transmitter (or receiver) would leave the transmission line with a poor standing wave ratio.
Instead, it is often desired to have an antenna whose impedance does not vary so greatly over a certain bandwidth. It turns out that the amount of reactance seen at the terminals of a resonant antenna when the frequency is shifted, say, by 5%, depends very much on the diameter of the conductor used. A long thin wire used as a half-wave dipole (or quarter wave monopole) will have a reactance significantly greater than the resistive impedance it has at resonance, leading to a poor match and generally unacceptable performance. Making the element using a tube of a diameter perhaps 1/50 of its length, however, results in a reactance at this altered frequency which is not so great, and a much less serious mismatch and effect on the antenna's net performance. Thus rather thick tubes are often used for the elements; these also have reduced parasitic resistance (loss).
Rather than just using a thick tube, there are similar techniques used to the same effect such as replacing thin wire elements with cages to simulate a thicker element. This widens the bandwidth of the resonance. On the other hand, it is desired for amateur radio antennas to operate at several bands which are widely separated from each other (but not in between). This can often be accomplished simply by connecting elements resonant at those different frequencies in parallel. Most of the transmitter's power will flow into the resonant element while the others present a high (reactive) impedance, thus drawing little current from the same voltage. Another popular solution uses so-called traps consisting of parallel resonant circuits which are strategically placed in breaks along each antenna element. When used at one particular frequency band the trap presents a very high impedance (parallel resonance) effectively truncating the element at that length, making it a proper resonant antenna. At a lower frequency the trap allows the full length of the element to be employed, albeit with a shifted resonant frequency due to the inclusion of the trap's net reactance at that lower frequency.
The bandwidth characteristics of a resonant antenna element can be characterized according to its Q, just as one uses to characterize the sharpness of an L-C resonant circuit. A common mistake is to assume that there is an advantage in an antenna having a high Q (the so-called "quality factor"). In the context of electronic circuitry a low Q generally signifies greater loss (due to unwanted resistance) in a resonant L-C circuit, and poorer receiver selectivity. However this understanding does not apply to resonant antennas where the resistance involved is the radiation resistance, a desired quantity which removes energy from the resonant element in order to radiate it (the purpose of an antenna, after all!). The Q of an L-C-R circuit is defined as the ratio of the inductor's (or capacitor's) reactance to the resistance, so for a certain radiation resistance (the radiation resistance at resonance does not vary greatly with diameter) the greater reactance off-resonance causes the poorer bandwidth of an antenna employing a very thin conductor. The Q of such a narrowband antenna can be as high as 15. On the other hand, the reactance at the same off-resonant frequency of one using thick elements is much less, consequently resulting in a Q as low as 5. These two antennas may perform equivalently at the resonant frequency, but the second antenna will perform over a bandwidth 3 times as wide as the antenna consisting of a thin conductor.
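A minimal sketch of this trade-off, assuming the usual resonant-circuit relation that the half-power bandwidth is roughly the resonant frequency divided by Q:

```python
def usable_bandwidth_hz(resonant_freq_hz, q_factor):
    """Rough half-power bandwidth of a resonant antenna element, using the
    same relation as for an L-C-R circuit: BW ~= f0 / Q."""
    return resonant_freq_hz / q_factor

f0 = 100e6  # 100 MHz, an arbitrary example frequency
for q in (15, 5):  # thin-wire element vs thick-tube element, per the text above
    print(f"Q = {q}: about {usable_bandwidth_hz(f0, q) / 1e6:.1f} MHz of bandwidth")
# Q = 15 -> ~6.7 MHz; Q = 5 -> ~20 MHz, i.e. roughly three times wider
```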
Antennas for use over much broader frequency ranges are achieved using further techniques. Adjustment of a matching network can, in principle, allow for any antenna to be matched at any frequency. Thus the loop antenna built into most AM broadcast (medium wave) receivers has a very narrow bandwidth, but is tuned using a parallel capacitance which is adjusted according to the receiver tuning. On the other hand, log-periodic antennas are not resonant at any frequency but can be built to attain similar characteristics (including feedpoint impedance) over any frequency range. These are therefore commonly used (in the form of directional log-periodic dipole arrays) as television antennas.
Gain
Gain is a parameter which measures the degree of directivity of the antenna's radiation pattern. A high-gain antenna will radiate most of its power in a particular direction, while a low-gain antenna will radiate over a wider angle. The antenna gain, or power gain, of an antenna is defined as the ratio of the intensity (power per unit surface area) radiated by the antenna in the direction of its maximum output, at an arbitrary distance, divided by the intensity radiated at the same distance by a hypothetical isotropic antenna which radiates equal power in all directions. This dimensionless ratio is usually expressed logarithmically in decibels; these units are called "decibels-isotropic" (dBi).
A second unit used to measure gain is the ratio of the power radiated by the antenna to the power radiated by a half-wave dipole antenna; these units are called "decibels-dipole" (dBd).
Since the gain of a half-wave dipole is 2.15 dBi and the logarithm of a product is additive, the gain in dBi is just 2.15 decibels greater than the gain in dBd.
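The conversion is a simple addition or subtraction, as this sketch shows:

```python
DIPOLE_GAIN_DBI = 2.15  # gain of a half-wave dipole relative to an isotropic radiator

def dbd_to_dbi(gain_dbd):
    """Convert gain referenced to a half-wave dipole into gain referenced to
    an isotropic radiator (decibel values simply add)."""
    return gain_dbd + DIPOLE_GAIN_DBI

def dbi_to_dbd(gain_dbi):
    return gain_dbi - DIPOLE_GAIN_DBI

print(dbd_to_dbi(0.0))   # a dipole itself: 0 dBd = 2.15 dBi
print(dbi_to_dbd(10.0))  # 10 dBi = 7.85 dBd
```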
High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully at the other antenna. An example of a high-gain antenna is a parabolic dish such as a satellite television antenna. Low-gain antennas have shorter range, but the orientation of the antenna is relatively unimportant. An example of a low-gain antenna is the whip antenna found on portable radios and cordless phones. Antenna gain should not be confused with amplifier gain, a separate parameter measuring the increase in signal power due to an amplifying device.
Effective area or aperture
The effective area or effective aperture of a receiving antenna expresses the portion of the power of a passing electromagnetic wave which it delivers to its terminals, expressed in terms of an equivalent area. For instance, if a radio wave passing a given location has a flux of 1 pW/m² (10⁻¹² watts per square meter) and an antenna has an effective area of 12 m², then the antenna would deliver 12 pW of RF power to the receiver (30 microvolts rms at 75 ohms). Since the receiving antenna is not equally sensitive to signals received from all directions, the effective area is a function of the direction to the source.
Due to reciprocity (discussed above) the gain of an antenna used for transmitting must be proportional to its effective area when used for receiving. Consider an antenna with no loss, that is, one whose electrical efficiency is 100%. It can be shown that its effective area averaged over all directions must be equal to λ²/4π, the wavelength squared divided by 4π. Gain is defined such that the average gain over all directions for an antenna with 100% electrical efficiency is equal to 1. Therefore, the effective area Aeff in terms of the gain G in a given direction is given by:
Aeff = λ²G / 4π
For an antenna with an efficiency of less than 100%, both the effective area and gain are reduced by that same amount. Therefore, the above relationship between gain and effective area still holds. These are thus two different ways of expressing the same quantity. Aeff is especially convenient when computing the power that would be received by an antenna of a specified gain, as illustrated by the above example.
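A minimal sketch of these relationships, reproducing the received-power example given above (the 75 ohm load is the assumed receiver impedance from that example):

```python
import math

def effective_area_m2(gain_linear, wavelength_m):
    """Effective aperture of a lossless antenna with the given (linear, not dB)
    gain: A_eff = G * wavelength^2 / (4 * pi)."""
    return gain_linear * wavelength_m ** 2 / (4 * math.pi)

def received_power_w(flux_w_per_m2, aperture_m2):
    """Power delivered to a matched receiver by a passing plane wave."""
    return flux_w_per_m2 * aperture_m2

# Reproducing the example above: 1 pW/m^2 flux and a 12 m^2 effective area.
p = received_power_w(1e-12, 12.0)
v_rms = math.sqrt(p * 75.0)  # rms voltage developed across a 75 ohm load
print(p, "W received,", v_rms * 1e6, "microvolts rms")  # 1.2e-11 W, 30 uV
```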
Radiation pattern
thumb|Polar plots of the horizontal cross sections of a (virtual) Yagi-Uda antenna. The outline connects points with 3 dB field power relative to an isotropic emitter.
The radiation pattern of an antenna is a plot of the relative field strength of the radio waves emitted by the antenna at different angles. It is typically represented by a three-dimensional graph, or polar plots of the horizontal and vertical cross sections. The pattern of an ideal isotropic antenna, which radiates equally in all directions, would look like a sphere. Many nondirectional antennas, such as monopoles and dipoles, emit equal power in all horizontal directions, with the power dropping off at higher and lower angles; this is called an omnidirectional pattern and when plotted looks like a torus or donut.
The radiation of many antennas shows a pattern of maxima or "lobes" at various angles, separated by "nulls", angles where the radiation falls to zero. This is because the radio waves emitted by different parts of the antenna typically interfere, causing maxima at angles where the radio waves arrive at distant points in phase, and zero radiation at other angles where the radio waves arrive out of phase. In a directional antenna designed to project radio waves in a particular direction, the lobe in that direction is designed larger than the others and is called the "main lobe". The other lobes usually represent unwanted radiation and are called "sidelobes". The axis through the main lobe is called the "principal axis" or "boresight axis".
Field regions
The space surrounding an antenna can be divided into three concentric regions: the reactive near-field, the radiating near-field (Fresnel region) and the far-field (Fraunhofer) regions. These regions are useful to identify the field structure in each, although there are no precise boundaries.
In the far-field region, we are far enough from the antenna to neglect its size and shape. We can assume that the electromagnetic wave is purely a radiating plane wave (electric and magnetic fields are in phase and perpendicular to each other and to the direction of propagation). This simplifies the mathematical analysis of the radiated field.
Impedance
As an electro-magnetic wave travels through the different parts of the antenna system (radio, feed line, antenna, free space) it may encounter differences in impedance (E/H, V/I, etc.). At each interface, depending on the impedance match, some fraction of the wave's energy will reflect back to the source (impedance is caused by the same physics as refractive index in optics, although impedance effects are typically one-dimensional, whereas the effects of refractive index are three-dimensional), forming a standing wave in the feed line. The ratio of maximum power to minimum power in the wave can be measured and is called the standing wave ratio (SWR). An SWR of 1:1 is ideal. An SWR of 1.5:1 is considered marginally acceptable in low power applications where power loss is more critical, although an SWR as high as 6:1 may still be usable with the right equipment. Minimizing impedance differences at each interface (impedance matching) will reduce SWR and maximize power transfer through each part of the antenna system.
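A minimal sketch of how SWR follows from an impedance mismatch, assuming a simple lossless line (the example impedances are illustrative):

```python
def swr(load_impedance, line_impedance=50.0):
    """Standing wave ratio on a lossless line of the given characteristic
    impedance terminated by the given (possibly complex) load impedance."""
    gamma = abs((load_impedance - line_impedance) /
                (load_impedance + line_impedance))  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

print(round(swr(50.0), 2))       # matched load: 1.0
print(round(swr(75.0), 2))       # 75 ohm antenna on 50 ohm coax: 1.5
print(round(swr(20 + 40j), 2))   # reactive mismatch: much higher SWR
```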
Complex impedance of an antenna is related to the electrical length of the antenna at the wavelength in use. The impedance of an antenna can be matched to the feed line and radio by adjusting the impedance of the feed line, using the feed line as an impedance transformer. More commonly, the impedance is adjusted at the load (see below) with an antenna tuner, a balun, a matching transformer, matching networks composed of inductors and capacitors, or matching sections such as the gamma match.
Efficiency
Efficiency of a transmitting antenna is the ratio of power actually radiated (in all directions) to the power absorbed by the antenna terminals. The power supplied to the antenna terminals which is not radiated is converted into heat. This is usually through loss resistance in the antenna's conductors, but can also be due to dielectric or magnetic core losses in antennas (or antenna systems) using such components. Such loss effectively robs power from the transmitter, requiring a stronger transmitter in order to transmit a signal of a given strength.
For instance, if a transmitter delivers 100 W into an antenna having an efficiency of 80%, then the antenna will radiate 80 W as radio waves and produce 20 W of heat. In order to radiate 100 W of power, one would need to use a transmitter capable of supplying 125 W to the antenna. Note that antenna efficiency is a separate issue from impedance matching, which may also reduce the amount of power radiated using a given transmitter. If an SWR meter reads 150 W of incident power and 50 W of reflected power, that means that 100 W have actually been absorbed by the antenna (ignoring transmission line losses). How much of that power has actually been radiated cannot be directly determined through electrical measurements at (or before) the antenna terminals, but would require (for instance) careful measurement of field strength. Fortunately the loss resistance of antenna conductors such as aluminum rods can be calculated and the efficiency of an antenna using such materials predicted.
However loss resistance will generally affect the feedpoint impedance, adding to its resistive (real) component. That resistance will consist of the sum of the radiation resistance Rr and the loss resistance Rloss. If an rms current I is delivered to the terminals of an antenna, then a power of I2Rr will be radiated and a power of I2Rloss will be lost as heat. Therefore, the efficiency of an antenna is equal to Rr / (Rr + Rloss). Of course only the total resistance Rr + Rloss can be directly measured.
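A minimal sketch of these relations; the resistance values below are illustrative figures chosen to reproduce the 80% efficiency example above, not measurements of any particular antenna:

```python
def antenna_efficiency(radiation_resistance_ohm, loss_resistance_ohm):
    """Fraction of the accepted power that is actually radiated:
    efficiency = Rr / (Rr + Rloss)."""
    return radiation_resistance_ohm / (radiation_resistance_ohm + loss_resistance_ohm)

def radiated_and_lost_power(current_rms_a, rr_ohm, rloss_ohm):
    """Split of the input power into radiated power (I^2 * Rr) and heat (I^2 * Rloss)."""
    return current_rms_a ** 2 * rr_ohm, current_rms_a ** 2 * rloss_ohm

# Illustrative figures only: Rr = 60 ohms and Rloss = 15 ohms give 80% efficiency,
# matching the 100 W in -> 80 W radiated / 20 W heat example above.
print(antenna_efficiency(60.0, 15.0))               # 0.8
print(radiated_and_lost_power(1.155, 60.0, 15.0))   # roughly (80, 20) watts
```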
According to reciprocity, the efficiency of an antenna used as a receiving antenna is identical to the efficiency as defined above. The power that an antenna will deliver to a receiver (with a proper impedance match) is reduced by the same amount. In some receiving applications, very inefficient antennas may have little impact on performance. At low frequencies, for example, atmospheric or man-made noise can mask antenna inefficiency. For example, CCIR Rep. 258-3 indicates man-made noise in a residential setting at 40 MHz is about 28 dB above the thermal noise floor. Consequently, an antenna with a 20 dB loss (due to inefficiency) would have little impact on system noise performance. The loss within the antenna will affect the intended signal and the noise/interference identically, leading to no reduction in signal-to-noise ratio (SNR).
This is fortunate, since antennas at lower frequencies which are not rather large (a good fraction of a wavelength in size) are inevitably inefficient (due to the small radiation resistance Rr of small antennas). Most AM broadcast radios (except for car radios) take advantage of this principle by including a small loop antenna for reception which has an extremely poor efficiency. Using such an inefficient antenna at this low frequency (530–1650 kHz) thus has little effect on the receiver's net performance, but simply requires greater amplification by the receiver's electronics. Contrast this tiny component to the massive and very tall towers used at AM broadcast stations for transmitting at the very same frequency, where every percentage point of reduced antenna efficiency entails a substantial cost.
The definition of antenna gain or power gain already includes the effect of the antenna's efficiency. Therefore, if one is trying to radiate a signal toward a receiver using a transmitter of a given power, one need only compare the gain of various antennas rather than considering the efficiency as well. This is likewise true for a receiving antenna at very high (especially microwave) frequencies, where the point is to receive a signal which is strong compared to the receiver's noise temperature. However, in the case of a directional antenna used for receiving signals with the intention of rejecting interference from different directions, one is no longer concerned with the antenna efficiency, as discussed above. In this case, rather than quoting the antenna gain, one would be more concerned with the directive gain which does not include the effect of antenna (in)efficiency. The directive gain of an antenna can be computed from the published gain divided by the antenna's efficiency.
Polarization
The polarization of an antenna refers to the orientation of the electric field (E-plane) of the radio wave with respect to the Earth's surface and is determined by the physical structure of the antenna and by its orientation; note that this designation is totally distinct from the antenna's directionality. Thus, a simple straight wire antenna will have one polarization when mounted vertically, and a different polarization when mounted horizontally. As a transverse wave, the magnetic field of a radio wave is at right angles to that of the electric field, but by convention, talk of an antenna's "polarization" is understood to refer to the direction of the electric field.
Reflections generally affect polarization. For radio waves, one important reflector is the ionosphere which can change the wave's polarization. Thus for signals received following reflection by the ionosphere (a skywave), a consistent polarization cannot be expected. For line-of-sight communications or ground wave propagation, horizontally or vertically polarized transmissions generally remain in about the same polarization state at the receiving location. Matching the receiving antenna's polarization to that of the transmitter can make a very substantial difference in received signal strength.
Polarization is predictable from an antenna's geometry, although in some cases it is not at all obvious (such as for the quad antenna). An antenna's linear polarization is generally along the direction (as viewed from the receiving location) of the antenna's currents when such a direction can be defined. For instance, a vertical whip antenna or Wi-Fi antenna vertically oriented will transmit and receive in the vertical polarization. Antennas with horizontal elements, such as most rooftop TV antennas in the United States, are horizontally polarized (broadcast TV in the U.S. usually uses horizontal polarization). Even when the antenna system has a vertical orientation, such as an array of horizontal dipole antennas, the polarization is in the horizontal direction corresponding to the current flow. The polarization of a commercial antenna is an essential specification.
Polarization is the sum of the E-plane orientations over time projected onto an imaginary plane perpendicular to the direction of motion of the radio wave. In the most general case, polarization is elliptical, meaning that the polarization of the radio waves varies over time. Two special cases are linear polarization (the ellipse collapses into a line) as we have discussed above, and circular polarization (in which the two axes of the ellipse are equal). In linear polarization the electric field of the radio wave oscillates back and forth along one direction; this can be affected by the mounting of the antenna but usually the desired direction is either horizontal or vertical polarization. In circular polarization, the electric field (and magnetic field) of the radio wave rotates at the radio frequency circularly around the axis of propagation. Circular or elliptically polarized radio waves are designated as right-handed or left-handed using the "thumb in the direction of the propagation" rule. Note that for circular polarization, optical researchers use the opposite right hand rule from the one used by radio engineers.
It is best for the receiving antenna to match the polarization of the transmitted wave for optimum reception. Intermediate matchings will lose some signal strength, but not as much as a complete mismatch. A circularly polarized antenna can be used to equally well match vertical or horizontal linear polarizations. Transmission from a circularly polarized antenna received by a linearly polarized antenna (or vice versa) entails a 3 dB reduction in signal-to-noise ratio as the received power has thereby been cut in half.
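A minimal sketch of the ideal-case mismatch loss between two linearly polarized antennas (real antennas and propagation paths will deviate from this):

```python
import math

def linear_mismatch_loss_db(angle_deg):
    """Loss between two linearly polarized antennas whose polarization planes
    differ by the given angle, for an ideal plane wave."""
    factor = math.cos(math.radians(angle_deg)) ** 2
    if factor < 1e-12:  # crossed polarization: no coupling in the ideal case
        return float('inf')
    return -10 * math.log10(factor)

print(linear_mismatch_loss_db(0))    # aligned: 0 dB
print(linear_mismatch_loss_db(45))   # 45 degrees: ~3 dB
print(linear_mismatch_loss_db(90))   # crossed: complete mismatch (infinite loss, ideally)
# A circularly polarized antenna receiving a linearly polarized wave (or vice
# versa) loses a fixed 3 dB, as noted above.
```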
Impedance matching
Maximum power transfer requires matching the impedance of an antenna system (as seen looking into the transmission line) to the complex conjugate of the impedance of the receiver or transmitter. In the case of a transmitter, however, the desired matching impedance might not correspond to the dynamic output impedance of the transmitter as analyzed as a source impedance but rather the design value (typically 50 ohms) required for efficient and safe operation of the transmitting circuitry. The intended impedance is normally resistive but a transmitter (and some receivers) may have additional adjustments to cancel a certain amount of reactance in order to "tweak" the match. When a transmission line is used in between the antenna and the transmitter (or receiver) one generally would like an antenna system whose impedance is resistive and near the characteristic impedance of that transmission line in order to minimize the standing wave ratio (SWR) and the increase in transmission line losses it entails, in addition to supplying a good match at the transmitter or receiver itself.
Antenna tuning generally refers to cancellation of any reactance seen at the antenna terminals, leaving only a resistive impedance which might or might not be exactly the desired impedance (that of the transmission line). Although an antenna may be designed to have a purely resistive feedpoint impedance (such as a dipole 97% of a half wavelength long) this might not be exactly true at the frequency that it is eventually used at. In some cases the physical length of the antenna can be "trimmed" to obtain a pure resistance. On the other hand, the addition of a series inductance or parallel capacitance can be used to cancel a residual capacitive or inductive reactance, respectively.
In some cases this is done in a more extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonant frequency is quite different from the intended frequency of operation. For instance, a "whip antenna" can be made significantly shorter than ¼ wavelength long, for practical reasons, and then resonated using a so-called loading coil. This physically large inductor at the base of the antenna has an inductive reactance which is the opposite of the capacitive reactance that such a vertical antenna has at the desired operating frequency. The result is a pure resistance seen at the feedpoint of the loading coil; unfortunately that resistance is somewhat lower than would be desired to match commercial coax.
So an additional problem beyond canceling the unwanted reactance is that of matching the remaining resistive impedance to the characteristic impedance of the transmission line. In principle this can always be done with a transformer; however, the turns ratio of a transformer is not adjustable. A general matching network with at least two adjustments can be made to correct both components of impedance. Matching networks using discrete inductors and capacitors will have losses associated with those components, and will have power restrictions when used for transmitting. Avoiding these difficulties, commercial antennas are generally designed with fixed matching elements or feeding strategies to get an approximate match to standard coax, such as 50 or 75 ohms. Antennas based on the dipole (rather than vertical antennas) should include a balun in between the transmission line and antenna element, which may be integrated into any such matching network.
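As one illustration of a two-adjustment matching network, the following sketch uses the textbook L-network formulas to step a purely resistive load up to a higher line impedance; the load, line and frequency values are assumed examples, not taken from any particular antenna:

```python
import math

def l_network(load_r_ohm, line_r_ohm, freq_hz):
    """Series-L / shunt-C L-network stepping a purely resistive load up to a
    higher line resistance.  Textbook formulas: Q = sqrt(R_line/R_load - 1),
    series reactance Xs = Q * R_load (next to the load), shunt reactance
    Xp = R_line / Q (across the line side)."""
    if load_r_ohm >= line_r_ohm:
        raise ValueError("this form assumes R_load < R_line")
    q = math.sqrt(line_r_ohm / load_r_ohm - 1)
    xs = q * load_r_ohm        # series inductive reactance
    xp = line_r_ohm / q        # shunt capacitive reactance
    w = 2 * math.pi * freq_hz
    return xs / w, 1 / (w * xp)  # (series L in henries, shunt C in farads)

# Illustrative: match a 20 ohm loaded-whip feedpoint to 50 ohm coax at 7 MHz.
l_henry, c_farad = l_network(20.0, 50.0, 7e6)
print(l_henry * 1e6, "uH series,", c_farad * 1e12, "pF shunt")  # ~0.56 uH, ~560 pF
```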
Another extreme case of impedance matching occurs when using a small loop antenna (usually, but not always, for receiving) at a relatively low frequency where it appears almost as a pure inductor. Resonating such an inductor with a capacitor at the frequency of operation not only cancels the reactance but greatly magnifies the very small radiation resistance of such a loop. This is implemented in most AM broadcast receivers, with a small ferrite loop antenna resonated by a capacitor which is varied along with the receiver tuning in order to maintain resonance over the AM broadcast band.
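A minimal sketch of the resonance condition; the 600 µH loopstick inductance is an assumed, illustrative value:

```python
import math

def resonating_capacitance_f(inductance_h, freq_hz):
    """Parallel capacitance that resonates a small loop (treated as an
    inductor) at the given frequency: C = 1 / ((2*pi*f)^2 * L)."""
    return 1 / ((2 * math.pi * freq_hz) ** 2 * inductance_h)

# Illustrative: a ferrite loopstick of about 600 uH tuned across the AM band.
for f in (530e3, 1000e3, 1650e3):
    c_pf = resonating_capacitance_f(600e-6, f) * 1e12
    print(f"{f / 1e3:.0f} kHz -> {c_pf:.0f} pF")
```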
Antenna types
Antennas can be classified in various ways. The list below groups together antennas under common operating principles, following the way antennas are classified in many engineering textbooks.
Isotropic: An isotropic antenna (isotropic radiator) is a hypothetical antenna that radiates equal signal power in all directions. It is a mathematical model that is used as the base of comparison to calculate the gain of real antennas. No real antenna can have an isotropic radiation pattern. However approximately isotropic antennas, constructed with multiple elements, are used in antenna testing.
The first four groups below are usually resonant antennas; when driven at their resonant frequency their elements act as resonators. Waves of current and voltage bounce back and forth between the ends, creating standing waves along the elements.
Dipole
The dipole is the prototypical antenna on which a large class of antennas are based. A basic dipole antenna consists of two conductors (usually metal rods or wires) arranged symmetrically, with one side of the balanced feedline from the transmitter or receiver attached to each. Bevelaqua, Dipole Antenna, Antenna-Theory.com The most common type, the half-wave dipole, consists of two resonant elements just under a quarter wavelength long. This antenna radiates maximally in directions perpendicular to the antenna's axis, giving it a small directive gain of 2.15 dBi (practically the lowest directive gain of any antenna). Although half-wave dipoles are used alone as omnidirectional antennas, they are also a building block of many other more complicated directional antennas.
Yagi-Uda - One of the most common directional antennas at HF, VHF, and UHF frequencies. Consists of multiple half wave dipole elements in a line, with a single driven element and multiple parasitic elements which serve to create a uni-directional or beam antenna. These typically have gains between 10 and 20 dBi depending on the number of elements used, and are very narrowband (with a usable bandwidth of only a few percent) though there are derivative designs which relax this limitation. Used for rooftop television antennas, point-to-point communication links, and long distance shortwave communication using skywave ("skip") reflection from the ionosphere.
Log-periodic dipole array - Often confused with the Yagi-Uda, this consists of many dipole elements along a boom with gradually increasing lengths, all connected to the transmission line with alternating polarity. It is a directional antenna with a wide bandwidth. This makes it ideal for use as a rooftop television antenna, although its gain is much less than a Yagi of comparable size.
Turnstile - Two dipole antennas mounted at right angles, fed with a phase difference of 90°. This antenna is unusual in that it radiates in all directions (no nulls in the radiation pattern), with horizontal polarization in directions coplanar with the elements, circular polarization normal to that plane, and elliptical polarization in other directions. Used for receiving signals from satellites, as circular polarization is transmitted by many satellites.
Corner reflector - A directive antenna with moderate gain of about 8 dBi often used at UHF frequencies. Consists of a dipole mounted in front of two reflective metal screens joined at an angle, usually 90°. Used as a rooftop UHF television antenna and for point-to-point data links.
Patch (microstrip) - A type of antenna with elements consisting of metal sheets mounted over a ground plane. Similar to a dipole, with gain of 6 to 9 dBi. Integrated into surfaces such as aircraft bodies. Their easy fabrication using PCB techniques has made them popular in modern wireless devices. Often used in arrays.
Monopole
Monopole antennas consist of a single conductor such as a metal rod, mounted over the ground or an artificial conducting surface (a so-called ground plane). Bevelaqua, Monopole Antenna, Antenna-Theory.com One side of the feedline from the receiver or transmitter is connected to the conductor, and the other side to ground and/or the artificial ground plane. The monopole is best understood as a dipole antenna in which one conductor is omitted; the radiation is generated as if the second arm of the dipole were present, due to the effective image current seen as a reflection of the monopole in the ground. Since all of the equivalent dipole's radiation is concentrated in a half-space, the antenna has twice the gain (a 3 dB increase) of a similar dipole, not considering losses in the ground plane.
The most common form is the quarter-wave monopole which is one-quarter of a wavelength long and has a gain of 5.12 dBi when mounted over a ground plane. Monopoles have an omnidirectional radiation pattern, so they are used for broad coverage of an area, and have vertical polarization. The ground waves used for broadcasting at low frequencies must be vertically polarized, so large vertical monopole antennas are used for broadcasting in the MF, LF, and VLF bands. Small monopoles are used as nondirectional antennas on portable radios in the HF, VHF, and UHF bands.
Whip - Antenna used on mobile and portable radios in the VHF and UHF bands, such as boom boxes; consists of a flexible rod, often made of telescoping segments.
Rubber Ducky - Most common antenna used on portable two way radios and cordless phones due to its compactness, consists of an electrically short wire helix. The helix adds inductance to cancel the capacitive reactance of the short radiator, making it resonant. Very low gain.
Ground plane - a whip antenna with several rods extending horizontally from base of whip attached to the ground side of the feedline. Since whips are mounted above ground, the horizontal rods form an artificial ground plane under the antenna to increase its gain. Used as base station antennas for land mobile radio systems such as police, ambulance and taxi dispatchers.
Mast radiator - A radio tower in which the tower structure itself serves as the antenna. Common form of transmitting antenna for AM radio stations and other MF and LF transmitters. At its base the tower is usually, but not necessarily, mounted on a ceramic insulator to isolate it from the ground.
T and inverted L - Consist of a long horizontal wire suspended between two towers with insulators, with a vertical wire hanging down from it, attached to a feedline to the receiver or transmitter. Used on LF and VLF bands. The vertical wire serves as the radiator. Since at these frequencies the vertical wire is electrically short, much shorter than a quarter wavelength, the horizontal wire(s) serve as a capacitive "hat" to increase the current in the vertical radiator, increasing the gain. Very narrow bandwidth; requires a loading coil to tune out the capacitive reactance and make it resonant. Requires a low-resistance ground connection.
Inverted F - Combines the compactness of the inverted-L antenna with the good matching of the F-type antenna. The antenna is grounded at the base and fed at some intermediate point. The position of the feed point determines the antenna impedance. Thus, matching can be achieved without the need for an extraneous matching network.
Umbrella - Very large wire transmitting antennas used on VLF bands. Consists of a central mast radiator tower attached at the top to multiple wires extending out radially from the mast to ground, like a tent or umbrella, insulated at the ends. Extremely narrow bandwidth, requires large loading coil and low resistance counterpoise ground. Used for long range military communications.
Array
Array antennas consist of multiple antennas working as a single antenna. Typically they consist of arrays of identical driven elements, usually dipoles fed in phase, giving increased gain over that of a single dipole. Bevelaqua, Antenna Arrays, Antenna-Theory.com
Collinear - Consists of a number of dipoles in a vertical line. It is a high-gain omnidirectional antenna, meaning more of the power is radiated in horizontal directions and less is wasted toward the sky or ground. Gain of 8 to 10 dBi. Used as base station antennas for land mobile radio systems such as police, fire, ambulance, and taxi dispatchers, and as sector antennas for cellular base stations.
Reflective array - Multiple dipoles in a two-dimensional array mounted in front of a flat reflecting screen. Used for radar and UHF television transmitting and receiving antennas.
Phased array - A high gain antenna used at UHF and microwave frequencies which is electronically steerable. It consists of multiple dipoles in a two-dimensional array, each fed through an electronic phase shifter, with the phase shifters controlled by a computer control system. The beam can be instantly pointed in any direction over a wide angle in front of the antenna. Used for military radar and jamming systems.
Curtain array - Large directional wire transmitting antenna used at HF by shortwave broadcasting stations. It consists of a vertical rectangular array of wire dipoles suspended in front of a flat reflector screen consisting of a vertical "curtain" of parallel wires, all supported between two metal towers. It radiates a horizontal beam of radio waves into the sky above the horizon, which is reflected by the ionosphere back to Earth beyond the horizon.
Batwing or superturnstile - A specialized antenna used in television broadcasting consisting of perpendicular pairs of dipoles with radiators resembling bat wings. Multiple batwing antennas are stacked vertically on a mast to make VHF television broadcast antennas. Omnidirectional radiation pattern with high gain in horizontal directions. The batwing shape gives them wide bandwidth.
Microstrip array - An array of patch antennas on a substrate fed by microstrip feedlines. A microwave antenna that can achieve large gains in a compact space. Ease of fabrication by PCB techniques has made them popular in modern wireless devices. Beamwidth and polarization can be actively reconfigurable.
Loop
Loop antennas consist of a loop (or coil) of wire. Bevelaqua, Loop Antennas, Antenna-Theory.com Loops with circumference of a wavelength (or integer multiple of the wavelength) are resonant and act somewhat similarly to the half-wave dipole. However a loop small in comparison to the wavelength, also called a magnetic loop, performs quite differently. This antenna interacts directly with the magnetic field of the radio wave, making it relatively insensitive to nearby electrical noise. However it has a very small radiation resistance, typically much smaller than the loss resistance, making it inefficient and thus undesirable for transmitting. They are used as receiving antennas at low frequencies, and also as direction finding antennas.
Ferrite (loopstick) - These are used as the receiving antenna in most consumer AM radios operating in the medium wave broadcast band (and lower frequencies), a notable exception being car radios. Wire is coiled around a ferrite core which greatly increases the coil's inductance. Radiation pattern is maximum at directions normal to the ferrite stick.
Quad - consists of multiple wire loops in a line with one functioning as the driven element, and the others as parasitic elements. Used as a directional antenna on the HF bands for shortwave communication.
Aperture
Aperture antennas are the main type of directional antennas used at microwave frequencies and above. They consist of a small dipole or loop feed antenna inside a three-dimensional guiding structure large compared to a wavelength, with an aperture to emit the radio waves. Since the antenna structure itself is nonresonant they can be used over a wide frequency range by replacing or tuning the feed antenna.
Parabolic - The most widely used high gain antenna at microwave frequencies and above. Consists of a dish-shaped metal parabolic reflector with a feed antenna at the focus. It can have some of the highest gains of any antenna type, up to 60 dBi, but the dish must be large compared to a wavelength. Used for radar antennas, point-to-point data links, satellite communication, and radio telescopes.
Horn - Simple antenna with moderate gain of 15 to 25 dBi; consists of a flaring metal horn attached to a waveguide. Used for applications such as radar guns, radiometers, and as feed antennas for parabolic dishes.
Slot - Consists of a waveguide with one or more slots cut in it to emit the microwaves. Linear slot antennas emit narrow fan-shaped beams. Used as UHF broadcast antennas and marine radar antennas.
Dielectric resonator - Consists of a small ball- or puck-shaped piece of dielectric material excited by an aperture in a waveguide. Used at millimeter wave frequencies.
Traveling wave
Unlike the above antennas, traveling wave antennas are nonresonant so they have inherently broad bandwidth. They are typically wire antennas multiple wavelengths long, through which the voltage and current waves travel in one direction, instead of bouncing back and forth to form standing waves as in resonant antennas. They have linear polarization (except for the helical antenna). Unidirectional traveling wave antennas are terminated by a resistor at one end equal to the antenna's characteristic resistance, to absorb the waves from one direction. This makes them inefficient as transmitting antennas.
Random wire - This describes the typical antenna used to receive shortwave radio, consisting of a random length of wire either strung outdoors between supports or indoors in a zigzag pattern along walls, connected to the receiver at one end. Can have complex radiation patterns with several lobes at angles to the wire.
Beverage - Simplest unidirectional traveling wave antenna. Consists of a straight wire one to several wavelengths long, suspended near the ground, connected to the receiver at one end and terminated by a resistor equal to its characteristic impedance, 400 to 800 Ω, at the other end. Its radiation pattern has a main lobe at a shallow angle in the sky off the terminated end. It is used for reception of skywaves reflected off the ionosphere in long distance "skip" shortwave communication.
Rhombic - Consists of four equal wire sections shaped like a rhombus. It is fed by a balanced feedline at one of the acute corners, and the two sides are connected to a resistor equal to the characteristic resistance of the antenna at the other. It has a main lobe in a horizontal direction off the terminated end of the rhombus. Used for skywave communication on shortwave bands.
Helical (axial mode) - Consists of a wire in the shape of a helix mounted above a reflecting screen. It radiates circularly polarized waves in a beam off the end, with a typical gain of 15 dBi. It is used at VHF and UHF frequencies for communication with satellites and animal tracking transmitters, which use circular polarization because it is insensitive to the relative orientation of the antennas.
Leaky wave - Microwave antennas consisting of a waveguide or coaxial cable with a slot or apertures cut in it so it radiates continuously along its length.
Effect of ground
Ground reflection is one of the common types of multipath.
The radiation pattern and even the driving point impedance of an antenna can be influenced by the dielectric constant and especially conductivity of nearby objects. For a terrestrial antenna, the ground is usually one such object of importance. The antenna's height above the ground, as well as the electrical properties (permittivity and conductivity) of the ground, can then be important. Also, in the particular case of a monopole antenna, the ground (or an artificial ground plane) serves as the return connection for the antenna current thus having an additional effect, particularly on the impedance seen by the feed line.
When an electromagnetic wave strikes a plane surface such as the ground, part of the wave is transmitted into the ground and part of it is reflected, according to the Fresnel coefficients. If the ground is a very good conductor then almost all of the wave is reflected (180° out of phase), whereas a ground modeled as a (lossy) dielectric can absorb a large amount of the wave's power. The power remaining in the reflected wave, and the phase shift upon reflection, strongly depend on the wave's angle of incidence and polarization. The dielectric constant and conductivity (or simply the complex dielectric constant) is dependent on the soil type and is a function of frequency.
From very low frequencies up to high frequencies (below about 30 MHz), the ground behaves as a lossy dielectric. The ground is therefore characterized both by a conductivity and a permittivity (dielectric constant), which can be measured for a given soil (though they are influenced by fluctuating moisture levels) or estimated from published maps (for example the FCC map of effective ground conductivity: http://www.fcc.gov/encyclopedia/m3-map-effective-ground-conductivity-united-states-wall-sized-map-am-broadcast-stations). At the lower frequencies the ground acts mainly as a good conductor, which AM medium wave broadcast (0.5 to 1.6 MHz) antennas depend on.
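As a rough numerical illustration of this frequency dependence, the ratio of conduction current to displacement current in the soil (the loss tangent, σ/(ωε0εr)) indicates whether the ground behaves more like a conductor or a lossy dielectric at a given frequency. The sketch below is only illustrative; the soil values (σ = 5 mS/m, εr = 15) are assumed typical figures and real soils vary widely.

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def loss_tangent(freq_hz, sigma=5e-3, eps_r=15):
    """Ratio of conduction to displacement current in the ground.
    Much greater than 1: ground behaves mainly as a conductor.
    Much less than 1: ground behaves mainly as a (lossy) dielectric.
    sigma (S/m) and eps_r are assumed typical soil values."""
    omega = 2 * math.pi * freq_hz
    return sigma / (omega * EPS0 * eps_r)

for f in (0.1e6, 1e6, 10e6, 30e6, 100e6):
    print(f"{f/1e6:6.1f} MHz  loss tangent = {loss_tangent(f):6.2f}")
```

With these assumed values the ratio is well above 1 in the medium wave broadcast band and drops below 1 as the frequency approaches 30 MHz, consistent with the conductor-to-dielectric transition described above.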
At frequencies between 3 and 30 MHz, a large portion of the energy from a horizontally polarized antenna reflects off the ground, with almost total reflection at the grazing angles important for ground wave propagation. That reflected wave, with its phase reversed, can either cancel or reinforce the direct wave, depending on the antenna height in wavelengths and elevation angle (for a sky wave).
On the other hand, vertically polarized radiation is not well reflected by the ground except at grazing incidence or over very highly conducting surfaces such as sea water. However the grazing angle reflection important for ground wave propagation, using vertical polarization, is in phase with the direct wave, providing a boost of up to 6 dB, as is detailed below.
Figure: The wave reflected by earth can be considered as emitted by the image antenna.
At VHF and above (>30 MHz) the ground becomes a poorer reflector. However it remains a good reflector especially for horizontal polarization and grazing angles of incidence. That is important as these higher frequencies usually depend on horizontal line-of-sight propagation (except for satellite communications), the ground then behaving almost as a mirror.
The net quality of a ground reflection depends on the topography of the surface. When the irregularities of the surface are much smaller than the wavelength, we are in the regime of specular reflection, and the receiver sees both the real antenna and an image of the antenna under the ground due to reflection. But if the ground has irregularities not small compared to the wavelength, reflections will not be coherent but shifted by random phases. With shorter wavelengths (higher frequencies), this is generally the case.
Whenever the receiving or transmitting antenna is placed at a significant height above the ground (relative to the wavelength), waves specularly reflected by the ground will travel a longer distance than direct waves, inducing a phase shift which can sometimes be significant. When a sky wave is launched by such an antenna, that phase shift is always significant unless the antenna is very close to the ground (compared to the wavelength).
The phase of reflection of electromagnetic waves depends on the polarization of the incident wave. Given the larger refractive index of the ground (typically n=2) compared to air (n=1), the phase of horizontally polarized radiation is reversed upon reflection (a phase shift of π radians or 180°). On the other hand, the vertical component of the wave's electric field is reflected at grazing angles of incidence approximately in phase. These phase shifts apply as well to a ground modelled as a good electrical conductor.
Figure: The currents in an antenna appear as an image in opposite phase when reflected at grazing angles. This causes a phase reversal for waves emitted by a horizontally polarized antenna (left) but not a vertically polarized antenna (center).
This means that a receiving antenna "sees" an image of the antenna but with reversed currents. That current is in the same absolute direction as the actual antenna if the antenna is vertically oriented (and thus vertically polarized) but opposite the actual antenna if the antenna current is horizontal.
The actual antenna which is transmitting the original wave then also may receive a strong signal from its own image from the ground. This will induce an additional current in the antenna element, changing the current at the feedpoint for a given feedpoint voltage. Thus the antenna's impedance, given by the ratio of feedpoint voltage to current, is altered due to the antenna's proximity to the ground. This can be quite a significant effect when the antenna is within a wavelength or two of the ground. But as the antenna height is increased, the reduced power of the reflected wave (due to the inverse square law) allows the antenna to approach its asymptotic feedpoint impedance given by theory. At lower heights, the effect on the antenna's impedance is very sensitive to the exact distance from the ground, as this affects the phase of the reflected wave relative to the currents in the antenna. Changing the antenna's height by a quarter wavelength then changes the phase of the reflection by 180°, with a completely different effect on the antenna's impedance.
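To make the quarter-wavelength figure concrete (a rough check, treating the wave reflected back to the antenna from its own image as making a round trip of about twice the height h, and ignoring the fixed 0° or 180° shift introduced by the reflection itself): the reflected wave is delayed in phase by roughly 2π(2h)/λ = 4πh/λ relative to the antenna current, so raising the antenna by Δh = λ/4 changes that phase by 4π(λ/4)/λ = π, i.e. 180°.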
The ground reflection has an important effect on the net far field radiation pattern in the vertical plane, that is, as a function of elevation angle, which is thus different between a vertically and horizontally polarized antenna. Consider an antenna at a height h above the ground, transmitting a wave considered at the elevation angle θ. For a vertically polarized transmission the magnitude of the electric field of the electromagnetic wave produced by the direct ray plus the reflected ray is:

|E_V| = 2 |E_0| |cos((2πh/λ) sin θ)|

Thus the power received can be as high as 4 times that due to the direct wave alone (such as when θ=0), following the square of the cosine. The sign inversion for the reflection of horizontally polarized emission instead results in:

|E_H| = 2 |E_0| |sin((2πh/λ) sin θ)|

where:
E_0 is the electric field that would be received by the direct wave if there were no ground,
θ is the elevation angle of the wave being considered,
λ is the wavelength,
h is the height of the antenna (half the distance between the antenna and its image).

Figure: Radiation patterns of antennas and their images reflected by the ground. At left the polarization is vertical and there is always a maximum for θ=0. If the polarization is horizontal as at right, there is always a zero for θ=0.
For horizontal propagation between transmitting and receiving antennas situated near the ground reasonably far from each other, the distances traveled by the direct and reflected rays are nearly the same. There is almost no relative phase shift. If the emission is polarized vertically, the two fields (direct and reflected) add and there is maximum of received signal. If the signal is polarized horizontally, the two signals subtract and the received signal is largely cancelled. The vertical plane radiation patterns are shown in the image at right. With vertical polarization there is always a maximum for θ=0, horizontal propagation (left pattern). For horizontal polarization, there is cancellation at that angle. Note that the above formulae and these plots assume the ground as a perfect conductor. These plots of the radiation pattern correspond to a distance between the antenna and its image of 2.5λ. As the antenna height is increased, the number of lobes increases as well.
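A minimal numerical sketch of these two pattern factors, assuming (as the formulas above do) a perfectly conducting ground, is shown below. The height h = 1.25λ is chosen only to match the 2.5λ antenna-to-image spacing of the plotted example, and the function and variable names are illustrative.

```python
import math

def pattern_factors(theta_deg, h_over_lambda=1.25):
    """Two-ray field magnitudes relative to the direct-wave field E0,
    over an assumed perfectly conducting ground, for vertical and
    horizontal polarization at a given elevation angle."""
    phase = 2 * math.pi * h_over_lambda * math.sin(math.radians(theta_deg))
    vertical = 2 * abs(math.cos(phase))    # reflected ray adds in phase at theta = 0
    horizontal = 2 * abs(math.sin(phase))  # reflected ray is phase-reversed, null at theta = 0
    return vertical, horizontal

for theta in range(0, 91, 15):
    v, h = pattern_factors(theta)
    print(f"elevation {theta:2d} deg: vertical {v:.2f} E0, horizontal {h:.2f} E0")
```

At θ=0 this reproduces the 2|E_0| (four times the power) maximum for vertical polarization and the null for horizontal polarization discussed above.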
The difference in the above factors for the case of θ=0 is the reason that most broadcasting (transmissions intended for the public) uses vertical polarization. For receivers near the ground, horizontally polarized transmissions suffer cancellation. For best reception the receiving antennas for these signals are likewise vertically polarized. In some applications where the receiving antenna must work in any position, as in mobile phones, the base station antennas use mixed polarization, such as linear polarization at an angle (with both vertical and horizontal components) or circular polarization.
On the other hand, classical (analog) television transmissions are usually horizontally polarized, because in urban areas buildings can reflect the electromagnetic waves and create ghost images due to multipath propagation. Using horizontal polarization, ghosting is reduced because the amount of reflection of electromagnetic waves in the p polarization (horizontal polarization off the side of a building) is generally less than s (vertical, in this case) polarization. Vertically polarized analog television has nevertheless been used in some rural areas. In digital terrestrial television such reflections are less problematic, due to robustness of binary transmissions and error correction.
Mutual impedance and interaction between antennas
Current circulating in one antenna generally induces a voltage across the feedpoint of nearby antennas or antenna elements. The mathematics presented below are useful in analyzing the electrical behaviour of antenna arrays, where the properties of the individual array elements (such as half wave dipoles) are already known. If those elements were widely separated and driven in a certain amplitude and phase, then each would act independently as that element is known to. However, because of the mutual interaction between their electric and magnetic fields due to proximity, the currents in each element are not simply a function of the applied voltage (according to its driving point impedance), but depend on the currents in the other nearby elements.
Note that this now is a near field phenomenon which could not be properly accounted for using the Friis transmission equation for instance. This near field effect creates a different set of currents at the antenna terminals resulting in distortions in the far field radiation patterns; however, the distortions may be removed using a simple set of network equations.
The elements' feedpoint currents and voltages can be related to each other using the concept of mutual impedance between every pair of antennas, just as the mutual impedance describes the voltage induced in one inductor by a current through a nearby coil coupled to it through a mutual inductance M. The mutual impedance Z_ji between two antennas is defined as:

Z_ji = v_j / i_i

where i_i is the current flowing in antenna i and v_j is the voltage induced at the open-circuited feedpoint of antenna j due to i_i when all other currents i_k are zero. The mutual impedances can be viewed as the elements of a symmetric square impedance matrix Z. Note that the diagonal elements, Z_ii, are simply the driving point impedances of each element.
Using this definition, the voltages present at the feedpoints of a set of coupled antennas can be expressed as the multiplication of the impedance matrix times the vector of currents. Written out as discrete equations, that means:

v_1 = Z_11 i_1 + Z_12 i_2 + ... + Z_1n i_n
v_2 = Z_21 i_1 + Z_22 i_2 + ... + Z_2n i_n
...
v_n = Z_n1 i_1 + Z_n2 i_2 + ... + Z_nn i_n

where:
v_j is the voltage at the terminals of antenna j,
i_j is the current flowing between the terminals of antenna j,
Z_jj is the driving point impedance of antenna j,
Z_ij is the mutual impedance between antennas i and j.
Figure: Mutual impedance between parallel dipoles not staggered. Curves Re and Im are the resistive and reactive parts of the impedance.
As is the case for mutual inductances, Z_ij = Z_ji.
This is a consequence of Lorentz reciprocity. For an antenna element not connected to anything (open circuited) one can write i_j = 0. But for an element which is short circuited, a current is generated across that short but no voltage is allowed, so the corresponding v_j = 0. This is the case, for instance, with the so-called parasitic elements of a Yagi-Uda antenna where the solid rod can be viewed as a dipole antenna shorted across its feedpoint. Parasitic elements are unpowered elements that absorb and reradiate RF energy according to the induced current calculated using such a system of equations.
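To illustrate how such a system of equations is used, the sketch below solves the two-element case of one driven dipole plus one shorted parasitic element (v_2 = 0) for the element currents and for the feedpoint impedance actually seen by the driven element. The impedance numbers are made-up placeholders rather than values for any particular antenna; in practice they would come from measurement or from a field solver such as NEC.

```python
import numpy as np

# Assumed, purely illustrative impedances (ohms) for a two-element array:
# the diagonal entries are the self (driving point) impedances and the
# off-diagonal entry is the mutual impedance (symmetric, per reciprocity).
Z = np.array([[73 + 43j, 40 - 28j],
              [40 - 28j, 73 + 43j]])

v = np.array([1.0 + 0j,   # 1 V applied to the driven element
              0.0 + 0j])  # parasitic element shorted across its feedpoint: v_2 = 0

i = np.linalg.solve(Z, v)          # solve v = Z i for the element currents

print("element currents:", i)
print("impedance seen by the feed line:", v[0] / i[0])
```

The induced current in the shorted element, and the shift of the driven element's feed impedance away from its isolated value Z_11, are exactly the coupling effects described above.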
With a particular geometry, it is possible for the mutual impedance between nearby antennas to be zero. This is the case, for instance, between the crossed dipoles used in the turnstile antenna.
See also
:Category:Radio frequency antenna types
:Category:Radio frequency propagation
Cellular repeater
DXing
Electromagnetism
Mobile broadband modem
Numerical Electromagnetics Code
Radial (radio)
Radio masts and towers
RF connector
Smart antenna
TETRA
References
Further reading
General
Antenna Theory (3rd edition), by C. Balanis, Wiley, 2005, ISBN 0-471-66782-X;
Antenna Theory and Design (2nd edition), by W. Stutzman and G. Thiele, Wiley, 1997, ISBN 0-471-02590-9;
Antennas (4th edition), by J. Kraus and R. Marhefka, McGraw-Hill, 2001, ISBN 0-07-232103-2;
Antennenbuch, by Karl Rothammel, publ. Franck'sche Verlagshandlung Stuttgart, 1991, ISBN 3-440-05853-0; other editions (in German)
Antennas for portable Devices, Zhi Ning Chen (edited), John Wiley & Sons in March 2007
Broadband Planar Antennas: Design and Applications, Zhi Ning Chen and M. Y. W. Chia, John Wiley & Sons in February 2006
Practical design
Antenna Theory antenna-theory.com
Antennas Antenna types
Patch Antenna: From Simulation to Realization EM Talk
Why Antennas Radiate, Stuart G. Downs, WY6EE (PDF)
Understanding electromagnetic fields and antenna radiation takes (almost) no math, Ron Schmitt, EDN Magazine, March 2 2000 (PDF)
Antennas: generalities, principle of operation, use as an electronic component, Hertz, Marconi and other antenna types
Theory and simulations
AN-SOF, "Antenna Simulation Software". Program system for the modeling of antennas and scatterers.
http://www.dipoleanimator.com
EM Talk, "Microstrip Patch Antenna", (Theory and simulation of microstrip patch antenna)
"" Formulas for simulating and optimizing Antenna specs and placement
"Microwave Antenna Design Calculator" Provides quick estimation of antenna size required for a given gain and frequency. 3 dB and 10 dB beamwidths are also derived; the calculator additionally gives the far-field range required for a given antenna.
Sophocles J. Orfanidis, "Electromagnetic Waves and Antennas", Rutgers University (20 PDF Chaps. Basic theory, definitions and reference)
Hans Lohninger, "Learning by Simulations: Physics: Coupled Radiators". vias.org, 2005. (ed. Interactive simulation of two coupled antennas)
NEC Lab - NEC Lab is a tool that uses Numerical Electromagnetics Code and Artificial Intelligence to design and simulate antennas.
Justin Smith "Aerials". A.T.V (Aerials and Television), 2009. (ed. Article on the (basic) theory and use of FM, DAB & TV aerials)
Antennas Research Group, "Virtual (Reality) Antennas". Democritus University of Thrace, 2005.
"Support > Knowledgebase > RF Basics > Antennas / Cables > dBi vs. dBd detail". MaxStream, Inc., 2005. (ed. How to measure antenna gain) (New location: http://www.digi.com/support/kbase/kbaseresultdetl?id=2146 Note: to skip the registration form click the link below it)
Yagis and Log Periodics, Astrosurf article.
Raines, J. K., "Virtual Outer Conductor for Linear Antennas," Microwave Journal, Vol. 52, No. 1, January, 2009, pp. 76–86
Tests of FM/VHF receiving antennas.
Effect of ground
Electronic Radio and Engineering. F.E. Terman. McGraw-Hill
Lectures on physics. Feynman, Leighton and Sands. Addison-Wesley
Classical Electricity and Magnetism. W. Panofsky and M. Phillips. Addison-Wesley
Patents and USPTO
CLASS 343, Communication: Radio Wave Antenna
Base station
Antennas for Base Stations in Wireless Communications, edited by Zhi Ning Chen and Kwai-Man Luk, McGraw-Hill Companies, Inc, USA in May 2009
Category:Radio electronics
States of Germany

Germany is a federal republic consisting of sixteen federal states (Land, plural Länder; informally also Bundesland, plural Bundesländer). Since today's Germany was formed from an earlier collection of several states, it has a federal constitution, and the constituent states retain a measure of sovereignty.
With an emphasis on geographical conditions, Berlin and Hamburg are frequently called Stadtstaaten (city-states), as is the Free Hanseatic City of Bremen, which in fact includes the cities of Bremen and Bremerhaven. The remaining 13 states are called Flächenländer (literally: area states).
The creation of the Federal Republic of Germany in 1949 was through the unification of the western states (which were previously under American, British, and French administration) created in the aftermath of World War II. Initially, in 1949, the states of the Federal Republic were Baden, Bavaria (in German: Bayern), Bremen, Hamburg, Hesse (Hessen), Lower Saxony (Niedersachsen), North Rhine-Westphalia (Nordrhein-Westfalen), Rhineland-Palatinate (Rheinland-Pfalz), Schleswig-Holstein, Württemberg-Baden (until 1952), Württemberg-Hohenzollern (until 1952) and the Free State of Oldenburg (until 1946). West Berlin, while officially not part of the Federal Republic, was largely integrated and considered as a de facto state.
In 1952, following a referendum, Baden, Württemberg-Baden, and Württemberg-Hohenzollern merged into Baden-Württemberg. In 1957, the Saar Protectorate rejoined the Federal Republic as the Saarland. German reunification in 1990, in which the German Democratic Republic (East Germany) ascended into the Federal Republic, resulted in the addition of the re-established eastern states of Brandenburg, Mecklenburg-West Pomerania (Mecklenburg-Vorpommern), Saxony (Sachsen), Saxony-Anhalt (Sachsen-Anhalt), and Thuringia (Thüringen), as well as the reunification of West and East Berlin into Berlin and its establishment as a full and equal state. A regional referendum in 1996 to merge Berlin with surrounding Brandenburg as "Berlin-Brandenburg" failed to reach the necessary majority vote in Brandenburg, while a majority of Berliners voted in favour of the merger.
Federalism is one of the entrenched constitutional principles of Germany. According to the German constitution (called Grundgesetz or Basic Law), some topics, such as foreign affairs and defense, are the exclusive responsibility of the federation (i.e., the federal level), while others fall under the shared authority of the states and the federation; the states retain residual legislative authority for all other areas, including "culture", which in Germany includes not only topics such as financial promotion of arts and sciences, but also most forms of education and job training. Though international relations including international treaties are primarily the responsibility of the federal level, the constituent states have certain limited powers in this area: in matters that affect them directly, the states defend their interests at the federal level through the Bundesrat ("Federal Council", the upper house of the German Federal Parliament) and in areas where they have legislative authority they have limited powers to conclude international treaties "with the consent of the federal government".
States
After 1945, new states were constituted in all four zones of occupation. In 1949, the states in the three western zones formed the Federal Republic of Germany. This is in contrast to the post-war development in Austria, where the Bund (federation) was constituted first, and then the individual states were created as units of a federal state.
The use of the term Länder (Lands) dates back to the Weimar Constitution of 1919. Before this time, the constituent states of the German Empire were called Staaten (States). Today, it is very common to use the term Bundesland (Federal Land). However, this term is not used officially, neither by the constitution of 1919 nor by the Basic Law (Constitution) of 1949. Three Länder call themselves Freistaaten (Free States, which is an older term in German for Republic), Bavaria (since 1919), Saxony (originally since 1919 and again since 1990), and Thuringia (since 1994). There is little continuity between the current states and their predecessors of the Weimar Republic with the exception of the three free states, and Hamburg and Bremen.
A new delimitation of the federal territory keeps being debated in Germany, though "Some scholars note that there are significant differences among the American states and regional governments in other federations without serious calls for territorial changes ...", as political scientist Arthur B. Gunlicks remarks (Gunlicks, Arthur B.: "German Federalism and Recent Reform Efforts", German Law Journal, Vol. 06, No. 10, p. 1287). He summarizes the main arguments for boundary reform in Germany: "... the German system of dual federalism requires strong Länder that have the administrative and fiscal capacity to implement legislation and pay for it from own source revenues. Too many Länder also make coordination among them and with the federation more complicated ..." (Gunlicks, p. 1288). But several proposals have failed so far; territorial reform remains a controversial topic in German politics and public perception (Gunlicks, pp. 1287–88).
List
State | Part of FRG since | Head of government | Governing coalition | Bundesrat votes | Area (km2) | Population | Pop. per km2 | Capital | ISO 3166-2 code | GDP per capita in euro
Baden-Württemberg | 1952* | Winfried Kretschmann (Greens) | Greens, CDU | 6 | 35,752 | 10,755,000 | 301 | Stuttgart | BW | 34,885
Bavaria (Freistaat Bayern) | 1949 | Horst Seehofer (CSU) | CSU | 6 | 70,552 | 12,542,000 | 178 | Munich (München) | BY | 35,443
Berlin | 1990** | Michael Müller (SPD) | SPD, Greens, The Left | 4 | 892 | 3,469,000 | 3,890 | – | BE | 28,806
Brandenburg | 1990 | Dietmar Woidke (SPD) | SPD, The Left | 4 | 29,479 | 2,500,000 | 85 | Potsdam | BB | 22,074
Bremen (Freie Hansestadt Bremen) | 1949 | Carsten Sieling (SPD) | SPD, Greens | 3 | 419 | 661,000 | 1,577 | Bremen | HB | 42,405
Hamburg (Freie und Hansestadt Hamburg) | 1949 | Olaf Scholz (SPD) | SPD, Greens | 3 | 755 | 1,788,000 | 2,368 | – | HH | 52,401
Hesse (Hessen) | 1949 | Volker Bouffier (CDU) | CDU, Greens | 5 | 21,115 | 6,066,000 | 287 | Wiesbaden | HE | 37,509
Lower Saxony (Niedersachsen) | 1949 | Stephan Weil (SPD) | SPD, Greens | 6 | 47,609 | 7,914,000 | 166 | Hanover (Hannover) | NI | 28,350
Mecklenburg-Vorpommern | 1990 | Erwin Sellering (SPD) | SPD, CDU | 3 | 23,180 | 1,639,000 | 71 | Schwerin | MV | 21,404
North Rhine-Westphalia (Nordrhein-Westfalen) | 1949 | Hannelore Kraft (SPD) | SPD, Greens | 6 | 34,085 | 17,837,000 | 523 | Düsseldorf | NW | 32,882
Rhineland-Palatinate (Rheinland-Pfalz) | 1949 | Malu Dreyer (SPD) | SPD, Greens, FDP | 4 | 19,853 | 3,999,000 | 202 | Mainz | RP | 28,311
Saarland | 1957 | Annegret Kramp-Karrenbauer (CDU) | CDU, SPD | 3 | 2,569 | 1,018,000 | 400 | Saarbrücken | SL | 30,098
Saxony (Freistaat Sachsen) | 1990 | Stanislaw Tillich (CDU) | CDU, SPD | 4 | 18,416 | 4,143,000 | 227 | Dresden | SN | 22,980
Saxony-Anhalt (Sachsen-Anhalt) | 1990 | Reiner Haseloff (CDU) | CDU, SPD, Greens | 4 | 20,446 | 2,331,000 | 116 | Magdeburg | ST | 22,427
Schleswig-Holstein | 1949 | Torsten Albig (SPD) | SPD, Greens, SSW | 4 | 15,799 | 2,833,000 | 179 | Kiel | SH | 25,947
Thuringia (Freistaat Thüringen) | 1990 | Bodo Ramelow (The Left) | The Left, SPD, Greens | 4 | 16,172 | 2,231,000 | 138 | Erfurt | TH | 21,663

* The states of Baden, Württemberg-Baden, and Württemberg-Hohenzollern were constituent states of the federation when it was formed in 1949. They united to form Baden-Württemberg in 1952.
** Berlin has only officially been a Bundesland since reunification, even though West Berlin was largely treated as a state of West Germany.
History
Federalism has a long tradition in German history. The Holy Roman Empire comprised many petty states numbering more than 300 around 1796. The number of territories was greatly reduced during the Napoleonic Wars (1796–1814). After the Congress of Vienna (1815), 39 states formed the German Confederation. The Confederation was dissolved after the Austro-Prussian War and replaced by a North German Federation under Prussian hegemony; this war left Prussia dominant in Germany, and German nationalism would compel the remaining independent states to ally with Prussia in the Franco-Prussian War of 1870–71, and then to accede to the crowning of King Wilhelm of Prussia as German Emperor. The new German Empire included 25 states (three of them, Hanseatic cities) and the imperial territory of Alsace-Lorraine. The empire was dominated by Prussia, which controlled 65% of the territory and 62% of the population. After the territorial losses of the Treaty of Versailles, the remaining states continued as republics of a new German federation. These states were gradually de facto abolished and reduced to provinces under the Nazi regime via the Gleichschaltung process, as the states administratively were largely superseded by the Nazi Gau system.
Figure: The Provinces of the Kingdom of Prussia (green) within the German Empire (1871–1918)
During the Allied occupation of Germany after World War II, internal borders were redrawn by the Allied military governments. No single state comprised more than 30% of either population or territory; this was intended to prevent any one state from being as dominant within Germany as Prussia had been in the past. Initially, only seven of the pre-War states remained: Baden (in part), Bavaria (reduced in size), Bremen, Hamburg, Hesse (enlarged), Saxony, and Thuringia. The states with hyphenated names, such as Rhineland-Palatinate, North Rhine-Westphalia, and Saxony-Anhalt, owed their existence to the occupation powers and were created out of mergers of former Prussian provinces and smaller states. Former German territory that lay east of the Oder-Neisse Line fell under either Polish or Soviet administration, but attempts were made, at least symbolically, not to abandon sovereignty well into the 1960s. However, no attempts were made to establish new states in these territories as they lay outside the jurisdiction of West Germany at that time.
Upon its founding in 1949, West Germany had eleven states. These were reduced to nine in 1952 when three south-western states (South Baden, Württemberg-Hohenzollern, and Württemberg-Baden) merged to form Baden-Württemberg. From 1957, when the French-occupied Saar Protectorate was returned and formed into the Saarland, the Federal Republic consisted of ten states, which are referred to as the "Old States" today. West Berlin was under the sovereignty of the Western Allies and neither a Western German state nor part of one. However, it was in many ways de facto integrated with West Germany under a special status.
East Germany originally consisted of five states (i.e., Brandenburg, Mecklenburg-Vorpommern, Saxony, Saxony-Anhalt, and Thuringia). In 1952, these states were abolished and the East was divided into 14 administrative districts called bezirke. Soviet-controlled East Berlin, despite officially having the same status as West Berlin, was declared East Germany's capital and its 15th district.
Just prior to the German reunification on 3 October 1990, the East German states were reconstituted close to their earlier configuration as the five "New States". The former district of East Berlin joined West Berlin to form the new state of Berlin. Henceforth, the 10 "old states" plus 5 "new states" plus the new state Berlin add up to current 16 states of Germany.
Figure: The states of the Weimar Republic in 1925, with the Free State of Prussia (Freistaat Preußen) as the largest
Later, the constitution was amended to state that the citizens of the 16 states had successfully achieved the unity of Germany in free self-determination and that the Basic Law thus applied to the entire German people. Article 23, which had allowed "any other parts of Germany" to join, was rephrased. It had been used in 1957 to reintegrate the Saar Protectorate as the Saarland into the Federal Republic, and this was used as a model for German reunification in 1990. The amended article now defines the participation of the Federal Council and the 16 German states in matters concerning the European Union.
The German states can conclude treaties with foreign countries in matters within their own sphere of competence and with the consent of the Federal Government (Article 32 of the Basic Law).
The description free state (Freistaat) is merely a historic synonym for republic and was a description used by most German states after the abolishment of monarchy after World War I. Today, Freistaat is associated emotionally with a more independent status, especially in Bavaria. However, it has no legal significance. All sixteen states are represented at the federal level in the Bundesrat (Federal Council), where their voting power depends on the size of their population.
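The apportionment is by population bands rather than strictly proportional: under Article 51(2) of the Basic Law every state has at least three votes, states with more than two million inhabitants have four, those with more than six million have five, and those with more than seven million have six. A small sketch of that rule, using population figures from the table above purely as example inputs:

```python
def bundesrat_votes(population):
    """Bundesrat vote apportionment per Article 51(2) of the Basic Law."""
    if population > 7_000_000:
        return 6
    if population > 6_000_000:
        return 5
    if population > 2_000_000:
        return 4
    return 3

for state, pop in [("North Rhine-Westphalia", 17_837_000),
                   ("Hesse", 6_066_000),
                   ("Berlin", 3_469_000),
                   ("Bremen", 661_000)]:
    print(state, bundesrat_votes(pop))
```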
West Germany, 1945–90
Article 29 of the Basic Law states that "the division of the federal territory into Länder may be revised to ensure that each Land be of a size and capacity to perform its functions effectively". The somewhat complicated provisions regulate that "revisions of the existing division into Länder shall be effected by a federal law, which must be confirmed by referendum".
A new delimitation of the federal territory has been discussed since the Federal Republic was founded in 1949 and even before. Committees and expert commissions advocated a reduction of the number of states; academics (Rutz, Miegel, Ottnad etc.) and politicians (Döring, Apel, and others) made proposals some of them far-reaching for redrawing boundaries but hardly anything came of these public discussions. Territorial reform is sometimes propagated by the richer states as a means to avoid or reduce fiscal transfers.
To date, the only successful reform was the merger of the states of Baden, Württemberg-Baden, and Württemberg-Hohenzollern to form the new state of Baden-Württemberg in 1952.
Delimitations
Article 29 reflects a debate on territorial reform in Germany that is much older than the Basic Law. The Holy Roman Empire was a loose confederation of large and petty principalities under the nominal suzerainty of the emperor. Approximately 300 states existed at the eve of the French Revolution in 1789.
Territorial boundaries were essentially redrawn as a result of military conflicts and interventions from the outside: from the Napoleonic Wars to the Congress of Vienna, the number of territories decreased from about 300 to 39; in 1866 Prussia annexed the sovereign states of Hanover, Nassau, Hesse-Kassel, and the Free City of Frankfurt; the last consolidation came about under Allied occupation after 1945.
The debate on a new delimitation of the German territory started in 1919 as part of discussions about the new constitution. Hugo Preuss, the father of the Weimar Constitution, drafted a plan to divide the German Reich into 14 roughly equal-sized states. His proposal was turned down due to opposition of the states and concerns of the government. Article 18 of the constitution enabled a new delimitation of the German territory but set high hurdles: three fifths of the votes cast, and at least a majority of the population, were necessary to decide on an alteration of territory. In fact, until 1933 there were only four changes in the configuration of the German states: the seven Thuringian states were merged in 1920, whereby Coburg opted for Bavaria, Pyrmont joined Prussia in 1922, and Waldeck did so in 1929. Any later plans to break up the dominating Prussia into smaller states failed because political circumstances were not favorable to state reforms.
After the Nazi Party seized power in January 1933, the Länder increasingly lost importance. They became administrative regions of a centralised country. Three changes are of particular note: on January 1, 1934, Mecklenburg-Schwerin was united with the neighbouring Mecklenburg-Strelitz; and, by the Greater Hamburg Act (Groß-Hamburg-Gesetz), from April 1, 1937, the area of the city-state was extended, while Lübeck lost its independence and became part of the Prussian province of Schleswig-Holstein.
Figure: West Germany (blue), East Germany (red) and West Berlin (yellow)
Between 1945 and 1947, new states were established in all four zones of occupation: Bremen, Hesse, Württemberg-Baden, and Bavaria in the American zone; Hamburg, Schleswig-Holstein, Lower Saxony, and North Rhine-Westphalia in the British zone; Rhineland-Palatinate, Baden, Württemberg-Hohenzollern and the Saarland which later received a special status in the French zone; Mecklenburg (-Vorpommern), Brandenburg, Saxony, Saxony-Anhalt, and Thuringia in the Soviet zone.
In 1948, the military governors of the three Western Allies handed over the so-called Frankfurt Documents to the minister-presidents in the Western occupation zones. Among other things, they recommended revising the boundaries of the West German states in a way that none of them should be too large or too small in comparison with the others.
As the premiers did not come to an agreement on this question, the Parliamentary Council was supposed to address this issue. Its provisions are reflected in Article 29. There was a binding provision for a new delimitation of the federal territory: the Federal Territory must be revised ... (paragraph 1). Moreover, in territories or parts of territories whose affiliation with a Land had changed after 8 May 1945 without a referendum, people were allowed to petition for a revision of the current status within a year after the promulgation of the Basic Law (paragraph 2). If at least one tenth of those entitled to vote in Bundestag elections were in favour of a revision, the federal government had to include the proposal into its legislation. Then a referendum was required in each territory or part of a territory whose affiliation was to be changed (paragraph 3). The proposal should not take effect if within any of the affected territories a majority rejected the change. In this case, the bill had to be introduced again and after passing had to be confirmed by referendum in the Federal Republic as a whole (paragraph 4). The reorganization should be completed within three years after the Basic Law had come into force (paragraph 6).
In their letter to Konrad Adenauer the three western military governors approved the Basic Law but suspended Article 29 until such time as a peace treaty should be concluded. Only the special arrangement for the southwest under Article 118 could enter into force.
Foundation of Baden-Württemberg
In southwestern Germany, territorial revision seemed to be a top priority since the border between the French and American occupation zones was set along the Autobahn Karlsruhe-Stuttgart-Ulm (today the A8). Article 118 stated "The division of the territory comprising Baden, Württemberg-Baden and Württemberg-Hohenzollern into Länder may be revised, without regard to the provisions of Article 29, by agreement between the Länder concerned. If no agreement is reached, the revision shall be effected by a federal law, which shall provide for an advisory referendum." Since no agreement was reached, a referendum was held on 9 December 1951 in four different voting districts, three of which approved the merger (South Baden refused but was overruled as the result of total votes was decisive). On 25 April 1952, the three former states merged to form Baden-Württemberg.
Petitions asking to reconstitute former states
With the Paris Agreements West Germany regained (limited) sovereignty. This triggered the start of the one-year period as set in paragraph 2 of Article 29. As a consequence, eight petitions for referendums were launched, six of which were successful:
Reconstitution of the Free State of Oldenburg 12.9%
Reconstitution of the Free State of Schaumburg-Lippe 15.3%
Integration of Koblenz and Trier into North Rhine-Westphalia 14.2%
Reintegration of Rheinhessen into Hesse 25.3%
Reintegration of Montabaur into Hesse 20.2%
Reconstitution of Baden 15.1%
The last petition was originally rejected by the Federal Minister of the Interior in reference to the referendum of 1951. However, the Federal Constitutional Court of Germany ruled that the rejection was unlawful: the population of Baden had the right to a new referendum because the one of 1951 had taken place under different rules from the ones provided for by article 29. In particular, the outcome of the 1951 referendum did not reflect the wishes of the majority of Baden's population.
The two Palatine petitions (for a reintegration into Bavaria and integration into Baden-Württemberg) failed with 7.6% and 9.3%. Further requests for petitions (Lübeck, Geesthacht, Lindau, Achberg, and 62 Hessian communities) had already been rejected as inadmissible by the Federal Minister of the Interior or were withdrawn as in the case of Lindau. The rejection was confirmed by the Federal Constitutional Court in the case of Lübeck.
Saar: Little reunification with Germany
In the Paris Agreements of 23 October 1954, France offered to establish an independent "Saarland", under the auspices of the Western European Union (WEU), but on 23 October 1955 in the Saar Statute referendum the Saar electorate rejected this plan by 67.7% to 32.3% (out of a 96.5% turnout: 423,434 against, 201,975 for) despite the public support of Federal German Chancellor Konrad Adenauer for the plan. The rejection of the plan by the Saarlanders was interpreted as support for the Saar to join the Federal Republic of Germany.
On October 27, 1956, the Saar Treaty established that Saarland should be allowed to join Germany, as provided for by Article 23 of the constitution of the Federal Republic of Germany. Saarland became part of Germany effective January 1, 1957. The Franco-Saarland currency union ended on 6 July 1959, when the West German Deutsche Mark was introduced as legal tender in the Saarland.
Constitutional amendments
Paragraph 6 of Article 29 stated that if a petition was successful a referendum should be held within three years. Since the deadline passed on 5 May 1958 without anything happening the Hesse state government filed a constitutional complaint with the Federal Constitutional Court in October 1958. The complaint was dismissed in July 1961 on the grounds that Article 29 had made the new delimitation of the federal territory an exclusively federal matter. At the same time, the Court reaffirmed the requirement for a territorial revision as a binding order to the relevant constitutional bodies.
The grand coalition decided to settle the 1956 petitions by setting binding deadlines for the required referendums. The referendums in Lower Saxony and Rhineland-Palatinate were to be held by 31 March 1975, and the referendum in Baden was to be held by 30 June 1970. The quorum for a successful vote was set at one-quarter of those entitled to vote in Bundestag elections. Paragraph 4 stated that the vote should be disregarded if it contradicted the objectives of paragraph 1.
In his investiture address, given on 28 October 1969 in Bonn, Chancellor Willy Brandt proposed that the government would consider Article 29 of the Basic Law as a binding order. An expert commission was established, named after its chairman, the former Secretary of State Professor Werner Ernst. After two years of work, the experts delivered their report in 1973. It provided an alternative proposal for both northern Germany and central and southwestern Germany. In the north, either a single new state consisting of Schleswig-Holstein, Hamburg, Bremen and Lower Saxony should be created (solution A) or two new states, one in the northeast consisting of Schleswig-Holstein, Hamburg and the northern part of Lower Saxony (from Cuxhaven to Lüchow-Dannenberg) and one in the northwest consisting of Bremen and the rest of Lower Saxony (solution B). In the center and southwest, either Rhineland-Palatinate (with the exception of the Germersheim district but including the Rhine-Neckar region) should be merged with Hesse and the Saarland (solution C), in which case the district of Germersheim would become part of Baden-Württemberg.
The Palatinate (including the region of Worms) could also be merged with the Saarland and Baden-Württemberg, and the rest of Rhineland-Palatinate would then merge with Hesse (solution D). Both alternatives could be combined (AC, BC, AD, BD).
At the same time the commission developed criteria for classifying the terms of Article 29 Paragraph 1. The capacity to perform functions effectively was considered most important, whereas regional, historical, and cultural ties were considered as hardly verifiable. To fulfill administrative duties adequately, a population of at least five million per state was considered as necessary.
After a relatively brief discussion and mostly negative responses from the affected states, the proposals were shelved. Public interest was limited or nonexistent.
The referendum in Baden was held on 7 June 1970: with 81.9%, the vast majority of voters decided for Baden to remain part of Baden-Württemberg; only 18.1% opted for the reconstitution of the old state of Baden. The referendums in Lower Saxony and Rhineland-Palatinate were held on 19 January 1975:
reconstitution of the Free State of Oldenburg 31%
reconstitution of the Free State of Schaumburg-Lippe 39.5%
integration of Koblenz and Trier into North Rhine-Westphalia 13%
reintegration of Rheinhessen into Hesse 7.1%
reintegration of Montabaur region into Hesse 14.3%
Hence, the two referendums in Lower Saxony were successful. As a consequence, the legislature was forced to act and decided that both Oldenburg and Schaumburg-Lippe should remain part of Lower Saxony. The justification was that a reconstitution of Oldenburg and Schaumburg-Lippe would contradict the objectives of paragraph 1. An appeal against the decision was rejected as inadmissible by the Federal Constitutional Court.
On 24 August 1976, the binding provision for a new delimitation of the federal territory was altered into a mere discretionary one. Paragraph 1 was rephrased, now putting the capacity to perform functions in the first place. The option for a referendum in the Federal Republic as a whole (paragraph 4) was abolished. Hence a territorial revision was no longer possible against the will of the population affected by it.
Reunited Germany, 1990–present
The debate on territorial revision restarted shortly before German reunification. While academics (Rutz and others) and politicians (Gobrecht) suggested introducing only two, three, or four states in East Germany, legislation reconstituted the five states that had existed until 1952, however, with slightly changed boundaries.
Article 118a was introduced into the Basic Law and provided the possibility for Berlin and Brandenburg to merge "without regard to the provisions of Article 29, by agreement between the two Länder with the participation of their inhabitants who are entitled to vote".
Article 29 was again modified and provided an option for the states to "revise the division of their existing territory or parts of their territory by agreement without regard to the provisions of paragraphs (2) through (7)".
The state treaty between Berlin and Brandenburg was approved in both parliaments with the necessary two-thirds majority, but in the popular referendum of 5 May 1996 about 63% voted against the merger.
Politics
Germany is a federal, parliamentary, representative democratic republic. The German political system operates under a framework laid out in the 1949 constitutional document known as the Grundgesetz (Basic Law). By calling the document the Grundgesetz, rather than Verfassung (constitution), the authors expressed the intention that it would be replaced by a true constitution once Germany was reunited as one state.
Amendments to the Grundgesetz generally require a two-thirds majority of both chambers of the parliament; the fundamental principles of the constitution, as expressed in the articles guaranteeing human dignity, the separation of powers, the federal structure, and the rule of law are valid in perpetuity. Despite the original intention, the Grundgesetz remained in effect after the German reunification in 1990, with only minor amendments.
Government
The Basic Law of the Federal Republic of Germany, the federal constitution, stipulates that the structure of each Federal State's government must "conform to the principles of republican, democratic, and social government, based on the rule of law" (Article 28). Most of the states are governed by a cabinet led by a Ministerpräsident (Minister-President), together with a unicameral legislative body known as the Landtag (State Diet). The states are parliamentary republics and the relationship between their legislative and executive branches mirrors that of the federal system: the legislatures are popularly elected for four or five years (depending on the state), and the Minister-President is then chosen by a majority vote among the Landtag's members. The Minister-President appoints a cabinet to run the state's agencies and to carry out the executive duties of the state's government.
The governments in Berlin, Bremen and Hamburg are designated by the term Senate. In the three free states of Bavaria, Saxony, and Thuringia the government is referred to as the State Government (Staatsregierung), and in the other ten states the term Land Government (Landesregierung) is used. Before January 1, 2000, Bavaria had a bicameral parliament, with a popularly elected Landtag, and a Senate made up of representatives of the state's major social and economic groups. The Senate was abolished following a referendum in 1998. The states of Berlin, Bremen, and Hamburg are governed slightly differently from the other states. In each of those cities, the executive branch consists of a Senate of approximately eight, selected by the state's parliament; the senators carry out duties equivalent to those of the ministers in the larger states. The equivalent of the Minister-President is the Senatspräsident (President of the Senate) in Bremen, the Erster Bürgermeister (First Mayor) in Hamburg, and the Regierender Bürgermeister (Governing Mayor) in Berlin. The parliament for Berlin is called the Abgeordnetenhaus (House of Representatives), while Bremen and Hamburg both have a Bürgerschaft. The parliaments in the remaining 13 states are referred to as Landtag (State Parliament).
Subdivisions
Figure: Administrative divisions of Germany
The city-states of Berlin and Hamburg are subdivided into boroughs. The state Free Hanseatic City of Bremen consists of two urban districts, Bremen and Bremerhaven, which are not contiguous. In the other states there are the following subdivisions:
Area associations (Landschaftsverbände)
The most populous state of North Rhine-Westphalia is uniquely divided into two area associations (Landschaftsverbände), one for the Rhineland, and one for Westphalia-Lippe. This arrangement was meant to ease the friction caused by uniting the two culturally different regions into a single state after World War II. The Landschaftsverbände now have very little power.
The constitution of Mecklenburg-Vorpommern at §75 states the right of Mecklenburg and Vorpommern to form Landschaftsverbände, although these two constituent parts of the state are not represented in the current administrative division.
Governmental districts (Regierungsbezirke)
The large states of Baden-Württemberg, Bavaria, Hesse, and North Rhine-Westphalia are divided into governmental districts, or Regierungsbezirke.
In Rhineland-Palatinate, these districts were abolished on January 1, 2000, in Saxony-Anhalt on January 1, 2004, and in Lower Saxony on January 1, 2005. From 1990 until 2012, Saxony was divided into three districts (called Direktionsbezirke since 2008). In 2012, these districts' authorities were merged into one central authority, the Landesdirektion Sachsen.
Administrative districts (Kreise)
Figure: Map of German districts. Yellow districts are urban, white are suburban or rural.
The Districts of Germany (Kreise) are administrative districts, and every state except the city-states of Berlin and Hamburg, and the state of Bremen consists of "rural districts" (Landkreise), District-free Towns/Cities (Kreisfreie Städte, in Baden-Württemberg also called "urban districts", or Stadtkreise), cities that are districts in their own right, or local associations of a special kind (Kommunalverbände besonderer Art), see below. The state Free Hanseatic City of Bremen consists of two urban districts, while Berlin and Hamburg are states and urban districts at the same time.
As of 2011, there are 295 Landkreise and 107 Kreisfreie Städte, making 402 districts altogether. Each consists of an elected council and an executive, which is chosen either by the council or by the people, depending on the state, the duties of which are comparable to those of a county executive in the United States, supervising local government administration. The Landkreise have primary administrative functions in specific areas, such as highways, hospitals, and public utilities.
Local associations of a special kind are an amalgamation of one or more Landkreise with one or more Kreisfreie Städte to form a replacement of the aforementioned administrative entities at the district level. They are intended to implement simplification of administration at that level. Typically, a district-free city or town and its urban hinterland are grouped into such an association, or Kommunalverband besonderer Art. Such an organization requires the issuing of special laws by the governing state, since they are not covered by the normal administrative structure of the respective states.
In 2010 only three Kommunalverbände besonderer Art exist.
District of Hanover. Formed in 2001 out of the previous rural district of Hanover and the district-free city of Hanover.
Regionalverband Saarbrücken (district association Saarbrücken). Formed in 2008 out of the predecessor organization Stadtverband Saarbrücken (city association Saarbrücken), which was already formed in 1974.
City region of Aachen. Formed in 2009 out of the previous rural district of Aachen and the district-free city of Aachen.
Offices (Ämter)
Ämter ("offices" or "bureaus"): In some states there is an administrative unit between the districts and the municipalities, called Ämter (singular Amt), Amtsgemeinden, Gemeindeverwaltungsverbände, Landgemeinden, Verbandsgemeinden, Verwaltungsgemeinschaften, or Kirchspiellandgemeinden.
Municipalities (Gemeinden)
Municipalities (Gemeinden): Every rural district and every Amt is subdivided into municipalities, while every urban district is a municipality in its own right. There are 12,141 municipalities, which are the smallest administrative units in Germany. Cities and towns are municipalities as well, also having city rights or town rights (Stadtrechte). Nowadays, this is mostly just the right to be called a city or town. However, in former times there were many other privileges, including the right to impose local taxes or to allow industry only within city limits.
The municipalities are ruled by elected councils and by an executive, the mayor, who is chosen either by the council or directly by the people, depending on the state. The "constitution" for the municipalities is created by the states and is uniform throughout a state (except for Bremen, which allows Bremerhaven to have its own constitution).
The municipalities have two major policy responsibilities. First, they administer programs authorized by the federal or state government. Such programs typically relate to youth, schools, public health, and social assistance. Second, Article 28(2) of the Basic Law guarantees the municipalities "the right to regulate on their own responsibility all the affairs of the local community within the limits set by law." Under this broad statement of competence, local governments can justify a wide range of activities. For instance, many municipalities develop and expand the economic infrastructure of their communities through the development of industrial trading estates.
Local authorities foster cultural activities by supporting local artists, building arts centres, and by holding fairs. Local government also provides public utilities, such as gas and electricity, as well as public transportation. The majority of the funding for municipalities is provided by higher levels of government rather than from taxes raised and collected directly by themselves.
In five of the German states, there are unincorporated areas, in many cases unpopulated forest and mountain areas, but also four Bavarian lakes that are not part of any municipality. As of January 1, 2005, there were 246 such areas, with a total area of 4167.66 km2 or 1.2 percent of the total area of Germany. Only four unincorporated areas are populated, with a total population of about 2,000. The following table gives an overview.
Unincorporated areas in German states:

State | Number (Jan 1, 2005) | Area in km2 (Jan 1, 2005) | Number (Jan 1, 2000) | Area in km2 (Jan 1, 2000)
Bavaria | 216 | 2,725.06 | 262 | 2,992.78
Lower Saxony | 23 | 949.16 | 25 | 1,394.10
Hesse | 4 | 327.05 | 4 | 327.05
Schleswig-Holstein | 2 | 99.41 | 2 | 99.41
Baden-Württemberg | 1 | 66.98 | 2 | 76.99
Total | 246 | 4,167.66 | 295 | 4,890.33
In 2000, the number of unincorporated areas was 295, with a total area of 4,890.33 km2. However, the unincorporated areas are continually being incorporated into neighboring municipalities, wholly or partially, most frequently in Bavaria.
See also
Elections in Germany
Federalism in Germany
List of cities in Germany
List of German states by GDP
List of subnational entities
For a list of German states prior to 1815 see List of states in the Holy Roman Empire
New states of Germany
State Police (Landespolizei)
Composition of the German State Parliaments
Notes
References
External links
CityMayors feature on Germany subdivisions
Category:Subdivisions of Germany
Category:Germany geography-related lists
Category:States of Germany-related lists
IBM

International Business Machines Corporation (commonly referred to as IBM) is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries. The company originated in 1911 as the Computing-Tabulating-Recording Company (CTR) and was renamed "International Business Machines" in 1924.
IBM manufactures and markets computer hardware, middleware and software, and offers hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most patents generated by a business (as of 2017) for 24 consecutive years. Inventions by IBM include the automated teller machine (ATM), the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, the UPC barcode, and dynamic random-access memory (DRAM).
IBM has continually shifted its business mix by exiting commoditizing markets and focusing on higher-value, more profitable markets. This includes spinning off printer manufacturer Lexmark in 1991 and selling off its personal computer (ThinkPad) and x86-based server businesses to Lenovo (2005 and 2014, respectively), and acquiring companies such as PwC Consulting (2002), SPSS (2009), and The Weather Company (2016). Also in 2014, IBM announced that it would go "fabless", continuing to design semiconductors but offloading manufacturing to GlobalFoundries.
Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with (as of 2016) nearly 380,000 employees. Known as "IBMers", IBM employees have been awarded five Nobel Prizes, six Turing Awards, ten National Medals of Technology and five National Medals of Science.
History
In the 1880s, technologies emerged that would ultimately form the core of what would become International Business Machines (IBM). Julius E. Pitrat patented the computing scale in 1885; Alexander Dey invented the dial recorder (1888); Herman Hollerith patented the Electric Tabulating Machine; and Willard Bundy invented a time clock to record a worker's arrival and departure time on a paper tape in 1889. On June 16, 1911, their four companies were consolidated in New York State by Charles Ranlett Flint to form the Computing-Tabulating-Recording Company (CTR) based in Endicott, New York (NY Times, June 10, 1911, "Tabulating Concerns Unite: Flint & Co. Bring Four Together with $19,000,000 capital": http://query.nytimes.com/mem/archive-free/pdf?res=F00F15FD355A17738DDDA90994DE405B818DF1D3). The four companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto. They manufactured machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company (NCR) by John Henry Patterson, called on Flint and, in 1914, was offered CTR. Watson joined CTR as General Manager and then, 11 months later, was made President when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies (Belden (1962), p. 105). He implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues more than doubled to $9 million and the company's operations expanded to Europe, South America, Asia and Australia. "Watson had never liked the clumsy hyphenated title of the CTR" and chose to replace it with the more expansive title "International Business Machines" (Belden (1962), p. 125).
[Image: NACA researchers using an IBM type 704 electronic data processing machine in 1957]
In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data. Its clients included the U.S. government, during its first effort to maintain the employment records of 26 million people pursuant to the Social Security Act, and Hitler's Third Reich, largely through the German subsidiary Dehomag. During the Second World War the company produced small arms for the American war effort (the M1 Carbine and the Browning Automatic Rifle).
In 1949, Thomas Watson, Sr., created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations. In 1952, he stepped down after almost 40 years at the company helm, and his son Thomas Watson, Jr. was named president. In 1956, the company demonstrated the first practical example of artificial intelligence when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers but to "learn" from its own experience. In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the highly successful Selectric typewriter. In 1963, IBM employees and computers helped NASA track the orbital flight of the Mercury astronauts. A year later it moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965 Gemini flights, 1966 Saturn flights and 1969 lunar mission.
[Image: IBM inventions (clockwise from top left): the hard disk drive, DRAM, the UPC bar code, and the magnetic stripe card]
On April 7, 1964, IBM announced the first computer system family, the IBM System/360. Sold between 1964 and 1978, it spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. In 1974, IBM engineer George J. Laurer developed the Universal Product Code. IBM and the World Bank first introduced financial swaps to the public in 1981 when they entered into a swap agreement. The IBM PC, originally designated IBM 5150, was introduced in 1981, and it soon became an industry standard. In 1991, IBM sold printer manufacturer Lexmark.
In 1993, IBM posted a US$8 billion loss, at the time the biggest in American corporate history. Lou Gerstner was hired as CEO from RJR Nabisco to turn the company around. In 2002, IBM acquired PwC Consulting, and in 2003 it initiated a project to redefine company values, hosting a three-day online discussion of key business issues with 50,000 employees. The result was three values: "Dedication to every client's success", "Innovation that matters—for our company and for the world", and "Trust and personal responsibility in all relationships".
In 2005, the company sold its personal computer business to Chinese technology company Lenovo and, in 2009, it acquired software company SPSS Inc. Later in 2009, IBM's Blue Gene supercomputing program was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama. In 2011, IBM gained worldwide attention for its artificial intelligence program Watson, which was exhibited on Jeopardy! where it won against game-show champions Ken Jennings and Brad Rutter. In 2012, IBM announced it had agreed to buy Kenexa, and a year later it also acquired SoftLayer Technologies, a web hosting service, in a deal worth around $2 billion.
In 2014, IBM announced it would sell its x86 server division to Lenovo for $2.1 billion. Also that year, IBM began announcing several major partnerships with other companies, including Apple Inc., Twitter, Facebook, Tencent, Cisco, Under Armour, Box, Microsoft, VMware, CSC, Macy's, and Sesame Workshop, the parent company of Sesame Street.
In 2015, IBM announced two major acquisitions: Merge Healthcare for $1 billion and all digital assets from The Weather Company, including Weather.com and the Weather Channel mobile app. IBMers also created the film A Boy and His Atom (2013), the first molecule movie to tell a story. In 2016, IBM acquired video conferencing service Ustream and formed a new cloud video unit. In April 2016, it posted a 14-year low in quarterly sales (Matt Egan, CNN Money, "Big Blue isn't so big anymore," April 19, 2016). The following month, Groupon sued IBM accusing it of patent infringement, two months after IBM accused Groupon of patent infringement in a separate lawsuit (Jonathan Stempel, Reuters, "Groupon sues 'once-great' IBM over patent," May 9, 2016).
Headquarters and offices
[Image: IBM CHQ in Armonk, New York, in 2014]
[Image: Pangu Plaza, one of IBM's offices in Beijing, China]
IBM is headquartered in Armonk, New York, a small town 37 miles north of Midtown Manhattan. Its principal building, referred to as CHQ, is a glass and stone edifice on a parcel amid a 432-acre former apple orchard the company purchased in the mid-1950s. There are two other IBM buildings within walking distance of CHQ: the North Castle office, which previously served as IBM's headquarters; and the IBM Learning Center (ILC), a resort hotel and training center, which has 182 guest rooms, 31 meeting rooms, and various amenities.
IBM operates in 170 countries as of 2016, with mobility centers in smaller market areas and major campuses in the larger ones. In New York City, IBM has several offices besides CHQ, including the IBM Watson headquarters at Astor Place in Manhattan. Outside of New York, major campuses in the United States include Austin, Texas; Research Triangle Park (Raleigh-Durham), North Carolina; Rochester, Minnesota; and Silicon Valley, California.
IBM's real estate holdings are varied and globally diverse. Towers occupied by IBM include 1250 René-Lévesque (Montreal, Canada), Tour Descartes (Paris, France), and One Atlantic Center (Atlanta, Georgia, USA). In Beijing, China, IBM occupies Pangu Plaza, which is the city's seventh tallest building and overlooks Beijing National Stadium ("Bird's Nest"), which was home to the 2008 Summer Olympics.
Other notable buildings include the IBM Rome Software Lab (Rome, Italy), the Hursley House (Winchester, UK), 330 North Wabash (Chicago, Illinois, USA), the Cambridge Scientific Center (Cambridge, Massachusetts, USA), the IBM Toronto Software Lab (Toronto, Canada), the IBM Building, Johannesburg (Johannesburg, South Africa), the IBM Building (Seattle) (Seattle, Washington, USA), the IBM Hakozaki Facility (Tokyo, Japan), the IBM Yamato Facility (Yamato, Japan), and the IBM Canada Head Office Building (Ontario, Canada). Defunct IBM campuses include the IBM Somers Office Complex (Somers, New York). The company's contributions to industrial architecture and design include works by Eero Saarinen, Ludwig Mies van der Rohe and I.M. Pei. Van der Rohe's building in Chicago, the original center of the company's research division post-World War II, was recognized with the 1990 Honor Award from the National Building Museum. IBM was recognized as one of the Top 20 Best Workplaces for Commuters by the United States Environmental Protection Agency (EPA) in 2005, which recognized Fortune 500 companies that provided employees with excellent commuter benefits to help reduce traffic and air pollution. In 2004, concerns were raised related to IBM's contribution in its early days to pollution in its original location in Endicott, New York.
Products and services
[Image: InterConnect, IBM's annual conference on cloud computing and mobile technologies]
[Image: Blue Gene was awarded the National Medal of Technology and Innovation in 2009.]
IBM has a large and diverse portfolio of products and services. As of 2016, these offerings fall into the categories of cloud computing, cognitive computing, commerce, data and analytics, Internet of Things, IT infrastructure, mobile, and security.
IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models. For instance, the IBM Bluemix PaaS enables developers to quickly create complex websites on a pay-as-you-go model. IBM SoftLayer is a dedicated server, managed hosting and cloud computing provider, which in 2011 reported hosting more than 81,000 servers for more than 26,000 customers. IBM also offers Cloud Data Encryption Services (ICDES), using cryptographic splitting to secure customer data.
IBM also hosts the industry-wide cloud computing and mobile technologies conference InterConnect each year.
Hardware designed by IBM for these categories includes IBM's POWER microprocessors, which are employed inside many console gaming systems, including the Xbox 360, PlayStation 3, and Nintendo's Wii U. IBM Secure Blue is encryption hardware that can be built into microprocessors, and in 2014, the company revealed it was investing $3 billion over the following five years to design a neural chip that mimics the human brain, with 10 billion neurons and 100 trillion synapses, but that uses just 1 kilowatt of power. In 2016, the company launched all-flash arrays designed for small and midsized companies, which include software for data compression, provisioning, and snapshots across various systems (Larry Dignan, ZDNet, "IBM launches flash arrays for smaller enterprises, aims to court EMC, Dell customers," August 23, 2016).
IT outsourcing also represents a major service offered by IBM, with more than 40 data centers worldwide. alphaWorks is IBM's source for emerging software technologies, and SPSS is a software package used for statistical analysis. IBM's Kenexa suite provides employment and retention solutions, and includes the BrassRing, an applicant tracking system used by thousands of companies for recruiting. IBM also owns The Weather Company, which provides weather forecasting and includes weather.com and Weather Underground.
Smarter Planet is an initiative that seeks to achieve economic growth, near-term efficiency, sustainable development, and societal progress, targeting opportunities such as smart grids, water management systems, solutions to traffic congestion, and greener buildings.
Services offerings include Redbooks, which are publicly available online books about best practices with IBM products, and developerWorks, a website for software developers and IT professionals with how-to articles and tutorials, as well as software downloads, code samples, discussion forums, podcasts, blogs, wikis, and other resources for developers and technical professionals.
IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Watson made its debut in 2011 on the American game show Jeopardy!, where it competed against champions Ken Jennings and Brad Rutter in a three-game tournament and won. Watson has since been applied to business, healthcare, developers, and universities. For example, IBM has partnered with Memorial Sloan Kettering Cancer Center to assist with considering treatment options for oncology patients and for doing melanoma screenings. Also, several companies have begun using Watson for call centers, either replacing or assisting customer service agents.
Research
[Image: The Thomas J. Watson Research Center in Yorktown Heights, New York, is one of 12 IBM research labs worldwide.]
[Image: IBM Fellow Benoit Mandelbrot, who coined the term "fractal" in 1975 and developed fractal geometry]
Research has been a part of IBM since its founding, and its organized efforts trace their roots back to 1945, when the Watson Scientific Computing Laboratory was founded at Columbia University in New York City, converting a renovated fraternity house on Manhattan's West Side into IBM's first laboratory. Now, IBM Research constitutes the largest industrial research organization in the world, with 12 labs on 6 continents. IBM Research is headquartered at the Thomas J. Watson Research Center in New York, and facilities include the Almaden lab in California, Austin lab in Texas, Australia lab in Melbourne, Brazil lab in São Paulo and Rio de Janeiro, China lab in Beijing and Shanghai, Ireland lab in Dublin, Haifa lab in Israel, India lab in Delhi and Bangalore, Tokyo lab, Zurich lab and Africa lab in Nairobi.
In terms of investment, IBM's R&D expenditure totals several billion dollars each year. In 2012, that expenditure was approximately US$6.3 billion. Recent allocations have included $1 billion to create a business unit for Watson in 2014, and $3 billion for next-generation semiconductor research along with $4 billion towards growing the company's "strategic imperatives" (cloud, analytics, mobile, security, social) in 2015.
IBM has been a leading proponent of the Open Source Initiative, and began supporting Linux in 1998. The company invests billions of dollars in services and software based on Linux through the IBM Linux Technology Center, which includes over 300 Linux kernel developers. IBM has also released code under different open source licenses, such as the platform-independent software framework Eclipse (worth approximately US$40 million at the time of the donation), the three-sentence International Components for Unicode (ICU) license, and the Java-based relational database management system (RDBMS) Apache Derby. IBM's open source involvement has not been trouble-free, however (see SCO v. IBM).
Famous inventions and developments by IBM include the automated teller machine (ATM), dynamic random-access memory (DRAM), the electronic keypunch, the financial swap, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, RISC, the SABRE airline reservation system, SQL, the Universal Product Code (UPC) bar code, and the virtual machine. Additionally, in 1990 company scientists used a scanning tunneling microscope to arrange 35 individual xenon atoms to spell out the company acronym, marking the first structure assembled one atom at a time. A major part of IBM research is the generation of patents. Since its first patent, for a traffic signaling device, IBM has been one of the world's most prolific patent sources. As of 2017, the company holds the record for most patents generated by a business, marking 24 consecutive years of the achievement.
Five IBMers have received the Nobel Prize: Leo Esaki, of the Thomas J. Watson Research Center in Yorktown Heights, N.Y., in 1973, for work in semiconductors; Gerd Binnig and Heinrich Rohrer, of the Zurich Research Center, in 1986, for the scanning tunneling microscope; and Georg Bednorz and Alex Müller, also of Zurich, in 1987, for research in superconductivity. Several IBMers have also won the Turing Award, including the first female recipient Frances E. Allen.
Current research includes a collaboration with the University of Michigan aimed at having computers act as academic advisers for undergraduate computer science and engineering students at the university (Clare Hopping, IT Pro, "IBM and University of Michigan develop human computer," January 18, 2016), and a partnership with AT&T, combining their cloud and Internet of Things (IoT) platforms to make them interoperable and to provide developers with easier tools (Larry Dignan, ZDNet, "IBM, AT&T to meld Internet of Things platforms," July 13, 2016).
Brand and reputation
[Image: IBM ads at John F. Kennedy International Airport, 2013]
IBM is nicknamed Big Blue in part due to its blue logo and color scheme, and also partially since IBM once had a de facto dress code of white shirts with blue suits. The company logo has undergone several changes over the years, with its current "8-bar" logo designed in 1972 by graphic designer Paul Rand. It was a general replacement for a 13-bar logo, since period photocopiers did not render large areas well.
IBM has a valuable brand as a result of over 100 years of operations and marketing campaigns. Since 1996, IBM has been the exclusive technology partner for the Masters Tournament, one of the four major championships in professional golf, with IBM creating the first Masters.org (1996), the first course cam (1998), the first iPhone app with live streaming (2009), and the first-ever live 4K Ultra High Definition feed in the United States for a major sporting event (2016). As a result, IBM CEO Ginni Rometty became the third female member of the Masters' governing body, the Augusta National Golf Club. IBM is also a major sponsor in professional tennis, with engagements at the U.S. Open, Wimbledon, the Australian Open, and the French Open. The company also sponsored the Olympic Games from 1960 to 2000, and the National Football League from 2003 to 2012.
In 2012, IBM's brand was valued at $75.5 billion and ranked by Interbrand as the №2 best brand worldwide. That same year, it was also ranked the №1 company for leaders (Fortune), the №2 green company in the U.S. (Newsweek), the №2 most respected company (Barron's), the №5 most admired company (Fortune), the №18 most innovative company (Fast Company), and the №1 in technology consulting and №2 in outsourcing (Vault). In 2015, Forbes ranked IBM the №5 most valuable brand.
People and culture
Employees
[Image: New IBMers being welcomed to bootcamp at IBM Austin, 2015]
[Image: Employees demonstrating IBM Watson capabilities in a Jeopardy! exhibition match on campus, 2011]
IBM has one of the largest workforces in the world, and employees at Big Blue are referred to as "IBMers". The company was among the first corporations to provide group life insurance (1934), survivor benefits (1935), training for women (1935), paid vacations (1937), and training for disabled people (1942). IBM hired its first black salesperson in 1946, and in 1953 Thomas J. Watson, Jr. published the company's first written equal opportunity policy letter, one year before the U.S. Supreme Court decision in Brown v. Board of Education and 11 years before the Civil Rights Act of 1964. The Human Rights Campaign has rated IBM 100% on its index of gay-friendliness every year since 2003, with IBM providing same-sex partners of its employees with health benefits and an anti-discrimination clause. Additionally, in 2005, IBM became the first major company in the world to commit formally not to use genetic information in employment decisions; and in 2015, IBM was named to Working Mother's 100 Best Companies List for the 30th consecutive year.
IBM has several leadership development and recognition programs to recognize employee potential and achievements. For early-career high potential employees, IBM sponsors leadership development programs by discipline (e.g., general management (GMLDP), human resources (HRLDP), finance (FLDP)). Each year, the company also selects 500 IBMers for the IBM Corporate Service Corps (CSC), which has been described as the corporate equivalent of the Peace Corps and gives top employees a month to do humanitarian work abroad. For certain interns, IBM also has a program called Extreme Blue that partners top business and technical students to develop high-value technology and compete to present their business case to the company's CEO at internship's end.
The company also has various designations for exceptional individual contributors such as Senior Technical Staff Member (STSM), Research Staff Member (RSM), Distinguished Engineer (DE), and Distinguished Designer (DD). Prolific inventors can also achieve patent plateaus and earn the designation of Master Inventor. The company's most prestigious designation is that of IBM Fellow; since 1963, the company has named a handful of Fellows each year based on technical achievement. Other programs recognize years of service, such as the Quarter Century Club established in 1924, and sellers are eligible to join the Hundred Percent Club, composed of IBM salesmen who meet their quotas, which has convened in Atlantic City, New Jersey. The company also selects 1,000 IBMers each year for the Best of IBM Award, which includes an all-expenses-paid trip to the awards ceremony in an exotic location.
IBM's culture has evolved significantly over its century of operations. In its early days, a dark (or gray) suit, white shirt, and a "sincere" tie constituted the public uniform for IBM employees. During IBM's management transformation in the 1990s, CEO Louis V. Gerstner, Jr. relaxed these codes, normalizing the dress and behavior of IBM employees. The company's culture has also given rise to different plays on the company acronym (IBM), with some saying it stands for "I've Been Moved" due to relocations and layoffs, others saying it stands for "I'm By Myself" pursuant to a prevalent work-from-anywhere norm, and others saying it stands for "I'm Being Mentored" due to the company's open door policy and encouragement for mentoring at all levels. In terms of labor relations, the company has traditionally resisted labor union organizing, although unions represent some IBM workers outside the United States. In Japan, IBM employees also have an American football team complete with pro stadium, cheerleaders and televised games, competing in the Japanese X-League as the "Big Blue".
In 2015, IBM started giving employees the option of choosing either a PC or a Mac as their primary work device, resulting in IBM becoming the world's largest Mac shop. In 2016, IBM eliminated forced rankings and changed its annual performance review system to focus more on frequent feedback, coaching, and skills development (Shana Lebowitz, Business Insider, "After overhauling its performance review system, IBM now uses an app to give and receive real-time feedback," May 20, 2016).
IBM alumni
Many IBMers have also achieved notability outside of work and after leaving IBM. In business, former IBM employees include Apple Inc. CEO Tim Cook, former EDS CEO and politician Ross Perot, Microsoft chairman John W. Thompson, SAP co-founder Hasso Plattner, Advanced Micro Devices (AMD) CEO Lisa Su, Citizens Financial Group CEO Ellen Alemany, former Yahoo! chairman Alfred Amoroso, former AT&T CEO C. Michael Armstrong, former Xerox Corporation CEOs David T. Kearns and G. Richard Thoman, former Fair Isaac Corporation CEO Mark N. Greene (CNN/Money), Citrix Systems co-founder Ed Iacobucci, ASOS.com chairman Brian McBride, and former Lenovo CEO Steve Ward.
In government, alumna Patricia Roberts Harris served as United States Secretary of Housing and Urban Development, the first African American woman to serve in the United States Cabinet. Samuel K. Skinner served as U.S. Secretary of Transportation and as the White House Chief of Staff. Alumni also include U.S. Senators Mack Mattingly and Thom Tillis; Wisconsin governor Scott Walker; former U.S. Ambassadors Vincent Obsitnik (Slovakia), Arthur K. Watson (France), and Thomas Watson Jr. (Soviet Union); and former U.S. Representatives Todd Akin (Official Manual of the State of Missouri, 1993–1994, p. 157), Glenn Andrews, Robert Garcia, Katherine Harris ("Katherine Harris' Biography", Project Vote Smart, retrieved April 30, 2006), Amo Houghton, Jim Ross Lightfoot, Thomas J. Manton, Donald W. Riegle Jr., and Ed Zschau.
Others are NASA astronaut Michael J. Massimino, Canadian astronaut Julie Payette, Harvey Mudd College president Maria Klawe, Western Governors University president emeritus Robert Mendenhall, former University of Kentucky president Lee T. Todd Jr., NFL referee Bill Carollo, former Rangers F.C. chairman John McClelland, and recipient of the Nobel Prize in Literature J. M. Coetzee. Thomas Watson Jr. also served as the 11th national president of the Boy Scouts of America.
Board and shareholders
[Image: IBM's largest shareholder is Warren Buffett's Berkshire Hathaway.]
The company's 14-member Board of Directors is responsible for overall corporate management and includes the CEOs of American Express, Ford Motor Company, Boeing, Dow Chemical, Johnson & Johnson, and Cemex.
In 2011, IBM became the first technology company in which Warren Buffett's holding company Berkshire Hathaway invested. As of 2016, Berkshire Hathaway owned 8.51 percent of IBM's shares.
See also
List of companies of the United States#I
List of mergers and acquisitions by IBM
List of international subsidiaries of IBM
List of largest Internet companies
Top 100 US Federal Contractors
List of electronics brands
List of largest manufacturing companies by revenue
Tech companies in the New York City metropolitan region
References
Further reading
For additional books about IBM, such as biographies, memoirs, technology and more, see: History of IBM.
Robert Sobel (2000) [1981]. Thomas Watson, Sr.: IBM and the Computer Revolution. ISBN 1-893122-82-4. A paperback reprint of IBM: Colossus in Transition.
External links
Category:1888 establishments in the United States
Category:American brands
Category:American companies established in 1888
Category:Cloud computing providers
Category:Collier Trophy recipients
Category:Companies based in Westchester County, New York
Category:Companies in the Dow Jones Industrial Average
Category:Companies listed on the New York Stock Exchange
Category:Computer companies of the United States
Category:Computer hardware companies
Category:Computer storage companies
Category:Display technology companies
Category:Electronics companies of the United States
Category:Foundry semiconductor companies
Category:Multinational companies headquartered in the United States
Category:National Medal of Technology recipients
Category:Outsourcing companies
Category:Point of sale companies
Category:Semiconductor companies
Category:Software companies based in New York
Category:Storage Area Network companies
Virgil
Publius Vergilius Maro (traditional dates October 15, 70 BC – September 21, 19 BC), usually called Virgil or Vergil in English, was an ancient Roman poet of the Augustan period. He is known for three acclaimed works of Latin literature, the Eclogues (or Bucolics), the Georgics, and the epic Aeneid. A number of minor poems, collected in the Appendix Vergiliana, are sometimes attributed to him.
Virgil is traditionally ranked as one of Rome's greatest poets. His Aeneid has been considered the national epic of ancient Rome from the time of its composition to the present day. Modeled after Homer's Iliad and Odyssey, the Aeneid follows the Trojan refugee Aeneas as he struggles to fulfill his destiny and arrive on the shores of Italy—in Roman mythology the founding act of Rome. Virgil's work has had wide and deep influence on Western literature, most notably Dante's Divine Comedy, in which Virgil appears as Dante's guide through hell and purgatory.
Life and works
Birth and biographical tradition
[Image: A bust of Virgil in Naples]
Virgil's biographical tradition is thought to depend on a lost biography by Varius, Virgil's editor, which was incorporated into the biography by Suetonius and the commentaries of Servius and Donatus, the two great commentators on Virgil's poetry. Although the commentaries no doubt record much factual information about Virgil, some of their evidence can be shown to rely on inferences made from his poetry and allegorizing; thus, Virgil's biographical tradition remains problematic (Don Fowler, "Virgil (Publius Vergilius Maro)", in The Oxford Classical Dictionary, 3rd ed., Oxford, 1996, p. 1602).
The tradition holds that Virgil was born in the village of Andes, near Mantua, in Cisalpine Gaul. (The epitaph on his tomb in Posilipo near Naples reads Mantua me genuit; Calabri rapuere; tenet nunc Parthenope. Cecini pascua, rura, duces: "Mantua gave birth to me, the Calabrians took me, now Naples holds me; I sang of pastures [the Eclogues], country [the Georgics] and leaders [the Aeneid]".) Analysis of his name has led to beliefs that he descended from earlier Roman colonists, but such speculation is ultimately not supported by narrative evidence either from his own writings or from his later biographers. Macrobius says that Virgil's father was of a humble background; however, scholars generally believe that Virgil was from an equestrian landowning family which could afford to give him an education. He attended schools in Cremona, Mediolanum, Rome and Naples. After considering briefly a career in rhetoric and law, the young Virgil turned his talents to poetry (http://www.usu.edu/markdamen/1320AncLit/chapters/11verg.htm).
Early works
According to the commentators, Virgil received his first education when he was five years old and he later went to Cremona, Milan, and finally Rome to study rhetoric, medicine, and astronomy, which he soon abandoned for philosophy. From Virgil's admiring references to the neoteric writers Pollio and Cinna, it has been inferred that he was, for a time, associated with Catullus' neoteric circle. According to Servius, schoolmates considered Virgil extremely shy and reserved, and he was nicknamed "Parthenias" or "maiden" because of his social aloofness. Virgil also seems to have suffered bad health throughout his life and in some ways lived the life of an invalid. According to the Catalepton, he began to write poetry while in the school of Siro the Epicurean at Naples. A group of small works attributed to the youthful Virgil by the commentators survive collected under the title Appendix Vergiliana, but are largely considered spurious by scholars. One, the Catalepton, consists of fourteen short poems (Fowler, p. 1602), some of which may be Virgil's, and another, a short narrative poem titled the Culex ("The Gnat"), was attributed to Virgil as early as the 1st century AD.
The Eclogues
[Image: Page from the beginning of the Eclogues in the 5th-century Vergilius Romanus]
The biographical tradition asserts that Virgil began the hexameter Eclogues (or Bucolics) in 42 BC and it is thought that the collection was published around 39–38 BC, although this is controversial. The Eclogues (from the Greek for "selections") are a group of ten poems roughly modeled on the bucolic hexameter poetry ("pastoral poetry") of the Hellenistic poet Theocritus. After his victory in the Battle of Philippi in 42 BC, fought against the army led by the assassins of Julius Caesar, Octavian tried to pay off his veterans with land expropriated from towns in northern Italy, supposedly including, according to the tradition, an estate near Mantua belonging to Virgil. The loss of his family farm and the attempt through poetic petitions to regain his property have traditionally been seen as Virgil's motives in the composition of the Eclogues. This is now thought to be an unsupported inference from interpretations of the Eclogues. In Eclogues 1 and 9, Virgil indeed dramatizes the contrasting feelings caused by the brutality of the land expropriations through pastoral idiom, but offers no indisputable evidence of the supposed biographic incident. While some readers have identified the poet himself with various characters and their vicissitudes, whether gratitude by an old rustic to a new god (Ecl. 1), frustrated love by a rustic singer for a distant boy (his master's pet, Ecl. 2), or a master singer's claim to have composed several eclogues (Ecl. 5), modern scholars largely reject such efforts to garner biographical details from works of fiction, preferring to interpret an author's characters and themes as illustrations of contemporary life and thought.
The ten Eclogues present traditional pastoral themes with a fresh perspective. Eclogues 1 and 9 address the land confiscations and their effects on the Italian countryside. 2 and 3 are pastoral and erotic, discussing both homosexual love (Ecl. 2) and attraction toward people of any gender (Ecl. 3). Eclogue 4, addressed to Asinius Pollio, the so-called "Messianic Eclogue" uses the imagery of the golden age in connection with the birth of a child (who the child was meant to be has been subject to debate). 5 and 8 describe the myth of Daphnis in a song contest, 6, the cosmic and mythological song of Silenus; 7, a heated poetic contest, and 10 the sufferings of the contemporary elegiac poet Cornelius Gallus. Virgil is credited in the Eclogues with establishing Arcadia as a poetic ideal that still resonates in Western literature and visual arts and setting the stage for the development of Latin pastoral by Calpurnius Siculus, Nemesianus, and later writers.
The Georgics
Sometime after the publication of the Eclogues (probably before 37 BC; Fowler, p. 1603), Virgil became part of the circle of Maecenas, Octavian's capable agent d'affaires who sought to counter sympathy for Antony among the leading families by rallying Roman literary figures to Octavian's side. Virgil came to know many of the other leading literary figures of the time, including Horace, in whose poetry he is often mentioned (Horace, Satires 1.5, 1.6, and Odes 1.3), and Varius Rufus, who later helped finish the Aeneid.
[Image: Late 17th-century illustration of a passage from the Georgics by Jerzy Siemiginowski-Eleuter]
At Maecenas' insistence (according to the tradition) Virgil spent the ensuing years (perhaps 37–29 BC) on the long didactic hexameter poem called the Georgics (from Greek, "On Working the Earth") which he dedicated to Maecenas. The ostensible theme of the Georgics is instruction in the methods of running a farm. In handling this theme, Virgil follows in the didactic ("how to") tradition of the Greek poet Hesiod's Works and Days and several works of the later Hellenistic poets. The four books of the Georgics focus respectively on raising crops and trees (1 and 2), livestock and horses (3), and beekeeping and the qualities of bees (4). Well-known passages include the beloved Laus Italiae of Book 2, the prologue description of the temple in Book 3, and the description of the plague at the end of Book 3. Book 4 concludes with a long mythological narrative, in the form of an epyllion which describes vividly the discovery of beekeeping by Aristaeus and the story of Orpheus' journey to the underworld. Ancient scholars, such as Servius, conjectured that the Aristaeus episode replaced, at the emperor's request, a long section in praise of Virgil's friend, the poet Gallus, who was disgraced by Augustus, and who committed suicide in 26 BC.
The Georgics' tone wavers between optimism and pessimism, sparking critical debate on the poet's intentions (Fowler, p. 1605), but the work lays the foundations for later didactic poetry. Virgil and Maecenas are said to have taken turns reading the Georgics to Octavian upon his return from defeating Antony and Cleopatra at the Battle of Actium in 31 BC.
The Aeneid
[Image: A 1st-century terracotta expressing the pietas of Aeneas, who carries his aged father and leads his young son]
The Aeneid is widely considered Virgil's finest work and one of the most important poems in the history of western literature. Virgil worked on the Aeneid during the last eleven years of his life (29–19 BC), commissioned, according to Propertius, by Augustus. The epic poem consists of 12 books in dactylic hexameter verse which describe the journey of Aeneas, a warrior fleeing the sack of Troy, to Italy, his battle with the Italian prince Turnus, and the foundation of a city from which Rome would emerge. The Aeneid's first six books describe the journey of Aeneas from Troy to Italy. Virgil made use of several models in the composition of his epic; Homer, the preeminent author of classical epic, is everywhere present, but Virgil also makes special use of the Latin poet Ennius and the Hellenistic poet Apollonius of Rhodes among the various other writers to whom he alludes. Although the Aeneid casts itself firmly into the epic mode, it often seeks to expand the genre by including elements of other genres such as tragedy and aetiological poetry. Ancient commentators noted that Virgil seems to divide the Aeneid into two sections based on the poetry of Homer; the first six books were viewed as employing the Odyssey as a model while the last six were connected to the Iliad (Jenkyns, p. 53).
Book 1 (at the head of the Odyssean section; for a succinct summary, see Globalnet.co.uk) opens with a storm which Juno, Aeneas' enemy throughout the poem, stirs up against the fleet. The storm drives the hero to the coast of Carthage, which historically was Rome's deadliest foe. The queen, Dido, welcomes the ancestor of the Romans, and under the influence of the gods falls deeply in love with him. At a banquet in Book 2, Aeneas tells the story of the sack of Troy, the death of his wife, and his escape, to the enthralled Carthaginians, while in Book 3 he recounts to them his wanderings over the Mediterranean in search of a suitable new home. Jupiter in Book 4 recalls the lingering Aeneas to his duty to found a new city, and he slips away from Carthage, leaving Dido to commit suicide, cursing Aeneas and calling down revenge in a symbolic anticipation of the fierce wars between Carthage and Rome. In Book 5, funeral games are celebrated for Aeneas' father Anchises, who had died before the Trojans reached Carthage. On reaching Cumae, in Italy in Book 6, Aeneas consults the Cumaean Sibyl, who conducts him through the Underworld where Aeneas meets the dead Anchises, who reveals Rome's destiny to his son.
Book 7 (beginning the Iliadic half) opens with an address to the muse and recounts Aeneas' arrival in Italy and betrothal to Lavinia, daughter of King Latinus. Lavinia had already been promised to Turnus, the king of the Rutulians, who is roused to war by the Fury Allecto and by Amata, Lavinia's mother. In Book 8, Aeneas allies with King Evander, who occupies the future site of Rome, and is given new armor and a shield depicting Roman history. Book 9 records an assault by Nisus and Euryalus on the Rutulians; Book 10, the death of Evander's young son Pallas; and Book 11, the death of the Volscian warrior princess Camilla and the decision to settle the war with a duel between Aeneas and Turnus. The Aeneid ends in Book 12 with the taking of Latinus' city, the death of Amata, and Aeneas' defeat and killing of Turnus, whose pleas for mercy are spurned. The poem closes with the image of Turnus' soul lamenting as it flees to the underworld.
Reception of the Aeneid
[Image: Virgil Reading the Aeneid to Augustus, Octavia, and Livia by Jean-Baptiste Wicar, Art Institute of Chicago]
Critics of the Aeneid focus on a variety of issues (for a bibliography and summary, see Fowler, pp. 1605–6). The tone of the poem as a whole is a particular matter of debate; some see the poem as ultimately pessimistic and politically subversive to the Augustan regime, while others view it as a celebration of the new imperial dynasty. Virgil makes use of the symbolism of the Augustan regime, and some scholars see strong associations between Augustus and Aeneas, the one as founder and the other as re-founder of Rome. A strong teleology, or drive towards a climax, has been detected in the poem. The Aeneid is full of prophecies about the future of Rome, the deeds of Augustus, his ancestors, and famous Romans, and the Carthaginian Wars; the shield of Aeneas even depicts Augustus' victory at Actium against Mark Antony and Cleopatra VII in 31 BC. A further focus of study is the character of Aeneas. As the protagonist of the poem, Aeneas seems to waver constantly between his emotions and commitment to his prophetic duty to found Rome; critics note the breakdown of Aeneas' emotional control in the last sections of the poem where the "pious" and "righteous" Aeneas mercilessly slaughters Turnus.
The Aeneid appears to have been a great success. Virgil is said to have recited Books 2, 4, and 6 to Augustus; and Book 6 apparently caused Augustus' sister Octavia to faint. Although the truth of this claim is subject to scholarly scepticism, it has served as a basis for later art, such as Jean-Baptiste Wicar's Virgil Reading the Aeneid.
Some lines of the poem were left unfinished, and the whole was unedited, at Virgil's death in 19 BC.
Virgil's death and editing of the Aeneid
According to the tradition, Virgil traveled to Greece in about 19 BC to revise the Aeneid. After meeting Augustus in Athens and deciding to return home, Virgil caught a fever while visiting a town near Megara. After crossing to Italy by ship, weakened with disease, Virgil died in Brundisium harbor on September 21, 19 BC. Augustus ordered Virgil's literary executors, Lucius Varius Rufus and Plotius Tucca, to disregard Virgil's own wish that the poem be burned, instead ordering it published with as few editorial changes as possible. As a result, the text of the Aeneid that exists may contain faults which Virgil was planning to correct before publication. However, the only obvious imperfections are a few lines of verse that are metrically unfinished (i.e. not a complete line of dactylic hexameter). Some scholars have argued that Virgil deliberately left these metrically incomplete lines for dramatic effect. Other alleged imperfections are subject to scholarly debate.
Later views and reception
In antiquity
[Image: A 3rd-century Tunisian mosaic of Virgil seated between Clio and Melpomene (from Hadrumetum [Sousse])]
The works of Virgil almost from the moment of their publication revolutionized Latin poetry. The Eclogues, Georgics, and above all the Aeneid became standard texts in school curricula with which all educated Romans were familiar. Poets following Virgil often refer intertextually to his works to generate meaning in their own poetry. The Augustan poet Ovid parodies the opening lines of the Aeneid in Amores 1.1.1–2, and his summary of the Aeneas story in Book 14 of the Metamorphoses, the so-called "mini-Aeneid", has been viewed as a particularly important example of post-Virgilian response to the epic genre. Lucan's epic, the Bellum Civile, has been considered an anti-Virgilian epic, disposing with the divine mechanism, treating historical events, and diverging drastically from Virgilian epic practice. The Flavian poet Statius in his 12-book epic Thebaid engages closely with the poetry of Virgil; in his epilogue he advises his poem not to "rival the divine Aeneid, but follow afar and ever venerate its footsteps" (Thebaid 12.816–7). In Silius Italicus, Virgil finds one of his most ardent admirers. With almost every line of his epic Punica, Silius references Virgil. Indeed, Silius is known to have bought Virgil's tomb and worshipped the poet (Pliny, Epistulae 3.7.8). Partially as a result of his so-called "Messianic" Fourth Eclogue, widely interpreted later to have predicted the birth of Jesus Christ, Virgil was in later antiquity imputed to have the magical abilities of a seer; the Sortes Vergilianae, the process of using Virgil's poetry as a tool of divination, is found in the time of Hadrian, and continued into the Middle Ages. In a similar vein Macrobius in the Saturnalia credits the work of Virgil as the embodiment of human knowledge and experience, mirroring the Greek conception of Homer (Fowler, p. 1603). Virgil also found commentators in antiquity. Servius, a commentator of the 4th century AD, based his work on the commentary of Donatus. Servius' commentary provides us with a great deal of information about Virgil's life, sources, and references; however, many modern scholars find the variable quality of his work and the often simplistic interpretations frustrating.
Late antiquity and Middle Ages
[Image: A 5th-century portrait of Virgil from the Vergilius Romanus]
Even as the Western Roman empire collapsed, literate men acknowledged that Virgil was a master poet. Gregory of Tours read Virgil, whom he quotes in several places, along with some other Latin poets, though he cautions that "we ought not to relate their lying fables, lest we fall under sentence of eternal death."
Dante made Virgil his guide in Hell and the greater part of Purgatory in The Divine Comedy. Dante also mentions Virgil in De vulgari eloquentia, along with Ovid, Lucan and Statius, as one of the four regulati poetae (ii, vi, 7).
The best-known surviving manuscripts of Virgil's works include the Vergilius Augusteus, the Vergilius Vaticanus and the Vergilius Romanus.
Legends
[Image: Virgil in his Basket, Lucas van Leyden, 1525]
In the Middle Ages, Virgil's reputation was such that it inspired legends associating him with magic and prophecy. From at least the 3rd century, Christian thinkers interpreted Eclogue 4, which describes the birth of a boy ushering in a golden age, as a prediction of Jesus' birth. In consequence, Virgil came to be seen on a similar level to the Hebrew prophets of the Bible as one who had heralded Christianity.
Possibly as early as the second century AD, Virgil's works were seen as having magical properties and were used for divination. In what became known as the Sortes Vergilianae (Virgilian Lots), passages would be selected at random and interpreted to answer questions (Ziolkowski & Putnam, pp. xxxiv, 829–830). In the 12th century, starting around Naples but eventually spreading widely throughout Europe, a tradition developed in which Virgil was regarded as a great magician. Legends about Virgil and his magical powers remained popular for over two hundred years, arguably becoming as prominent as his writings themselves (Ziolkowski & Putnam, p. xxxiv). Virgil's legacy in medieval Wales was such that the Welsh version of his name, Fferyllt or Pheryllt, became a generic term for magic-worker, and survives in the modern Welsh word for pharmacist, fferyllydd (Ziolkowski & Putnam, pp. 101–102).
The legend of "Virgil in his basket" arose in the Middle Ages, and is often seen in art and mentioned in literature as part of the Power of Women literary topos, demonstrating the disruptive force of female attractiveness on men. In this story Virgil became enamoured of a beautiful woman, sometimes described as the emperor's daughter or mistress and called Lucretia. She played him along and agreed to an assignation at her house, which he was to sneak into at night by climbing into a large basket let down from a window. When he did so he was only hoisted halfway up the wall and then left him trapped there into the next day, exposed to public ridicule. The story paralleled that of Phyllis riding Aristotle. Among other artists depicting the scene, Lucas van Leyden made a woodcut and later an engraving.Snyder, James. Northern Renaissance Art, 1985, Harry N. Abrams, ISBN 0136235964, pp. 461–462
Virgil's tomb
[Image: The verse inscription at Virgil's tomb was supposedly composed by the poet himself: Mantua me genuit, Calabri rapuere, tenet nunc Parthenope. Cecini pascua, rura, duces. ("Mantua gave me life, the Calabrians took it away, Naples holds me now; I sang of pastures, farms, and commanders." Trans. Bernard Knox)]
The structure known as "Virgil's tomb" is found at the entrance of an ancient Roman tunnel (also known as "grotta vecchia") in Piedigrotta, a district two miles from the centre of Naples, near the Mergellina harbor, on the road heading north along the coast to Pozzuoli. While Virgil was already the object of literary admiration and veneration before his death, in the Middle Ages his name became associated with miraculous powers, and for a couple of centuries his tomb was the destination of pilgrimages and veneration.
Spelling
By the fourth or fifth century A.D. the original spelling Vergilius had been corrupted to Virgilius, and then the latter spelling spread to the modern European languages. The error probably originated with scribes reproducing manuscripts by dictation. The error persisted even though, as early as the 15th century, the classical scholar Poliziano had shown Vergilius to be the original spelling. Today, the anglicisations Vergil and Virgil are both acceptable.
References
Further reading
Buckham, Philip Wentworth; Spence, Joseph; Holdsworth, Edward; Warburton, William; Jortin, John. Miscellanea Virgiliana: In Scriptis Maxime Eruditorum Virorum Varie Dispersa, in Unum Fasciculum Collecta. Cambridge: Printed for W. P. Grant, 1825.
Sondrup, Steven P. (2009). "Virgil: From Farms to Empire: Kierkegaard's Understanding of a Roman Poet" in Kierkegaard and the Roman World edited by Jon Bartley Stewart. Farnham: Ashgate.
External links
Collected works
Works of Virgil at the Perseus Digital Library
Latin texts, translations and commentaries
Aeneid translated by T. C. Williams, 1910
Aeneid translated by John Dryden, 1697
Aeneid, Eclogues and Georgics translated by J. C. Greenough, 1900
Works of Virgil at Theoi Project
Aeneid, Eclogues and Georgics translated by H. R. Fairclough, 1916
Works of Virgil at Sacred Texts
Aeneid translated by John Dryden, 1697
Eclogues and Georgics translated by J.W. MacKail, 1934
P. Vergilius Maro at The Latin Library
Virgil's works: text, concordances and frequency list.
Virgil: The Major Texts: contemporary, line by line English translations of Eclogues, Georgics, and Aeneid.
Virgil in the collection of Ferdinand, Duke of Calabria at Somni:
Publii Vergilii Maronis Opera Naples and Milan, 1450.
Publii Vergilii Maronis Opera Italy, between 1470 and 1499.
Publii Vergilii Maronis Opera Milan, 1465.
Biography Suetonius: The Life of Virgil, an English translation.
Vita Vergiliana, Aelius Donatus' Life of Virgil in the original Latin.
Virgil.org: Aelius Donatus' Life of Virgil translated into English by David Wilson-Okamura
Project Gutenberg edition of Vergil—A Biography, by Tenney Frank.
Vergilian Chronology (in German).
Commentary "A new Aeneid for the 21st century". A review of Robert Fagles's new translation of the Aeneid in the TLS, February 9, 2007.
Virgilmurder (Jean-Yves Maleuvre's website setting forth his theory that Virgil was murdered by Augustus)
The Secret History of Virgil, containing a selection on the magical legends and tall tales that circulated about Virgil in the Middle Ages.
Interview with Virgil scholar Richard Thomas and poet David Ferry, who recently translated the "Georgics", on ThoughtCast
SORGLL: Aeneid, Bk I, 1–49; read by Robert Sonkowsky
SORGLL: Aeneid, Bk IV, 296–396; read by Stephen Daitz
Bibliographies
Comprehensive bibliographies on all three of Virgil's major works, downloadable in Word or pdf format
Bibliography of works relating Vergil to the literature of the Hellenistic age
A selective Bibliographical Guide to Vergil's Aeneid
Virgil in Late Antiquity, the Middle Ages, and the Renaissance: an Online Bibliography
Category:People from Mantua
Category:Golden Age Latin writers
Category:Latin-language writers
Category:Ancient Roman writers
Category:Roman-era poets
Category:1st-century BC writers
Category:1st-century BC Romans
Category:1st-century BC Roman poets
Category:Bucolic poets
Category:Epic poets
Category:Didactic poets
Category:70 BC births
Category:19 BC deaths
Montana
Montana is a state in the Western region of the United States. The state's name is derived from the Spanish word montaña (mountain). Montana has several nicknames, although none official, including "Big Sky Country" and "The Treasure State", and slogans that include "Land of the Shining Mountains" and more recently "The Last Best Place". Montana has a border with three Canadian provinces: British Columbia, Alberta, and Saskatchewan, the only state to do so. It also borders North Dakota and South Dakota to the east, Wyoming to the south, and Idaho to the west and southwest. Montana is ranked 4th in size, but 44th in population and 48th in population density of the 50 United States. The western third of Montana contains numerous mountain ranges. Smaller island ranges are found throughout the state. In total, 77 named ranges are part of the Rocky Mountains. The eastern half of Montana is characterized by western prairie terrain and badlands.
The economy is primarily based on agriculture, including ranching and cereal grain farming. Other significant economic activities include oil, gas, coal and hard rock mining, lumber, and the fastest-growing sector, tourism. The health care, service, and government sectors also are significant to the state's economy. Millions of tourists annually visit Glacier National Park, the Little Bighorn Battlefield National Monument, and Yellowstone National Park.
Etymology and naming history
The name Montana comes from the Spanish word Montaña and the Latin word Montana, meaning "mountain", or more broadly, "mountainous country". Montaña del Norte was the name given by early Spanish explorers to the entire mountainous region of the west. The name Montana was added to a bill by the United States House Committee on Territories, which was chaired at the time by Rep. James Ashley of Ohio, for the territory that would become Idaho Territory. The name was changed by Representatives Henry Wilson (Massachusetts) and Benjamin F. Harding (Oregon), who complained Montana had "no meaning". When Ashley presented a bill to establish a temporary government in 1864 for a new territory to be carved out of Idaho, he again chose Montana Territory. This time Rep. Samuel Cox, also of Ohio, objected to the name. Cox complained that the name was a misnomer given most of the territory was not mountainous and that a Native American name would be more appropriate than a Spanish one. Other names such as Shoshone were suggested, but it was decided that the Committee on Territories could name it whatever they wanted, so the original name of Montana was adopted.
Geography
[Image: Map of Montana]
With an area of approximately 147,000 square miles (381,000 km2), Montana is slightly larger than Japan. It is the fourth largest state in the United States after Alaska, Texas, and California; the largest landlocked U.S. state; and the world's 56th largest national state/province subdivision. To the north, Montana shares a border with three Canadian provinces: British Columbia, Alberta, and Saskatchewan, the only state to do so. It borders North Dakota and South Dakota to the east, Wyoming to the south and Idaho to the west and southwest.
Topography
The state's topography is roughly defined by the Continental Divide, which splits much of the state into distinct eastern and western regions. Most of Montana's 100 or more named mountain ranges are in the state's western half, most of which is geologically and geographically part of the Northern Rocky Mountains. The Absaroka and Beartooth ranges in the state's south-central part are technically part of the Central Rocky Mountains. The Rocky Mountain Front is a significant feature in the state's north-central portion, and isolated island ranges that interrupt the prairie landscape common in the central and eastern parts of the state. About 60 percent of the state is prairie, part of the northern Great Plains.
The Bitterroot Mountains—one of the longest continuous ranges in the Rocky Mountain chain from Alaska to Mexico—along with smaller ranges, including the Coeur d'Alene Mountains and the Cabinet Mountains, divide the state from Idaho. The southern third of the Bitterroot range blends into the Continental Divide. Other major mountain ranges west of the Divide include the Cabinet Mountains, the Anaconda Range, the Missions, the Garnet Range, Sapphire Mountains, and Flint Creek Range.
[Image: Montana terrain]
The Divide's northern section, where the mountains rapidly give way to prairie, is part of the Rocky Mountain Front. The front is most pronounced in the Lewis Range, located primarily in Glacier National Park. Due to the configuration of mountain ranges in Glacier National Park, the Northern Divide (which begins in Alaska's Seward Peninsula) crosses this region and turns east in Montana at Triple Divide Peak. It causes the Waterton, Belly, and Saint Mary rivers to flow north into Alberta, Canada. There they join the Saskatchewan River, which ultimately empties into Hudson Bay.
East of the divide, several roughly parallel ranges cover the state's southern part, including the Gravelly Range, the Madison Range, Gallatin Range, Absaroka Mountains and the Beartooth Mountains. The Beartooth Plateau is the largest continuous land mass over 10,000 feet (3,000 m) high in the continental United States. It contains the state's highest point, Granite Peak, 12,799 feet (3,901 m) high. North of these ranges are the Big Belt Mountains, Bridger Mountains, Tobacco Roots, and several island ranges, including the Crazy Mountains and Little Belt Mountains.
[Image: St. Mary Lake in Glacier National Park]
Between many mountain ranges are rich river valleys. The Big Hole Valley, Bitterroot Valley, Gallatin Valley, Flathead Valley, and Paradise Valley have extensive agricultural resources and multiple opportunities for tourism and recreation.
East and north of this transition zone are the expansive and sparsely populated Northern Plains, with tableland prairies, smaller island mountain ranges, and badlands. The isolated island ranges east of the Divide include the Bear Paw Mountains, Bull Mountains, Castle Mountains, Crazy Mountains, Highwood Mountains, Judith Mountains, Little Belt Mountains, Little Rocky Mountains, the Pryor Mountains, Snowy Mountains, Sweet Grass Hills, and—in the state's southeastern corner near Ekalaka—the Long Pines. Many of these isolated eastern ranges were created about 120 to 66 million years ago when magma welling up from the interior cracked and bowed the earth's surface here.
The area east of the divide in the state's north-central portion is known for the Missouri Breaks and other significant rock formations. Several buttes south of Great Falls are major landmarks: Cascade, Crown, Square, and Shaw Buttes. Known as laccoliths, they formed when igneous rock protruded through cracks in the sedimentary rock. The underlying surface consists of sandstone and shale. Surface soils in the area are highly diverse, and greatly affected by the local geology, whether glaciated plain, intermountain basin, mountain foothills, or tableland. Foothill regions are often covered in weathered stone or broken slate, or consist of uncovered bare rock (usually igneous, quartzite, sandstone, or shale). The soil of intermountain basins usually consists of clay, gravel, sand, silt, and volcanic ash, much of it laid down by lakes which covered the region during the Oligocene, 33 to 23 million years ago. Tablelands are often topped with argillite gravel and weathered quartzite, occasionally underlain by shale. The glaciated plains are generally covered in clay, gravel, sand, and silt left by the proglacial Lake Great Falls or by moraines or gravel-covered former lake basins left by the Wisconsin glaciation, 85,000 to 11,000 years ago. Farther east, areas such as Makoshika State Park near Glendive and Medicine Rocks State Park near Ekalaka contain some of the most scenic badlands regions in the state.
thumb|250px|The Belly River in Waterton Lakes National Park
The Hell Creek Formation in Northeast Montana is a major source of dinosaur fossils. Paleontologist Jack Horner of the Museum of the Rockies in Bozeman brought this formation to the world's attention with several major finds.
Rivers, lakes and reservoirs
Montana has thousands of named rivers and creeks, many of which are known for "blue-ribbon" trout fishing. Montana's water resources provide for recreation, hydropower, crop and forage irrigation, mining, and water for human consumption. Montana is one of few geographic areas in the world whose rivers form parts of three major watersheds (i.e., where two continental divides intersect). Its rivers feed the Pacific Ocean, the Gulf of Mexico, and Hudson Bay. The watersheds divide at Triple Divide Peak in Glacier National Park.
Pacific Ocean drainage basin
thumb|Missouri Breaks region in central Montana
West of the divide, the Clark Fork of the Columbia (not to be confused with the Clarks Fork of the Yellowstone River) rises near Butte and flows northwest to Missoula, where it is joined by the Blackfoot River and Bitterroot River. Farther downstream, it is joined by the Flathead River before entering Idaho near Lake Pend Oreille. The Pend Oreille River forms the outflow of Lake Pend Oreille and joins the Columbia River, which flows to the Pacific Ocean, making the Clark Fork/Pend Oreille (considered a single river system) the longest river in the Rocky Mountains. The Clark Fork discharges the greatest volume of water of any river exiting the state. The Kootenai River in northwest Montana is another major tributary of the Columbia.
Gulf of Mexico drainage basin
East of the divide the Missouri River, which is formed by the confluence of the Jefferson, Madison and Gallatin rivers near Three Forks, flows due north through the west-central part of the state to Great Falls. From this point, it then flows generally east through fairly flat agricultural land and the Missouri Breaks to Fort Peck reservoir. The stretch of river between Fort Benton and the Fred Robinson Bridge at the western boundary of Fort Peck Reservoir was designated a National Wild and Scenic River in 1976. The Missouri enters North Dakota near Fort Union, having drained more than half the land area of Montana (). Nearly one-third of the Missouri River in Montana lies behind 10 dams: Toston, Canyon Ferry, Hauser, Holter, Black Eagle, Rainbow, Cochrane, Ryan, Morony, and Fort Peck.
The Yellowstone River rises on the continental divide near Younts Peak in Wyoming's Teton Wilderness. It flows north through Yellowstone National Park, enters Montana near Gardiner, and passes through the Paradise Valley to Livingston. It then flows northeasterly across the state through Billings, Miles City, Glendive, and Sidney. The Yellowstone joins the Missouri in North Dakota just east of Fort Union. It is the longest undammed, free-flowing river in the contiguous United States, and drains about a quarter of Montana ().
Other major Montana tributaries of the Missouri include the Smith, Milk, Marias, Judith, and Musselshell Rivers. Montana also claims the disputed title of possessing the world's shortest river, the Roe River, just outside Great Falls. Through the Missouri, these rivers ultimately join the Mississippi River and flow into the Gulf of Mexico.
Major tributaries of the Yellowstone include the Boulder, Stillwater, Clarks Fork, Bighorn, Tongue, and Powder Rivers.
Hudson Bay drainage basin
The Northern Divide turns east in Montana at Triple Divide Peak, causing the Waterton River, Belly, and Saint Mary rivers to flow north into Alberta. There they join the Saskatchewan River, which ultimately empties into Hudson Bay.
Lakes and reservoirs
There are at least 3,223 named lakes and reservoirs in Montana, including Flathead Lake, the largest natural freshwater lake in the western United States. Other major lakes include Whitefish Lake in the Flathead Valley and Lake McDonald and St. Mary Lake in Glacier National Park. The largest reservoir in the state is Fort Peck Reservoir on the Missouri River, which is contained by the second-largest earthen dam and largest hydraulically filled dam in the world. Other major reservoirs include Hungry Horse on the Flathead River; Lake Koocanusa on the Kootenai River; Lake Elwell on the Marias River; Clark Canyon on the Beaverhead River; Yellowtail on the Bighorn River; and Canyon Ferry, Hauser, Holter, Rainbow, and Black Eagle on the Missouri River.
Flora and fauna
thumb|Pompey's Pillar National Monument
Vegetation of the state includes lodgepole pine, ponderosa pine, Douglas fir, larch, spruce, aspen, birch, red cedar, hemlock, ash, alder, Rocky Mountain maple, and cottonwood trees. Forests cover approximately 25 percent of the state. Flowers native to Montana include asters, bitterroots, daisies, lupines, poppies, primroses, columbine, lilies, orchids, and dryads. Several species of sagebrush and cactus and many species of grasses are common. Many species of mushrooms and lichens are also found in the state.
Montana is home to a diverse array of fauna that includes 14 amphibian, 90 fish, 117 mammal, 20 reptile, and 427 bird species. Additionally, there are more than 10,000 invertebrate species, including 180 mollusks and 30 crustaceans. Montana has the largest grizzly bear population in the lower 48 states. Montana hosts five federally endangered species (black-footed ferret, whooping crane, least tern, pallid sturgeon, and white sturgeon) and seven threatened species, including the grizzly bear, Canada lynx, and bull trout. The Montana Department of Fish, Wildlife and Parks manages fishing and hunting seasons for at least 17 species of game fish, including seven species of trout, walleye, and smallmouth bass, and at least 29 species of game birds and animals, including ring-necked pheasant, gray partridge, elk, pronghorn antelope, mule deer, white-tailed deer, gray wolf, and bighorn sheep.
Protected lands
thumb|Bison herd grazing at the National Bison Range
Montana contains Glacier National Park, "The Crown of the Continent", and portions of Yellowstone National Park, including three of the park's five entrances. Other federally recognized sites include the Little Bighorn Battlefield National Monument, Bighorn Canyon National Recreation Area, Big Hole National Battlefield, and the National Bison Range. Approximately 35 percent of Montana's land is administered by federal or state agencies. The U.S. Department of Agriculture Forest Service administers forest land in ten National Forests. There are 12 separate wilderness areas that are part of the National Wilderness Preservation System established by the Wilderness Act of 1964. The U.S. Department of the Interior Bureau of Land Management controls additional federal land. The U.S. Department of the Interior Fish and Wildlife Service administers 1.1 million acres of National Wildlife Refuges and waterfowl production areas in Montana. The U.S. Department of the Interior Bureau of Reclamation administers land and water surfaces in the state. The Montana Department of Fish, Wildlife and Parks operates state parks and access points on the state's rivers and lakes. The Montana Department of Natural Resources and Conservation manages School Trust Land ceded by the federal government under the Land Ordinance of 1785 to the state in 1889 when Montana was granted statehood. These lands are managed by the state for the benefit of public schools and institutions in the state.
thumb|right|Quake Lake was created by a landslide during the 1959 Hebgen Lake earthquake
Areas managed by the National Park Service include:
Big Hole National Battlefield near Wisdom
Bighorn Canyon National Recreation Area near Fort Smith
Glacier National Park
Grant-Kohrs Ranch National Historic Site at Deer Lodge
Lewis and Clark National Historic Trail
Little Bighorn Battlefield National Monument near Crow Agency
Nez Perce National Historical Park
Yellowstone National Park
Climate
thumb|Left|Temperature and precipitation for Montana's capital city, Helena
thumb|Köppen climate types of Montana
Montana is a large state with considerable variation in geography, and the climate is, therefore, equally varied. The state spans from below the 45th parallel (the line equidistant between the equator and North Pole) to the 49th parallel, and elevations range from under to nearly above sea level. The western half is mountainous, interrupted by numerous large valleys. Eastern Montana comprises plains and badlands, broken by hills and isolated mountain ranges, and has a semi-arid, continental climate (Köppen climate classification BSk). The Continental Divide has a considerable effect on the climate, as it restricts the flow of warmer air from the Pacific from moving east, and drier continental air from moving west. The area west of the divide has a modified northern Pacific coast climate, with milder winters, cooler summers, less wind and a longer growing season. Low clouds and fog often form in the valleys west of the divide in winter, but this is rarely seen in the east.
Average daytime temperatures vary widely from January to July. The variation in geography leads to great variation in temperature. The highest observed summer temperature was 117 °F (47 °C), recorded at Glendive on July 20, 1893, and at Medicine Lake on July 5, 1937. Throughout the state, summer nights are generally cool and pleasant. Extremely hot weather is less common at higher elevations. Snowfall has been recorded in all months of the year in the more mountainous areas of central and western Montana, though it is rare in July and August.
right|thumb|The Big Drift covering the Going-to-the-Sun Road in Glacier National Park as photographed on March 23, 2006
The coldest temperature on record for Montana is also the coldest temperature for the entire contiguous U.S.: on January 20, 1954, −70 °F (−57 °C) was recorded at a gold mining camp near Rogers Pass. Temperatures vary greatly on cold nights; Helena, to the southeast, recorded a much milder low on the same date. Winter cold spells are usually the result of cold continental air coming south from Canada. The front is often well defined, causing a large temperature drop in a 24-hour period. Conversely, air flow from the southwest results in "chinooks". These steady winds can suddenly warm parts of Montana, especially areas just to the east of the mountains, where elevated temperatures sometimes persist for periods of ten days or longer.
Loma is the site of the most extreme recorded temperature change in a 24-hour period in the United States. On January 15, 1972, a chinook wind blew in and the temperature rose from −54 °F (−48 °C) to 49 °F (9 °C), a rise of 103 °F (57 °C).
thumb|left|The Grinnell Glacier receives of precipitation per year
thumb|left|Clark Fork River, Missoula, in autumn
Average annual precipitation is , but great variations are seen. The mountain ranges block the moist Pacific air, holding moisture in the western valleys, and creating rain shadows to the east. Heron, in the west, receives the most precipitation, . On the eastern (leeward) side of a mountain range, the valleys are much drier; Lonepine averages , and Deer Lodge of precipitation. The mountains can receive over , for example the Grinnell Glacier in Glacier National Park gets . An area southwest of Belfry averaged only over a sixteen-year period. Most of the larger cities get of snow each year. Mountain ranges can accumulate of snow during a winter. Heavy snowstorms may occur from September through May, though most snow falls from November to March.
The climate has become warmer in Montana and continues to do so. The glaciers in Glacier National Park have receded and are predicted to melt away completely in a few decades. Many Montana cities set heat records during July 2007, the hottest month ever recorded in Montana. Winters are warmer, too, and have fewer cold spells. Previously these cold spells had killed off bark beetles, but these are now attacking the forests of western Montana. The warmer winters in the region have allowed various species to expand their ranges and proliferate. The combination of warmer weather, attack by beetles, and mismanagement during past years has led to a substantial increase in the severity of forest fires in Montana. According to a study done for the U.S. Environmental Protection Agency by the Harvard School of Engineering and Applied Science, portions of Montana will experience a 200-percent increase in area burned by wildfires, and an 80-percent increase in related air pollution.
The table below lists average temperatures for the warmest and coldest month for Montana's seven largest cities. The coldest month varies between December and January depending on location, although figures are similar throughout.
Average daily maximum and minimum temperatures for selected cities in Montana
Location | July (°F) | Coldest month (°F) | July (°C) | Coldest month (°C)
Billings | 89/54 | 32/15 | 32/14 | 4/–9
Missoula | 86/51 | 31/16 | 30/11 | −0/–8
Great Falls | 83/51 | 34/15 | 28/11 | 1/–9
Bozeman | 81/51 | 31/12 | 27/10 | −0/–11
Butte | 80/45 | 30/5 | 27/7 | −1/–15
Helena | 86/54 | 31/12 | 30/12 | −0/–11
Kalispell | 81/48 | 29/14 | 27/9 | −1/–10
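The Celsius columns in the table follow from the standard conversion C = (F − 32) × 5/9, rounded to the nearest degree. A minimal Python sketch of the conversion (the three cities below are taken from the table above; any one-degree discrepancies are rounding artifacts):

def f_to_c(temp_f):
    # Convert a temperature from degrees Fahrenheit to degrees Celsius.
    return (temp_f - 32) * 5 / 9

# July high/low pairs in Fahrenheit, as listed in the table above.
july_f = {"Missoula": (86, 51), "Great Falls": (83, 51), "Helena": (86, 54)}

for city, (high_f, low_f) in july_f.items():
    print(f"{city}: {round(f_to_c(high_f))}/{round(f_to_c(low_f))} C")
# Output: Missoula 30/11 C, Great Falls 28/11 C, Helena 30/12 C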
Antipodes
Montana is one of only two continental US states (along with Colorado) that are antipodal to land. The Kerguelen Islands are antipodal to the Montana–Saskatchewan–Alberta border region. No towns are precisely antipodal to Kerguelen, though Chester and Rudyard are close.
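The antipode of a point at latitude φ and longitude λ lies at latitude −φ and longitude λ ± 180°. A minimal Python sketch, using the approximate coordinates of the Montana–Saskatchewan–Alberta border corner (about 49° N, 110° W) as an assumed example point; the result, roughly 49° S, 70° E, falls in the vicinity of the Kerguelen Islands:

def antipode(lat, lon):
    # Return the antipodal point of a latitude/longitude pair, in degrees.
    anti_lat = -lat
    anti_lon = lon + 180 if lon < 0 else lon - 180
    return anti_lat, anti_lon

# Approximate Montana-Saskatchewan-Alberta border corner: 49 N, 110 W.
print(antipode(49.0, -110.0))  # -> (-49.0, 70.0), near the Kerguelen Islands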
History
thumb|upright|Assiniboine family, Montana, 1890–91
Various indigenous peoples lived in the territory of the present-day state of Montana for thousands of years. Historic tribes encountered by Europeans and settlers from the United States included the Crow in the south-central area; the Cheyenne in the southeast; the Blackfeet, Assiniboine and Gros Ventres in the central and north-central area; and the Kootenai and Salish in the west. The smaller Pend d'Oreille and Kalispel tribes lived near Flathead Lake and the western mountains, respectively.
The land in Montana east of the continental divide was part of the Louisiana Purchase in 1803. Subsequent to and particularly in the decades following the Lewis and Clark Expedition, American, British and French traders operated a fur trade, typically working with indigenous peoples, in both eastern and western portions of what would become Montana. These dealings were not always peaceful, and though the fur trade brought some material gain for indigenous tribal groups it also brought exposure to European diseases and altered their economic and cultural traditions. Until the Oregon Treaty (1846), land west of the continental divide was disputed between the British and U.S. and was known as the Oregon Country. The first permanent settlement by Euro-Americans in what today is Montana was St. Mary's (1841) near present-day Stevensville. In 1847, Fort Benton was established as the uppermost fur-trading post on the Missouri River. In the 1850s, settlers began moving into the Beaverhead and Big Hole valleys from the Oregon Trail and into the Clark's Fork valley.
The first gold discovered in Montana was at Gold Creek near present-day Garrison in 1852. A series of major mining discoveries in the western third of the state starting in 1862 found gold, silver, copper, lead, coal (and later oil) that attracted tens of thousands of miners to the area. The richest of all gold placer diggings was discovered at Alder Gulch, where the town of Virginia City was established. Other rich placer deposits were found at Last Chance Gulch, where the city of Helena now stands, Confederate Gulch, Silver Bow, Emigrant Gulch, and Cooke City. Gold output from 1862 through 1876 reached $144 million; silver then became even more important. The largest mining operations were in the city of Butte, which had important silver deposits and gigantic copper deposits.
Montana territory
Before the creation of Montana Territory (1864–1889), various parts of what is now Montana were parts of Oregon Territory (1848–1859), Washington Territory (1853–1863), Idaho Territory (1863–1864), and Dakota Territory (1861–1864). Montana became a United States territory (Montana Territory) on May 26, 1864. The first territorial capital was at Bannack. The first territorial governor was Sidney Edgerton. The capital moved to Virginia City in 1865 and to Helena in 1875. In 1870, the non-Indian population of Montana Territory was 20,595. The Montana Historical Society, founded on February 2, 1865, in Virginia City is the oldest such institution west of the Mississippi (excluding Louisiana). In 1869 and 1870 respectively, the Cook–Folsom–Peterson and the Washburn–Langford–Doane Expeditions were launched from Helena into the Upper Yellowstone region and directly led to the creation of Yellowstone National Park in 1872.
Conflicts
As white settlers began populating Montana from the 1850s through the 1870s, disputes with Native Americans ensued, primarily over land ownership and control. In 1855, Washington Territorial Governor Isaac Stevens negotiated the Hellgate treaty between the United States Government and the Salish, Pend d'Oreille, and the Kootenai people of western Montana, which established boundaries for the tribal nations. The treaty was ratified in 1859. While the treaty established what later became the Flathead Indian Reservation, trouble with interpreters and confusion over the terms of the treaty led whites to believe that the Bitterroot Valley was opened to settlement, but the tribal nations disputed those provisions. The Salish remained in the Bitterroot Valley until 1891.
The first U.S. Army post established in Montana was Camp Cooke in 1866, on the Missouri River, to protect steamboat traffic going to Fort Benton, Montana. More than a dozen additional military outposts were established in the state. Pressure over land ownership and control increased due to discoveries of gold in various parts of Montana and surrounding states. Major battles occurred in Montana during Red Cloud's War, the Great Sioux War of 1876, the Nez Perce War and in conflicts with Piegan Blackfeet. The most notable of these were the Marias Massacre (1870), Battle of the Little Bighorn (1876), Battle of the Big Hole (1877) and Battle of Bear Paw (1877). The last recorded conflict in Montana between the U.S. Army and Native Americans occurred in 1887 during the Battle of Crow Agency in the Big Horn country. Indian survivors who had signed treaties were generally required to move onto reservations.
thumb|Chief Joseph and Col. John Gibbon met again on the Big Hole Battlefield site in 1889
Simultaneously with these conflicts, bison, a keystone species and the primary protein source that Native people had survived on for centuries, were being destroyed. Some estimates say there were over 13 million bison in Montana in 1870. In 1875, General Philip Sheridan pleaded to a joint session of Congress to authorize the slaughtering of herds in order to deprive the Indians of their source of food. By 1884, commercial hunting had brought bison to the verge of extinction; only about 325 bison remained in the entire United States.
Cattle ranching
Cattle ranching has been central to Montana's history and economy since Johnny Grant began wintering cattle in the Deer Lodge Valley in the 1850s and traded cattle fattened in fertile Montana valleys with emigrants on the Oregon Trail. Nelson Story brought the first Texas Longhorn cattle into the territory in 1866. Granville Stuart, Samuel Hauser and Andrew J. Davis started a major open range cattle operation in Fergus County in 1879. The Grant-Kohrs Ranch National Historic Site in Deer Lodge is maintained today as a link to the ranching style of the late 19th century. Operated by the National Park Service, it is a working ranch.
Railroads
Tracks of the Northern Pacific Railroad (NPR) reached Montana from the west in 1881 and from the east in 1882. However, the railroad played a major role in sparking tensions with Native American tribes in the 1870s. Jay Cooke, the NPR president, launched major surveys into the Yellowstone valley in 1871, 1872, and 1873, which were challenged forcefully by the Sioux under Chief Sitting Bull. These clashes, in part, contributed to the Panic of 1873, a financial crisis that delayed construction of the railroad into Montana. Surveys in 1874, 1875, and 1876 helped spark the Great Sioux War of 1876. The transcontinental NPR was completed on September 8, 1883, at Gold Creek.
Tracks of the Great Northern Railroad (GNR) reached eastern Montana in 1887 and when they reached the northern Rocky Mountains in 1890, the GNR became a significant promoter of tourism to Glacier National Park region. The transcontinental GNR was completed on January 6, 1893, at Scenic, Washington.
In 1881, the Utah and Northern Railway, a branch line of the Union Pacific, completed a narrow-gauge line from northern Utah to Butte. A number of smaller spur lines operated in Montana from 1881 into the 20th century, including the Oregon Short Line, Montana Railroad, and Milwaukee Road.
Statehood
thumb|Buffalo Soldiers, Ft. Keogh, Montana, 1890. The nickname was given to the "Black Cavalry" by the Native American tribes they fought.
Under Territorial Governor Thomas Meagher, Montanans held a constitutional convention in 1866 in a failed bid for statehood. A second constitutional convention, held in Helena in 1884, produced a constitution that Montana citizens ratified 3:1 in November 1884. For political reasons, Congress did not approve Montana statehood until February 1889, when President Grover Cleveland signed an omnibus bill granting statehood to Montana, North Dakota, South Dakota, and Washington once the appropriate state constitutions were crafted. In July 1889, Montanans convened their third constitutional convention and produced a constitution accepted by the people and the federal government. On November 8, 1889, President Benjamin Harrison proclaimed Montana the forty-first state in the union. The first state governor was Joseph K. Toole. In the 1880s, Helena (the current state capital) had more millionaires per capita than any other United States city.
Homesteading
The Homestead Act of 1862 provided free land to settlers who could claim and "prove up" 160 acres of federal land in the Midwest and western United States. Montana did not see a large influx of immigrants from this act because 160 acres was usually insufficient to support a family in the arid territory. The first homestead claim under the act in Montana was made by David Carpenter near Helena in 1868. The first claim by a woman was made near Warm Springs Creek by Gwenllian Evans, the daughter of Deer Lodge, Montana, pioneer Morgan Evans. By 1880, there were farms in the more verdant valleys of central and western Montana, but few on the eastern plains.
The Desert Land Act of 1877 was passed to allow settlement of arid lands in the West; it allotted land to settlers for a fee of $0.25 per acre and a promise to irrigate it. After three years, a fee of one dollar per acre would be paid and the land would be owned by the settler. This act brought mostly cattle and sheep ranchers into Montana, many of whom grazed their herds on the Montana prairie for three years, did little to irrigate the land, and then abandoned it without paying the final fees. Some farmers came with the arrival of the Great Northern and Northern Pacific Railroads throughout the 1880s and 1890s, though in relatively small numbers.
thumb|Mennonite family in Montana, c. 1937
In the early 1900s, James J. Hill of the Great Northern began promoting settlement in the Montana prairie to fill his trains with settlers and goods. Other railroads followed suit. In 1902, the Reclamation Act was passed, allowing irrigation projects to be built in Montana's eastern river valleys. In 1909, Congress passed the Enlarged Homestead Act, which expanded the amount of free land per family, and in 1912 the time required to "prove up" on a claim was reduced to three years. In 1916, the Stock-Raising Homestead Act allowed homesteads of 640 acres in areas unsuitable for irrigation. This combination of advertising and changes in the Homestead Act drew tens of thousands of homesteaders, lured by free land, with World War I bringing particularly high wheat prices. In addition, Montana was going through a temporary period of higher-than-average precipitation. Homesteaders arriving in this period were known as "honyockers" or "scissorbills." Though the word "honyocker", possibly derived from the ethnic slur "hunyak", was applied derisively to homesteaders as "greenhorns", "new at his business", or "unprepared", the majority of these new settlers had previous farming experience, though many did not.
However, farmers faced a number of problems. Massive debt was one. Also, most settlers were from wetter regions and were unprepared for the dry climate, lack of trees, and scarce water resources. In addition, small homesteads were unsuited to the arid environment. Weather and agricultural conditions are much harsher and drier west of the 100th meridian. Then the droughts of 1917–1921 proved devastating. Many people left, and half the banks in the state went bankrupt as a result of providing mortgages that could not be repaid. As a result, farm sizes increased while the number of farms decreased.
By 1910, homesteaders had filed claims on over five million acres, and by 1923, over 93 million acres were farmed. In 1910, the Great Falls land office alone saw over 1,000 homestead filings per month, and the peak years of 1917–1918 saw 14,000 new homesteads each year. A significant drop followed the drought of 1919.
Montana and World War I
As World War I broke out, Jeannette Rankin, the first woman in the United States to be a member of Congress, was a pacifist and voted against the United States' declaration of war. Her actions were widely criticized in Montana, where public support for the war was strong, and wartime sentiment reached levels of hyper-patriotism among many Montanans. In 1917–18, due to a miscalculation of Montana's population, approximately 40,000 Montanans, ten percent of the state's population, either volunteered or were drafted into the armed forces. This represented a manpower contribution to the war that was 25 percent higher than any other state on a per capita basis. Approximately 1500 Montanans died as a result of the war and 2437 were wounded, also higher than any other state on a per capita basis. Montana's Remount station in Miles City provided 10,000 cavalry horses for the war, more than any other Army post in the US. The war created a boom for Montana mining, lumber and farming interests as demand for war materials and food increased.
In June 1917, the U.S. Congress passed the Espionage Act of 1917 which was later extended by the Sedition Act of 1918, enacted in May 1918. In February 1918, the Montana legislature had passed the Montana Sedition Act, which was a model for the federal version. In combination, these laws criminalized criticism of the U.S. government, military, or symbols through speech or other means. The Montana Act led to the arrest of over 200 individuals and the conviction of 78, mostly of German or Austrian descent. Over 40 spent time in prison. In May 2006, then-Governor Brian Schweitzer posthumously issued full pardons for all those convicted of violating the Montana Sedition Act.
The Montanans who opposed U.S. entry into the war included certain immigrant groups of German and Irish heritage as well as pacifist Anabaptist people such as the Hutterites and Mennonites, many of whom were also of Germanic heritage. In turn, pro-War groups formed, such as the Montana Council of Defense, created by Governor Samuel V. Stewart as well as local "loyalty committees."
War sentiment was complicated by labor issues. The Anaconda Copper Company, which was at its historic peak of copper production, was an extremely powerful force in Montana, but also faced criticism and opposition from socialist newspapers and unions struggling to make gains for their members. In Butte, a multi-ethnic community with significant European immigrant population, labor unions, particularly the newly formed Metal Mine Workers' Union, opposed the war on grounds that it mostly profited large lumber and mining interests. In the wake of ramped-up mine production and the Speculator Mine disaster in June 1917, Industrial Workers of the World organizer Frank Little arrived in Butte to organize miners. He gave some speeches with inflammatory anti-war rhetoric. On August 1, 1917, he was dragged from his boarding house by masked vigilantes, and hanged from a railroad trestle, considered a lynching. Little's murder and the strikes that followed resulted in the National Guard being sent to Butte to restore order. Overall, anti-German and anti-labor sentiment increased and created a movement that led to the passage of the Montana Sedition Act the following February. In addition, the Council of Defense was made a state agency with the power to prosecute and punish individuals deemed in violation of the Act. The Council also passed rules limiting public gatherings and prohibiting the speaking of German in public.
In the wake of the legislative action in 1918, emotions rose. U.S. Attorney Burton K. Wheeler and several District Court Judges who hesitated to prosecute or convict people brought up on charges were strongly criticized. Wheeler was brought before the Council of Defense, though he avoided formal proceedings, and a District Court judge from Forsyth was impeached. There were burnings of German-language books and several near-hangings. The prohibition on speaking German remained in effect into the early 1920s. Complicating the wartime struggles, the 1918 Influenza epidemic claimed the lives of over 5,000 Montanans. The period has been dubbed "Montana's Agony" by some historians due to the suppression of civil liberties that occurred.
Depression era
An economic depression began in Montana after World War I and lasted through the Great Depression until the beginning of World War II. This caused great hardship for farmers, ranchers, and miners. The wheat farms in eastern Montana make the state a major producer; the wheat has a relatively high protein content and thus commands premium prices.
Montana and World War II
When the U.S. entered World War II on December 8, 1941, many Montanans already had enlisted in the military to escape the poor national economy of the previous decade. Another 40,000-plus Montanans entered the armed forces in the first year following the declaration of war, and over 57,000 joined up before the war ended. These numbers constituted about 10 percent of the state's total population, and Montana again contributed one of the highest numbers of soldiers per capita of any state. Many Native Americans were among those who served, including soldiers from the Crow Nation who became Code Talkers. At least 1,500 Montanans died in the war. Montana also was the training ground for the First Special Service Force, or "Devil's Brigade," a joint U.S.-Canadian commando-style force that trained at Fort William Henry Harrison for experience in mountainous and winter conditions before deployment. Air bases were built in Great Falls, Lewistown, Cut Bank, and Glasgow, some of which were used as staging areas to prepare planes to be sent to allied forces in the Soviet Union. During the war, about 30 Japanese balloon bombs were documented to have landed in Montana, though no casualties or major forest fires were attributed to them.
In 1940, Jeannette Rankin was again elected to Congress. In 1941, as she had in 1917, she voted against the United States' declaration of war after the Japanese attack on Pearl Harbor. Hers was the only vote against the war, and in the wake of public outcry over her vote, Rankin required police protection for a time. Other pacifists tended to be those from "peace churches" who generally opposed war. Many individuals claiming conscientious objector status from throughout the U.S. were sent to Montana during the war as smokejumpers and for other forest fire-fighting duties.
Other military
During World War II, the planned battleship USS Montana was named in honor of the state, but it was never completed. Montana is the only one of the first 48 states never to have had a completed battleship named for it. Alaska and Hawaii have both had nuclear submarines named after them. Montana is the only state in the union without a modern naval ship named in its honor. However, in August 2007 Senator Jon Tester asked the Navy to christen a submarine USS Montana. Secretary of the Navy Ray Mabus announced on September 3, 2015, that the Virginia-class attack submarine SSN-794 will bear the state's name. It will be the second commissioned warship named Montana.
Cold War Montana
In the post-World War II Cold War era, Montana became host to U.S. Air Force Military Air Transport Service (1947) for airlift training in C-54 Skymasters; in 1953, Strategic Air Command air and missile forces were based at Malmstrom Air Force Base in Great Falls. The base also hosted the 29th Fighter Interceptor Squadron, Air Defense Command, from 1953 to 1968. In December 1959, Malmstrom AFB was selected as the home of the new Minuteman I ballistic missile. The first operational missiles were in place and ready in early 1962. In late 1962, missiles assigned to the 341st Strategic Missile Wing played a major role in the Cuban Missile Crisis. When the Soviets removed their missiles from Cuba, President John F. Kennedy said the Soviets backed down because they knew he had an "ace in the hole," referring directly to the Minuteman missiles in Montana. Montana eventually became home to the largest ICBM field in the United States.
Demographics
thumb|Montana population density map
The United States Census Bureau estimates that the population of Montana was 1,032,949 on July 1, 2015, an increase of 43,534 people, or 4.40 percent, over the 2010 United States Census figure of 989,415. During the first decade of the new century, growth was mainly concentrated in Montana's seven largest counties, with the highest percentage growth in Gallatin County, which saw a 32 percent increase in its population from 2000 to 2010. The city with the largest percentage growth was Kalispell, at 40.1 percent, and the city with the largest increase in actual residents was Billings, which gained 14,323 people from 2000 to 2010.
On January 3, 2012, the Census and Economic Information Center (CEIC) at the Montana Department of Commerce estimated Montana had hit the one million population mark sometime between November and December 2011. The United States Census Bureau estimates that the population of Montana was 1,005,141 on July 1, 2012, a 1.6 percent increase since the 2010 United States Census.
According to the 2010 Census, 89.4 percent of the population was White (87.8 percent Non-Hispanic White), 6.3 percent American Indian and Alaska Native, 2.9 percent Hispanics and Latinos of any race, 0.6 percent Asian, 0.4 percent Black or African American, 0.1 percent Native Hawaiian and Other Pacific Islander, 0.6 percent from Some Other Race, and 2.5 percent from two or more races. The largest European ancestry groups in Montana as of 2010 are: German (27.0 percent), Irish (14.8 percent), English (12.6 percent), Norwegian (10.9 percent), French (4.7 percent) and Italian (3.4 percent).
Montana racial breakdown of population
Racial composition | 1990 | 2000 | 2010
White | 92.7% | 90.6% | 89.4%
Native | 6.0% | 6.2% | 6.3%
Asian | 0.5% | 0.5% | 0.6%
Black | 0.3% | 0.3% | 0.4%
Native Hawaiian and other Pacific Islander | – | 0.1% | 0.1%
Other race | 0.5% | 0.6% | 0.6%
Two or more races | – | 1.7% | 2.5%
Language
English is the official language in the state of Montana, as it is in many U.S. states. According to the 2000 U.S. Census, 94.8 percent of the population aged 5 and older speak English at home. Spanish is the language most commonly spoken at home other than English. There were about 13,040 Spanish-language speakers in the state (1.4 percent of the population) in 2011. There were also 15,438 (1.7 percent of the state population) speakers of Indo-European languages other than English or Spanish, 10,154 (1.1 percent) speakers of a Native American language, and 4,052 (0.4 percent) speakers of an Asian or Pacific Islander language. Other languages spoken in Montana (as of 2013) include Assiniboine (about 150 speakers in Montana and Canada), Blackfoot (about 100 speakers), Cheyenne (about 1,700 speakers), Plains Cree (about 100 speakers), Crow (about 3,000 speakers), Dakota (about 18,800 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota), German Hutterite (about 5,600 speakers), Gros Ventre (about 10 speakers), Kalispel-Pend d'Oreille (about 64 speakers), Kutenai (about 6 speakers), and Lakota (about 6,000 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota). The United States Department of Education estimated in 2009 that 5,274 students in Montana spoke a language at home other than English. These included a Native American language (64 percent), German (4 percent), Spanish (3 percent), Russian (1 percent), and Chinese (less than 0.5 percent).
Top 14 non-English languages spoken in Montana (as of 2000)
Language | Percentage of population
Spanish | 1.5%
German | 1.1%
French and Crow (tied) | 0.4%
Scandinavian languages (including Danish, Norwegian, and Swedish) | 0.2%
Italian, Japanese, Russian, Native American languages (other than Crow; significantly Cheyenne), and Slavic languages (including Czech, Slovak, and Ukrainian) (tied) | 0.1%
Intrastate demographics
Montana has a larger Native American population numerically and percentage-wise than most U.S. states. Although the state ranked 45th in population (according to the 2010 U.S. Census), it ranked 19th in total native people population. Native people constituted 6.5 percent of the state's total population, the sixth highest percentage of all 50 states. Montana has three counties in which Native Americans are a majority: Big Horn, Glacier, and Roosevelt. Other counties with large Native American populations include Blaine, Cascade, Hill, Missoula, and Yellowstone counties. The state's Native American population grew by 27.9 percent between 1980 and 1990 (at a time when Montana's entire population rose just 1.6 percent), and by 18.5 percent between 2000 and 2010. As of 2009, almost two-thirds of Native Americans in the state live in urban areas. Of Montana's 20 largest cities, Polson (15.7 percent), Havre (13.0 percent), Great Falls (5.0 percent), Billings (4.4 percent), and Anaconda (3.1 percent) had the greatest percentage of Native American residents in 2010. Billings (4,619), Great Falls (2,942), Missoula (1,838), Havre (1,210), and Polson (706) have the most Native Americans living there. The state's seven reservations include more than twelve distinct Native American ethnolinguistic groups.
While the largest European-American population in Montana overall is German, pockets of significant Scandinavian ancestry are prevalent in some of the farming-dominated northern and eastern prairie regions, parallel to nearby regions of North Dakota and Minnesota. Farmers of Irish, Scots, and English roots also settled in Montana. The historically mining-oriented communities of western Montana such as Butte have a wider range of European-American ethnicity; Finns, Eastern Europeans and especially Irish settlers left an indelible mark on the area, as well as people originally from British mining regions such as Cornwall, Devon and Wales. The nearby city of Helena, also founded as a mining camp, had a similar mix in addition to a small Chinatown. Many of Montana's historic logging communities originally attracted people of Scottish, Scandinavian, Slavic, English and Scots-Irish descent.
The Hutterites, an Anabaptist sect originally from Switzerland, settled here, and today Montana is second only to South Dakota in U.S. Hutterite population with several colonies spread across the state. Beginning in the mid-1990s, the state also saw an influx of Amish, who relocated to Montana from the increasingly urbanized areas of Ohio and Pennsylvania.
Montana's Hispanic population is concentrated around the Billings area in south-central Montana, where many of Montana's Mexican-Americans have been in the state for generations. Great Falls has the highest percentage of African-Americans in its population, although Billings has more African American residents than Great Falls.
The Chinese in Montana, while a low percentage today, have historically been an important presence. About 2000–3000 Chinese miners were in the mining areas of Montana by 1870, and 2500 in 1890. However, public opinion grew increasingly negative toward them in the 1890s and nearly half of the state's Asian population left the state by 1900. Today, there is a significant Hmong population centered in the vicinity of Missoula. Montanans who claim Filipino ancestry amount to almost 3,000, making them currently the largest Asian American group in the state.
Religion
According to the Pew Forum, the religious affiliations of the people of Montana are as follows: Protestant 47%, Catholic 23%, LDS (Mormon) 5%, Jehovah's Witness 2%, Buddhist 1%, Jewish 0.5%, Muslim 0.5%, Hindu 0.5%, and non-religious 20%.
The largest denominations in Montana as of 2010 were the Catholic Church with 127,612 adherents, The Church of Jesus Christ of Latter-day Saints with 46,484 adherents, Evangelical Lutheran Church in America with 38,665 adherents, and non-denominational Evangelical Protestant with 27,370 adherents.
Native American population
thumb|left|Seven Indian reservations in Montana (borders are not exact)
Approximately 66,000 people of Native American heritage live in Montana. Stemming from multiple treaties and federal legislation, including the Indian Appropriations Act (1851), the Dawes Act (1887), and the Indian Reorganization Act (1934), seven Indian reservations, encompassing eleven federally recognized tribal nations, were created in Montana. A twelfth nation, the Little Shell Chippewa is a "landless" people headquartered in Great Falls; it is recognized by the state of Montana but not by the U.S. government. The Blackfeet nation is headquartered on the Blackfeet Indian Reservation (1851) in Browning, Crow on the Crow Indian Reservation (1851) in Crow Agency, Confederated Salish and Kootenai and Pend d'Oreille on the Flathead Indian Reservation (1855) in Pablo, Northern Cheyenne on the Northern Cheyenne Indian Reservation (1884) at Lame Deer, Assiniboine and Gros Ventre on the Fort Belknap Indian Reservation (1888) in Fort Belknap Agency, Assiniboine and Sioux on the Fort Peck Indian Reservation (1888) at Poplar, and Chippewa-Cree on the Rocky Boy's Indian Reservation (1916) near Box Elder. Approximately 63% of all Native people live off the reservations, concentrated in the larger Montana cities, with the largest concentration of urban Indians in Great Falls. The state also has a small Métis population, and 1990 census data indicated that people from as many as 275 different tribes lived in Montana.
Montana's Constitution specifically reads that "the state recognizes the distinct and unique cultural heritage of the American Indians and is committed in its educational goals to the preservation of their cultural integrity." It is the only state in the U.S. with such a constitutional mandate. The Indian Education for All Act (IEFA) was passed in 1999 to provide funding for this mandate and ensure implementation. It mandates that all schools teach American Indian history, culture, and heritage from preschool through college. For kindergarten through 12th-grade students, an "Indian Education for All" curriculum from the Montana Office of Public Instruction is available free to all schools. The state was sued in 2004 because of lack of funding, and the state has since increased its support of the program. South Dakota passed similar legislation in 2007, and Wisconsin was working to strengthen its own program based on this model and the current practices of Montana's schools. Each Indian reservation in the state has a fully accredited tribal college. The University of Montana "was the first to establish dual admission agreements with all of the tribal colleges and as such it was the first institution in the nation to actively facilitate student transfer from the tribal colleges".
Economy
thumb|Montana ranks 2nd nationally in craft breweries per capita.
thumb|upright|First Interstate Center in downtown Billings, the tallest building in Montana
The Bureau of Economic Analysis estimates that Montana's total state product in 2014 was $44.3 billion. Per capita personal income in 2014 was $40,601, 35th in the nation.
Montana is a relative hub of beer microbrewing, ranking third in the nation in number of craft breweries per capita in 2011. There are significant industries for lumber and mineral extraction; the state's resources include gold, coal, silver, talc, and vermiculite. Ecotaxes on resource extraction are numerous. A 1974 state severance tax on coal (which varied from 20 to 30 percent) was upheld by the Supreme Court of the United States in Commonwealth Edison Co. v. Montana, 453 U.S. 609 (1981).
Tourism is also important to the economy with over ten million visitors a year to Glacier National Park, Flathead Lake, the Missouri River headwaters, the site of the Battle of Little Bighorn and three of the five entrances to Yellowstone National Park.
Montana's personal income tax contains 7 brackets, with rates ranging from 1 percent to 6.9 percent. Montana has no sales tax. In Montana, household goods are exempt from property taxes. However, property taxes are assessed on livestock, farm machinery, heavy equipment, automobiles, trucks, and business equipment. The amount of property tax owed is not determined solely by the property's value. The property's value is multiplied by a tax rate, set by the Montana Legislature, to determine its taxable value. The taxable value is then multiplied by the mill levy established by various taxing jurisdictions—city and county government, school districts and others.
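As a rough illustration of the two-step calculation described above, here is a minimal Python sketch with entirely hypothetical figures; the tax rate and mill levy below are placeholders chosen for arithmetic convenience, not actual Montana rates:

# Montana-style property tax arithmetic (hypothetical figures only):
#   taxable value = assessed market value x tax rate set by the legislature
#   tax owed      = taxable value x mill levy (1 mill = $1 per $1,000 of taxable value)

market_value = 250_000   # hypothetical assessed market value, in dollars
tax_rate = 0.0135        # hypothetical legislative tax rate (1.35 percent)
mill_levy = 650          # hypothetical combined mills from city, county, and schools

taxable_value = market_value * tax_rate        # 3,375.00
tax_owed = taxable_value * mill_levy / 1000    # 2,193.75

print(f"Taxable value: ${taxable_value:,.2f}")
print(f"Tax owed: ${tax_owed:,.2f}")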
As of June 2015, the state's unemployment rate was 3.9 percent.
Culture
Many well-known artists, photographers and authors have documented the land, culture and people of Montana in the last 100 years. Painter and sculptor Charles Marion Russell, known as "the cowboy artist" created more than 2,000 paintings of cowboys, Native Americans, and landscapes set in the Western United States and in Alberta, Canada. The C. M. Russell Museum Complex located in Great Falls, Montana houses more than 2,000 Russell artworks, personal objects, and artifacts.
Evelyn Cameron, a naturalist and photographer from Terry documented early 20th century life on the Montana prairie, taking startlingly clear pictures of everything around her: cowboys, sheepherders, weddings, river crossings, freight wagons, people working, badlands, eagles, coyotes and wolves.
Many notable Montana authors have documented or been inspired by life in Montana in both fiction and non-fiction works. Pulitzer Prize winner Wallace Earle Stegner from Great Falls was often called "The Dean of Western Writers". James Willard Schultz ("Apikuni") from Browning is most noted for his prolific stories about Blackfeet life and his contributions to the naming of prominent features in Glacier National Park.
Major cultural events
thumb|Dancers at Crow Fair in 1941
Montana hosts numerous arts and cultural festivals and events every year. Major events include:
Bozeman was once known as the "Sweet Pea capital of the nation" referencing the prolific edible pea crop. To promote the area and celebrate its prosperity, local business owners began a "Sweet Pea Carnival" that included a parade and queen contest. The annual event lasted from 1906 to 1916. Promoters used the inedible but fragrant and colorful sweet pea flower as an emblem of the celebration. In 1977 the "Sweet Pea" concept was revived as an arts festival rather than a harvest celebration, growing into a three-day event that is one of the largest festivals in Montana.
Montana Shakespeare in the Parks has been performing free, live theatrical productions of Shakespeare and other classics throughout Montana since 1973. The Montana Shakespeare Company is based in Helena.
Since 1909, the Crow Fair and Rodeo, near Hardin, has been an annual event every August in Crow Agency and is currently the largest Northern Native American gathering, attracting nearly 45,000 spectators and participants. Since 1952, North American Indian Days has been held every July in Browning.
Lame Deer hosts the annual Northern Cheyenne Powwow.
Education
Colleges and universities
The Montana University System consists of:
Dawson Community College
Flathead Valley Community College
Miles Community College
Montana State University – Bozeman
Gallatin College Montana State University – Bozeman
Montana State University – Billings
City College at Montana State University Billings – Billings
Montana State University – Northern – Havre
Great Falls College Montana State University – Great Falls
University of Montana – Missoula
Missoula College University of Montana – Missoula
Montana Tech of the University of Montana – Butte
Highlands College of Montana Tech – Butte
University of Montana Western – Dillon
Helena College University of Montana – Helena
Bitterroot College University of Montana – Hamilton
Tribal colleges in Montana include:
Aaniiih Nakoda College – Harlem
Blackfeet Community College – Browning
Chief Dull Knife College – Lame Deer
Fort Peck Community College – Poplar
Little Big Horn College – Crow Agency
Salish Kootenai College – Pablo
Stone Child College – Box Elder
There are three private, non-profit colleges in Montana:
Carroll College
Rocky Mountain College
University of Great Falls
Schools
The Montana Territory was formed on April 26, 1864, when the U.S. passed the Organic Act. Schools started forming in the area before it was officially a territory, as families began settling there. The first schools were subscription schools, typically held in the teacher's home. The first formal school on record was at Fort Owen in the Bitterroot valley in 1862. The students were Indian children and the children of Fort Owen employees. The first school term started in early winter and lasted only until February 28. Classes were taught by Mr. Robinson. Another early subscription school was started by Thomas Dimsdale in Virginia City in 1863; students in this school were charged $1.75 per week. The Montana Territorial Legislative Assembly had its inaugural meeting in 1864. The first legislature authorized counties to levy taxes for schools, which set the foundations for public schooling. Madison County was the first to take advantage of the newly authorized taxes, and it formed the first public school in Virginia City in 1866. The first school year was scheduled to begin in January 1866, but severe weather postponed its opening until March. The first school year ran through the summer and did not end until August 17. One of the first teachers at the school was Sarah Raymond, a 25-year-old woman who had traveled to Virginia City via wagon train in 1865. To become a certified teacher, Raymond took a test in her home and paid a $6 fee in gold dust to obtain a teaching certificate. With the help of an assistant teacher, Mrs. Farley, Raymond was responsible for teaching 50 to 60 students each day out of the 81 students enrolled at the school. Raymond was paid $125 per month, and Mrs. Farley was paid $75 per month. There were no textbooks in the school; in their place was an assortment of books brought in by various emigrants. Raymond quit teaching the following year, but she later became the Madison County superintendent of schools.
Sports
thumb|Montana Grizzlies football at Washington–Grizzly Stadium, Missoula
Professional sports
There are no major league sports franchises in Montana due to the state's relatively small and dispersed population, but a number of minor league teams play in the state. Baseball is the minor-league sport with the longest heritage in the state, and Montana is currently home to four Minor League Baseball teams, all members of the Pioneer Baseball League: Billings Mustangs, Great Falls Voyagers, Helena Brewers, and Missoula Osprey.
College sports
All of Montana's four-year colleges and universities field intercollegiate sports teams. The two largest schools, the University of Montana and Montana State University, are members of the Big Sky Conference and have enjoyed a strong athletic rivalry since the early twentieth century. Six of Montana's smaller four-year schools are members of the Frontier Conference. One is a member of the Great Northwest Athletic Conference.
Other sports
A variety of sports are offered at Montana high schools. Montana allows the smallest—"Class C"—high schools to utilize six-man football teams, dramatized in the independent 2002 film, The Slaughter Rule.
There are junior ice hockey teams in Montana, five of which are affiliated with the North American 3 Hockey League: Billings Bulls, Bozeman Icedogs, Glacier Nationals, Great Falls Americans, and Helena Bighorns. Others are in the Western States Hockey League: Butte Cobras and the Whitefish Wolverines.
Olympic competitors
Ski jumping champion and United States Skiing Hall of Fame inductee Casper Oimoen was captain of the U.S. Olympic team at the 1936 Winter Olympics while he was a resident of Anaconda. He placed thirteenth that year, and had previously finished fifth at the 1932 Winter Olympics.
Montana has produced two U.S. champions and Olympic competitors in men's figure skating, both from Great Falls. John Misha Petkevich, who lived and trained in Montana before entering college, competed in the 1968 and 1972 Winter Olympics. Scott Davis competed at the 1994 Winter Olympics.
Missoulian Tommy Moe won Olympic gold and silver medals at the 1994 Winter Olympics in downhill skiing and super G, the first American skier to win two medals at any Winter Olympics.
Eric Bergoust, also of Missoula, won an Olympic gold medal in freestyle aerial skiing at the 1998 Winter Olympics, also competing in 1994, 2002 and 2006 Olympics plus winning 13 World Cup titles.
Sporting achievements
Montanans have been a part of several major sporting achievements:
In 1889, Spokane became the first and only Montana horse to win the Kentucky Derby. For this accomplishment, the horse was admitted to the Montana Cowboy Hall of Fame in 2008.
In 1904, a basketball team of young Native American women from Fort Shaw, undefeated in their previous season, traveled to the Louisiana Purchase Exposition in St. Louis, defeated all challenging teams, and were declared world champions.
In 1923, the controversial Jack Dempsey vs. Tommy Gibbons fight for the heavyweight boxing championship, won by Dempsey, took place in Shelby.
Recreation
Montana provides year-round recreation opportunities for residents and visitors. Hiking, fishing, hunting, watercraft recreation, camping, golf, cycling, horseback riding, and skiing are popular activities.
Fishing and hunting
Montana has been a destination for its world-class trout fisheries since the 1930s. Fly fishing for several species of native and introduced trout in rivers and lakes is popular with both residents and tourists throughout the state. Montana is the home of the Federation of Fly Fishers and hosts many of the organization's annual conclaves. The state has robust recreational lake trout and kokanee salmon fisheries in the west, walleye can be found in many parts of the state, and northern pike, smallmouth and largemouth bass, catfish and paddlefish inhabit the waters of eastern Montana. Robert Redford's 1992 film of Norman Maclean's novel, A River Runs Through It, was filmed in Montana and brought national attention to fly fishing and the state.
Montana is home to the Rocky Mountain Elk Foundation and has a historic big game hunting tradition. There are fall bow and general hunting seasons for elk, pronghorn antelope, whitetail deer and mule deer. A random draw grants a limited number of permits for moose, mountain goats and bighorn sheep. There is a spring hunting season for black bear and in most years, limited hunting of bison that leave Yellowstone National Park is allowed. Current law allows both hunting and trapping of a specific number of wolves and mountain lions. Trapping of assorted fur bearing animals is allowed in certain seasons and many opportunities exist for migratory waterfowl and upland bird hunting.
Winter recreation
thumb|The Big Sky Resort
thumb|The Palisades area on the north end of the ski area at Red Lodge Mountain Resort
thumb|Guided snowmobile tours in Yellowstone Park
Both downhill skiing and cross-country skiing are popular in Montana, which has 15 developed downhill ski areas open to the public, including:
Bear Paw Ski Bowl near Havre
Big Sky Resort at Big Sky
Blacktail Mountain near Lakeside
Bridger Bowl Ski Area near Bozeman
Discovery Basin between Philipsburg and Anaconda
Great Divide near Helena
Lookout Pass off Interstate 90 at the Montana-Idaho border
Lost Trail near Darby
Maverick Mountain near Dillon
Moonlight Basin near Big Sky
Red Lodge Mountain Resort near Red Lodge
Showdown Ski Area near White Sulphur Springs
Snowbowl Ski Area near Missoula
Teton Pass Ski Area near Choteau
Turner Mountain Ski Resort near Libby
Whitefish Mountain Resort near Whitefish
Big Sky, Moonlight Basin, Red Lodge, and Whitefish Mountain are destination resorts, while the remaining areas do not have overnight lodging at the ski area, though several host restaurants and other amenities. These day-use resorts partner with local lodging businesses to offer ski and lodging packages.
Montana also has millions of acres open to cross-country skiing on nine of its national forests plus in Glacier National Park. In addition to cross-country trails at most of the downhill ski areas, there are also 13 private cross-country skiing resorts. Yellowstone National Park also allows cross-country skiing.
Snowmobiling is popular in Montana, which boasts over 4,000 miles of trails and frozen lakes available in winter. There are 24 areas where snowmobile trails are maintained, most also offering ungroomed trails. West Yellowstone offers a large selection of trails and is the primary starting point for snowmobile trips into Yellowstone National Park, where "oversnow" vehicle use is strictly limited, usually to guided tours, and regulations are in considerable flux.
Snow coach tours are offered at Big Sky, Whitefish, West Yellowstone and into Yellowstone National Park. Equestrian skijoring has a niche in Montana, which hosts the World Skijoring Championships in Whitefish as part of the annual Whitefish Winter Carnival.
Health
Montana does not have a Level I trauma center, but does have Level II trauma centers in Missoula, Billings, and Great Falls. In 2013 AARP The Magazine named the Billings Clinic one of the safest hospitals in the United States.
Montana is ranked as the least obese state in the U.S., at 19.6%, according to the 2014 Gallup Poll.
Media
As of 2010, Missoula was the 166th-largest media market in the United States as ranked by Nielsen Media Research, while Billings was 170th, Great Falls 190th, the Butte-Bozeman area 191st, and Helena 206th. There are 25 television stations in Montana, representing each major U.S. network. As of August 2013, there were 527 FCC-licensed FM radio stations broadcasting in Montana, along with 114 AM stations.
During the age of the Copper Kings, each Montana copper company had its own newspaper. This changed in 1959 when Lee Enterprises bought several Montana newspapers. Montana's largest circulating daily city newspapers are the Billings Gazette (circulation 39,405), Great Falls Tribune (26,733), and Missoulian (25,439).
Transportation
thumb|Yellowstone Airport, West Yellowstone, Montana
Railroads have been an important method of transportation in Montana since the 1880s. Historically, the state was traversed by the main lines of three east-west transcontinental routes: the Milwaukee Road, the Great Northern, and the Northern Pacific. Today, the BNSF Railway is the state's largest railroad, its main transcontinental route incorporating the former Great Northern main line across the state. Montana RailLink, a privately held Class II railroad, operates former Northern Pacific trackage in western Montana.
In addition, Amtrak's Empire Builder train runs through the north of the state, stopping in Libby, Whitefish, West Glacier, Essex, East Glacier Park, Browning, Cut Bank, Shelby, Havre, Malta, Glasgow, and Wolf Point.
Bozeman Yellowstone International Airport is the busiest airport in the state of Montana, surpassing Billings Logan International Airport in the spring of 2013. Montana's other major airports include Billings Logan International Airport, Missoula International Airport, Great Falls International Airport, Glacier Park International Airport, Helena Regional Airport, Bert Mooney Airport and Yellowstone Airport. Eight smaller communities have airports designated for commercial service under the Essential Air Service program.
Historically, U.S. Route 10 was the primary east-west highway route across Montana, connecting the major cities in the southern half of the state. Still the state's most important east-west travel corridor, the route is today served by Interstate 90 and Interstate 94 which roughly follow the same route as the Northern Pacific. U.S. Routes 2 and 12 and Montana Highway 200 also traverse the entire state from east to west.
Montana's only north-south Interstate Highway is Interstate 15. Other major north-south highways include U.S. Routes 87, 89, 93 and 191. Interstate 25 terminates at I-90 in Wyoming, south of the Montana border.
Montana and South Dakota are the only states to share a land border which is not traversed by a paved road. Highway 212, the primary paved route between the two, passes through the northeast corner of Wyoming between Montana and South Dakota.
Law and government
The current Governor is Steve Bullock, a Democrat elected in 2012 and sworn in on January 7, 2013. His predecessor in office was two-term governor, Brian Schweitzer. Montana's two U.S. senators are Jon Tester (Democrat) and Steve Daines (Republican). The state's congressional representative is currently Republican Ryan Zinke.
In 1914 Montana granted women the vote and in 1916 became the first state to elect a woman, Progressive Republican Jeannette Rankin, to Congress.
Montana is an Alcoholic beverage control state. It is an equitable distribution and no-fault divorce state. It is one of five states to have no sales tax.
Politics
Politics in the state has been competitive, with the Democrats usually holding an edge, thanks to the support among unionized miners and railroad workers. Large-scale battles revolved around the giant Anaconda Copper company, based in Butte and controlled by Rockefeller interests, until it closed in the 1970s. Until 1959, the company owned five of the state's six largest newspapers.
Historically, Montana is a swing state of cross-ticket voters who tend to fill elected offices with individuals from both parties. Through the mid-20th century, the state had a tradition of "sending the liberals to Washington and the conservatives to Helena." Between 1988 and 2006, the pattern flipped, with voters more likely to elect conservatives to federal offices. There have also been long-term shifts of party control. From 1968 through 1988, the state was dominated by the Democratic Party, with Democratic governors for a 20-year period, and a Democratic majority of both the national congressional delegation and during many sessions of the state legislature. This pattern shifted, beginning with the 1988 election, when Montana elected a Republican governor for the first time since 1964 and sent a Republican to the U.S. Senate for the first time since 1948. This shift continued with the reapportionment of the state's legislative districts that took effect in 1994, when the Republican Party took control of both chambers of the state legislature, consolidating a Republican party dominance that lasted until the 2004 reapportionment produced more swing districts and a brief period of Democratic legislative majorities in the mid-2000s.
In more recent presidential elections, Montana has voted for the Republican candidate in all but two elections from 1952 to the present. The state last supported a Democrat for president in 1992, when Bill Clinton won a plurality victory. Overall, since 1889 the state has voted for Democratic governors 60 percent of the time and Democratic presidents 40 percent of the time. In the 2008 presidential election, Montana was considered a swing state and was ultimately won by Republican John McCain, albeit by a narrow margin of two percent.
At the state level, the pattern of split-ticket voting and divided government holds. Democrats currently hold one of the state's U.S. Senate seats, as well as four of the five statewide offices (Governor, Superintendent of Public Instruction, Secretary of State and State Auditor). The lone congressional district has been Republican since 1996, and in 2014 Steve Daines won one of the state's Senate seats for the GOP. The legislative branch had split party control between the House and Senate in most years between 2004 and 2010, when the mid-term elections returned both chambers to Republican control. As of 2015, the state Senate is controlled by the Republicans 29 to 21, and the state House of Representatives 59 to 41. Historically, Republicans are strongest in the east, while Democrats are strongest in the west.
Montana currently has only one representative in the U.S. House, having lost its second district in the 1990 census reapportionment. Montana's single congressional district holds the largest population of any district in the country, which means its one member in the House of Representatives represents more people than any other member of the U.S. House (see List of U.S. states by population). Montana's population grew at about the national average during the 2000s, and it failed to regain its second seat in 2010. Like other states, Montana has two senators.
Current trends
An October 2013 Montana State University Billings survey found that 46.6 percent of Montana voters supported the legalization of same-sex marriage, while 42.6 percent opposed it and 10.8 percent were not sure. (2013 State Poll, msubillings.edu)
Cities and towns
thumb|Missoula
Montana has 56 counties, and the United States Census Bureau states that Montana contains 364 "places", broken down into 129 incorporated places and 235 census-designated places. Incorporated places consist of 52 cities, 75 towns, and two consolidated city-counties. Montana has one city, Billings, with a population over 100,000; and two cities with populations over 50,000, Missoula and Great Falls. These three communities are considered the centers of Montana's three Metropolitan Statistical Areas.
The state also has five Micropolitan Statistical Areas centered on Bozeman, Butte, Helena, Kalispell and Havre. These communities, excluding Havre, are colloquially known as the "big 7" Montana cities, as they are consistently the seven largest communities in Montana, with a significant population difference when these communities are compared to those that are 8th and lower on the list. According to the 2010 U.S. Census, the seven most populous cities in Montana, in rank order, are Billings, Missoula, Great Falls, Bozeman, Butte, Helena and Kalispell. Based on 2013 census numbers, they collectively contain 35 percent of Montana's population, and the counties containing these communities hold 62 percent of the state's population. The geographic center of population of Montana is located in sparsely populated Meagher County, in the town of White Sulphur Springs.
State symbols
thumb|150px|Montana's state quarter, released in 2007
Montana's motto, Oro y Plata, Spanish for "Gold and Silver", recognizing the significant role of mining, was first adopted in 1865, when Montana was still a territory. A state seal with a miner's pick and shovel above the motto, surrounded by the mountains and the Great Falls of the Missouri River, was adopted during the first meeting of the territorial legislature in 1864–65. The design was only slightly modified after Montana became a state and adopted it as the Great Seal of the State of Montana, enacted by the legislature in 1893. The state flower, the bitterroot, was adopted in 1895 with the support of a group called the Floral Emblem Association, which formed after Montana's Women's Christian Temperance Union adopted the bitterroot as the organization's state flower. All other symbols were adopted throughout the 20th century, save for Montana's newest symbol, the state butterfly, the mourning cloak, adopted in 2001, and the state lullaby, "Montana Lullaby", adopted in 2007.
The state song was not composed until 21 years after statehood, when a musical troupe led by Joseph E. Howard stopped in Butte in September 1910. A former member of the troupe who lived in Butte buttonholed Howard at an after-show party, asking him to compose a song about Montana, and enlisted another partygoer, the city editor of the Butte Miner newspaper, Charles C. Cohan, to help. The two men worked up a basic melody and lyrics in about a half-hour for the entertainment of party guests, then finished the song later that evening, with an arrangement worked up the following day. Upon arriving in Helena, Howard's troupe performed 12 encores of the new song to an enthusiastic audience, and the governor proclaimed it the state song on the spot, though formal legislative recognition did not occur until 1945. Montana is one of only three states to have a "state ballad", "Montana Melody", chosen by the legislature in 1983. Montana was also the first state to adopt a state lullaby.
Montana schoolchildren played a significant role in selecting several state symbols. The state tree, the ponderosa pine, was selected by Montana schoolchildren as the preferred state tree by an overwhelming majority in a referendum held in 1908. However, the legislature did not designate a state tree until 1949, when the Montana Federation of Garden Clubs, with the support of the state forester, lobbied for formal recognition. Schoolchildren also chose the western meadowlark as the state bird, in a 1930 vote, and the legislature acted to endorse this decision in 1931. Similarly, the secretary of state sponsored a children's vote in 1981 to choose a state animal, and after 74 animals were nominated, the grizzly bear won over the elk by a 2–1 margin. The students of Livingston started a statewide school petition drive plus lobbied the governor and the state legislature to name the Maiasaura as the state fossil in 1985.
Various community civic groups also played a role in selecting the state grass and the state gemstones. When broadcaster Norma Ashby discovered there was no state fish, she initiated a drive via her television show, Today in Montana, and an informal citizen's election to select a state fish resulted in a win for the blackspotted cutthroat trout after hot competition from the Arctic grayling. The legislature in turn adopted this recommendation by a wide margin.
Symbols of Montana (designation, name, year enacted):
State seal: 1893
State flag
State animal: Grizzly bear (Ursus arctos horribilis), 1983
State bird: Western meadowlark (Sturnella neglecta), 1931
State butterfly: Mourning cloak (Nymphalis antiopa), 2001
State fish: Blackspotted cutthroat trout (Oncorhynchus clarkii), 1977
State flower: Bitterroot (Lewisia rediviva), 1895
State fossil: Duck-billed dinosaur (Maiasaura peeblesorum), 1985
State gemstones: Sapphire and agate, 1969
State grass: Bluebunch wheatgrass (Pseudoroegneria spicata), 1973
State motto: "Oro y Plata" (Spanish for "Gold and Silver"), 1865
State music
State tree: Ponderosa pine (Pinus ponderosa), 1949
See also
Outline of Montana
Index of Montana-related articles
References
Bibliography
Further reading
External links
Census of Montana
General Information About Montana
List of Searchable Databases Produced by Montana State Agencies
Montana Energy Data & Statistics – From the U.S. Department of Energy
Montana Historical Society
Montana Official Travel Information Site
Montana Official Website
Montana State Facts From the U.S. Department of Agriculture
USGS Real-time, Geographic, and Other Scientific Resources of Montana
Category:States and territories established in 1889
Category:States of the United States
Category:Western United States
Category:1889 establishments in the United States | 19,978 | 2017-01 |
Pain | Pain is a distressing feeling often caused by intense or damaging stimuli, such as stubbing a toe, burning a finger, putting alcohol on a cut, or bumping the "funny bone". (These examples represent, respectively, the three classes of nociceptive pain - mechanical, thermal and chemical - and neuropathic pain.) Because it is a complex, subjective phenomenon, defining pain has been a challenge. The International Association for the Study of Pain's widely used definition states: "Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage." In medical diagnosis, pain is regarded as a symptom of an underlying condition.
Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease.
Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, hypnotic suggestion, excitement, or distraction can significantly affect pain's intensity or unpleasantness. In debates over physician-assisted suicide and euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives.
One judgment on the value of pain is given by the German philosopher Friedrich Nietzsche, who wrote: "Only great pain is the ultimate liberator of the spirit….I doubt that such pain makes us 'better'; but I know that it makes us more profound." Nietzsche and philosophers influenced by him thus oppose the entirely negative valuation of pain, instead holding that "What does not destroy me, makes me stronger."
Classification
In 1994, responding to the need for a more useful system for describing chronic pain, the International Association for the Study of Pain (IASP) classified pain according to specific characteristics:
region of the body involved (e.g. abdomen, lower limbs),
system whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal),
duration and pattern of occurrence,
intensity and time since onset, and
cause
However, this system has been criticized by Clifford J. Woolf and others as inadequate for guiding research and treatment.
Woolf suggests three classes of pain:
nociceptive pain,
inflammatory pain which is associated with tissue damage and the infiltration of immune cells, and
pathological pain which is a disease state caused by damage to the nervous system or by its abnormal function (e.g. fibromyalgia, peripheral neuropathy, tension type headache, etc.).
Duration
Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called chronic or persistent, and pain that resolves quickly is called acute. Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time from onset; the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply acute to pain that lasts less than 30 days, chronic to pain of more than six months' duration, and subacute to pain that lasts from one to six months. A popular alternative definition of chronic pain, involving no arbitrarily fixed durations, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as cancer pain or else as benign.
Nociceptive
Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. Superficial pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.
Neuropathic
Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain.
Phantom
Phantom pain is pain felt in a part of the body that has been lost or from which the brain no longer receives signals. It is a type of neuropathic pain. Phantom limb pain is a common experience of amputees.
The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72 percent of patients had phantom limb pain, and six months later, 65 percent reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts a day, or it may occur only once every week or two. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation.
Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produces local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients.
Mirror box therapy produces the illusion of movement and touch in a phantom limb which in turn may cause a reduction in pain.
Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.
Psychogenic
Psychogenic pain, also called psychalgia or somatoform pain, is pain caused, increased, or prolonged by mental, emotional, or behavioral factors. (Cleveland Clinic, Health information; "Psychogenic pain - definition from Biology-Online.org", Biology-online.org, retrieved 2008-11-05.)
Headache, back pain, and stomach pain are sometimes diagnosed as psychogenic. Sufferers are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less actual or hurtful than pain from any other source. ("International Association for the Study of Pain | Pain Definitions", retrieved 12 October 2010.)
People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points the other way, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.
Breakthrough pain
Breakthrough pain is transitory acute pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients who often have background pain that is generally well-controlled by medications, but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl.
Incident pain
Incident pain is pain that arises as a result of activity, such as movement of an arthritic joint, stretching a wound, etc.
Pain asymbolia and insensitivity
The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury. (Beecher, HK (1959). Measurement of subjective responses. New York: Oxford University Press; cited in Melzack, R; Wall, PD (1996). The challenge of pain (2 ed.). London: Penguin. p. 7. ISBN 978-0-14-025670-3.)
Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. (Nikola Grahek, Feeling pain and being in pain, Oldenburg, 2001. ISBN 3-8142-0780-7.) Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus.
Insensitivity to pain may also result from abnormalities in the nervous system. This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly-healing foot ulcers as a result of decreased sensation.
A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly-repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which includes familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli.
Effect on functioning
Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory, mental flexibility, problem solving, and information processing speed. Acute and chronic pain are also associated with increased depression, anxiety, fear, and anger.
Theory
Historical theories
Before the relatively recent discovery of neurons and their role in pain, various body functions were proposed to account for pain. There were several competing early theories of pain among the ancient Greeks: Hippocrates believed that it was due to an imbalance in vital fluids. (Linton. Models of Pain Perception. Elsevier Health, 2005. Print.) In the 11th century, Avicenna theorized that there were a number of feeling senses including touch, pain and titillation.
In 1644, René Descartes theorized that pain was a disturbance that passed down along nerve fibers until the disturbance reached the brain, a development that transformed the perception of pain from a spiritual, mystical experience to a physical, mechanical sensation. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact.
In 1955, DC Sinclair and G Weddell developed peripheral pattern theory, based on a 1934 suggestion by John Paul Nafe. They proposed that all skin fiber endings (with the exception of those innervating hair cells) are identical, and that pain is produced by intense stimulation of these fibers. Another 20th-century theory was gate control theory, introduced by Ronald Melzack and Patrick Wall in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that both thin (pain) and large diameter (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that the more large fiber activity relative to thin fiber activity at the inhibitory cell, the less pain is felt.
Three dimensions of pain
In 1968 Ronald Melzack and Kenneth Casey described pain in terms of its three dimensions: "sensory-discriminative" (sense of the intensity, location, quality and duration of the pain), "affective-motivational" (unpleasantness and urge to escape the unpleasantness), and "cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion). They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities "may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both dimensions of pain, while suggestion and placebos may modulate the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed." (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)
Theory today
thumb|right|Regions of the cerebral cortex associated with pain.
Wilhelm Erb's (1874) "intensive" theory, that a pain signal can be generated by intense enough stimulation of any sensory receptor, has been soundly disproved. Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined.
The pain signal travels from the periphery to the spinal cord along an A-delta or C fiber. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These "first order" neurons enter the spinal cord via Lissauer's tract.
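As a rough illustration of why the sharp "first" pain precedes the dull "second" pain, consider the travel time over an assumed conduction distance of about 1 m (for example, from the foot to the spinal cord); the distance and the representative speeds of 20 m/s and 1 m/s, taken from the ranges above, are assumptions for this sketch rather than figures from the source:

$$ t_{A\delta} \approx \frac{1\ \text{m}}{20\ \text{m/s}} = 0.05\ \text{s}, \qquad t_{C} \approx \frac{1\ \text{m}}{1\ \text{m/s}} = 1\ \text{s} $$

Under these assumptions the A-delta signal arrives roughly 0.95 seconds before the C-fiber signal, consistent with the everyday experience of a sharp pang followed a moment later by a duller, burning ache.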
These A-delta and C fibers connect with "second order" nerve fibers in the central gelatinous substance of the spinal cord (laminae II and III of the dorsal horns). The second order fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract.
Second order neospinothalamic tract neurons carry information from A-delta fibers and terminate at the ventral posterolateral nucleus of the thalamus, where they connect with third order neurons of the somatosensory cortex. Paleospinothalamic neurons carry information from C fibers and terminate throughout the brain stem, a tenth of them in the thalamus and the rest in the medulla, pons and periaqueductal gray matter.
Second order, spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals to the thalamus have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the large A-beta fibers that carry touch, pressure and vibration signals. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain). Pain that is distinctly located also activates primary and secondary somatosensory cortex.
Evolutionary and behavioral role
Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals, and avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy.
In his book, The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins grapples with the question of why pain has to be so very painful. He describes the alternative as a simple, mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins explains that drives must compete with each other within living beings. The most fit creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors (lack of food, too much cold, or serious injuries are felt as agony, whereas minor damage is felt as mere discomfort). This resemblance will not be perfect, however, because natural selection can be a poor designer. The result is often glitches in animals, including supernormal stimuli. Such glitches help explain pains which are not, or at least no longer directly adaptive (e.g. perhaps some forms of toothache, or injury to fingernails).
Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause), may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious.
Thresholds
In pain science, thresholds are measured by gradually increasing the intensity of a stimulus such as electric current or heat applied to the body. The pain perception threshold is the point at which the stimulus begins to hurt, and the pain tolerance threshold is reached when the subject acts to stop the pain.
Differences in pain perception and tolerance thresholds are associated with, among other factors, ethnicity, genetics, and sex. People of Mediterranean origin report as painful some radiant heat intensities that northern Europeans describe as nonpainful. And Italian women tolerate less intense electric shock than Jewish or Native American women. Some individuals in all cultures have significantly higher than normal pain perception and tolerance thresholds. For instance, patients who experience painless heart attacks have higher pain thresholds for electric shock, muscle cramp and heat.
Assessment
A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate its severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". (McCaffery M. (1968). Nursing practice theories related to cognition, bodily pain, and man-environment interactions. Los Angeles: UCLA Students Store.) More recently, McCaffery defined pain as "whatever the experiencing person says it is, existing whenever the experiencing person says it does." To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating which words best describe their pain.
Multidimensional pain inventory
The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Analysis of MPI results by Turk and Rudy (1988) found three classes of chronic pain patient: "(a) dysfunctional, people who perceived the severity of their pain to be high, reported that pain interfered with much of their lives, reported a higher degree of psychological distress caused by pain, and reported low levels of activity; (b) interpersonally distressed, people with a common perception that significant others were not very supportive of their pain problems; and (c) adaptive copers, patients who reported high levels of social support, relatively low levels of pain and perceived interference, and relatively high levels of activity." Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description.
People who are non-verbal
When a person is non-verbal and cannot self-report pain, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding indicate pain, as well as an increase or decrease in vocalizations, changes in routine behavior patterns and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary.
Infants feel pain but lack the language needed to report it, so they communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant not obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than full-term babies.
Other barriers to reporting
The experience of pain has many cultural dimensions. For instance, the way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. (Zborowski M. People in Pain. 1969, San Francisco, CA: Josey-Bass.) An aging adult may not respond to pain in the way that a younger person would. Their ability to recognize pain may be blunted by illness or the use of multiple prescription drugs. Depression may also keep the older adult from reporting they are in pain. The older adult may also stop doing activities they love because it hurts too much. Decline in self-care activities (dressing, grooming, walking, etc.) may also be indicators that the older adult is experiencing pain. The older adult may refrain from reporting pain because they are afraid they will have to have surgery or will be put on a drug they might become addicted to. They may not want others to see them as weak, or may feel there is something impolite or shameful in complaining about pain, or they may feel the pain is deserved punishment for past transgressions. (Lawhorne, L; Passerini, J (1999). Chronic Pain Management in the Long Term Care Setting: Clinical Practice Guidelines. Baltimore, Maryland: American Medical Directors Association. pp. 1–27.)
Cultural barriers can also keep a person from telling someone they are in pain. Religious beliefs may prevent the individual from seeking help. They may feel certain pain treatment is against their religion. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while other cultures feel they should report pain right away and get immediate relief. Gender can also be a factor in reporting pain. Sexual differences can be the result of social and cultural expectations, with women expected to be emotional and show pain and men stoic, keeping pain to themselves.
As an aid to diagnosis
Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection.
Physiological measurement of pain
fMRI brain scanning has been used to measure pain, giving good correlations with self-reported pain. (http://med.stanford.edu/ism/2011/september/pain.html)
Hedonic adaptation
Hedonic adaptation means that actual long-term suffering due to physical illness is often much lower than expected. (http://www.cmu.edu/dietrich/sds/docs/loewenstein/painSufferingAwards.pdf)
Legal awards for pain and suffering
One area where assessments of pain are effectively required to be made is in legal awards for pain and suffering. In the Western world these are typically discretionary awards made by juries and are regarded as difficult to predict, variable and subjective, for instance in the US, UK, Australia and New Zealand. (http://www.ejcl.org/133/art133-2.pdf)
Management
Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end of life care.
This neglect extends to all ages, from neonates to the frail elderly. African and Hispanic Americans are more likely than others to suffer needlessly at the hands of a physician, and women's pain is more likely to be undertreated than men's.
The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a specialty. (Delegates to the International Pain Summit of the International Association for the Study of Pain (2010), "Declaration of Montreal".) It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. ("Physical Medicine and Rehabilitation") In 2011, Human Rights Watch alerted that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain.
Breastfeeding may decrease pain when babies are immunized.
Medication
Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Management of chronic pain, however, is much more difficult and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners. (Thienhaus, O; Cole, BE (2002). "The classification of pain". In Weiner, RS. Pain management: A practical guide for clinicians. American Academy of Pain Management. p. 29. ISBN 0-8493-0926-3; Main, Chris J.; Spanswick, Chris C. (2000). Pain management: an interdisciplinary approach. Churchill Livingstone. ISBN 0-443-05683-8.)
Sugar (sucrose) when taken by mouth reduces pain in newborn babies undergoing some medical procedures (a single lancing of the heel, venipuncture, and intramuscular injections). Sugar does not remove pain from circumcision, and it is unknown if sugar reduces pain for other procedures.
Sugar did not affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet liquid by mouth moderately reduces the rate and duration of crying caused by immunization injection in children between one and twelve months of age.
Psychological
Individuals with more social support experience less cancer pain, take less pain medication, report less labor pain and are less likely to use epidural anesthesia during childbirth or suffer from chest pain after coronary artery bypass surgery. (Eisenberger, NI; Lieberman (2005). "Why it hurts to be left out: The neurocognitive overlap between physical and social pain". In Williams, KD; Forgas, JP; von Hippel, W. The social outcast: Ostracism, social exclusion, rejection, and bullying. New York: Cambridge University Press. pp. 109–127. ISBN 1-84169-424-X.)
Suggestion can significantly affect pain intensity. About 35% of people report marked relief after receiving a saline injection they believe to have been morphine. This "placebo" effect is more pronounced in people who are prone to anxiety, so anxiety reduction may account for some of the effect, but it does not account for all of it. Placebos are more effective in intense pain than mild pain, and they produce progressively weaker effects with repeated administration. (Melzack, R; Wall, PD (1996). The challenge of pain (2 ed.). London: Penguin. pp. 26–28. ISBN 978-0-14-025670-3.)
It is possible for many people with chronic pain to become so absorbed in an activity or entertainment that the pain is no longer felt, or is greatly diminished. (Melzack, R; Wall, PD (1996). The challenge of pain (2 ed.). London: Penguin. pp. 22–23. ISBN 978-0-14-025670-3.)
Cognitive behavioral therapy (CBT) has been shown effective for improving quality of life in those with chronic pain but the reduction in suffering is quite modest, and the CBT method employed seems to have no effect on outcome. Acceptance and Commitment Therapy (ACT) is likely also effective in the treatment of chronic pain.
A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. (Wark, D.M. (2008). What can we do with hypnosis: A brief note. American Journal of Clinical Hypnosis. http://www.tandfonline.com/doi/abs/10.1080/00029157.2008.10401640#.UgGMqZLVArU) A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain in some conditions, though the number of patients enrolled in the studies was low, raising issues of statistical power to detect group differences, and most lacked credible controls for placebo and/or expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions."
Alternative medicine
Pain is the most common reason for people to use complementary and alternative medicine. An analysis of the 13 highest quality studies of pain treatment with acupuncture, published in January 2009, concluded there is little difference in the effect of real, sham and no acupuncture. However other reviews have found benefit. Additionally, there is tentative evidence for a few herbal medicines. There is interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is unconvincing.
A 2003 meta-analysis of randomized clinical trials found that spinal manipulation was "more effective than sham therapy but was no more or less effective than general practitioner care, analgesics, physical therapy, exercise, or back school" in the treatment of low back pain.
Epidemiology
Pain is the main reason for visiting the emergency department in more than 50% of cases and is present in 30% of family practice visits. Several epidemiological studies from different countries have reported widely varying prevalence rates for chronic pain, ranging from 12 to 80% of the population. It becomes more common as people approach death. A study of 4,703 patients found that 26% had pain in the last two years of life, increasing to 46% in the last month.
A survey of 6,636 children (0–18 years of age) found that, of the 5,424 respondents, 54% had experienced pain in the preceding three months. A quarter reported having experienced recurrent or continuous pain for three months or more, and a third of these reported frequent and intense pain. The intensity of chronic pain was higher for girls, and girls' reports of chronic pain increased markedly between ages 12 and 14.
Society and culture
thumb|The okipa ceremony as witnessed by George Catlin, circa 1835.
The nature or meaning of physical pain has been diversely understood by religious or secular traditions from antiquity to modern times.
Physical pain is an important political topic in relation to various issues, including pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance. In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, or for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In some cultures, extreme practices such as mortification of the flesh or painful rites of passage are highly regarded.
Philosophy of pain is a branch of philosophy of mind that deals essentially with physical pain, especially in connection with such views as dualism, identity theory, and functionalism.
More generally, physical pain is often dealt with in culture, religion, philosophy, and society as part of pain in the broad sense, that is, suffering.
Other animals
thumb|right|Portrait of René Descartes by Jan Baptist Weenix 1647-1649
The most reliable method for assessing pain in most humans is by asking a question: a person may report pain that cannot be detected by any known physiological measure. However, like infants, animals cannot answer questions about whether they feel pain; thus the defining criterion for pain in humans cannot be applied to them. Philosophers and scientists have responded to this difficulty in a variety of ways. René Descartes, for example, argued that animals lack consciousness and therefore do not experience pain and suffering in the way that humans do. (Working party of the Nuffield Council on Bioethics (2005). "The ethics of research involving animals". London: Nuffield Council on Bioethics. ISBN 1-904384-10-2. Archived from the original on 25 June 2008. Retrieved 12 January 2010.) Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals (Rollin drafted the 1985 Health Research Extension Act and an animal welfare amendment to the 1985 Food Security Act), writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. (Rollin, B. (1989). The Unheeded Cry: Animal Consciousness, Animal Pain, and Science. New York: Oxford University Press, pp. xii, 117–118, cited in Carbone 2004, p. 150.) In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. The ability of invertebrate species of animals, such as insects, to feel pain and suffering is also unclear. (Sherwin, C.M. (2001). Can invertebrates suffer? Or, how robust is argument-by-analogy? Animal Welfare, 10 (supplement): S103-S118.)
The presence of pain in an animal cannot be known for certain, but it can be inferred through physical and behavioral reactions. Specialists currently believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, might too."Do Invertebrates Feel Pain?" , The Senate Standing Committee on Legal and Constitutional Affairs, The Parliament of Canada Web Site. Retrieved 11 June 2008. As for other animals, plants, or other entities, their ability to feel physical pain is at present a question beyond scientific reach, since no mechanism is known by which they could have such a feeling. In particular, there are no known nociceptors in groups such as plants, fungi, and most insects, except for instance in fruit flies.
In vertebrates, endogenous opioids are neuromodulators that moderate pain by interacting with opioid receptors. Opioids and opioid receptors occur naturally in crustaceans and, although at present no certain conclusion can be drawn,L. Sømme (2005). "Sentience and pain in invertebrates: Report to Norwegian Scientific Committee for Food Safety". Norwegian University of Life Sciences, Oslo. their presence indicates that lobsters may be able to experience pain. Opioids may mediate their pain in the same way as in vertebrates. Veterinary medicine uses, for actual or potential animal pain, the same analgesics and anesthetics as used in humans.
Etymology
First attested in English in 1297, the word peyn comes from the Old French peine, in turn from Latin poena meaning "punishment, penalty"poena, Charlton T. Lewis, Charles Short, A Latin Dictionary, on Perseus Digital Library (in L.L. also meaning "torment, hardship, suffering") and that from Greek ποινή (poine), generally meaning "price paid, penalty, punishment".ποινή , Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Librarypain, Online Etymology Dictionary
References
External links
Pain Stanford Encyclopedia of Philosophy
Category:Nociception
Category:Sensory systems
Category:Suffering
Category:Acute pain | 24,373 | 2017-01 |
Mexico City | Mexico City, officially City of Mexico (abbreviated as "CDMX"), is the capital and most populous city of the United Mexican States. As an "alpha" global city, Mexico City is one of the most important financial centers in the Americas. It is located in the Valley of Mexico (Valle de México), a large valley in the high plateaus at the center of Mexico, at an altitude of 2,240 meters (7,350 ft). The city consists of sixteen municipalities (previously called boroughs).
The 2009 estimated population for the city proper was approximately 8.84 million people, with a land area of about 1,485 square kilometers (573 sq mi).Brian W. Blouet, Olwyn M. Blouet. OECD Reviews of Regional Innovation OECD Reviews of Regional Innovation: 15 Mexican States 2009. OECD Publishing, 2009. p. 418 (p. 299). ISBN 978-92-64-06012-8. According to the most recent definition agreed upon by the federal and state governments, the Greater Mexico City population is 21.2 million people, making it the largest metropolitan area of the Western Hemisphere and both the tenth-largest agglomeration and the largest Spanish-speaking city in the world.
Greater Mexico City had a gross domestic product (GDP) of US$411 billion in 2011, making the Mexico City urban agglomeration one of the largest metropolitan economies in the world.Global MetroMonitor | Brookings Institution . Brookings.edu. Retrieved on April 12, 2014. The city was responsible for generating 15.8% of Mexico's gross domestic product, and the metropolitan area accounted for about 22% of total national GDP. As a stand-alone country, in 2013, Mexico City would have been the fifth-largest economy in Latin America, five times as large as Costa Rica's and about the same size as Peru's.
Mexico’s capital is both the oldest capital city in the Americas and one of two founded by Amerindians (Native Americans), the other being Quito. The city was originally built on an island of Lake Texcoco by the Aztecs in 1325 as Tenochtitlan, which was almost completely destroyed in the 1521 siege of Tenochtitlan, and subsequently redesigned and rebuilt in accordance with the Spanish urban standards. In 1524, the municipality of Mexico City was established, known as México Tenochtitlán, and as of 1585 it was officially known as Ciudad de México (Mexico City). Mexico City served as the political, administrative and financial center of a major part of the Spanish colonial empire. After independence from Spain was achieved, the federal district was created in 1824.
After years of demanding greater political autonomy, residents were given the right to directly elect a Head of Government and the representatives of the unicameral Legislative Assembly by popular vote in 1997. Ever since, the left-wing Party of the Democratic Revolution (PRD) has controlled both of them.Daniel C. Schechter, Josephine Quintero. Lonely Planet Mexico City, City Guide [With Pullout Map]. Third Edition. Lonely Planet, 2008. p. 288 (pp. 20–21). ISBN 978-1-74059-182-9. In recent years, the local government has passed a wave of liberal policies, such as abortion on request, a limited form of euthanasia, no-fault divorce, and same-sex marriage. On January 29, 2016, it ceased to be called the Federal District (Spanish: Distrito Federal or D.F.) and is now in transition to become the country's 32nd federal entity, giving it a level of autonomy comparable to that of a state. Because of a clause in the Mexican Constitution, however, as the seat of the powers of the federation, it can never become a state unless the capital of the country is relocated elsewhere.
History
Aztec period
thumb|right|Tenochtitlan, the Aztec capital
The city of Mexico-Tenochtitlan was founded by the Mexica people in 1325. The old Mexica city that is now simply referred to as Tenochtitlan was built on an island in the center of the inland lake system of the Valley of Mexico, which it shared with a smaller city-state called Tlatelolco.Frances F. Berdan, The Aztecs of Mexico: An Imperial Society, New York: Holt, Rinehart, Winston 1982, pp. 10–14. According to legend, the Mexicas' principal god, Huitzilopochtli indicated the site where they were to build their home by presenting an eagle perched on a nopal cactus with a snake in its beak.Frances F. Berdan, The Aztecs of Mexico: An Imperial Society, New York: Holt, Rinehart, Winston 1982, p. 14.
Between 1325 and 1521, Tenochtitlan grew in size and strength, eventually dominating the other city-states around Lake Texcoco and in the Valley of Mexico. When the Spaniards arrived, the Aztec Empire had reached much of Mesoamerica, touching both the Gulf of Mexico and the Pacific Ocean.
Spanish conquest
thumb|right|The ruins of the Templo Mayor
After landing in Veracruz, Spanish explorer Hernán Cortés advanced upon Tenochtitlán with the aid of many of the other native peoples,
arriving there on November 8, 1519. Cortés and his men marched along the causeway leading into the city from Iztapalapa, and the city's ruler, Moctezuma II, greeted the Spaniards; they exchanged gifts, but the camaraderie did not last long.
Cortés put Moctezuma under house arrest, hoping to rule through him.
Tensions increased until, on the night of June 30, 1520 – during a struggle known as "La Noche Triste" – the Aztecs rose up against the Spanish intrusion and managed to capture or drive out the Europeans and their Tlaxcalan allies. Cortés regrouped at Tlaxcala. The Aztecs thought the Spaniards were permanently gone, and they elected a new king, Cuitláhuac, but he soon died; the next king was Cuauhtémoc.
Cortés began a siege of Tenochtitlán in May 1521. For three months, the city suffered from the lack of food and water as well as the spread of smallpox brought by the Europeans. Cortés and his allies landed their forces in the south of the island and slowly fought their way through the city. Cuauhtémoc surrendered in August 1521. The Spaniards practically razed Tenochtitlán during the final siege of the conquest.
Rebuilding
thumb|right|The Mexico City Metropolitan Cathedral was built by the Spaniards over the ruins of the main Aztec temple
Cortés first settled in Coyoacán, but decided to rebuild the Aztec site to erase all traces of the old order. He did not establish a territory under his own personal rule, but remained loyal to the Spanish crown. The first Spanish viceroy arrived in Mexico City fourteen years later. By that time, the city had again become a city-state, having power that extended far beyond its borders.
Although the Spanish preserved Tenochtitlán's basic layout, they built Catholic churches over the old Aztec temples and claimed the imperial palaces for themselves. Tenochtitlán was renamed "Mexico" because the Spanish found the word easier to pronounce.
Growth of colonial Mexico City
The city had been the capital of the Aztec empire and in the colonial era, Mexico City became the capital of New Spain. The viceroy of Mexico or vice-king lived in the viceregal palace on the main square or Zócalo. The Mexico City Metropolitan Cathedral, the seat of the Archbishopric of New Spain, was constructed on another side of the Zócalo, as was the archbishop's palace, and across from it the building housing the City Council or ayuntamiento of the city.
A famous late seventeenth-century painting of the Zócalo by Cristóbal de Villalpando depicts the main square, which had been the old Aztec ceremonial center. The existing central place of the Aztecs was effectively and permanently transformed to the ceremonial center and seat of power during the colonial period, and remains to this day in modern Mexico, the central place of the nation.
The rebuilding of the city after the siege of Tenochtitlan was accomplished by the abundant indigenous labor in the surrounding area. Franciscan friar Toribio de Benavente Motolinia, one of the Twelve Apostles of Mexico who arrived in New Spain in 1524, described the rebuilding of the city as one of the afflictions or plagues of the early period:
The seventh plague was the construction of the great City of Mexico, which, during the early years used more people than in the construction of Jerusalem. The crowds of laborers were so numerous that one could hardly move in the streets and causeways, although they are very wide. Many died from being crushed by beams, or falling from high places, or in tearing down old buildings for new ones.Toribio de Benavente Motolinia, Motolinia's History of the Indians of New Spain, translated and edited by Elizabeth Adnros Foster. Wesport: Greenwood Press, (1950) 1973, pp. 41–42 Preconquest Tenochtitlan was built in the center of the inland lake system, with the city reachable by canoe and by wide causeways to the mainland. The causeways were rebuilt under Spanish rule with indigenous labor.
Colonial Spanish cities were constructed on a grid pattern, if no geographical obstacle prevented it. In Mexico City, the Zócalo (main square) was the central place from which the grid was then built outward. The Spanish lived in the area closest to the main square in what was known as the traza, in orderly, well laid-out streets. Indian residences were outside that exclusive zone and houses were haphazardly located.Edmundo O'Gorman, Reflexiones sobre la distribución urbana coloinal de la ciudad de México, Mexico 1938, pp. 16ff.
Spaniards sought to keep Indians separate from Spaniards, but since the Zócalo was a center of commerce for Indians, they were a constant presence in the central area, so strict segregation was never enforced.Magnus Mörner and Charles Gibson, "Diego Muñoz Camargo and the Segregation Policy of the Spanish Crown," Hispanic American Historical Review, vol. 42, pp. 558ff. At intervals, the Zócalo was the site of major celebrations as well as executions. It was also the site of two major riots in the seventeenth century, one in 1624, the other in 1692.Ida Altman, Sarah Cline, and Javier Pescador, The Early History of Greater Mexico, Pearson 2003, pp. 246–249.
The city grew as the population did, coming up against the lake's waters. As the depth of the lake water fluctuated, Mexico City was subject to periodic flooding. A major labor draft, the desagüe, compelled thousands of Indians over the colonial period to work on infrastructure to prevent flooding. Floods were not only an inconvenience but also a health hazard, since during flood periods human waste polluted the city's streets. By draining the area, the mosquito population dropped as did the frequency of the diseases they spread. However, draining the wetlands also changed the habitat for fish and birds and the areas accessible for Indian cultivation close to the capital.Noble David Cook, Born to Die: Disease and New World Conquest, 1492–1650. New York: Cambridge University Press 1998.
The 16th century saw a proliferation of churches, many of which can still be seen today in the historic center.
Economically, Mexico City prospered as a result of trade. Unlike Brazil or Peru, Mexico had easy contact with both the Atlantic and Pacific worlds. Although the Spanish crown tried to completely regulate all commerce in the city, it had only partial success.
thumb|right|Castle of Chapultepec
The concept of nobility flourished in New Spain in a way not seen in other parts of the Americas. Spaniards encountered a society in which the concept of nobility mirrored that of their own. Spaniards respected the indigenous order of nobility and added to it. In the ensuing centuries, possession of a noble title in Mexico did not mean one exercised great political power, for one's power was limited even if the accumulation of wealth was not. The concept of nobility in Mexico was not political but rather a very conservative Spanish social one, based on proving the worthiness of the family. Most of these families proved their worth by making fortunes in New Spain outside of the city itself, then spending the revenues in the capital, building churches, supporting charities and building extravagant palatial homes. The craze to build the most opulent residence possible reached its height in the last half of the 18th century. Many of these palaces can still be seen today, leading to Mexico City's nickname of "The City of Palaces", given by Alexander von Humboldt.
The Grito de Dolores ("Cry of Dolores"), also known as El Grito de la Independencia ("Cry of Independence"), marked the beginning of the Mexican War of Independence. The Battle of Guanajuato, the first major engagement of the insurgency, occurred four days later. After a decade of war, Mexico's independence from Spain was effectively declared in the Declaration of Independence of the Mexican Empire on September 27, 1821. Unrest followed for the next several decades, as different factions fought for control of Mexico.
The Mexican Federal District was established by the new government through the signing of the new constitution, in which the concept of a federal district was adapted from the U.S. Constitution. Before this designation, Mexico City had served as the seat of government for both the State of Mexico and the nation as a whole. Texcoco and then Toluca became the capital of the State of Mexico.
The Battle of Mexico City in the U.S.–Mexican War of 1847
The Battle for Mexico City was the series of engagements from September 8 to 15, 1847, in the general vicinity of Mexico City during the U.S.–Mexican War. Included are major actions at the battles of Molino del Rey and Chapultepec, culminating with the fall of Mexico City. The U.S. Army under Winfield Scott scored a major success that ended the war. The American invasion of the Federal District was first resisted during the Battle of Churubusco on August 20, where the Saint Patrick's Battalion, composed primarily of Catholic Irish and German immigrants but also Canadians, English, French, Italians, Poles, Scots, Spaniards, Swiss, and Mexicans, fought for the Mexican cause and repelled the initial American attacks. After the Saint Patrick's Battalion was defeated, the United States pushed combat units deep into Mexico, and the war drew to a close with the capture of Mexico City by the U.S. Army's 1st, 2nd, 3rd and 4th Divisions, following the earlier capture of Veracruz. The invasion culminated with the storming of Chapultepec Castle in the city itself.
During this battle, on September 13, the 4th Division, under John A. Quitman, spearheaded the attack against Chapultepec and carried the castle. Future Confederate generals George E. Pickett and James Longstreet participated in the attack. Serving in the Mexican defense were the cadets later immortalized as Los Niños Héroes (the "Boy Heroes"). The Mexican forces fell back from Chapultepec and retreated within the city. Attacks on the Belén and San Cosme Gates came afterwards. The Treaty of Guadalupe Hidalgo was signed in what is now the far north of the city.
Porfirian era (1876–1911)
thumb|right|French-styled Porfirian houses in Colonia Roma, whose architectural legacy remains in several central neighborhoods of the city such as Condesa, Zona Rosa, Downtown Mexico City and San Miguel Chapultepec
Events such as the Mexican–American War, the French Intervention and the Reform War left the city relatively untouched and it continued to grow, especially during the rule of President Porfirio Díaz. During this time the city developed a modern infrastructure, such as roads, schools, transportation systems and communication systems. However the regime concentrated resources and wealth into the city while the rest of the country languished in poverty.
Under the rule of Porfirio Díaz, Mexico City experienced a massive transformation. Díaz's goal was to create a city which could rival the great European cities. He and his government concluded that they would use Paris as a model, while still retaining remnants of Amerindian and Hispanic elements. This style of Mexican-French fusion architecture became colloquially known as Porfirian architecture, and it was heavily influenced by Paris's Haussmannization.
During this era of Porfirian rule, the city underwent an extensive modernization. Many Spanish Colonial style buildings were destroyed, replaced by new much larger Porfirian institutions and many outlying rural zones were transformed into urban or industrialized districts with most having electrical, gas and sewage utilities by 1908. While the initial focus was on developing modern hospitals, schools, factories and massive public works, perhaps the most long-lasting effects of the Porfirian modernization were creation of the Colonia Roma area and the development of Reforma Avenue. Many of Mexico City's major attractions and landmarks were built during this era in this style.
Diaz's plans called for the entire city to eventually be modernized or rebuilt in the Porfirian/French style of the Colonia Roma; but the Mexican Revolution began soon afterward and the plans never came to fruition, with many projects left half-completed. One of the best examples of this is the Monument to the Mexican Revolution. Originally the monument was to be the main dome of Diaz's new senate hall, but when the revolution erupted only the dome and its supporting pillars had been completed. The unfinished structure was subsequently seen by many Mexicans as a symbol that the Porfirian era was over once and for all, and it was turned into a monument to victory over Diaz.
Mexican Revolution (1910–1920)
thumb|right|Francisco Villa and Emiliano Zapata entering Mexico City (1914)
The capital escaped the worst of the violence of the ten-year conflict of the Mexican Revolution. The most significant episode of this period for the city was the February 1913 La decena trágica ("The Ten Tragic Days"), when forces counter to the elected government of Francisco I. Madero staged a successful coup. The center of the city was subjected to artillery attacks from the army stronghold of the ciudadela or citadel, with significant civilian casualties and the undermining of confidence in the Madero government. Victoriano Huerta, chief general of the Federal Army, saw a chance to take power, forcing Madero and Pino Suarez to sign resignations. The two were murdered later while on their way to Lecumberri prison. Huerta's ouster in July 1914 saw the entry of the armies of Pancho Villa and Emiliano Zapata, but the city did not experience violence. Huerta had abandoned the capital and the conquering armies marched in. Venustiano Carranza's Constitutionalist faction ultimately prevailed in the revolutionary civil war and Carranza took up residence in the presidential palace.
20th century to present
thumb|right|Frida Kahlo and Diego Rivera house in San Angel designed by Juan O'Gorman, an example of 20th Century Modernist Architecture in Mexico
The history of the rest of the 20th century to the present focuses on the phenomenal growth of the city and its environmental and political consequences. In 1900, the population of Mexico City was about 500,000. The city began to grow rapidly westward in the early part of the 20th century and then began to grow upwards in the 1950s, with the Torre Latinoamericana becoming the city's first skyscraper. The 1968 Olympic Games brought about the construction of large sporting facilities.
In 1969 the Metro system was inaugurated.
Explosive growth in the population of the city started from the 1960s, with the population overflowing the boundaries of the Federal District into the neighboring state of Mexico, especially to the north, northwest and northeast. Between 1960 and 1980 the city's population more than doubled to nearly 9 million.
In 1980 half of all the industrial jobs in Mexico were located in Mexico City. Under relentless growth, the Mexico City government could barely keep up with services. Villagers from the countryside who continued to pour into the city to escape poverty only compounded the city's problems. With no housing available, they took over lands surrounding the city, creating huge shantytowns that extended for many miles. This caused serious air pollution in Mexico City and water pollution problems, as well as subsidence due to overextraction of groundwater. Air and water pollution has been contained and improved in several areas due to government programs, the renovation of vehicles and the modernization of public transportation.
The autocratic government that had ruled Mexico City since the Revolution was tolerated, mostly because of the continued economic expansion since World War II. This was the case even though this government could not handle the population and pollution problems adequately. Nevertheless, discontent and protests began in the 1960s, leading to the massacre of an unknown number of protesting students in Tlatelolco.
Three years later, a demonstration in the Maestros avenue, organized by former members of the 1968 student movement, was violently repressed by a paramilitary group called "Los Halcones", composed of gang members and teenagers from many sports clubs who received training in the U.S.
On Thursday, September 19, 1985, at 7:19 am local time, Mexico City was struck by an earthquake of magnitude 8.1 on the Richter magnitude scale. Although this earthquake was not as deadly or destructive as many similar events in Asia and other parts of Latin America, it proved to be a disaster politically for the one-party government. The government was paralyzed by its own bureaucracy and corruption, forcing ordinary citizens to create and direct their own rescue efforts and to reconstruct much of the housing that was lost as well.
However, the last straw may have been the controversial elections of 1988. That year, the presidency was contested between the PRI's candidate, Carlos Salinas de Gortari, and a coalition of left-wing parties led by Cuauhtémoc Cárdenas, son of the former president Lázaro Cárdenas. The vote-counting system "crashed" when the electricity coincidentally went out; when it came back, the winning candidate was suddenly Salinas, even though Cárdenas had held the upper hand.
As a result of the fraudulent election, Cárdenas became a member of the Party of the Democratic Revolution. Discontent over the election eventually led Cuauhtémoc Cárdenas to become the first elected mayor of Mexico City in 1997. Cárdenas promised a more democratic government, and his party claimed some victories against crime, pollution, and other major problems. He resigned in 1999 to run for the presidency.
Geography
Mexico City is located in the Valley of Mexico, sometimes called the Basin of Mexico. This valley is located in the Trans-Mexican Volcanic Belt in the high plateaus of south-central Mexico. It has a minimum altitude of roughly 2,200 meters (7,200 ft) above sea level and is surrounded by mountains and volcanoes that reach elevations of over 5,000 meters (16,000 ft). This valley has no natural drainage outlet for the waters that flow from the mountainsides, making the city vulnerable to flooding. Drainage was engineered through the use of canals and tunnels starting in the 17th century.
Mexico City primarily rests on what was Lake Texcoco. Seismic activity is frequent here. Lake Texcoco was drained starting from the 17th century. Although none of the lake waters remain, the city rests on the lake bed's heavily saturated clay. This soft base is collapsing due to the over-extraction of groundwater, a process known as groundwater-related subsidence. Since the beginning of the 20th century the city has sunk as much as nine meters (30 ft) in some areas. This sinking is causing problems with runoff and wastewater management, leading to flooding problems, especially during the rainy season. The entire lake bed is now paved over and most of the city's remaining forested areas lie in the southern boroughs of Milpa Alta, Tlalpan and Xochimilco.
Geophysical maps of the Federal District: topography, hydrology, and climate patterns.
Climate
thumb|right|Cumbres del Ajusco National Park
Mexico City has a subtropical highland climate (Köppen climate classification Cwb), due to its tropical location but high elevation. The lower region of the valley receives less rainfall than the upper regions of the south; the lower boroughs of Iztapalapa, Iztacalco, Venustiano Carranza and the west portion of Gustavo A. Madero are usually drier and warmer than the upper southern boroughs of Tlalpan and Milpa Alta, a mountainous region of pine and oak trees known as the range of Ajusco.
The average annual temperature varies with the altitude of the borough, and temperatures rarely reach extremes of heat or cold. At the Tacubaya observatory, the lowest temperature ever registered was recorded on February 13, 1960, and the highest on May 9, 1998.
Overall precipitation is heavily concentrated in the summer months, and includes dense hail. The Central Valley of Mexico rarely gets snow during winter; the two last recorded instances of such an event were on March 5, 1940 and January 12, 1967.
The region of the Valley of Mexico receives anti-cyclonic systems. The weak winds of these systems do not allow for the dispersion, outside the basin, of the air pollutants which are produced by the 50,000 industries and 4 million vehicles operating in and around the metropolitan area.
Most of the area's annual rainfall falls between June and September or October, with little or no precipitation the remainder of the year. The area has two main seasons. The rainy season runs from June to October, when winds bring in tropical moisture from the sea; July is the wettest month. The dry season runs from November to May, when the air is relatively drier; December is the driest month. This dry season subdivides into a cold period and a warm period. The cold period spans from November to February, when polar air masses push down from the north and keep the air fairly dry. The warm period extends from March to May, when tropical winds again dominate but do not yet carry enough moisture for rain.
Environment
thumb|Xochimilco trajineras
Originally much of the valley lay beneath the waters of Lake Texcoco, a system of interconnected salt and freshwater lakes. The Aztecs built dikes to separate the fresh water used to raise crops in chinampas and to prevent recurrent floods. These dikes were destroyed during the siege of Tenochtitlan, and during colonial times the Spanish regularly drained the lake to prevent floods. Only a small section of the original lake remains, located outside the Federal District, in the municipality of Atenco, State of Mexico.
Architects Teodoro González de León and Alberto Kalach along with a group of Mexican urbanists, engineers and biologists have developed the project plan for Recovering the City of Lakes. If approved by the government the project will contribute to the supply of water from natural sources to the Valley of Mexico, the creation of new natural spaces, a great improvement in air quality, and greater population establishment planning.
Pollution
thumb|right|Air pollution over Mexico City
By the 1990s Mexico City had become infamous as one of the world's most polluted cities; however, the city has become a model for dramatically lowering pollution levels. By 2014 carbon monoxide pollution had dropped dramatically, while levels of sulfur dioxide and nitrogen dioxide were nearly three times lower than in 1992. The levels of signature pollutants in Mexico City are similar to those of Los Angeles. Despite the cleanup, the metropolitan area is still the most ozone-polluted part of the country, with ozone levels 2.5 times beyond WHO-defined safe limits.
To clean up pollution, the federal and local governments implemented numerous plans, including the constant monitoring and reporting of environmental conditions, such as ozone and nitrogen oxides. When the levels of these two pollutants reached critical levels, contingency actions were implemented, which included closing factories, changing school hours, and extending the "a day without a car" program to two days of the week. The government also instituted industrial technology improvements, a strict biannual vehicle emission inspection and the reformulation of gasoline and diesel fuels. The introduction of Metrobús bus rapid transit and the Ecobici bike-sharing system were among efforts to encourage alternative, greener forms of transportation.
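The contingency scheme described above is essentially threshold-triggered: readings from the monitoring network are compared against critical levels, and a predefined set of restrictions takes effect when they are exceeded. The sketch below only illustrates that logic; the threshold value, function name and action list are invented for this example and do not reflect the official program's actual cutoffs or procedures.

```python
# Hypothetical sketch of a threshold-triggered pollution contingency check.
# The threshold and the action catalogue are illustrative assumptions, not
# the values used by Mexico City's environmental authorities.
CONTINGENCY_THRESHOLD = 150  # assumed air-quality index value that triggers restrictions

def contingency_actions(ozone_index: float, no2_index: float) -> list:
    """Return the restrictions to activate for the given pollutant readings."""
    if max(ozone_index, no2_index) < CONTINGENCY_THRESHOLD:
        return []  # air quality acceptable, no contingency
    return [
        "close or restrict polluting factories",
        "shift school hours",
        "extend the no-driving-day program to two days per week",
    ]

# Example: an elevated ozone reading activates the contingency measures.
print(contingency_actions(ozone_index=165, no2_index=90))
```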
Politics
Federal District
thumb|right|Mexico City's Legislative Assembly building
The Acta Constitutiva de la Federación of January 31, 1824, and the Federal Constitution of October 4, 1824, fixed the political and administrative organization of the United Mexican States after the Mexican War of Independence. In addition, Section XXVIII of Article 50 gave the new Congress the right to choose where the federal government would be located. This location would then be appropriated as federal land, with the federal government acting as the local authority. The two main candidates to become the capital were Mexico City and Querétaro.Boletín Mexicano de Derecho Comparado. Juridicas.unam.mx. Retrieved on April 12, 2014.
Due in large part to the persuasion of representative Servando Teresa de Mier, Mexico City was chosen because it was the center of the country's population and history, even though Querétaro was closer to the center geographically. The choice was official on November 18, 1824, and Congress delineated a surface area of two leagues square (8,800 acres) centered on the Zocalo. This area was then separated from the State of Mexico, forcing that state's government to move from the Palace of the Inquisition (now Museum of Mexican Medicine) in the city to Texcoco. This area did not include the population centers of the towns of Coyoacán, Xochimilco, Mexicaltzingo and Tlalpan, all of which remained as part of the State of Mexico.
In 1854 president Antonio López de Santa Anna enlarged the area of the Federal District almost eightfold, annexing the rural and mountainous areas to secure the strategic mountain passes to the south and southwest to protect the city in the event of a foreign invasion. (The Mexican–American War had just been fought.) The last changes to the limits of the Federal District were made between 1898 and 1902, reducing the area to its current extent by adjusting the southern border with the state of Morelos. By that time, the total number of municipalities within the Federal District was twenty-two.
While the Federal District was ruled by the federal government through an appointed governor, the municipalities within it were autonomous, and this duality of powers created tension between the municipalities and the federal government for more than a century. In 1903, Porfirio Díaz largely reduced the powers of the municipalities within the Federal District. Eventually, in December 1928, the federal government decided to abolish all the municipalities of the Federal District. In place of the municipalities, the Federal District was divided into one "Central Department" and 13 delegaciones (boroughs) administered directly by the government of the Federal District. The Central Department was made up of the former municipalities of Mexico City, Tacuba, Tacubaya and Mixcoac.
In 1941, the General Anaya borough was merged into the Central Department, which was then renamed "Mexico City" (thus reviving the name, but not the autonomous municipality). From 1941 to 1970, the Federal District comprised twelve delegaciones and Mexico City. In 1970 Mexico City was split into four different delegaciones: Cuauhtémoc, Miguel Hidalgo, Venustiano Carranza and Benito Juárez, increasing the number of delegaciones to sixteen. Since then, in a de facto manner, the whole Federal District, whose delegaciones had by then almost formed a single urban area, began to be considered a synonym of Mexico City.Statute of Government of the Federal District
The lack of a de jure stipulation left a legal vacuum that led to a number of sterile discussions about whether one concept had engulfed the other or if the latter had ceased to exist altogether. In 1993 this situation was solved by an amendment to the 44th article of the Constitution whereby Mexico City and the Federal District were set to be the same entity. This amendment was later introduced into the second article of the Statute of Government of the Federal District.
Political structure
thumb|right|The National Palace of Mexico
thumb|right|Offices of the Secretariat of Foreign Affairs
Mexico City, being the seat of the powers of the Union, did not belong to any particular state but to all. Therefore, it was the president, representing the federation, who used to designate the head of government of the Federal District, a position which is sometimes presented outside Mexico as the "Mayor" of Mexico City. In the 1980s, given the dramatic increase in population of the previous decades, the inherent political inconsistencies of the system, as well as the dissatisfaction with the inadequate response of the federal government after the 1985 earthquake, residents began to request political and administrative autonomy to manage their local affairs. Some political groups even proposed that the Federal District be converted into the 32nd state of the federation.
In response to the demands, in 1987 the Federal District received a greater degree of autonomy, with the elaboration of the first Statute of Government (Estatuto de Gobierno) and the creation of an Assembly of Representatives. In the 1990s, this autonomy was further expanded and, starting from 1997, residents can directly elect the head of government of the Federal District and the representatives of a unicameral Legislative Assembly (which succeeded the previous Assembly) by popular vote.
The first elected head of government was Cuauhtémoc Cárdenas. Cárdenas resigned in 1999 to run in the 2000 presidential elections and designated Rosario Robles to succeed him, who became the first woman (elected or otherwise) to govern Mexico City. In 2000 Andrés Manuel López Obrador was elected, and resigned in 2005 to run in the 2006 presidential elections, Alejandro Encinas being designated by the Legislative Assembly to finish the term. In 2006, Marcelo Ebrard was elected for the 2006–2012 period.
The Federal District does not have a constitution, like the states of the Union, but rather a Statute of Government. As part of its recent changes in autonomy, the budget is administered locally; it is proposed by the head of government and approved by the Legislative Assembly. Nonetheless, it is the Congress of the Union that sets the ceiling to internal and external public debt issued by the Federal District.
According to the 44th article of the Mexican Constitution, in case the powers of the Union move to another city, the Federal District will be transformed into a new state, which will be called "State of the Valley of Mexico", with the new limits set by the Congress of the Union.
Elections and government
thumb|right|Mexico City's Head of Government Miguel Ángel Mancera
In 2012, elections were held for the post of head of government and the representatives of the Legislative Assembly. Heads of government are elected for a 6-year period without the possibility of reelection. Traditionally, this position has been considered as the second most important executive office in the country.Hamnett, Brian (1999) A Concise History of Mexico Cambridge University Press; Cambridge, UK, p. 293
The Legislative Assembly of the Federal District is formed, as it is the case in all legislatures in Mexico, by both single-seat and proportional seats, making it a system of parallel voting. The Federal District is divided into 40 electoral constituencies of similar population which elect one representative by first-past-the-post plurality (FPP), locally called "uninominal deputies". The Federal District as a whole constitutes a single constituency for the parallel election of 26 representatives by proportionality (PR) with open-party lists, locally called "plurinominal deputies".
Even though proportionality is confined to the proportional seats to prevent a party from being overrepresented, several restrictions apply in the assignation of the seats; namely, that no party can have more than 63% of all seats, both uninominal and plurinominal. In the 2006 elections the leftist PRD won an absolute majority in the direct uninominal elections, securing 34 of the 40 FPP seats. As such, the PRD was not assigned any plurinominal seat, to comply with the law that prevents over-representation. The overall composition of the Legislative Assembly is:
Political party | FPP | PR | Total
National Regeneration Movement | 18 | 4 | 22
Party of the Democratic Revolution / Labour Party / New Alliance Party | 14 | 7 | 21
National Action Party | 5 | 5 | 10
Institutional Revolutionary Party / Ecologist Green Party of Mexico | 3 | 6 | 9
Social Encounter Party | 0 | 2 | 2
Citizens' Movement | 0 | 1 | 1
Humanist Party | 0 | 1 | 1
Total | 40 | 26 | 66
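As a rough illustration of how the parallel system described above combines the 40 district seats with the 26 proportional seats under a seat cap, the following sketch distributes the PR seats by a simple largest-remainder rule and then applies the 63% ceiling. The allocation rule, party names and vote totals are assumptions for illustration only; the actual statute contains further over-representation limits (which is why the PRD received no proportional seats in 2006) that are not modeled here.

```python
# Minimal sketch of a parallel (mixed) seat allocation with a 63% total-seat cap.
# Largest-remainder apportionment and all figures below are illustrative
# assumptions; the real electoral formula contains additional restrictions.
TOTAL_SEATS = 66          # 40 first-past-the-post + 26 proportional
PR_SEATS = 26
SEAT_CAP = int(0.63 * TOTAL_SEATS)   # no party may hold more than 63% of all seats

def allocate(fpp_seats, pr_votes):
    """Combine district (FPP) seats with PR seats distributed by largest remainder."""
    vote_total = sum(pr_votes.values())
    quotas = {p: v / vote_total * PR_SEATS for p, v in pr_votes.items()}
    pr_alloc = {p: int(q) for p, q in quotas.items()}          # integer parts first
    leftover = PR_SEATS - sum(pr_alloc.values())
    # Hand remaining seats to the largest fractional remainders.
    for p in sorted(quotas, key=lambda x: quotas[x] - int(quotas[x]), reverse=True)[:leftover]:
        pr_alloc[p] += 1
    totals = {p: fpp_seats.get(p, 0) + pr_alloc.get(p, 0) for p in set(fpp_seats) | set(pr_alloc)}
    # Enforce the 63% ceiling on any single party's combined seats.
    # (In practice excess seats would be redistributed, not dropped; omitted for brevity.)
    return {p: min(s, SEAT_CAP) for p, s in totals.items()}

# Hypothetical example with made-up vote counts for three parties.
print(allocate({"A": 34, "B": 4, "C": 2},
               {"A": 450_000, "B": 300_000, "C": 250_000}))
```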
The politics pursued by the administrations of heads of government in Mexico City since the second half of the 20th century have usually been more liberal than those of the rest of the country, whether with the support of the federal government, as was the case with the approval of several comprehensive environmental laws in the 1980s, or through laws recently approved by the Legislative Assembly. In April 2007, the Legislative Assembly expanded provisions on abortion, becoming the first federal entity to expand abortion in Mexico beyond cases of rape and economic hardship, permitting it for any reason should the mother request it before the twelfth week of pregnancy. In December 2009, the Federal District became the first city in Latin America, and one of very few in the world, to legalize same-sex marriage.
Boroughs and neighborhoods
thumb|right|The 16 boroughs of Mexico City
thumb|right|A traditional street in Coyoacan
thumb|right|A German-style home, now a restaurant, in the San Angel neighborhood
For administrative purposes, the Federal District is divided into 16 "delegaciones" or boroughs. While not fully equivalent to a municipality, the 16 boroughs have gained significant autonomy, and since 2000 their heads of government are elected directly by plurality (they were previously appointed by the head of government of the Federal District). Given that Mexico City is organized entirely as a Federal District, most of the city services are provided or organized by the Government of the Federal District and not by the boroughs themselves, while in the constituent states these services would be provided by the municipalities. The 16 boroughs of the Federal District with their 2010 populations are:2010 census tables: INEGI. Select Municipales (Municipal), then Descargar (Download).
1. Álvaro Obregón (pop. 727,034)
2. Azcapotzalco (pop. 414,711)
3. Benito Juárez (pop. 385,439)
4. Coyoacán (pop. 620,416)
5. Cuajimalpa (pop. 186,391)
6. Cuauhtémoc (pop. 531,831)
7. Gustavo A. Madero (pop. 1,185,772)
8. Iztacalco (pop. 384,326)
9. Iztapalapa (pop. 1,815,786)
10. Magdalena Contreras (pop. 239,086)
11. Miguel Hidalgo (pop. 372,889)
12. Milpa Alta (pop. 130,582)
13. Tláhuac (pop. 360,265)
14. Tlalpan (pop. 650,567)
15. Venustiano Carranza (pop. 430,978)
16. Xochimilco (pop. 415,007)
The boroughs are composed of hundreds of colonias, or neighborhoods, which have no jurisdictional autonomy or representation. The Historic Center is the oldest part of the city (along with some other, formerly separate colonial towns such as Coyoacán and San Ángel), with some of the buildings dating back to the 16th century. Other well-known central neighborhoods include Condesa, known for its Art Deco architecture and its restaurant scene; Colonia Roma, a beaux arts neighborhood and artistic and culinary hot-spot; the Zona Rosa, formerly the center of nightlife and restaurants, now reborn as the center of the LGBT and Korean-Mexican communities; and Tepito and La Lagunilla, known for their local working-class folklore and large flea markets. Santa María la Ribera and San Rafael are the latest neighborhoods of magnificent Porfiriato architecture to see the first signs of gentrification.
West of the Historic Center (Centro Histórico) along Paseo de la Reforma are many of the city's wealthiest neighborhoods such as Polanco, Lomas de Chapultepec, Bosques de las Lomas, Santa Fe, and (in the State of Mexico) Interlomas, which are also the city's most important areas of class A office space, corporate headquarters, skyscrapers and shopping malls. Nevertheless, areas of lower income colonias exist in some cases cheek-by-jowl with rich neighborhoods, particularly in the case of Santa Fe.
The south of the city is home to some other high-income neighborhoods such as Colonia del Valle and Jardines del Pedregal, and the formerly separate colonial towns of Coyoacán, San Ángel, and San Jerónimo. Along Avenida Insurgentes from Paseo de la Reforma, near the center, south past the World Trade Center and UNAM university towards the Periférico ring road, is another important corridor of corporate office space. The far southern boroughs of Xochimilco and Tláhuac have a significant rural population with Milpa Alta being entirely rural.
East of the center are mostly lower-income areas with some middle-class neighborhoods such as Jardín Balbuena. Urban sprawl continues further east for many miles into the State of Mexico, including Ciudad Nezahualcoyotl, now increasingly middle-class but once full of informal settlements. These kinds of slums are now found on the eastern edges of the metropolitan area in the Chalco area.
North of the Historic Center, Azcapotzalco and Gustavo A. Madero have important industrial centers and neighborhoods that range from established middle-class colonias such as Claveria and Lindavista to huge low-income housing areas that share hillsides with adjacent municipalities in the State of Mexico. In recent years much of northern Mexico City's industry has moved to nearby municipalities in the State of Mexico. Northwest of Mexico City itself is Ciudad Satélite, a vast middle to upper-middle-class residential and business area.
The Human Development Index report of 2005 shows that there were three boroughs with a very high Human Development Index, 12 with a high HDI value (9 above .85) and one with a medium HDI value (almost high). Benito Juárez borough had the highest HDI of the country (.9510), followed by Miguel Hidalgo, which came up 4th nationally with an HDI of .9189, and Coyoacán (5th nationally) with an HDI value of .9169. Cuajimalpa, Cuauhtémoc and Azcapotzalco also had very high values: .8994 (15th nationally), .8922 (23rd) and .8915 (25th), respectively.
In contrast, the boroughs of Xochimilco (172nd), Tláhuac (177th) and Iztapalapa (183rd) presented the lowest HDI values of the Federal District, with values of .8481, .8473 and .8464 respectively, values still in the global high-HDI range. The only borough that did not present a high HDI was that of rural Milpa Alta, which presented a "medium" HDI of .7984, far below all other boroughs (627th nationally, while the rest stood in the top 200). Mexico City's HDI for the 2005 report was .9012 (very high), and its 2010 value was .9225 (very high), or .8307 by the newer methodology, Mexico's highest.
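The gap between the .9225 figure and the .8307 figure quoted above largely reflects the UNDP's 2010 change of methodology, which, among other revisions, replaced the arithmetic mean of the three dimension indices with a geometric mean. The snippet below contrasts the two aggregation rules with made-up dimension values; the numbers are illustrative only and are not the actual components for Mexico City.

```python
# Contrast of the pre-2010 (arithmetic mean) and post-2010 (geometric mean)
# HDI aggregation rules. The dimension index values are hypothetical.
def hdi_arithmetic(health, education, income):
    return (health + education + income) / 3

def hdi_geometric(health, education, income):
    return (health * education * income) ** (1 / 3)

h, e, i = 0.92, 0.75, 0.88   # made-up dimension indices
print(round(hdi_arithmetic(h, e, i), 4))  # arithmetic mean
print(round(hdi_geometric(h, e, i), 4))   # geometric mean, never higher
# The geometric mean penalizes uneven achievement across dimensions, which is
# one reason a newer-methodology value can be lower than the older one.
```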
Metropolitan area
thumb|right|Growth of Mexico City's area from 1900 to 2000
Greater Mexico City is formed by the Federal District, 60 municipalities from the State of Mexico and one from the state of Hidalgo. Greater Mexico City is the largest metropolitan area in Mexico and the area with the highest population density. , 21,163,226 people live in this urban agglomeration, of which 8,841,916 live in Mexico City proper. In terms of population, the biggest municipalities that are part of Greater Mexico City (excluding Mexico City proper) are:Censo de Población y Vivienda 2010 Resultados preliminares (choose drop down Mexico for state)
Atizapán de Zaragoza (pop. 489,775)
Chimalhuacán (pop. 602,079)
Cuautitlán Izcalli (pop. 532,973)
Ecatepec de Morelos (pop. 1,658,806)
Ixtapaluca (pop. 467,630)
Naucalpan (pop. 833,782)
Nezahualcóyotl (pop. 1,109,363)
Tlalnepantla de Baz (pop. 664,160)
The above municipalities are located in the state of Mexico but are part of the Greater Mexico City area. Approximately 75% (10 million) of the state of México's population live in municipalities that are part of Greater Mexico City's conurbation.
Greater Mexico City was the fastest growing metropolitan area in the country until the late 1980s. Since then, and through a policy of decentralization in order to reduce the environmental pollutants of the growing conurbation, the annual rate of growth of the agglomeration has decreased, and it is lower than that of the other four largest metropolitan areas (namely Greater Guadalajara, Greater Monterrey, Greater Puebla and Greater Toluca) even though it is still positive.Síntesis de Resultados del Conteo 2005 INEGI
The net migration rate of Mexico City proper from 1995 to 2000 was negative, which implies that residents are moving to the suburbs of the metropolitan area, or to other states of Mexico. In addition, some inner suburbs are losing population to outer suburbs, indicating the continuing expansion of Greater Mexico City.
Health
thumb|right|Health Secretary
Mexico City is home to some of the best private hospitals in the country; Hospital Ángeles, Hospital ABC and Médica Sur to name a few. The national public healthcare institution for private-sector employees, IMSS, has its largest facilities in Mexico City—including the National Medical Center and the La Raza Medical Center—and has an annual budget of over 6 billion pesos. The IMSS and other public health institutions, including the ISSSTE (Public Sector Employees' Social Security Institute) and the National Health Ministry (SSA) maintain large specialty facilities in the city. These include the National Institutes of Cardiology, Nutrition, Psychiatry, Oncology, Pediatrics, Rehabilitation, among others.
The World Bank has sponsored a project to curb air pollution through public transport improvements and the Mexican government has started shutting down polluting factories. They have phased out diesel buses and mandated new emission controls on new cars; since 1993 all new cars must be fitted with a catalytic converter, which reduces the emissions released. Trucks must use only liquefied petroleum gas (LPG).
Construction of an underground rail system began in 1968 in order to help curb air pollution problems and alleviate traffic congestion. Today the network comprises more than 200 km of track and carries over 5 million people every day. Fares are kept low to encourage use of the system, and during rush hours the crush is so great that authorities have reserved special carriages for women.
Due to these initiatives and others, the air quality in Mexico City has begun to improve, with the air becoming cleaner since 1991, when the air quality was declared to be a public health risk for 355 days of the year.
Economy
thumb|right|Mexican Stock Exchange in Paseo de la Reforma, Mexico City
Mexico City is one of the most important economic hubs in Latin America. The city proper (Federal District) produces 15.8% of the country's gross domestic product. According to a study conducted by PwC, Mexico City had a GDP of $390 billion, ranking it as the eighth richest city in the world after the greater metropolitan areas of Tokyo, New York City, Los Angeles, Chicago, Paris, London and Osaka/Kobe (and the richest in the whole of Latin America). Excluding the rest of the Mexican economy, Mexico City alone would rank as the 30th largest economy in the world.
Mexico City is the greatest contributor to the country's industrial GDP (15.8%) and also the greatest contributor to the country's GDP in the service sector (25.3%). Due to the limited non-urbanized space at the south—most of which is protected through environmental laws—the contribution of the Federal District in agriculture is the smallest of all federal entities in the country. Mexico City has one of the world's fastest-growing economies and its GDP is set to double by 2020.
In 2002, Mexico City had a Human Development Index score of 0.915, identical to that of South Korea.
The top twelve percent of earners in the city had a notably high mean disposable income in 2007. This spending power makes the city attractive to companies offering prestige and luxury goods.
The economic reforms of President Carlos Salinas de Gortari had a tremendous effect on the city, as a number of businesses, including banks and airlines, were privatized. He also signed the North American Free Trade Agreement (NAFTA). This led to decentralization and a shift in Mexico City's economic base, from manufacturing to services, as most factories moved away to either the State of Mexico, or more commonly to the northern border. By contrast, corporate office buildings set their base in the city.
Demographics
thumb|right|La Villa de Guadalupe, the main Catholic pilgrimage site in Mexico
thumb|right|A Synagogue in Downtown Mexico City
thumb|right|Paifang in the Barrio Chino
Historically, and since Pre-Columbian times, the Valley of Anahuac has been one of the most densely populated areas in Mexico. When the Federal District was created in 1824, the urban area of Mexico City extended approximately to the area of today's Cuauhtémoc borough. At the beginning of the 20th century, the elites began migrating to the south and west, and soon the small towns of Mixcoac and San Ángel were incorporated by the growing conurbation. According to the 1921 census, 54.78% of the city's population was considered Mestizo (Indigenous mixed with European), 22.79% considered European, and 18.74% considered Indigenous. This was the last Mexican census that asked people to self-identify with a heritage other than Amerindian. However, the census had the particularity that, unlike racial/ethnic censuses in other countries, it focused on the perception of cultural heritage rather than on race, leading a good number of white people to identify as of "mixed heritage" because of cultural influence. In 1921, Mexico City had fewer than one million inhabitants.
Up to the 1990s, the Federal District was the most populous federal entity in Mexico, but since then its population has remained stable at around 8.7 million. The growth of the city has extended beyond the limits of the Federal District to 59 municipalities of the state of Mexico and 1 in the state of Hidalgo.Consejo Nacional de Población, México; Delimitación de las zonas metropolitanas de México 2005. Retrieved September 27, 2008. With a population of approximately 19.8 million inhabitants (2008), Total projected population of Distrito Federal and the 60 other municipalities of Zona metropolitana del Valle de México, as defined in 2005. Retrieved September 27, 2008. it is one of the most populous conurbations in the world. Nonetheless, the annual rate of growth of the Metropolitan Area of Mexico City is much lower than that of other large urban agglomerations in Mexico, a phenomenon most likely attributable to the environmental policy of decentralization. The net migration rate of the Federal District from 1995 to 2000 was negative.
Representing around 18.74% of the city's population, indigenous peoples from different regions of Mexico have migrated to the capital in search of better economic opportunities. Nahuatl, Otomi, Mixtec, Zapotec and Mazahua are the indigenous languages with the greatest number of speakers in Mexico City.Población de 5 y más años hablante de lengua indígena por principales lenguas, 2005 INEGI
Genetics
According to a genetic study done in 2011, the average genetic composition of people from Mexico City is 65% Native American, 31% European, and 3% African.http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1002410
Nationality
On the other hand, Mexico City is also home to large communities of expatriates and immigrants, most notably from the rest of North America (U.S. and Canada), from South America (mainly from Argentina and Colombia, but also from Brazil, Chile, Uruguay and Venezuela), from Central America and the Caribbean (mainly from Cuba, Guatemala, El Salvador, Haiti and Honduras); from Europe (mainly from Spain, Germany and Switzerland, but also from Czech Republic, Hungary, France, Italy, Ireland, the Netherlands, Poland and Romania), from the Middle East (mainly from Egypt, Lebanon and Syria); and recently from Asia-Pacific (mainly from China and South Korea). Historically since the era of New Spain, many Filipinos settled in the city and have become integrated in Mexican society. While no official figures have been reported, population estimates of each of these communities are quite significant.
Mexico City is home to the largest population of U.S. Americans living outside the United States. Current estimates are as high as 700,000 U.S. Americans living in Mexico City, while in 1999 the U.S. Bureau of Consular Affairs estimated over 440,000 Americans lived in the Mexico City Metropolitan Area.
Religion
thumb|First Communion in Mexico City
The majority (82%) of the residents in Mexico City are Roman Catholic, higher than the national percentage, though it has been decreasing over the last decades.Volumen y porcentaje de la población de 5 y más años católica por entidad federativa, 2010 INEGI Many other religions and philosophies are also practiced in the city: many different types of Protestant groups, different types of Jewish communities, Buddhist, Islamic and other spiritual and philosophical groups. There are also growing numbers of irreligious people, whether agnostic or atheist.
Transportation
Public transportation
thumb|Mexico City Metro
Metro
Mexico City is served by the Sistema de Transporte Colectivo, a metro system which is the largest in Latin America. The first portions were opened in 1969 and it has expanded to 12 lines with 195 stations. The metro transports 4.4 million people every day. It is the 8th busiest metro system in the world, behind Tokyo (10.0 million), Beijing (9.3 million), Shanghai (7.8 million), Seoul (7.3 million), Moscow (6.7 million), Guangzhou (6.2 million), and New York City (4.9 million). It is heavily subsidized and has some of the lowest fares in the world, each trip costing 5.00 pesos (roughly $0.27 USD) from 5:00 am to midnight. Several stations display pre-Columbian artifacts and architecture that were discovered during the metro's construction. However, the metro covers less than half of the total urban area. Metro stations are also identified by icons and glyphs, originally created so that riders who could not read could navigate the system. The specific icons were developed from historical references (characters, sites, pre-Hispanic motifs), linguistic or symbolic elements (glyphs) or location references, and the approach has been emulated in other transportation alternatives in the city and in other Mexican cities. Mexico City is the only city in the world to use this icon-based reference system, which has become a popular-culture trademark of the city.
Suburban rail
A suburban rail system, the Tren Suburbano, serves the metropolitan area beyond the reach of the metro, running to municipalities such as Tlalnepantla and Cuautitlán Izcalli, with future extensions planned to Chalco and La Paz.
Peseros
thumb|left|Eco3292|A pesero or microbús
Peseros are typically half-length passenger buses (known as microbús) that seat 22 passengers and carry up to 28 standing. The city's approximately 28,000 peseros carried up to 60 percent of the city's passengers. In August 2016, Mayor Mancera announced that new pesero vehicles and concessions would be eliminated completely unless they were ecologically friendly vehicles,"No habrá más microbuses en la CDMX: Mancera" (Mancera states that there will not be any more microbuses in Mexico City), El Universal, August 6, 2016 and in October 2016 the city's Secretary of Mobility, Héctor Serrano, stated that by the end of the current administration (2018) there would no longer be any peseros/microbuses circulating at all, and that new full-sized buses would take over the routes."Al término del gobierno de Mancera ya no habrá microbuses: Semovi" ("Semovi says that by the end of Mancera's term there will be no microbuses"), Excelsior, October 10, 2016
Urban buses
City agency Red de Transporte de Pasajeros operates a network of large buses. In 2016, more bus routes were added to replace pesero routes.
In 2016, the SVBUS express bus service was launched, with limited stops and utilizing the city's toll roads on the second-level of the Periférico ring road and Supervía Poniente and connecting Toreo/Cuatro Caminos with Santa Fe, San Jerónimo Lídice and Tepepan near Xochimilco in the southeast.
Suburban buses also leave from the city's main intercity bus stations.
Bus rapid transit
thumb|right|Metrobús bus rapid transit station at Indios Verdes
The city's first bus rapid transit line, the Metrobús, began operation in June 2005, along Avenida Insurgentes. Line 2 opened in December 2008, serving Eje 4 Sur; Line 3 opened in February 2011, serving Eje 1 Poniente; and Line 4 opened in April 2012, connecting the airport with San Lázaro and Buenavista Station at Insurgentes. As the microbuses were removed from its route, it was hoped that the Metrobús could reduce pollution and decrease transit time for passengers. In June 2013, Mexico City's mayor announced two more lines to come: Line 5 serving Eje 3 Oriente and Line 6 serving Eje 5 Norte. As of June 2013, 367 Metrobús buses transported 850,000 passengers daily.
Mexibús bus rapid transit lines serve suburban areas in the State of Mexico and connect to the Mexico City metro.
Trolleybus, light rail, streetcars
thumb|right|An STE trolleybus using a transit-only contraflow lane on Eje Central
Electric transport other than the metro also exists, in the form of several Mexico City trolleybus routes and the Xochimilco Light Rail line, both of which are operated by Servicio de Transportes Eléctricos. The central area's last streetcar line (tramway, or tranvía) closed in 1979.
Roads and car transport
thumb|right|View of the Anillo Periférico and Paseo de la Reforma near Chapultepec
In the late 1970s many arterial roads were redesigned as ejes viales: high-volume one-way roads that, in theory, cross Mexico City proper from side to side. The eje vial network is based on a quasi-Cartesian grid, with the ejes themselves being called, for example, Eje 1 Poniente, Eje Central, and Eje 1 Oriente for the north-south roads, and Eje 2 Sur and Eje 3 Norte for the east-west roads. Ring roads include the Circuito Interior (inner ring) and the Anillo Periférico; the Circuito Exterior Mexiquense ("State of Mexico outer loop") toll road skirts the northeastern and eastern edges of the metropolitan area, the Chamapa-La Venta toll road skirts the northwestern edge, and the Arco Norte completely bypasses the metropolitan area in an arc from northwest (Atlacomulco) to north (Tula, Hidalgo) to east (Puebla). A second, tolled level of the Periférico, colloquially called the segundo piso ("second floor"), was officially opened in 2012, with sections still being completed. The Viaducto Miguel Alemán crosses the city east-west from Observatorio to the airport. In 2013 the Supervía Poniente opened, a toll road linking the new Santa Fe business district with southwestern Mexico City.
An environmental program called Hoy No Circula ("Today Does Not Run", or "One Day without a Car") restricts vehicles that have not passed emissions testing from circulating on certain days, according to the final digit of their license plates, in an attempt to cut down on pollution and traffic congestion. In 2003 the program still restricted 40% of vehicles in the metropolitan area, but with the adoption of stricter emissions standards in 2001 and 2006, most vehicles are now in practice exempt from the circulation restrictions as long as they pass regular emissions tests.
Parking
Street parking in urban neighborhoods is mostly controlled by the franeleros, a.k.a. "viene vienes" (lit. "come on, come on"), who ask drivers for a fee to park, in theory to guard the car, but with the implicit threat that the franelero will damage the car if the fee is not paid. Double parking is common (with franeleros moving the cars as required), reducing the lanes available for passing traffic. To mitigate these problems and to raise revenue, 721 parking meters (as of October 2013) have been installed in the west-central neighborhoods Lomas de Chapultepec, Condesa, Roma, Polanco and Anzures. They operate from 8 AM to 8 PM on weekdays and charge 2 pesos per 15 minutes; offenders' cars are booted, costing about 500 pesos to have removed. Thirty percent of the monthly 16 million-peso (as of October 2013) income from the parking-meter system (named "ecoParq") is earmarked for neighborhood improvements. The granting of the license for all zones exclusively to a new company without experience in operating parking meters, Operadora de Estacionamientos Bicentenario, has generated controversy.
Cycling
thumb|right|Bicycles available for rental in Zona Rosa
The local government continually strives to reduce traffic congestion and has increased incentives to make the city bicycle-friendly. These include North America's second-largest bicycle sharing system, EcoBici, launched in 2010, in which registered residents can take bicycles for 45 minutes at a time with a pre-paid subscription of 300 pesos a year. As of September 2013 there were 276 stations with 4,000 bicycles across an area stretching from the Historic center to Polanco; stations are spaced a short distance from one another and are fully automatic, operated with a transponder-based card. Bicycle-service users have access to several permanent Ciclovías (dedicated bike paths/lanes/streets), including ones along Paseo de la Reforma and Avenida Chapultepec as well as one running from Polanco to Fierro del Toro, which is located south of Cumbres del Ajusco National Park, near the Morelos state line."Ciclovía Reforma", Transeunte The city's initiative is inspired by forward-thinking examples, such as Denmark's Copenhagenization.
Intercity buses
The city has four major bus stations (North, South, Observatorio, TAPO), which comprise one of the world's largest transportation agglomerations, with bus service to many cities across the country and international connections.
Airports
thumb|right|Terminal 2 of the Mexico City airport
Mexico City is served by Mexico City International Airport (IATA Airport Code: MEX). This airport is Latin America's second busiest and one of the largest in traffic, with daily flights to the United States and Canada, the rest of Mexico, Central America and the Caribbean, South America, Europe and Asia. Aeroméxico (SkyTeam) is based at this airport and provides codeshare agreements with non-Mexican airlines that span the entire globe. In 2014, the airport handled well over 34 million passengers, just over 2 million more than the year before.
This traffic exceeds the current capacity of the airport, which has historically concentrated the majority of the country's air traffic. An alternative is Lic. Adolfo López Mateos International Airport (IATA Airport Code: TLC) in nearby Toluca, State of Mexico, although after several airlines terminated service to TLC, its traffic fell to just over 700,000 passengers in 2014 from over 2.1 million four years earlier.
At the Mexico City airport, the government undertook an extensive restructuring program that included the addition of a second terminal, which began operations in 2007, and the enlargement of four other airports (at the nearby cities of Toluca, Querétaro, Puebla and Cuernavaca) that, along with Mexico City's airport, comprise the Grupo Aeroportuario del Valle de México, distributing traffic to different regions in Mexico. The city of Pachuca will also provide additional expansion to central Mexico's airport network. Mexico City's airport is the main hub for 11 of the 21 national airline companies.
During his annual state-of-the-nation address on September 2, 2014, President of Mexico Enrique Peña Nieto unveiled plans for a new international airport to ease the city's notorious air traffic congestion, tentatively slated for a 2018 opening. The new airport, which would have six runways, would cost $9.15 billion and would be built on vacant federal land east of Mexico City International Airport. The goal is eventually to handle 120 million passengers a year, which would make it the busiest airport in the world.
Culture
The Historic center of Mexico City (Centro Histórico) and the "floating gardens" of Xochimilco in the southern borough have been declared World Heritage Sites by UNESCO. Famous landmarks in the Historic Center include the Plaza de la Constitución (Zócalo), the main central square with its epoch-contrasting Spanish-era Metropolitan Cathedral and National Palace, ancient Aztec temple ruins Templo Mayor ("Major Temple") and modern structures, all within a few steps of one another. (The Templo Mayor was discovered in 1978 while workers were digging to place underground electric cables).
The most recognizable icon of Mexico City is the golden Angel of Independence on the wide, elegant avenue Paseo de la Reforma, modeled by order of the Emperor Maximilian of Mexico after the Champs-Élysées in Paris. This avenue was laid out in the 19th century over the Americas' oldest known major roadway to connect the National Palace (seat of government) with the Castle of Chapultepec, the imperial residence. Today, this avenue is an important financial district in which the Mexican Stock Exchange and several corporate headquarters are located. Another important avenue is the Avenida de los Insurgentes, one of the longest single avenues in the world.
Chapultepec Park houses the Chapultepec Castle, now a museum on a hill that overlooks the park, as well as numerous other museums and monuments, the national zoo and the National Museum of Anthropology (which houses the Aztec Calendar Stone). Another piece of architecture is the Fine Arts Palace, a white marble theatre/museum whose weight is such that it has gradually been sinking into the soft ground below. Its construction began during the presidency of Porfirio Díaz, was interrupted by the Mexican Revolution, and was completed in 1934. The Plaza of the Three Cultures in the Tlatelolco neighbourhood, and the shrine and Basilicas of Our Lady of Guadalupe are also important sites. There is a double-decker bus, known as the "Turibus", that circles most of these sites, with timed audio describing the sites in multiple languages as they are passed.
In addition, the city has about 160 museums (the world's greatest concentration in a single metropolitan area), over 100 art galleries, and some 30 concert halls, all of which maintain constant cultural activity throughout the year. It has the third- or fourth-highest number of theatres in the world, after New York, London and perhaps Toronto. Many areas (e.g. Palacio Nacional and the National Institute of Cardiology) have murals painted by Diego Rivera. He and his wife Frida Kahlo lived in Coyoacán, where several of their homes, studios, and art collections are open to the public. The house where Leon Trotsky was initially granted asylum and finally murdered in 1940 is also in Coyoacán.
In addition, there are several restored haciendas that are now restaurants, such as the San Ángel Inn, the Hacienda de Tlalpan and the Hacienda de los Morales.
Art
thumb|right|Mural in the Palacio de Bellas Artes by David Alfaro Siqueiros
Having been the capital of a vast pre-Hispanic empire, then the capital of the richest viceroyalty within the Spanish Empire (ruling over a vast territory in the Americas and the Spanish West Indies), and finally the capital of the United Mexican States, Mexico City has a rich history of artistic expression. Since the Mesoamerican pre-Classic period the inhabitants of the settlements around Lake Texcoco produced many works of art and complex craftsmanship, some of which are today displayed at the world-renowned National Museum of Anthropology and the Templo Mayor museum. While many pieces of pottery and stone engraving have survived, the great majority of Amerindian iconography was destroyed during the Conquest of Mexico.
Much of the early colonial art stemmed from the codices (Aztec illustrated books), which aimed to recover and preserve some Aztec and other Amerindian iconography and history. From then on, artistic expression in Mexico was mostly religious in theme. The Metropolitan Cathedral still displays works by Juan de Rojas and Juan Correa, as well as an oil painting whose authorship has been attributed to Murillo. Secular works of art of this period include the equestrian sculpture of Charles IV of Spain, locally known as El Caballito ("The little horse"). This bronze piece was the work of Manuel Tolsá and stands in the Plaza Tolsá, in front of the Palacio de Minería (Mining Palace). Directly in front of this building is the Museo Nacional de Arte (Munal), the National Museum of Art.
thumb|left|Reception hall at the Museo Nacional de Arte
During the 19th century, an important producer of art was the Academia de San Carlos (San Carlos Art Academy), founded during colonial times, which later became the Escuela Nacional de Artes Plásticas (the National School of Arts), one of UNAM's art schools, covering painting, sculpture and graphic design. Many of the works produced by the students and faculty of that time are now displayed in the Museo Nacional de San Carlos (National Museum of San Carlos). One of the students, José María Velasco, is considered one of the greatest Mexican landscape painters of the 19th century. Porfirio Díaz's regime sponsored the arts, especially those that followed the French school. Popular arts in the form of cartoons and illustrations flourished, e.g. those of José Guadalupe Posada and Manuel Manilla. The permanent collection of the San Carlos Museum also includes paintings by European masters such as Rembrandt, Velázquez, Murillo, and Rubens.
After the Mexican Revolution, an avant-garde artistic movement originated in Mexico City: muralism. Many of the works of muralists José Clemente Orozco, David Alfaro Siqueiros and Diego Rivera are displayed in numerous buildings in the city, most notably at the National Palace and the Palacio de Bellas Artes. Frida Kahlo, Rivera's wife, whose work carried a strong nationalist expression, was also one of the most renowned Mexican painters. Her house has become a museum that displays many of her works.
The former home of Rivera muse Dolores Olmedo houses the namesake museum. The facility is in Xochimilco borough in southern Mexico City and includes several buildings surrounded by sprawling manicured lawns. It houses a large collection of Rivera and Kahlo paintings and drawings, as well as living Xoloizcuintles (Mexican Hairless Dog). It also regularly hosts small but important temporary exhibits of classical and modern art (e.g. Venetian Masters and Contemporary New York artists).
During the 20th century, many artists immigrated to Mexico City from different regions of Mexico, such as Leopoldo Méndez, an engraver from Veracruz, who supported the creation of the socialist Taller de la Gráfica Popular (Popular Graphics Workshop), designed to help blue-collar workers find a venue to express their art. Other painters came from abroad, such as Catalan painter Remedios Varo and other Spanish and Jewish exiles. It was in the second half of the 20th century that the artistic movement began to drift apart from the Revolutionary theme. José Luis Cuevas opted for a modernist style in contrast to the muralist movement associated with social politics.
Museums
thumb|right|Museo Frida Kahlo
thumb|right|Museo Soumaya
Mexico City has numerous museums dedicated to art, including Mexican colonial, modern and contemporary art, and international art. The Museo Tamayo was opened in the mid-1980s to house the collection of international contemporary art donated by famed Mexican (born in the state of Oaxaca) painter Rufino Tamayo. The collection includes pieces by Picasso, Klee, Kandinsky, Warhol and many others, though most of the collection is stored while visiting exhibits are shown. The Museo de Arte Moderno (Museum of Modern Art) is a repository of Mexican artists from the 20th century, including Rivera, Orozco, Siqueiros, Kahlo, Gerzso, Carrington, Tamayo, among others, and also regularly hosts temporary exhibits of international modern art. In southern Mexico City, the Museo Carrillo Gil (Carrillo Gil Museum) showcases avant-garde artists, as does the University Museum/Contemporary Art (Museo Universitario Arte Contemporáneo – or MUAC), designed by famed Mexican architect Teodoro González de León, inaugurated in late 2008.
The Museo Soumaya, named after the wife of Mexican magnate Carlos Slim, has the largest private collection of original Rodin sculptures outside Paris. It also has a large collection of Dalí sculptures, and recently began showing pieces in its masters collection including El Greco, Velázquez, Picasso and Canaletto. The museum inaugurated a new futuristic-design facility in 2011 just north of Polanco, while maintaining a smaller facility in Plaza Loreto in southern Mexico City. The Colección Júmex is a contemporary art museum located on the sprawling grounds of the Jumex juice company in the northern industrial suburb of Ecatepec. It is said to have the largest private contemporary art collection in Latin America and hosts pieces from its permanent collection as well as traveling exhibits by leading contemporary artists. The new Museo Júmex in Nuevo Polanco was slated to open in November 2013. The Museo de San Ildefonso, housed in the Antiguo Colegio de San Ildefonso, a 17th-century colonnaded palace in Mexico City's historic downtown district, regularly hosts world-class exhibits of Mexican and international art. Recent exhibits have included those on David LaChapelle, Antony Gormley and Ron Mueck. The National Museum of Art (Museo Nacional de Arte) is also located in a former palace in the historic center. It houses a large collection of pieces by all major Mexican artists of the last 400 years and also hosts visiting exhibits.
thumb|left|Reconstruction of the entrance to the Hochob temple in the National Museum of Anthropology
Jack Kerouac, the noted American author, spent extended periods of time in the city, and wrote his masterpiece volume of poetry Mexico City Blues here. Another American author, William S. Burroughs, also lived in the Colonia Roma neighborhood of the city for some time. It was here that he accidentally shot his wife.
Most of Mexico City's more than 150 museums can be visited from Tuesday to Sunday from 10 am to 5 pm, although some of them have extended schedules, such as the Museum of Anthropology and History, which is open until 7 pm. In addition, entrance to most museums is free on Sunday, though in some cases a modest fee may be charged.
Another major addition to the city's museum scene is the Museum of Remembrance and Tolerance (Museo de la Memoria y Tolerancia), inaugurated in early 2011. Originally conceived by two young Mexican women as a Holocaust museum, the idea morphed into a unique museum dedicated to showcasing all major historical events of discrimination and genocide. Permanent exhibits include those on the Holocaust and other large-scale atrocities. It also houses temporary exhibits; one on Tibet was inaugurated by the Dalai Lama in September 2011.
Music, theater and entertainment
thumb|right|The City Theatre
thumb|right|Mexico City Arena
Mexico City is home to a number of orchestras offering season programs. These include the Mexico City Philharmonic,Mexico City Philharmonic which performs at the Sala Ollin Yoliztli; the National Symphony Orchestra, whose home base is the Palacio de Bellas Artes (Palace of the Fine Arts), a masterpiece of Art Nouveau and Art Deco styles; and the Philharmonic Orchestra of the National Autonomous University of Mexico (OFUNAM) and the Minería Symphony Orchestra, both of which perform at the Sala Nezahualcóyotl, the first wrap-around concert hall in the Western Hemisphere when it was inaugurated in 1976. There are also many smaller ensembles that enrich the city's musical scene, including the Carlos Chávez Youth Symphony, the New World Orchestra (Orquesta del Nuevo Mundo), the National Polytechnical Symphony and the Bellas Artes Chamber Orchestra (Orquesta de Cámara de Bellas Artes).
The city is also a leading center of popular culture and music. There are a multitude of venues hosting Spanish- and foreign-language performers. These include the 10,000-seat National Auditorium, which regularly schedules Spanish- and English-language pop and rock artists as well as many of the world's leading performing arts ensembles; the auditorium also broadcasts Grand Opera performances from New York's Metropolitan Opera on giant, high-definition screens. In 2007 the National Auditorium was selected as the world's best venue by multiple genre media.
Other popular sites for pop-artist performances include the 3,000-seat Teatro Metropolitan, the 15,000-seat Palacio de los Deportes, and the larger 50,000-seat Foro Sol Stadium, where popular international artists perform on a regular basis. The Cirque du Soleil has held several seasons at the Carpa Santa Fe, in the Santa Fe district in the western part of the city. There are numerous venues for smaller musical ensembles and solo performers. These include the Hard Rock Live, Bataclán, Foro Scotiabank, Lunario, Circo Volador and Voilá Acoustique. Recent additions include the 20,000-seat Arena Ciudad de México, the 3,000-seat Pepsi Center World Trade Center, and the 2,500-seat Auditorio Blackberry.
The Centro Nacional de las Artes (National Center for the Arts) has several venues for music, theatre and dance. UNAM's main campus, also in the southern part of the city, is home to the Centro Cultural Universitario (the University Culture Center) (CCU). The CCU also houses the National Library, the interactive Universum, Museo de las Ciencias, the Sala Nezahualcóyotl concert hall, several theatres and cinemas, and the new University Museum of Contemporary Art (MUAC).University Museum of Contemporary Art A branch of the National University's CCU cultural center was inaugurated in 2007 in the facilities of the former Ministry of Foreign Affairs, known as Tlatelolco, in north-central Mexico City.
The José Vasconcelos Library, a national library, is located on the grounds of the former Buenavista railroad station in the northern part of the city.
The Papalote children's museum, which houses the world's largest dome screen, is located in the wooded park of Chapultepec, near the Museo Tecnológico and La Feria amusement park. The theme park Six Flags México (the largest amusement park in Latin America) is located in the Ajusco neighborhood, in Tlalpan borough, southern Mexico City. During the winter, the main square of the Zócalo is transformed into a gigantic ice skating rink, said to be the largest in the world after that of Moscow's Red Square.
The Cineteca Nacional (the Mexican Film Library), near the Coyoacán suburb, shows a variety of films, and stages many film festivals, including the annual International Showcase, and many smaller ones ranging from Scandinavian and Uruguayan cinema, to Jewish and LGBT-themed films. Cinépolis and Cinemex, the two biggest film business chains, also have several film festivals throughout the year, with both national and international movies. Mexico City tops the world in number of IMAX theatres, providing residents and visitors access to films ranging from documentaries to popular blockbusters on these especially large, dramatic screens.
Cuisine
Mexico City offers a variety of cuisines. Restaurants specializing in the regional cuisines of Mexico's 31 states are available in the city.
Also available are an array of international cuisines, including Canadian,Mexico food truck promoting Canadian cuisine. CBC News. Retrieved 29–07–15. French, Italian, Croatian, Spanish (including many regional variations), Jewish, Lebanese, Chinese (again with regional variations), Indian, Japanese, Korean, Thai, Vietnamese; and of course fellow Latin American cuisines such as Argentine, Brazilian, and Peruvian.
Haute, fusion, kosher, vegetarian and vegan cuisines are also available, as are restaurants solely based on the concepts of local food and Slow Food.
Mexico City is known for having some of the freshest fish and seafood in Mexico's interior. La Nueva Viga Market is the second largest seafood market in the world after the Tsukiji fish market in Japan.
The city also has several branches of renowned international restaurants and chefs. These include Paris' Au Pied de Cochon and Brasserie Lipp, Philippe (by Philippe Chow); Nobu, Morimoto; Pámpano, owned by Mexican-raised opera legend Plácido Domingo. There are branches of the exclusive Japanese restaurant Suntory, Rome's famed Alfredo, as well as New York steakhouses Morton's and The Palm, and Monte Carlo's BeefBar. Three of the most famous Lima-based Haute Peruvian restaurants, La Mar, Segundo Muelle and Astrid y Gastón have locations in Mexico City.
On the 2014 list of the World's 50 Best Restaurants compiled by the British magazine Restaurant, Mexico City's avant-garde restaurant Pujol (owned by Mexican chef Enrique Olvera) was ranked 20th best. Also notable is the Basque-Mexican fusion restaurant Biko (run and co-owned by Bruno Oteiza and Mikel Alonso), which placed outside the list at 59th, but in previous years has ranked within the top 50.Restaurant, The World's 50 Best Restaurant Awards: 2014
Mexico's award-winning wines are offered at many restaurants, and the city offers unique experiences for tasting the regional spirits, with broad selections of tequila and mezcal.
At the other end of the scale are working class pulque bars known as pulquerías, a challenge for tourists to locate and experience.
Sports
Team | Stadium | League
América | Azteca Stadium | Liga MX
UNAM | University Olympic Stadium | Liga MX
Cruz Azul | Azul Stadium | Liga MX
Diablos Rojos del México | Foro Sol | Mexican League
thumb|right|Azteca Stadium, the 12th largest stadium in the world
Association football is the country's most popular and most televised franchised sport. Its important venues in Mexico City include the Azteca Stadium, home to the Mexico national football team and giants América, which can seat 91,653 fans, making it the biggest stadium in Latin America. The Olympic Stadium in Ciudad Universitaria is home to the football club giants Universidad Nacional, with a seating capacity of over 52,000. The Estadio Azul, which seats 33,042 fans, is near the World Trade Center Mexico City in the Nochebuena neighborhood, and is home to the giants Cruz Azul. The three teams are based in Mexico City and play in the First Division; they are also part, with Guadalajara-based giants Club Deportivo Guadalajara, of Mexico's traditional "Big Four" (though recent years have tended to erode the teams' leading status at least in standings).
The country hosted the FIFA World Cup in 1970 and 1986, and Azteca Stadium is the first stadium in World Cup history to host the final twice.
Mexico City was the first Latin American city to host the Olympic Games, having held the Summer Olympics in 1968, winning bids against Buenos Aires, Lyon and Detroit. The city also hosted the 1955 and 1975 Pan American Games, the latter after Santiago and São Paulo withdrew.
The ICF Flatwater Racing World Championships were hosted here in 1974 and 1994. Lucha libre is a Mexican style of wrestling, and is one of the more popular sports throughout the country. The main venues in the city are Arena México and Arena Coliseo.
thumb|left|Estadio Olímpico Universitario, considered by American architect Frank Lloyd Wright to be the most important building in modern America
From 1962 to 1970 and again from 1986 to 1992, the Autódromo Hermanos Rodríguez hosted the Formula 1 Mexican Grand Prix. From 1980 to 1981 and again from 2002 to 2007, it hosted the Champ Car World Series Gran Premio de México. Beginning in 2005, the NASCAR Nationwide Series ran the Telcel-Motorola México 200. 2005 also marked the first running of the Mexico City 250 by the Grand-Am Rolex Sports Car Series. Both races were removed from their series' schedules for 2009.
Baseball is another sport played professionally in the city. Mexico City is currently home to Mexican League baseball's Mexico City Red Devils (Diablos Rojos del México), considered Triple-A by U.S./Canadian Major League Baseball. The Devils play their home games at the Foro Sol sports and concert venue, adjacent to the Autódromo Hermanos Rodríguez. Mexico City has some 10 Little Leagues for young baseball players.
In 2005, Mexico City became the first city to host an NFL regular season game outside of the United States, at the Azteca Stadium. The crowd of 103,467 people attending this game was the largest ever for a regular season game in NFL history until 2009. The city has also hosted several NBA pre-season games and has hosted international basketball's FIBA Americas Championship, along with north-of-the-border Major League Baseball exhibition games at Foro Sol.
Other sports facilities in Mexico City are the Palacio de los Deportes indoor arena, Francisco Márquez Olympic Swimming Pool, the Hipódromo de Las Américas, the Agustin Melgar Olympic Velodrome, and venues for equestrianism and horse racing, ice hockey, rugby, American-style football, baseball, and basketball.
Bullfighting takes place every Sunday during bullfighting season at the 50,000-seat Plaza México, the world's largest bullring.
Mexico City's golf courses have hosted Women's LPGA action, and two Men's Golf World Cups. Courses throughout the city are available as private as well as public venues.
Education
The National Autonomous University of Mexico (UNAM), located in Mexico City, is the largest university on the continent, with more than 300,000 students from all backgrounds. Three Nobel laureates, several Mexican entrepreneurs and most of Mexico's modern-day presidents are among its former students. UNAM conducts 50% of Mexico's scientific research and has a presence all across the country with satellite campuses, observatories and research centres. UNAM ranked 74th in the Top 200 World University Ranking published by Times Higher Education (then called the Times Higher Education Supplement) in 2006, making it the highest-ranked Spanish-speaking university in the world. The sprawling main campus of the university, known as Ciudad Universitaria, was named a World Heritage Site by UNESCO in 2007.
The second largest higher-education institution is the National Polytechnic Institute (IPN), which includes among many other relevant centers the Centro de Investigación y de Estudios Avanzados (Cinvestav), where varied high-level scientific and technological research is done. Other major higher-education institutions in the city include the Metropolitan Autonomous University (UAM), the National School of Anthropology and History (ENAH), the Instituto Tecnológico Autónomo de México (ITAM), the Monterrey Institute of Technology and Higher Education (3 campuses), the Universidad Panamericana (UP), the Universidad La Salle, the Universidad del Valle de Mexico (UVM), the Universidad Anáhuac, Simon Bolivar University (USB), the Alliant International University, the Universidad Iberoamericana, El Colegio de México (Colmex), Escuela Libre de Derecho and the Centro de Investigación y Docencia Económica, (CIDE).
In addition, the prestigious University of California maintains a campus known as "Casa de California" in the city. The Universidad Tecnológica de México is also in Mexico City.
Unlike public schools in the Mexican states, the curricula of Mexico City's public schools are managed by the federal Secretariat of Public Education. Funding is allocated by the government of Mexico City (in some specific cases, such as El Colegio de México, funding comes from both the city's government and other public and private national and international entities). The city's public high school system is the Instituto de Educación Media Superior del Distrito Federal (IEMS-DF).
A special case is that of El Colegio Nacional, created during the administration of Miguel Alemán Valdés to give Mexico an institution similar to the Collège de France. Its select and privileged group of Mexican scientists and artists, whose membership is for life, includes, among many others, Mario Lavista, Ruy Pérez Tamayo, José Emilio Pacheco, Marcos Moshinsky (d. 2009) and Guillermo Soberón Acevedo. Members are obligated to publicly disclose their works through conferences and public events such as concerts and recitals.
Among its many public and private schools (K–13), the city offers multi-cultural, multi-lingual and international schools attended by Mexican and foreign students. Best known are the Colegio Alemán (German school with three main campuses), the Liceo Mexicano Japonés (Japanese), the Centro Cultural Coreano en México (Korean), the Lycée Franco-Mexicain (French), the American School, The Westhill Institute (American School), the Edron Academy and the Greengates School (British).
thumb|National Polytechnic Institute
thumb|Universidad Iberoamericana Private University
thumb|Biblioteca Vasconcelos
Media
Mexico City is Latin America's leading center for the television, music and film industries. It is also Mexico's most important center for the print media and book publishing industries. Dozens of daily newspapers are published, including El Universal, Excélsior, Reforma and La Jornada. Other major papers include Milenio, Crónica, El Economista and El Financiero. Leading magazines include Expansión, Proceso and Poder, as well as dozens of entertainment publications such as Vanidades, Quién, Chilango, TV Notas, and local editions of Vogue, GQ, and Architectural Digest.
It is also a leading center of the advertising industry. Most international ad firms have offices in the city, including Grey, JWT, Leo Burnett, Euro RSCG, BBDO, Ogilvy, Saatchi & Saatchi, and McCann Erickson. Many local firms also compete in the sector, including Alazraki, Olabuenaga/Chemistri, Terán, Augusto Elías, and Clemente Cámara, among others. There are 60 radio stations operating in the city and many local community radio transmission networks.
The two largest media companies in the Spanish-speaking world, Televisa and Azteca, are headquartered in Mexico City. Other local television channels include:
XEW-TV 2
XHTV-TV 4
XHGC-TV 5
XHIMT-TV 7
XEQ-TV 9
XEIPN-TV 11
XHDF-TV 13
XHUNAM-TV 20
XEIMT-TV 22
XHRAE-TV 28
XHTVM-TV 40
XHCDM-DT 21
Shopping
Mexico City offers an immense and varied consumer retail market, ranging from basic foods to ultra high-end luxury goods. Consumers may buy in fixed indoor markets, mobile markets (tianguis), from street vendors, from downtown shops in a street dedicated to a certain type of good, in convenience stores and traditional neighborhood stores, in modern supermarkets, in warehouse and membership stores and the shopping centers that they anchor, in department stores, big-box stores and in modern shopping malls.
Traditional markets
thumb|right|Multi-storey Sanborns department store with the façade of a 19th-century home being used as an entrance area
The city's main source of fresh produce is the Central de Abasto, a self-contained mini-city in Iztapalapa borough covering an area equivalent to several dozen city blocks. The wholesale market supplies most of the city's "mercados", supermarkets and restaurants, as well as people who come to buy produce for themselves. Tons of fresh produce are trucked in from all over Mexico every day.
The principal fish market is known as La Nueva Viga, in the same complex as the Central de Abastos. The world-renowned market of Tepito occupies 25 blocks, and sells a variety of products.
A staple for consumers in the city is the omnipresent "mercado". Every major neighborhood in the city has its own borough-regulated market, often more than one. These are large well-established facilities offering most basic products, such as fresh produce and meat/poultry, dry goods, tortillerías, and many other services such as locksmiths, herbal medicine, hardware goods, sewing implements; and a multitude of stands offering freshly made, home-style cooking and drinks in the tradition of aguas frescas and atole.
Tianguis
In addition, "tianguis" or mobile markets set up shop on streets in many neighborhoods, depending on day of week. Sundays see the largest number of these markets.
Street vendors
Street vendors ply their trade from stalls in the tianguis, at non-officially controlled concentrations around metro stations and hospitals, at plazas comerciales where vendors of a certain "theme" (e.g. stationery) are housed (originally organized to accommodate vendors formerly selling on the street), or simply from improvised stalls on a city sidewalk. In addition, food and goods are sold by people walking with baskets, pushing carts, from bicycles or the backs of trucks, or simply from a tarp or cloth laid on the ground. In the centre of the city informal street vendors are increasingly targeted by laws and prosecution.
Downtown shopping
thumb|right|Palacio de Hierro store
The Historic Center of Mexico City is widely known for specialized, often low-cost retailers. Certain blocks or streets are dedicated to shops selling a certain type of merchandise, with areas dedicated to over 40 categories such as home appliances, lamps and electricals, closets and bathrooms, housewares, wedding dresses, jukeboxes, printing, office furniture and safes, books, photography, jewelry, and opticians. The main department stores are also represented downtown.
Traditional markets downtown include the La Merced Market; the Mercado de Jamaica, which specializes in fresh flowers; the Mercado de Sonora, in the occult; and La Lagunilla, in furniture.
Ethnic shopping areas include Chinatown, downtown along Calle Dolores, and Mexico City's Koreatown, or Pequeño Seúl, in the Zona Rosa.
Supermarkets and neighborhood stores
Large, modern chain supermarkets, hypermarkets and warehouse clubs including Soriana, Comercial Mexicana, Chedraui, Bodega Aurrerá, Walmart and Costco, are located across the city. Many anchor shopping centers that contain smaller shops, services, a food court and sometimes cinemas.
Small "mom-and-pop" corner stores ("abarroterías" or more colloquially as "changarros") abound in all neighborhoods, rich and poor. These are small shops offering basics such as soft drinks, packaged snacks, canned goods and dairy products. Thousands of C-stores or corner stores, such as Oxxo, 7-Eleven and Extra are located throughout the city.
Parks and recreation
Chapultepec Park, the city's most iconic public park, has a history dating back to the Aztec emperors, who used the area as a retreat. It is south of the Polanco district and houses the city's zoo, several ponds, seven museums including the National Museum of Anthropology, and the city's oldest and most traditional amusement park, La Feria de Chapultepec Mágico, with its vintage Montaña Rusa rollercoaster.
Other iconic city parks include the Alameda Central in the Mexico City historic center, a city park since colonial times and renovated in 2013; Parque México and Parque España in the hip Condesa district; Parque Hundido and Parque de los Venados in Colonia del Valle; and Parque Lincoln in Polanco. There are many smaller parks throughout the city. Most are small "squares" occupying two or three square blocks amid residential or commercial districts.
Several other larger parks such as the Bosque de Tlalpan and Viveros de Coyoacán, and in the east Alameda Oriente, offer many recreational activities. Northwest of the city is a large ecological reserve, the Bosque de Aragón. In the southeast is the Xochimilco Ecological Park and Plant Market, a World Heritage site. West of Santa Fe district are the pine forests of the Desierto de los Leones National Park.
Amusement parks include Six Flags México in the Ajusco neighborhood, the largest in Latin America. There are numerous seasonal fairs present in the city.
Mexico City has three zoos: Chapultepec Zoo, the San Juan de Aragón Zoo and Los Coyotes Zoo. Chapultepec Zoo is located in the first section of Chapultepec Park in the Miguel Hidalgo borough. It was opened in 1924. Visitors can see about 243 specimens of different species, including kangaroos, giant panda, gorillas, caracal, hyena, hippos, jaguar, giraffe, lemur and lion. The San Juan de Aragón Zoo is near the San Juan de Aragón Park in the Gustavo A. Madero borough. This zoo, opened in 1964, holds species that are in danger of extinction, such as the jaguar and the Mexican wolf. Other animals include the golden eagle, pronghorn, bighorn sheep, caracara, zebras, African elephant, macaw and hippo. Los Coyotes Zoo is a 27.68-acre (11.2 ha) zoo located in the Coyoacán borough in the south of Mexico City. It was inaugurated on February 2, 1999. It has more than 301 specimens of 51 species of wild fauna native or endemic to Mexico City. Visitors can see eagles, ajolotes (axolotls), coyotes, macaws, bobcats, Mexican wolves, raccoons, mountain lions, teporingos, foxes and white-tailed deer.
Nicknames
Mexico City was traditionally known as La Ciudad de los Palacios ("the City of the Palaces"), a nickname attributed to Baron Alexander von Humboldt, who, visiting the city in the 19th century and writing back to Europe, said that Mexico City could rival any major city in Europe.
During Andrés Manuel López Obrador's administration a political slogan was introduced: la Ciudad de la Esperanza ("The City of Hope"). This motto was quickly adopted as a city nickname but has faded since the new motto, Capital en Movimiento ("Capital in Movement"), was adopted by the administration headed by Marcelo Ebrard, though the latter is rarely treated as a nickname in the media. Since 2013, the abbreviation CDMX (from Ciudad de México) has been used to refer to the city, particularly in government campaigns.
The city is colloquially known as Chilangolandia, after the locals' nickname chilangos.1994 Oxford Spanish-English Dictionary Chilango is used pejoratively by people living outside Mexico City to "connote a loud, arrogant, ill-mannered, loutish person".David Lida, First Stop in the New World: Mexico City, the Capital of the 21st Century, New York: Riverhead Books 2008, p. 15. For their part, those living in Mexico City insultingly refer to those who live elsewhere as living in la provincia ("the provinces", the periphery), and many proudly embrace the term chilango.Lida, ibid. Residents of Mexico City are more recently called defeños (deriving from the postal abbreviation of the Federal District in Spanish: D.F., which is read "De-Efe"). They are formally called capitalinos (in reference to the city being the capital of the country), but "[p]erhaps because capitalino is the more polite, specific, and correct word, it is almost never utilized".Lida, ibid. p. 16.
Law enforcement
thumb|right|Officers of the Secretariat of Public Security
The Secretariat of Public Security of the Federal District (Secretaría de Seguridad Pública del Distrito Federal – SSP) manages a combined force of over 90,000 officers in the Federal District (DF). The SSP is charged with maintaining public order and safety in the heart of Mexico City. The historic district is also roamed by tourist police, aiming to orient and serve tourists. These horse-mounted agents dress in traditional uniforms.
The investigative Judicial Police of the Federal District (Policía Judicial del Distrito Federal – PJDF) is organized under the Office of the Attorney General of the DF (the Procuraduría General de Justicia del Distrito Federal). The PGJDF maintains 16 precincts (delegaciones) with an estimated 3,500 judicial police, 1,100 investigating agents for prosecuting attorneys (agentes del ministerio público), and nearly 1,000 criminology experts or specialists (peritos).
Between 2000 and 2004 an average of 478 crimes were reported each day in Mexico City; however, the actual crime rate is thought to be much higher "since most people are reluctant to report crime". Under policies enacted by Mayor Marcelo Ebrard between 2009 and 2011, Mexico City underwent a major security upgrade with violent and petty crime rates both falling significantly despite the rise in violent crime in other parts of the country. Some of the policies enacted included the installation of 11,000 security cameras around the city and a very large expansion of the police force. Mexico City has one of the world's highest police officer-to-resident ratios, with one uniformed officer per 100 citizens.
Since 1997 the prison population has increased by more than 500%. Political scientist Markus-Michael Müller argues that mostly informal street vendors are hit by these measures. He sees punishment "related to the growing politicisation of security and crime issues and the resulting criminalisation of the people living at the margins of urban society, in particular those who work in the city’s informal economy."
International relations
Twin towns and sister cities
Mexico City is twinned with:
Astana, Kazakhstan
Berlin, Germany
Chicago, United States
Ciudad Juárez, Mexico
Cusco, Peru
Dolores Hidalgo, Mexico
Kaliningrad, Russia
Kiev, Ukraine
Los Angeles, United States
Manila, Philippines
Nagoya, Japan
Paris, France
Seoul, South Korea
Union of Ibero-American Capital Cities
Mexico City has been part of the Union of Ibero-American Capital Cities since October 12, 1982, establishing brotherly relations with the following cities:
Andorra la Vella, Andorra
Asunción, Paraguay
Barcelona, Spain
Bogotá, Colombia
Buenos Aires, Argentina
Caracas, Venezuela
Guatemala City, Guatemala
Havana, Cuba
La Paz, Bolivia
Lima, Peru
Lisbon, Portugal
Madrid, Spain
Managua, Nicaragua
Mexico City, Mexico
Montevideo, Uruguay
Panama City, Panama
Quito, Ecuador
Rio de Janeiro, Brazil
San Jose, Costa Rica
San Juan, Puerto Rico
San Salvador, El Salvador
Santiago, Chile
Santo Domingo, Dominican Republic
Tegucigalpa, Honduras
See also
Large Cities Climate Leadership Group
Largest cities in the Americas
Metropolitan areas of Mexico
Outline of Mexico
World's largest cities
References
External links
Mexico City Government
Mexico City Tourism Ministry
Mexico City Experience – An English-language website operated on behalf of the Mexico City government
Category:1520s establishments in Mexico
Category:1521 establishments in New Spain
Category:1521 in Mexico
Category:Articles containing video clips
Category:Capital districts and territories
Category:Capitals in North America
Category:Central Mexico
Category:Cities in Mexico
Category:Populated places established in 1521
Category:South-Central Mexico
Category:Subdivisions of Mexico
Category:Nahua settlements
Infection | Infection is the invasion of an organism's body tissues by disease-causing agents, their multiplication, and the reaction of host tissues to these organisms and the toxins they produce.Definition of "infection" from several medical dictionaries - Retrieved on 2012-04-03 Infectious disease, also known as transmissible disease or communicable disease, is illness resulting from an infection.
Infections are caused by infectious agents including viruses, viroids, prions, bacteria, nematodes such as parasitic roundworms and pinworms, arthropods such as ticks, mites, fleas, and lice, fungi such as ringworm, and other macroparasites such as tapeworms and other helminths.
Hosts can fight infections using their immune system. Mammalian hosts react to infections with an innate response, often involving inflammation, followed by an adaptive response.
Specific medications used to treat infections include antibiotics, antivirals, antifungals, antiprotozoals, and antihelminthics. Infectious diseases resulted in 9.2 million deaths in 2013 (about 17% of all deaths). The branch of medicine that focuses on infections is referred to as infectious disease.
Classification
Subclinical versus clinical (latent versus apparent)
Symptomatic infections are apparent and clinical, whereas an infection that is active but does not produce noticeable symptoms may be called inapparent, silent, subclinical, or occult. An infection that is inactive or dormant is called a latent infection. An example of a latent bacterial infection is latent tuberculosis. Some viral infections can also be latent; examples of latent viral infections include those from the Herpesviridae family.
The word infection can denote any presence of a particular pathogen at all (no matter how little), but it is also often used in a sense implying a clinically apparent infection (in other words, a case of infectious disease). This occasionally creates ambiguity and prompts discussion about usage. To avoid the ambiguity, health professionals commonly speak of colonization (rather than infection) when they mean that some of the pathogens are present but that no clinically apparent infection (no disease) is present.
A short-term infection is an acute infection. A long-term infection is a chronic infection. Infections can be further classified by causative agent (bacterial, viral, fungal, parasitic), and by the presence or absence of systemic symptoms (sepsis).
Primary versus opportunistic
Among the many varieties of microorganisms, relatively few cause disease in otherwise healthy individuals.This section incorporates public domain materials included in the text: Medical Microbiology Fourth Edition: Chapter 8 (1996). Baron, Samuel MD. The University of Texas Medical Branch at Galveston. Infectious disease results from the interplay between those few pathogens and the defenses of the hosts they infect. The appearance and severity of disease resulting from any pathogen depend upon the ability of that pathogen to damage the host as well as the ability of the host to resist the pathogen. However, a host's immune system can also cause damage to the host itself in an attempt to control the infection. Clinicians therefore classify infectious microorganisms or microbes according to the status of host defenses, either as primary pathogens or as opportunistic pathogens:
Primary pathogens
Primary pathogens cause disease as a result of their presence or activity within the normal, healthy host, and their intrinsic virulence (the severity of the disease they cause) is, in part, a necessary consequence of their need to reproduce and spread. Many of the most common primary pathogens of humans only infect humans, however many serious diseases are caused by organisms acquired from the environment or that infect non-human hosts.
Opportunistic pathogens
Opportunistic pathogens can cause an infectious disease in a host with depressed resistance (immunodeficiency) or if they have unusual access to the inside of the body (for example, via trauma). Opportunistic infection may be caused by microbes ordinarily in contact with the host, such as pathogenic bacteria or fungi in the gastrointestinal or the upper respiratory tract, and they may also result from (otherwise innocuous) microbes acquired from other hosts (as in Clostridium difficile colitis) or from the environment as a result of traumatic introduction (as in surgical wound infections or compound fractures). An opportunistic disease requires impairment of host defenses, which may occur as a result of genetic defects (such as Chronic granulomatous disease), exposure to antimicrobial drugs or immunosuppressive chemicals (as might occur following poisoning or cancer chemotherapy), exposure to ionizing radiation, or as a result of an infectious disease with immunosuppressive activity (such as with measles, malaria or HIV disease). Primary pathogens may also cause more severe disease in a host with depressed resistance than would normally occur in an immunosufficient host.
Primary infection versus secondary infection
A primary infection is infection that is, or can practically be viewed as, the root cause of the current health problem. In contrast, a secondary infection is a sequela or complication of a root cause. For example, pulmonary tuberculosis is often a primary infection, but an infection that happened only because a burn or penetrating trauma (the root cause) allowed unusual access to deep tissues is a secondary infection. Primary pathogens often cause primary infection and also often cause secondary infection. Usually opportunistic infections are viewed as secondary infections (because immunodeficiency or injury was the predisposing factor).
Infectious or not
One way of proving that a given disease is "infectious" is to satisfy Koch's postulates (first proposed by Robert Koch), which demand that the infectious agent be identified only in patients and not in healthy controls, and that patients who contract the agent also develop the disease. These postulates were first used in the discovery that Mycobacteria species cause tuberculosis. However, Koch's postulates cannot be applied ethically for many human diseases because they require experimental infection of a healthy individual with a pathogen produced as a pure culture. Often, even clearly infectious diseases do not meet the infectious criteria. For example, Treponema pallidum, the causative spirochete of syphilis, cannot be cultured in vitro; the organism can, however, be cultured in rabbit testes. It is less clear that a culture obtained from an animal serving as host is a pure culture, compared with one derived from plate culture.
Epidemiology is another important tool used to study disease in a population. For infectious diseases it helps to determine if a disease outbreak is sporadic (occasional occurrence), endemic (regular cases often occurring in a region), epidemic (an unusually high number of cases in a region), or pandemic (a global epidemic).
Contagiousness
Infectious diseases are sometimes called contagious disease when they are easily transmitted by contact with an ill person or their secretions (e.g., influenza). Thus, a contagious disease is a subset of infectious disease that is especially infective or easily transmitted. Other types of infectious/transmissible/communicable diseases with more specialized routes of infection, such as vector transmission or sexual transmission, are usually not regarded as "contagious", and often do not require medical isolation (sometimes loosely called quarantine) of victims. However, this specialized connotation of the word "contagious" and "contagious disease" (easy transmissibility) is not always respected in popular use.
By anatomic location
Infections can be classified by the anatomic location or organ system infected, including:
Urinary tract infection
Skin infection
Respiratory tract infection
Odontogenic infection (an infection that originates within a tooth or in the closely surrounding tissues)
Vaginal infections
Intra-amniotic infection
In addition, locations of inflammation where infection is the most common cause include pneumonia, meningitis and salpingitis.
Signs and symptoms
The symptoms of an infection depend on the type of disease. Some signs of infection affect the whole body generally, such as fatigue, loss of appetite, weight loss, fevers, night sweats, chills, aches and pains. Others are specific to individual body parts, such as skin rashes, coughing, or a runny nose.
In certain cases, infectious diseases may be asymptomatic for much or even all of their course in a given host. In the latter case, the disease may only be defined as a "disease" (which by definition means an illness) in hosts who secondarily become ill after contact with an asymptomatic carrier. An infection is not synonymous with an infectious disease, as some infections do not cause illness in a host.
Bacterial or viral
Bacterial and viral infections can both cause the same kinds of symptoms, and it can be difficult to distinguish which is the cause of a specific infection."Bacterial vs. Viral Infections - Do You Know the Difference?" National Information Program on Antibiotics It is important to distinguish between them, because viral infections cannot be cured by antibiotics.
Comparison of viral and bacterial infection

Typical symptoms of a viral infection: In general, viral infections are systemic, meaning they involve many different parts of the body or more than one body system at the same time, e.g. a runny nose, sinus congestion, cough and body aches. They can be local at times, as in viral conjunctivitis ("pink eye") and herpes. Only a few viral infections are painful, like herpes. The pain of viral infections is often described as itchy or burning.

Typical symptoms of a bacterial infection: The classic symptoms of a bacterial infection are localized redness, heat, swelling and pain. One of the hallmarks of a bacterial infection is local pain, pain that is in a specific part of the body. For example, if a cut occurs and is infected with bacteria, pain occurs at the site of the infection. Bacterial throat pain is often characterized by more pain on one side of the throat. An ear infection is more likely to be diagnosed as bacterial if the pain occurs in only one ear. A cut that produces pus and milky-colored liquid is most likely infected.

Cause of a viral infection: Pathogenic viruses.

Cause of a bacterial infection: Pathogenic bacteria.
Pathophysiology
There is a general chain of events that applies to infections.Infection Cycle - Retrieved on 2010-01-21 The chain of events involves several steps: the infectious agent, a reservoir, entry into a susceptible host, exit from the host, and transmission to new hosts. Each of the links must be present in chronological order for an infection to develop. Understanding these steps helps health care workers target the infection and prevent it from occurring in the first place.Understanding Infectious Diseases Science.Education.Nih.Gov article - Retrieved on 2010-01-21
Colonization
thumb|Infection of a toe with an ingrown toenail; there is pus (yellow) and resultant inflammation (redness and swelling around the nail).
Infection begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization. Most humans are not easily infected. Those who are weak, sick, malnourished, have cancer or are diabetic have increased susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly susceptible to opportunistic infections. Entrance to the host at the host-pathogen interface generally occurs through the mucosa in orifices like the oral cavity, nose, eyes, genitalia and anus, or through open wounds. While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different organs. Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids.
Wound colonization refers to nonreplicating microorganisms within the wound, while in infected wounds, replicating organisms exist and tissue is injured. All multicellular organisms are colonized to some degree by extrinsic organisms, and the vast majority of these exist in either a mutualistic or commensal relationship with the host. An example of the former are the anaerobic bacterial species that colonize the mammalian colon, and an example of the latter are the various species of staphylococcus that exist on human skin. Neither of these colonizations is considered an infection. The difference between an infection and a colonization is often only a matter of circumstance. Non-pathogenic organisms can become pathogenic given specific conditions, and even the most virulent organism requires certain circumstances to cause a compromising infection. Some colonizing bacteria, such as Corynebacteria sp. and viridans streptococci, prevent the adhesion and colonization of pathogenic bacteria and thus have a symbiotic relationship with the host, preventing infection and speeding wound healing.
thumb|This image depicts the steps of pathogenic infection.
The variables that determine the ultimate outcome of a host becoming inoculated by a pathogen include:
the route of entry of the pathogen and the access to host regions that it gains
the intrinsic virulence of the particular organism
the quantity or load of the initial inoculant
the immune status of the host being colonized
As an example, several staphylococcal species remain harmless on the skin, but, when present in a normally sterile space, such as in the capsule of a joint or the peritoneum, multiply without resistance and cause harm.
Gas chromatography–mass spectrometry, 16S ribosomal RNA analysis, omics, and other advanced technologies have made it increasingly apparent in recent decades that microbial colonization is very common, even in environments that humans think of as being nearly sterile. Because it is normal to have bacterial colonization, it is difficult to know which chronic wounds can be classified as infected and how much risk of progression exists. Despite the huge number of wounds seen in clinical practice, there are limited quality data for evaluated symptoms and signs. A review of chronic wounds in the Journal of the American Medical Association's "Rational Clinical Examination Series" quantified the importance of increased pain as an indicator of infection. The review showed that the most useful finding is an increase in the level of pain (likelihood ratio [LR] range, 11-20), which makes infection much more likely; the absence of increased pain (negative likelihood ratio range, 0.64-0.88) does not rule out infection.
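As a rough illustration of how such a likelihood ratio updates clinical suspicion (the 20% pre-test probability below is an assumed, hypothetical value, not a figure from the review), Bayes' rule in odds form gives:

\[
\text{post-test odds} = \text{pre-test odds} \times LR, \qquad
\frac{0.20}{1-0.20} = 0.25, \quad 0.25 \times 11 = 2.75, \quad
\frac{2.75}{1+2.75} \approx 0.73 .
\]

That is, an LR of 11 would raise a 20% suspicion of wound infection to roughly 73%, while a negative likelihood ratio near 0.7 for absent pain would lower the same suspicion only modestly, to about 15%.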
Disease
Disease can arise if the host's protective immune mechanisms are compromised and the organism inflicts damage on the host. Microorganisms can cause tissue damage by releasing a variety of toxins or destructive enzymes. For example, Clostridium tetani releases a toxin that paralyzes muscles, and staphylococcus releases toxins that produce shock and sepsis. Not all infectious agents cause disease in all hosts. For example, less than 5% of individuals infected with polio develop disease.http://www.immunize.org/catg.d/p4215.pdf On the other hand, some infectious agents are highly virulent. The prion causing mad cow disease and Creutzfeldt–Jakob disease invariably kills all animals and people that are infected.
Persistent infections occur because the body is unable to clear the organism after the initial infection. Persistent infections are characterized by the continual presence of the infectious organism, often as latent infection with occasional recurrent relapses of active infection. There are some viruses that can maintain a persistent infection by infecting different cells of the body. Some viruses once acquired never leave the body. A typical example is the herpes virus, which tends to hide in nerves and become reactivated when specific circumstances arise.
Persistent infections cause millions of deaths globally each year.Chronic Infection Information Retrieved on 2010-01-14 Chronic infections by parasites account for a high morbidity and mortality in many underdeveloped countries.
Transmission
For infecting organisms to survive and repeat the infection cycle in other hosts, they (or their progeny) must leave an existing reservoir and cause infection elsewhere. Infection transmission can take place via many potential routes:
Droplet contact, also known as the respiratory route, and the resultant infection can be termed airborne disease. If an infected person coughs or sneezes on another person the microorganisms, suspended in warm, moist droplets, may enter the body through the nose, mouth or eye surfaces.
Fecal-oral transmission, wherein foodstuffs or water become contaminated (by people not washing their hands before preparing food, or untreated sewage being released into a drinking water supply) and the people who eat and drink them become infected. Common fecal-oral transmitted pathogens include Vibrio cholerae, Giardia species, rotaviruses, Entamoeba histolytica, Escherichia coli, and tapeworms.Intestinal Parasites and Infection fungusfocus.com - Retrieved on 2010-01-21 Most of these pathogens cause gastroenteritis.
Sexual transmission, with the resulting disease being called sexually transmitted disease
Oral transmission, whereby diseases transmitted primarily by oral means may be caught through direct oral contact such as kissing, or by indirect contact such as sharing a drinking glass or a cigarette.
Transmission by direct contact; diseases transmissible by direct contact include athlete's foot, impetigo and warts.
Vertical transmission, directly from the mother to an embryo, fetus or baby during pregnancy or childbirth. It can occur when the mother gets an infection as an intercurrent disease in pregnancy.
Iatrogenic transmission, due to medical procedures such as injection or transplantation of infected material.
thumb|220px|left|Culex mosquitos (Culex quinquefasciatus shown) are biological vectors that transmit West Nile Virus.
Vector-borne transmission, transmitted by a vector, which is an organism that does not cause disease itself but that transmits infection by conveying pathogens from one host to another.Pathogens and vectors. MetaPathogen.com.
The relationship between virulence versus transmissibility is complex; if a disease is rapidly fatal, the host may die before the microbe can be passed along to another host.
Diagnosis
Diagnosis of infectious disease sometimes involves identifying an infectious agent either directly or indirectly. In practice most minor infectious diseases such as warts, cutaneous abscesses, respiratory system infections and diarrheal diseases are diagnosed by their clinical presentation and treated without knowledge of the specific causative agent. Conclusions about the cause of the disease are based upon the likelihood that a patient came in contact with a particular agent, the presence of a microbe in a community, and other epidemiological considerations. Given sufficient effort, all known infectious agents can be specifically identified. The benefits of identification, however, are often greatly outweighed by the cost, as often there is no specific treatment, the cause is obvious, or the outcome of an infection is benign.
Diagnosis of infectious disease is nearly always initiated by medical history and physical examination. More detailed identification techniques involve the culture of infectious agents isolated from a patient. Culture allows identification of infectious organisms by examining their microscopic features, by detecting the presence of substances produced by pathogens, and by directly identifying an organism by its genotype. Other techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal abnormalities resulting from the growth of an infectious agent. The images are useful in detection of, for example, a bone abscess or a spongiform encephalopathy produced by a prion.
Symptomatic diagnostics
The diagnosis is aided by the presenting symptoms in any individual with an infectious disease, yet it usually needs additional diagnostic techniques to confirm the suspicion. Some signs are specifically characteristic and indicative of a disease and are called pathognomonic signs; but these are rare. Not all infections are symptomatic.
In children the presence of cyanosis, rapid breathing, poor peripheral perfusion, or a petechial rash increases the risk of a serious infection by more than fivefold. Other important indicators include parental concern, clinical instinct, and temperature greater than 40 °C.
Microbial culture
thumb|200px|Four nutrient agar plates growing colonies of common Gram negative bacteria.
Microbiological culture is a principal tool used to diagnose infectious disease. In a microbial culture, a growth medium is provided for a specific agent. A sample taken from potentially diseased tissue or fluid is then tested for the presence of an infectious agent able to grow within that medium. Most pathogenic bacteria are easily grown on nutrient agar, a form of solid medium that supplies carbohydrates and proteins necessary for growth of a bacterium, along with copious amounts of water. A single bacterium will grow into a visible mound on the surface of the plate called a colony, which may be separated from other colonies or melded together into a "lawn". The size, color, shape and form of a colony are characteristic of the bacterial species, its specific genetic makeup (its strain), and the environment that supports its growth. Other ingredients are often added to the plate to aid in identification. Plates may contain substances that permit the growth of some bacteria and not others, or that change color in response to certain bacteria and not others. Bacteriological plates such as these are commonly used in the clinical identification of infectious bacteria. Microbial culture may also be used in the identification of viruses: the medium in this case being cells grown in culture that the virus can infect, and then alter or kill. In the case of viral identification, a region of dead cells results from viral growth, and is called a "plaque". Eukaryotic parasites may also be grown in culture as a means of identifying a particular agent.
In the absence of suitable plate culture techniques, some microbes require culture within live animals. Bacteria such as Mycobacterium leprae and Treponema pallidum can be grown in animals, although serological and microscopic techniques make the use of live animals unnecessary. Viruses are also usually identified using alternatives to growth in culture or animals. Some viruses may be grown in embryonated eggs. Another useful identification method is xenodiagnosis, the use of a vector to support the growth of an infectious agent. Chagas disease is the most significant example, because it is difficult to directly demonstrate the presence of the causative agent, Trypanosoma cruzi, in a patient, making a definitive diagnosis difficult. In this case, xenodiagnosis involves the use of the vector of the Chagas agent T. cruzi, an uninfected triatomine bug, which takes a blood meal from a person suspected of having been infected. The bug is later inspected for growth of T. cruzi within its gut.
Microscopy
Another principal tool in the diagnosis of infectious disease is microscopy. Virtually all of the culture techniques discussed above rely, at some point, on microscopic examination for definitive identification of the infectious agent. Microscopy may be carried out with simple instruments, such as the compound light microscope, or with instruments as complex as an electron microscope. Samples obtained from patients may be viewed directly under the light microscope, and can often rapidly lead to identification. Microscopy is often also used in conjunction with biochemical staining techniques, and can be made exquisitely specific when used in combination with antibody-based techniques. For example, antibodies made artificially fluorescent (fluorescently labeled antibodies) can be directed to bind to and identify specific antigens present on a pathogen. A fluorescence microscope is then used to detect fluorescently labeled antibodies bound to internalized antigens within clinical samples or cultured cells. This technique is especially useful in the diagnosis of viral diseases, where the light microscope is incapable of identifying a virus directly.
Other microscopic procedures may also aid in identifying infectious agents. Almost all cells readily stain with a number of basic dyes due to the electrostatic attraction between negatively charged cellular molecules and the positive charge on the dye. A cell is normally transparent under a microscope, and using a stain increases the contrast of a cell with its background. Staining a cell with a dye such as Giemsa stain or crystal violet allows a microscopist to describe its size, shape, internal and external components and its associations with other cells. The response of bacteria to different staining procedures is used in the taxonomic classification of microbes as well. Two methods, the Gram stain and the acid-fast stain, are the standard approaches used to classify bacteria and to diagnose disease. The Gram stain identifies the bacterial groups Firmicutes and Actinobacteria, both of which contain many significant human pathogens. The acid-fast staining procedure identifies the Actinobacterial genera Mycobacterium and Nocardia.
Biochemical tests
Biochemical tests used in the identification of infectious agents include the detection of metabolic or enzymatic products characteristic of a particular infectious agent. Since bacteria ferment carbohydrates in patterns characteristic of their genus and species, the detection of fermentation products is commonly used in bacterial identification. Acids, alcohols and gases are usually detected in these tests when bacteria are grown in selective liquid or solid media.
The isolation of enzymes from infected tissue can also provide the basis of a biochemical diagnosis of an infectious disease. For example, humans can make neither RNA replicases nor reverse transcriptase, and the presence of these enzymes is characteristic of specific types of viral infections. The ability of the viral protein hemagglutinin to bind red blood cells together into a detectable matrix may also be characterized as a biochemical test for viral infection, although strictly speaking hemagglutinin is not an enzyme and has no metabolic function.
Serological methods are highly sensitive, specific and often extremely rapid tests used to identify microorganisms. These tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen, usually a protein or carbohydrate made by an infectious agent, is bound by the antibody. This binding then sets off a chain of events that can be visibly obvious in various ways, dependent upon the test. For example, "strep throat" is often diagnosed within minutes, based on the appearance of antigens made by the causative agent, S. pyogenes, that are retrieved from a patient's throat with a cotton swab. Serological tests, if available, are usually the preferred route of identification; however, the tests are costly to develop and the reagents used in the test often require refrigeration. Some serological methods are extremely costly, although when commonly used, such as with the "strep test", they can be inexpensive.
Complex serological techniques have been developed into what are known as immunoassays. Immunoassays can use basic antibody-antigen binding as the basis to produce an electromagnetic or particle radiation signal, which can be detected by some form of instrumentation. The signal from unknowns can be compared to that of standards, allowing quantitation of the target antigen. To aid in the diagnosis of infectious diseases, immunoassays can detect or measure antigens from either infectious agents or proteins generated by an infected organism in response to a foreign agent. For example, immunoassay A may detect the presence of a surface protein from a virus particle. Immunoassay B, on the other hand, may detect or measure antibodies produced by an organism's immune system that are made to neutralize and allow the destruction of the virus.

Instrumentation can be used to read extremely small signals created by secondary reactions linked to the antibody-antigen binding. Instrumentation can control sampling, reagent use, reaction times, signal detection, calculation of results, and data management to yield a cost-effective automated process for diagnosis of infectious disease.
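As a minimal sketch of the quantitation step described above, the following compares a hypothetical unknown sample's signal against a calibration series of known antigen concentrations by linear interpolation; the concentrations, signals, and the helper function are illustrative assumptions, not values or an API from any particular assay.

```python
# Hypothetical illustration: estimating antigen concentration from an
# immunoassay signal by interpolating against a standard curve.
import numpy as np

# Calibration standards: known antigen concentrations (ng/mL) and the
# signal each produced (arbitrary instrument units); all values are made up.
standard_conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])
standard_signal = np.array([0.02, 0.10, 0.45, 0.85, 3.90, 7.60])

def estimate_concentration(signal):
    """Read a concentration off the (monotonically increasing) standard curve."""
    return float(np.interp(signal, standard_signal, standard_conc))

unknown_signal = 1.70
print(f"Estimated antigen concentration: {estimate_concentration(unknown_signal):.1f} ng/mL")
```

Real instruments typically fit a sigmoidal calibration curve rather than interpolating linearly, but the comparison-to-standards principle is the same.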
Molecular diagnostics
Technologies based upon the polymerase chain reaction (PCR) method will become nearly ubiquitous gold standards of diagnostics in the near future, for several reasons. First, the catalog of infectious agents has grown to the point that virtually all of the significant infectious agents of the human population have been identified. Second, an infectious agent must grow within the human body to cause disease; essentially it must amplify its own nucleic acids in order to cause a disease. This amplification of nucleic acid in infected tissue offers an opportunity to detect the infectious agent by using PCR. Third, the essential tools for directing PCR, primers, are derived from the genomes of infectious agents, and with time those genomes will be known, if they are not already.
Thus, the technological ability to detect any infectious agent rapidly and specifically is currently available. The only remaining blockades to the use of PCR as a standard tool of diagnosis are its cost and application, neither of which is insurmountable. The diagnosis of a few diseases will not benefit from the development of PCR methods, such as some of the clostridial diseases (tetanus and botulism). These diseases are fundamentally biological poisonings by relatively small numbers of infectious bacteria that produce extremely potent neurotoxins. A significant proliferation of the infectious agent does not occur, which limits the ability of PCR to detect the presence of any bacteria.
Indication of tests
There is usually an indication for a specific identification of an infectious agent only when such identification can aid in the treatment or prevention of the disease, or to advance knowledge of the course of an illness prior to the development of effective therapeutic or preventative measures. For example, in the early 1980s, prior to the appearance of AZT for the treatment of AIDS, the course of the disease was closely followed by monitoring the composition of patient blood samples, even though the outcome would not offer the patient any further treatment options. In part, these studies on the appearance of HIV in specific communities permitted the advancement of hypotheses as to the route of transmission of the virus. By understanding how the disease was transmitted, resources could be targeted to the communities at greatest risk in campaigns aimed at reducing the number of new infections. The specific serological diagnostic identification, and later genotypic or molecular identification, of HIV also enabled the development of hypotheses as to the temporal and geographical origins of the virus, as well as a myriad of other hypotheses. The development of molecular diagnostic tools has enabled physicians and researchers to monitor the efficacy of treatment with anti-retroviral drugs. Molecular diagnostics are now commonly used to identify HIV in healthy people long before the onset of illness and have been used to demonstrate the existence of people who are genetically resistant to HIV infection. Thus, while there still is no cure for AIDS, there is great therapeutic and predictive benefit to identifying the virus and monitoring the virus levels within the blood of infected individuals, both for the patient and for the community at large.
Prevention
thumb|right|Washing one's hands, a form of hygiene, is an effective way to prevent the spread of infectious disease.Bloomfield, SF, Aiello AE, Cookson B, O’Boyle C, Larson, EL, The effectiveness of hand hygiene procedures including hand-washing and alcohol-based hand sanitizers in reducing the risks of infections in home and community settings" American Journal of Infection Control 2007;35, suppl 1:S1-64
Techniques like hand washing, wearing gowns, and wearing face masks can help prevent infections from being passed from one person to another. Aseptic technique was introduced in medicine and surgery in the late 19th century and greatly reduced the incidence of infections caused by surgery. Frequent hand washing remains the most important defense against the spread of unwanted organisms."Generalized Infectious Cycle" Diagram Illustration - Retrieved on 2010-01-21 There are other forms of prevention such as avoiding the use of illicit drugs, using a condom, and having a healthy lifestyle with a balanced diet and regular exercise. Cooking foods well and avoiding foods that have been left outside for a long time is also important.
Antimicrobial substances used to prevent transmission of infections include:
antiseptics, which are applied to living tissue/skin
disinfectants, which destroy microorganisms found on non-living objects.
antibiotics, called prophylactic when given as prevention rather than as treatment of infection. However, long-term use of antibiotics leads to resistance and the chance of developing opportunistic infections such as Clostridium difficile colitis.eMedicine Health. "Bacterial and Viral Infections" 2010-02-08. Thus, avoiding using antibiotics longer than necessary helps prevent such infectious diseases.
One of the ways to prevent or slow down the transmission of infectious diseases is to recognize the different characteristics of various diseases. Some critical disease characteristics that should be evaluated include virulence, distance traveled by victims, and level of contagiousness. The human strains of Ebola virus, for example, incapacitate their victims extremely quickly and kill them soon after. As a result, the victims of this disease do not have the opportunity to travel very far from the initial infection zone. Also, this virus must spread through skin lesions or permeable membranes such as the eye. Thus, the initial stage of Ebola is not very contagious since its victims experience only internal hemorrhaging. As a result of the above features, the spread of Ebola is not very rapid and usually stays within a relatively confined geographical area. In contrast, the Human Immunodeficiency Virus (HIV) kills its victims very slowly by attacking their immune system. As a result, many of its victims transmit the virus to other individuals before even realizing that they are carrying the disease. Also, the relatively low virulence allows its victims to travel long distances, increasing the likelihood of an epidemic.
Another effective way to decrease the transmission rate of infectious diseases is to recognize the effects of small-world networks. In epidemics, there are often extensive interactions within hubs or groups of infected individuals and other interactions within discrete hubs of susceptible individuals. Despite the low interaction between discrete hubs, the disease can jump to and spread in a susceptible hub via a single or few interactions with an infected hub. Thus, infection rates in small-world networks can be reduced somewhat if interactions between individuals within infected hubs are eliminated. However, infection rates can be drastically reduced if the main focus is on the prevention of transmission jumps between hubs. The use of needle exchange programs in areas with a high density of drug users with HIV is an example of the successful implementation of this prevention method. Another example is the use of ring culling or vaccination of potentially susceptible livestock in adjacent farms to prevent the spread of the foot-and-mouth virus in 2001.
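The hub effect described above can be illustrated with a toy simulation; this is a sketch under stated assumptions, not a model from the cited examples. It runs the same SIR process on a Watts-Strogatz small-world network with long-range shortcut ties and on the equivalent ring lattice without them, using the networkx library; all parameter values (beta, gamma, network size, time horizon) are arbitrary choices for illustration.

```python
# Illustrative sketch: epidemic spread with and without "jumps" between hubs,
# using a Watts-Strogatz small-world graph versus a plain ring lattice.
import random
import networkx as nx

def simulate_sir(graph, beta=0.2, gamma=0.1, steps=100, seed=1):
    """Discrete-time SIR simulation; returns the fraction of nodes ever infected."""
    rng = random.Random(seed)
    status = {node: "S" for node in graph}
    status[rng.choice(list(graph))] = "I"  # one initial case
    for _ in range(steps):
        new_status = dict(status)
        for node, state in status.items():
            if state != "I":
                continue
            # Each infectious node may transmit to its susceptible neighbours...
            for neighbour in graph.neighbors(node):
                if status[neighbour] == "S" and rng.random() < beta:
                    new_status[neighbour] = "I"
            # ...and may recover.
            if rng.random() < gamma:
                new_status[node] = "R"
        status = new_status
    return sum(state != "S" for state in status.values()) / graph.number_of_nodes()

n, k = 2000, 6
with_shortcuts = nx.watts_strogatz_graph(n, k, p=0.1, seed=42)     # small-world network
without_shortcuts = nx.watts_strogatz_graph(n, k, p=0.0, seed=42)  # ring lattice, no shortcuts

print("fraction infected with inter-hub shortcuts:   ", simulate_sir(with_shortcuts))
print("fraction infected without inter-hub shortcuts:", simulate_sir(without_shortcuts))
```

In such runs, the version without shortcut edges typically infects far fewer nodes over the same horizon, mirroring the argument that preventing transmission jumps between hubs can be more effective than thinning contacts within an already-infected hub.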
A general method to prevent transmission of vector-borne pathogens is pest control.
Immunity
thumb|Mary Mallon (a.k.a. Typhoid Mary) was an asymptomatic carrier of typhoid fever. Over the course of her career as a cook, she infected 53 people, three of whom died.
Infection with most pathogens does not result in death of the host and the offending organism is ultimately cleared after the symptoms of the disease have waned. This process requires immune mechanisms to kill or inactivate the inoculum of the pathogen. Specific acquired immunity against infectious diseases may be mediated by antibodies and/or T lymphocytes. Immunity mediated by these two factors may be manifested by:
a direct effect upon a pathogen, such as antibody-initiated complement-dependent bacteriolysis, opsonization, phagocytosis and killing, as occurs for some bacteria,
neutralization of viruses so that these organisms cannot enter cells,
or by T lymphocytes, which will kill a cell parasitized by a microorganism.
The immune system response to a microorganism often causes symptoms such as a high fever and inflammation, and has the potential to be more devastating than direct damage caused by a microbe.
Resistance to infection (immunity) may be acquired following a disease, by asymptomatic carriage of the pathogen, by harboring an organism with a similar structure (crossreacting), or by vaccination. Knowledge of the protective antigens and specific acquired host immune factors is more complete for primary pathogens than for opportunistic pathogens.
There is also the phenomenon of herd immunity which offers a measure of protection to those otherwise vulnerable people when a large enough proportion of the population has acquired immunity from certain infections.
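A commonly used back-of-the-envelope relation, not stated in the passage above, links this threshold to the basic reproduction number R0 (the value R0 = 4 below is purely illustrative):

\[
p_c = 1 - \frac{1}{R_0}, \qquad R_0 = 4 \;\Rightarrow\; p_c = 1 - \frac{1}{4} = 0.75,
\]

meaning roughly three-quarters of such a population would need to be immune before each case, on average, fails to replace itself.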
Immune resistance to an infectious disease requires a critical level of either antigen-specific antibodies and/or T cells when the host encounters the pathogen. Some individuals develop natural serum antibodies to the surface polysaccharides of some agents although they have had little or no contact with the agent; these natural antibodies confer specific protection to adults and are passively transmitted to newborns.
Host genetic factors
The clearance of pathogens, either treatment-induced or spontaneous, can be influenced by the genetic variants carried by individual patients. For instance, for genotype 1 hepatitis C treated with pegylated interferon-alpha-2a or pegylated interferon-alpha-2b (brand names Pegasys or PEG-Intron) combined with ribavirin, it has been shown that genetic polymorphisms near the human IL28B gene, encoding interferon lambda 3, are associated with significant differences in the treatment-induced clearance of the virus. This finding, originally reported in Nature, showed that genotype 1 hepatitis C patients carrying certain genetic variant alleles near the IL28B gene are more likely to achieve a sustained virological response after treatment than others. A later report in Nature demonstrated that the same genetic variants are also associated with the natural clearance of the genotype 1 hepatitis C virus.
Treatments
When infection attacks the body, anti-infective drugs can suppress the infection. Several broad types of anti-infective drugs exist, depending on the type of organism targeted; they include antibacterial (antibiotic; including antitubercular), antiviral, antifungal and antiparasitic (including antiprotozoal and antihelminthic) agents. Depending on the severity and the type of infection, the antibiotic may be given by mouth or by injection, or may be applied topically. Severe infections of the brain are usually treated with intravenous antibiotics. Sometimes, multiple antibiotics are used in case there is resistance to one antibiotic. Antibiotics only work for bacteria and do not affect viruses. Antibiotics work by slowing down the multiplication of bacteria or killing the bacteria. The most common classes of antibiotics used in medicine include penicillin, cephalosporins, aminoglycosides, macrolides, quinolones and tetracyclines.
Not all infections require treatment, and for many self-limiting infections the treatment may cause more side-effects than benefits. Antimicrobial stewardship is the concept that healthcare providers should treat an infection with an antimicrobial that specifically works well for the target pathogen, for the shortest amount of time, and only when there is a known or highly suspected pathogen that will respond to the medication.
Epidemiology
upright=1.3|thumb|Deaths due to infectious and parasitic diseases per million persons in 2012
thumb|upright=1.3|Disability-adjusted life year for infectious and parasitic diseases per 100,000 inhabitants in 2004.
In 2010 about 10 million people died of an infectious disease.
The World Health Organization collects information on global deaths by International Classification of Disease (ICD) code categories. The following table lists the top infectious disease by number of deaths in 2002. 1993 data is included for comparison.
Worldwide mortality due to infectious diseases

Rank | Cause of death | Deaths 2002 (millions) | Percentage of all deaths | Deaths 1993 (millions) | 1993 rank
N/A | All infectious diseases | 14.7 | 25.9% | 16.4 (32.2%) | N/A
1 | Lower respiratory infections | 3.9 | 6.9% | 4.1 | 1
2 | HIV/AIDS | 2.8 | 4.9% | 0.7 | 7
3 | Diarrheal diseases | 1.8 | 3.2% | 3.0 | 2
4 | Tuberculosis (TB) | 1.6 | 2.7% | 2.7 | 3
5 | Malaria | 1.3 | 2.2% | 2.0 | 4
6 | Measles | 0.6 | 1.1% | 1.1 | 5
7 | Pertussis | 0.29 | 0.5% | 0.36 | 7
8 | Tetanus | 0.21 | 0.4% | 0.15 | 12
9 | Meningitis | 0.17 | 0.3% | 0.25 | 8
10 | Syphilis | 0.16 | 0.3% | 0.19 | 11
11 | Hepatitis B | 0.10 | 0.2% | 0.93 | 6
12-17 | Tropical diseases (6) | 0.13 | 0.2% | 0.53 | 9, 10, 16–18

Lower respiratory infections include various pneumonias, influenzas and acute bronchitis. Diarrheal diseases are caused by many different organisms, including cholera, botulism, and E. coli to name a few; see also: intestinal infectious diseases. Tropical diseases include Chagas disease, dengue fever, lymphatic filariasis, leishmaniasis, onchocerciasis, schistosomiasis and trypanosomiasis.
The top three single agent/disease killers are HIV/AIDS, TB and malaria. While the number of deaths due to nearly every disease has decreased, deaths due to HIV/AIDS have increased fourfold. Childhood diseases include pertussis, poliomyelitis, diphtheria, measles and tetanus. Children also make up a large percentage of lower respiratory and diarrheal deaths. In 2012, approximately 3.1 million people died of lower respiratory infections, making them the fourth leading cause of death in the world.
Historic pandemics
thumb|Great Plague of Marseille in 1720 killed 100,000 people in the city and the surrounding provinces
A pandemic (or global epidemic) is a disease that affects people over an extensive geographical area.
Plague of Justinian, from 541 to 750, killed between 50% and 60% of Europe's population."Infectious and Epidemic Disease in History"
The Black Death of 1347 to 1352 killed 25 million in Europe over 5 years. The plague reduced the old world population from an estimated 450 million to between 350 and 375 million in the 14th century.
The introduction of smallpox, measles, and typhus to the areas of Central and South America by European explorers during the 15th and 16th centuries caused pandemics among the native inhabitants. Between 1518 and 1568 disease pandemics are said to have caused the population of Mexico to fall from 20 million to 3 million.
The first European influenza epidemic occurred between 1556 and 1560, with an estimated mortality rate of 20%.
Smallpox killed an estimated 60 million Europeans during the 18th century"Smallpox". North Carolina Digital History. (approximately 400,000 per year).Smallpox and Vaccinia. National Center for Biotechnology Information. Up to 30% of those infected, including 80% of the children under 5 years of age, died from the disease, and one-third of the survivors went blind."Smallpox: The Triumph over the Most Terrible of the Ministers of Death"
In the 19th century, tuberculosis killed an estimated one-quarter of the adult population of Europe;Multidrug-Resistant "Tuberculosis". Centers for Disease Control and Prevention. by 1918 one in six deaths in France were still caused by TB.
The Influenza Pandemic of 1918 (or the Spanish Flu) killed 25-50 million people (about 2% of world population of 1.7 billion)."Influenza of 1918 (Spanish Flu) and the US Navy" Today Influenza kills about 250,000 to 500,000 worldwide each year.
Emerging diseases
In most cases, microorganisms live in harmony with their hosts via mutual or commensal interactions. Diseases can emerge when existing parasites become pathogenic or when new pathogenic parasites enter a new host.
Coevolution between parasite and host can lead to hosts becoming resistant to the parasites or the parasites may evolve greater virulence, leading to immunopathological disease.
Human activity is involved with many emerging infectious diseases, such as environmental change enabling a parasite to occupy new niches. When that happens, a pathogen that had been confined to a remote habitat has a wider distribution and possibly a new host organism. Parasites jumping from nonhuman to human hosts are known as zoonoses. When a parasite invades a new host species, it may become pathogenic in the new host.
Several human activities have led to the emergence of zoonotic human pathogens, including viruses, bacteria, protozoa, and rickettsia, and spread of vector-borne diseases, see also Globalization and Disease and Wildlife disease:
Encroachment on wildlife habitats. The construction of new villages and housing developments in rural areas forces animals to live in dense populations, creating opportunities for microbes to mutate and emerge.
Changes in agriculture. The introduction of new crops attracts new crop pests and the microbes they carry to farming communities, exposing people to unfamiliar diseases.
The destruction of rain forests. As countries make use of their rain forests, by building roads through forests and clearing areas for settlement or commercial ventures, people encounter insects and other animals harboring previously unknown microorganisms.
Uncontrolled urbanization. The rapid growth of cities in many developing countries tends to concentrate large numbers of people into crowded areas with poor sanitation. These conditions foster transmission of contagious diseases.
Modern transport. Ships and other cargo carriers often harbor unintended "passengers" that can spread diseases to faraway destinations. With international jet-airplane travel, people infected with a disease can carry it to distant lands, or home to their families, before their first symptoms appear.
History
thumb|East German postage stamps depicting four antique microscopes. Advancements in microscopy were essential to the early study of infectious diseases.
Ideas of contagion became more popular in Europe during the Renaissance, particularly through the writing of the Italian physician Girolamo Fracastoro.
Anton van Leeuwenhoek (1632–1723) advanced the science of microscopy by being the first to observe microorganisms, allowing for easy visualization of bacteria.
In the mid-19th century John Snow and William Budd did important work demonstrating the contagiousness of typhoid and cholera through contaminated water. Both are credited with decreasing epidemics of cholera in their towns by implementing measures to prevent contamination of water.
Louis Pasteur proved beyond doubt that certain diseases are caused by infectious agents, and developed a vaccine for rabies.
Robert Koch provided the study of infectious diseases with a scientific basis known as Koch's postulates.
Edward Jenner, Jonas Salk and Albert Sabin developed effective vaccines for smallpox and polio, which would later result in the eradication and near-eradication of these diseases, respectively.
Alexander Fleming discovered the world's first antibiotic, Penicillin, which Florey and Chain then developed.
Gerhard Domagk developed sulphonamides, the first broad spectrum synthetic antibacterial drugs.
Medical specialists
The medical treatment of infectious diseases falls into the medical field of Infectious Disease and in some cases the study of propagation pertains to the field of Epidemiology. Generally, infections are initially diagnosed by primary care physicians or internal medicine specialists. For example, an "uncomplicated" pneumonia will generally be treated by the internist or the pulmonologist (lung physician). The work of the infectious diseases specialist therefore entails working with both patients and general practitioners, as well as laboratory scientists, immunologists, bacteriologists and other specialists.
An infectious disease team may be alerted when:
The disease has not been definitively diagnosed after an initial workup;
The patient is immunocompromised (for example, in AIDS or after chemotherapy);
The infectious agent is of an uncommon nature (e.g. tropical diseases);
The disease has not responded to first line antibiotics;
The disease might be dangerous to other patients, and the patient might have to be isolated.
Society and culture
A number of studies have reported associations between pathogen load in an area and human behavior. Higher pathogen load is associated with decreased size of ethnic and religious groups in an area. This may be due to high pathogen load favoring avoidance of other groups, which may reduce pathogen transmission, or to a high pathogen load preventing the creation of large settlements and armies that enforce a common culture. Higher pathogen load is also associated with more restricted sexual behavior, which may reduce pathogen transmission. It is also associated with higher preferences for health and attractiveness in mates. Higher fertility rates and shorter or less parental care per child are another association that may be a compensation for the higher mortality rate. There is also an association with polygyny, which may be due to higher pathogen load making the selection of males with high genetic resistance increasingly important. Higher pathogen load is also associated with more collectivism and less individualism, which may limit contacts with outside groups and infections. There are alternative explanations for at least some of the associations, although some of these explanations may in turn ultimately be due to pathogen load. Thus, polygyny may also be due to a lower male-to-female ratio in these areas, but this may ultimately be due to male infants having increased mortality from infectious diseases. Another example is that poor socioeconomic factors may ultimately in part be due to high pathogen load preventing economic development.
Fossil record
thumb|150px|right|alt=Skull of dinosaur with long jaws and teeth.|Herrerasaurus skull.
Evidence of infection in fossil remains is a subject of interest for paleopathologists, scientists who study occurrences of injuries and illness in extinct life forms. Signs of infection have been discovered in the bones of carnivorous dinosaurs. When present, however, these infections seem to tend to be confined to only small regions of the body. A skull attributed to the early carnivorous dinosaur Herrerasaurus ischigualastensis exhibits pit-like wounds surrounded by swollen and porous bone. The unusual texture of the bone around the wounds suggests they were afflicted by a short-lived, non-lethal infection. Scientists who studied the skull speculated that the bite marks were received in a fight with another Herrerasaurus. Other carnivorous dinosaurs with documented evidence of infection include Acrocanthosaurus, Allosaurus, Tyrannosaurus and a tyrannosaur from the Kirtland Formation. The infections from both tyrannosaurs were received by being bitten during a fight, like the Herrerasaurus specimen.Molnar, R. E., 2001, "Theropod paleopathology: a literature survey": In: Mesozoic Vertebrate Life, edited by Tanke, D. H., and Carpenter, K., Indiana University Press, p. 337–363.
See also
infectious disease (medical specialty)
Host-pathogen interface
Bioinformatics Resource Centers for Infectious Diseases
Biological contamination
Blood-borne disease
Coinfection
Copenhagen Consensus
Disease diffusion mapping
Foodborne illness
Globalization and disease
Human microbiome project
Infection control
Infectious disease dynamics
Membrane vesicle trafficking
Infectious disease eradication
Infectious disease in the 20th century
List of causes of death by rate
List of diseases caused by insects
List of epidemics
List of bacterial vaginosis microbiota
List of infectious diseases
Multiplicity of infection
Neglected tropical diseases
Nosocomial infection
Spatiotemporal Epidemiological Modeler (STEM)
Spillover infection
Threshold host density
Transmission (medicine)
Tropical disease
Ubi pus, ibi evacua (Latin: "where there is pus, there evacuate it")
Vaccine-preventable diseases
Waterborne diseases
Notes and references
External links
European Center for Disease Prevention and Control
U.S. Centers for Disease Control and Prevention,
Infectious Disease Society of America (IDSA)
Infectious Disease Index of the Public Health Agency of Canada (PHAC)
Vaccine Research Center Information concerning vaccine research clinical trials for Emerging and re-Emerging Infectious Diseases.
Infection Information Resource
Microbes & Infection (Journal)
Knowledge source for Health Care Professionals involved in Wound management www.woundsite.info
Table: Global deaths from communicable diseases, 2010 - Canadian Broadcasting Corp.
Category:Epidemiology | 37,220 | 2017-01 |
Slavs | thumb|275px|Distribution of Slavic-speaking populations in Europe.
Slavs are the largest Indo-European ethno-linguistic group in Europe. They are native to Central Europe, Eastern Europe, Southeastern Europe, Northeastern Europe, North Asia and Central Asia. Slavs speak Slavic languages of the Balto-Slavic language group. From the early 6th century they spread to inhabit most of Central, Eastern and Southeastern Europe.
States with Slavic languages comprise over 50% of the territory of Europe.
Present-day Slavic people are classified into West Slavs (chiefly Poles, Czechs and Slovaks), East Slavs (chiefly Russians, Belarusians, and Ukrainians), and South Slavs (chiefly Serbs, Croats, Bosniaks, Macedonians, Slovenes, and Montenegrins of the Former Yugoslavia as well as Bulgarians). For a more comprehensive list, see the ethnocultural subdivisions. Modern Slavic nations and ethnic groups are considerably diverse both genetically and culturally, and relations between them – even within the individual ethnic groups themselves – are varied, ranging from a sense of connection to mutual feelings of hostility.
Population
thumb|275px|World map of countries with
There are an estimated 360 million Slavs worldwide.
Nation | Numbers
Russians | 130,000,000
Poles | 57,393,000 (see note 1)
Ukrainians | 46,700,000–51,800,000
Serbs | 12,100,000–12,500,000
Czechs | 12,000,000
Bulgarians | 10,000,000 (see note 2)
Belarusians | 10,000,000
Croats | 8,000,000 (see note 3)
Slovaks | 6,940,000 (see note 4)
Bosniaks | 2,800,000
Slovenes | 2,500,000
Macedonians | 2,200,000
Montenegrins | 500,000

Note 1: Including 36,522,000 with single Polish ethnic identity and 871,000 with multiple ethnic identity (especially 431,000 Polish and Silesian, 216,000 Polish and Kashubian, and 224,000 Polish and another identity) in Poland according to the 2011 census, plus an estimated 20,000,000 outside Poland. Świat Polonii, website of the Association Wspólnota Polska: "Polacy za granicą" (Polish people abroad).
Note 2: Kolev, Yordan, Българите извън България 1878–1945 (The Bulgarians outside Bulgaria 1878–1945), 2005, p. 18. Quote: "At the beginning of the 21st century, the total number of ethnic Bulgarians in Bulgaria and abroad is estimated at about 10 million people."
Note 3: Croatian World Congress: "4.5 million Croats and people of Croatian heritage live outside of the Republic of Croatia and Bosnia and Herzegovina."
Note 4: Including 4,353,000 in Slovakia (2011 census); 147,000 with single ethnic identity and 19,000 with multiple ethnic identity (especially 18,000 Czech and Slovak, and 1,000 Slovak and another identity) in the Czech Republic (2011 census); 53,000 in Serbia (2011 census); 762,000 in the USA (2010 census); 2,000 with single ethnic identity and 1,000 with multiple Slovak and Polish identity in Poland (2011 census); and 21,000 with single ethnic identity and 43,000 with multiple ethnic identity in Canada (2006 census).
Ethnonym
The Slavic autonym is reconstructed in Proto-Slavic as *Slověninъ, plural *Slověne. The oldest documents written in Old Church Slavonic and dating from the 9th century attest the autonym as Slověne (Словѣне). The oldest mention of the Slavic ethnonym comes from the 6th-century AD historian Procopius, writing in Byzantine Greek, who used the forms Sklaboi, Sklabēnoi, Sklauenoi, Sthlabenoi, or Sklabinoi, while his contemporary Jordanes refers to the Sclaveni in Latin.
The Slavic autonym *Slověninъ is usually considered a derivation from slovo ("word"), originally denoting "people who speak (the same language)," i.e. people who understand each other, in contrast to the Slavic word denoting German people – němci, meaning "silent, mute people" (from Slavic *němъ – "mute, mumbling").
The word slovo ("word") and the related slava ("glory, fame") and slukh ("hearing") originate from the Proto-Indo-European root *ḱlew- ("be spoken of, glory"), cognate with Ancient Greek κλῆς (klês – "famous"), whence comes the name Pericles, Latin clueo ("be called"), and English loud.
Early history
First mentions
thumb|Slavic peoples in 6th century
thumb|Slavic tribes from the 7th to 9th centuries in Europe
The Slavs under name of the Antes and the Sclaveni make their first appearance in Byzantine records in the early 6th century. Byzantine historiographers under Justinian I (527–565), such as Procopius of Caesarea, Jordanes and Theophylact Simocatta describe tribes of these names emerging from the area of the Carpathian Mountains, the lower Danube and the Black Sea, invading the Danubian provinces of the Eastern Empire.
Procopius wrote in 545 that "the Sclaveni and the Antae actually had a single name in the remote past; for they were both called Sporoi in olden times." He described them as barbarians who lived under democracy and believed in one god, "the maker of lightning" (Perun), to whom they made sacrifice. They lived in scattered housing and constantly changed settlement. Regarding warfare, they were mainly foot soldiers with small shields and battleaxes, lightly clothed, some entering battle naked with only their genitals covered. Their language was "barbarous" (that is, not Greek), and the two tribes did not differ in appearance, being tall and robust, "while their bodies and hair are neither very fair or blond, nor indeed do they incline entirely to the dark type, but they are all slightly ruddy in color. And they live a hard life, giving no heed to bodily comforts..." Jordanes described the Sclaveni as having swamps and forests for their cities. Another 6th-century source refers to them living among nearly impenetrable forests, rivers, lakes, and marshes.
Menander Protector mentions a Daurentius (577–579) that slew an Avar envoy of Khagan Bayan I. The Avars asked the Slavs to accept the suzerainty of the Avars; he however declined and is reported as saying: "Others do not conquer our land, we conquer theirs – so it shall always be for us".
The relationship between the Slavs and a tribe called the Veneti east of the River Vistula in the Roman period is uncertain. The name may refer both to Balts and Slavs.
Migrations
thumb|East Slavic tribes, 8th and 9th centuries
According to eastern homeland theory, prior to becoming known to the Roman world, Slavic-speaking tribes were part of the many multi-ethnic confederacies of Eurasia – such as the Sarmatian, Hun and Gothic empires. The Slavs emerged from obscurity when the westward movement of Germans in the 5th and 6th centuries CE (thought to be in conjunction with the movement of peoples from Siberia and Eastern Europe: Huns, and later Avars and Bulgars) started the great migration of the Slavs, who settled the lands abandoned by Germanic tribes fleeing the Huns and their allies: westward into the country between the Oder and the Elbe-Saale line; southward into Bohemia, Moravia, much of present-day Austria, the Pannonian plain and the Balkans; and northward along the upper Dnieper river. Perhaps some Slavs migrated with the movement of the Vandals to Iberia and north Africa.
Around the 6th century, Slavs appeared on Byzantine borders in great numbers. The Byzantine records note that grass would not regrow in places where the Slavs had marched through, so great were their numbers. After a military movement even the Peloponnese and Asia Minor were reported to have Slavic settlements.Tachiaos, Anthony-Emil N. 2001. Cyril and Methodius of Thessalonica: The Acculturation of the Slavs. Crestwood, NY: St. Vladimir's Seminary Press. This southern movement has traditionally been seen as an invasive expansion. By the end of the 6th century, Slavs had settled the Eastern Alps regions.
Middle Ages
Early Slavic states
thumb|Life of the East Slavs, by Sergey Ivanov
When their migratory movements ended, there appeared among the Slavs the first rudiments of state organizations, each headed by a prince with a treasury and a defense force. This was also the beginning of class differentiation, and nobles pledged allegiance either to the Frankish/Holy Roman Emperors or to the Byzantine Emperors.
In the 7th century, the Frankish merchant Samo, who supported the Slavs fighting their Avar rulers, became the ruler of the first known Slav state in Central Europe, which, however, most probably did not outlive its founder and ruler. This provided the foundation for subsequent Slavic states to arise on the former territory of this realm, with Carantania being the oldest of them. Also very old are the Principality of Nitra and the Moravian principality (see under Great Moravia). In this period, there existed central Slavic groups and states such as the Balaton Principality, but the subsequent expansion of the Magyars, as well as the Germanisation of Austria, separated the northern and southern Slavs. The First Bulgarian Empire was founded in 681, and the Slavic language Old Church Slavonic became the main and official language of the empire in 864. Bulgaria was instrumental in the spread of Slavic literacy and Christianity to the rest of the Slavic world.
Modern history
thumb|Since the 16th century east Slavs settled most of Siberia reaching Kamchatka and the Pacific island of Sakhalin
As of 1878, there were only three free Slavic states in the world: the Russian Empire, Serbia and Montenegro. Bulgaria was also free but was de jure a vassal of the Ottoman Empire until official independence was declared in 1908. In the entire Austro-Hungarian Empire of approximately 50 million people, about 23 million were Slavs. The Slavic peoples, who were for the most part denied a voice in the affairs of Austria-Hungary, were calling for national self-determination. Because of the vastness and diversity of the territory occupied by Slavic people, there were several centers of Slavic consolidation. In the 19th century, Pan-Slavism developed as a movement among intellectuals, scholars, and poets, but it rarely influenced practical politics and did not find support in some Slavic nations. Pan-Slavism became compromised when the Russian Empire started to use it as an ideology justifying its territorial conquests in Central Europe as well as subjugation of other Slavic ethnic groups such as Poles and Ukrainians, and the ideology became associated with Russian imperialism.
During World War I, representatives of the Czechs, Slovaks, Poles, Serbs, Croats, and Slovenes set up organizations in the Allied countries to gain sympathy and recognition. In 1918, after World War I ended, the Slavs established such independent states as Czechoslovakia, the Second Polish Republic, and the State of Slovenes, Croats and Serbs (which merged into Yugoslavia).
During World War II, Nazi Germany planned to kill, deport, or enslave the Slavic and Jewish population of occupied Eastern Europe to create living space for German settlers, and also planned the starvation of 80 million people in the Soviet Union. These partially fulfilled plans resulted in the deaths of an estimated 19.3 million civilians and prisoners of war.
The first half of the 20th century in Russia and the Soviet Union was marked by a succession of wars, famines and other disasters, each accompanied by large-scale population losses.Mark Harrison (2002). "Accounting for War: Soviet Production, Employment, and the Defence Burden, 1940–1945". Cambridge University Press. p. 167. ISBN 0-521-89424-7 Stephen J. Lee estimates that, by the end of World War II in 1945, the Russian population was about 90 million fewer than it could have been otherwise.Stephen J. Lee (2000). "European dictatorships, 1918–1945". Routledge. p.86. ISBN 0-415-23046-2.
The common Slavic experience of communism, combined with the repeated use of the ideology in Soviet propaganda after World War II within the Eastern Bloc (Warsaw Pact), took the form of a forced high-level political and economic hegemony of the USSR dominated by Russians. A notable political union of the 20th century that covered most South Slavs was Yugoslavia, but it ultimately broke apart in the 1990s along with the Soviet Union.
The word "Slavs" was used in the national anthem of the Slovak Republic (1939–1945), Yugoslavia (1943–1992) and the Federal Republic of Yugoslavia (1992–2003), later Serbia and Montenegro (2003–2006).
Former Soviet states, as well as countries that used to be satellite states or territories of the Warsaw Pact, have numerous minority Slavic populations, many of whom are originally from the Russian SFSR, Ukrainian SSR and Byelorussian SSR. Kazakhstan currently has the largest Slavic minority population, most of whom are Russians (Ukrainians, Belarusians and Poles are present as well, but in much smaller numbers).
Pan-Slavism
Pan-Slavism, a movement which came into prominence in the mid-19th century, emphasized the common heritage and unity of all the Slavic peoples. The main focus was in the Balkans where the South Slavs had been ruled for centuries by other empires: the Byzantine Empire, Austria-Hungary, the Ottoman Empire, and Venice. The Russian Empire used Pan-Slavism as a political tool; as did the Soviet Union, which gained political-military influence and control over most Slavic-majority nations between 1945 and 1948 and retained a hegemonic role until the period 1989–1991.
thumb|right|150px|South Slavic languages: Slovene, Croatian, Bosnian, Serbian, Montenegrin, Torlakian (transitional dialect), Macedonian, Bulgarian.
thumb|150px|left|East Slavic languages.
thumb|150px|left|West Slavic languages.
Languages
Proto-Slavic, the supposed ancestor language of all Slavic languages, is a descendant of common Proto-Indo-European, via a Balto-Slavic stage in which it developed numerous lexical and morphophonological isoglosses with the Baltic languages. In the framework of the Kurgan hypothesis, "the Indo-Europeans who remained after the migrations [from the steppe] became speakers of Balto-Slavic". Proto-Slavic is defined as the last stage of the language preceding the geographical split of the historical Slavic languages. That language was uniform and, on the basis of borrowings from foreign languages and Slavic borrowings into other languages, cannot be said to have had any recognizable dialects – this suggests that there was, at one time, a relatively small Proto-Slavic homeland.
Slavic linguistic unity was to some extent still visible in the Old Church Slavonic manuscripts which, though based on the local Slavic speech of Thessaloniki, could still serve as the first common Slavic literary language.J.P. Mallory and D.Q. Adams, The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World (2006), pp. 25–26. Slavic studies began as an almost exclusively linguistic and philological enterprise. As early as 1833, Slavic languages were recognized as Indo-European. Sometimes the West Slavic and East Slavic languages are combined into a single group known as the North Slavic languages.
Standardised Slavic languages that have official status in at least one country are: Belarusian, Bosnian, Bulgarian, Croatian, Czech, Macedonian, Montenegrin, Polish, Russian, Serbian, Slovak, Slovene, and Ukrainian.
The alphabets used for Slavic languages are frequently connected to the dominant religion among the respective ethnic groups. The Orthodox use the Cyrillic alphabet and the Roman Catholics use the Latin alphabet; the Bosniaks, who are Muslims, also use the Latin alphabet. A few Greek Catholic and Roman Catholic groups, however, use the Cyrillic alphabet. The Serbian and Montenegrin languages use both the Cyrillic and Latin alphabets. There is also a Latin script used to write Belarusian, called the Lacinka alphabet.
Religion
The pagan Slavic populations were Christianized between the 7th and 12th centuries. Orthodox Christianity is predominant among the East and South Slavs, while Roman Catholicism is predominant among the West Slavs and the western South Slavs. The religious borders are largely comparable to the East–West Schism, which began in the 11th century.
The majority of contemporary Slavic populations who profess a religion are Orthodox, followed by Catholic, while a small minority are Protestant. There are minor Slavic Muslim groups. Religious delineations by nationality can be very sharp; usually in the Slavic ethnic groups the vast majority of religious people share the same religion. Some Slavs are atheist or agnostic: in the Czech Republic 20% were atheists according to a 2012 poll.
The main Slavic ethnic groups by religion:
Mainly Eastern Orthodoxy:
Russians
Ukrainians (incl. Rusyns)
Serbs
Bulgarians
Belarusians
Macedonians
Montenegrins
Mainly Roman Catholicism:
Poles (incl. Silesians, Kashubians)
Czechs (incl. Moravians)
Croats
Slovaks
Slovenes
Sorbs
Mainly Islam:
Bosniaks
Pomaks
Gorani
Torbeshi
Ethnic groups
Ethnocultural subdivisions
thumb|European countries where a Slavic language is the official one on the entire territory
Slavs are customarily divided along geographical lines into three major subgroups: West Slavs, East Slavs, and South Slavs, each with a different and diverse background based on the unique history, religion and culture of the particular Slavic groups within it. Apart from prehistoric archaeological cultures, the subgroups have had notable cultural contact with non-Slavic Bronze Age and Iron Age civilisations.
The West Slavs have their origin in early Slavic tribes which settled in Central Europe after East Germanic tribes had left this area during the migration period. They are noted as having mixed with Germanic peoples and Balts. The West Slavs came under the influence of the Western Roman Empire (Latin) and of the Roman Catholic Church.
The East Slavs have their origins in early Slavic tribes who mixed with Finno-Ugric peoples and Balts. Their early Slavic component, the Antes, mixed with or absorbed Iranians, and later received influence from the Khazars and Vikings. The East Slavs trace their national origins to the tribal unions of Kievan Rus', beginning in the 10th century. They came particularly under the influence of the Eastern Roman Empire (Byzantine Empire) and of the Eastern Orthodox Church; Eastern Catholic Churches later became established in the 16th century in areas such as Ukraine.
The South Slavs from most of the region have origins in early Slavic tribes who mixed with the local Proto-Balkanic tribes (Illyrian, Dacian, Thracian, Pannonian, Paeonian and Hellenic tribes), Celtic tribes (most notably the Scordisci), as well as with Romans (and the Romanized remnants of the former groups), and also with remnants of temporarily settled invading East Germanic, Asiatic or Caucasian tribes such as Gepids, Huns, Avars and Bulgars. The original inhabitants of present-day Slovenia and continental Croatia have origins in early Slavic tribes who mixed with Romans and Romanized Celtic and Illyrian people as well as with Avars and Germanic peoples (Lombards and East Goths). The South Slavs (except the Slovenes and Croats) came under the cultural sphere of the Eastern Roman Empire (Byzantine Empire), of the Ottoman Empire and of the Eastern Orthodox Church and Islam, while the Slovenes and the Croats were influenced by the Western Roman Empire (Latin), the Holy Roman Empire and, thus, by the Roman Catholic Church.
List of major ethnic groups
Ethnic group – Language family:
Russians – East Slavs
Poles – West Slavs
Ukrainians – East Slavs
Serbs – South Slavs
Czechs – West Slavs
Bulgarians – South Slavs
Belarusians – East Slavs
Croats – South Slavs
Slovaks – West Slavs
Bosniaks – South Slavs
Slovenes – South Slavs
Macedonians – South Slavs
Montenegrins – South Slavs
Silesians – West Slavs
Moravians – West Slavs
Kashubians – West Slavs
Notes
The ethnic classification is disputed. See main article for further information.
Relations with non-Slavic people
Assimilation
300px|thumb|West Slav tribes in 9th/10th century
Throughout their history, Slavs came into contact with non-Slavic groups. In the postulated homeland region (present-day Ukraine), they had contacts with the Iranic Sarmatians and the Germanic Goths. After their subsequent spread, the Slavs began assimilating non-Slavic peoples. For example, in the Balkans, there were Paleo-Balkan peoples, such as Romanized and Hellenized (Jireček Line) Illyrians, Thracians and Dacians, as well as Greeks and Celtic Scordisci. Over time, due to the larger number of Slavs, most descendants of the indigenous populations of the Balkans were Slavicized. The Thracians and Illyrians vanished as defined ethnic groups from the population during this period, although the modern Albanian nation claims descent from the Illyrians. Exceptions are Greece, where the Slavs, being fewer than the Greeks, came to be Hellenized (aided in time by more Greeks returning to Greece in the 9th century and by the role of the church and administration), and Romania, where Slavic people settled en route to present-day Greece, the Republic of Macedonia, Bulgaria and East Thrace but gradually assimilated. Bulgars were also assimilated by local Slavs, but their ruling status and subsequent control of land cast the nominal legacy of the Bulgarian country and people onto all future generations. The Romance speakers within the fortified Dalmatian cities managed to retain their culture and language for a long time; Dalmatian Romance was spoken until the high Middle Ages, but they too were eventually assimilated into the body of Slavs.
In the Western Balkans, South Slavs and Germanic Gepids intermarried with Avar invaders, eventually producing a Slavicized population. In Central Europe, the Slavs intermixed with Germanic and Celtic peoples, while the eastern Slavs encountered Uralic and Scandinavian peoples. Scandinavians (Varangians) and Finnic peoples were involved in the early formation of the Rus' state but were completely Slavicized after a century. Some Finno-Ugric tribes in the north were also absorbed into the expanding Rus population. At the time of the Magyar migration, present-day Hungary was inhabited by Slavs, numbering about 200,000, and by Romano-Dacians, who were either assimilated or enslaved by the Magyars. In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Kipchaks and the Pechenegs, caused a massive migration of East Slavic populations to the safer, heavily forested regions of the north. In the Middle Ages, groups of Saxon ore miners settled in medieval Bosnia, Serbia and Bulgaria, where they were Slavicized.
thumb|250px|left|The Limes Saxoniae forming the border between the Saxons to the west and the Obotrites to the east
Polabian Slavs (Wends) settled in eastern parts of England (the Danelaw), apparently as Danish allies. Polabian-Pomeranian Slavs are also known to have settled in Norse-age Iceland. Saqaliba refers to the Slavic mercenaries and slaves in the medieval Arab world in North Africa, Sicily and Al-Andalus; Saqaliba served as caliphs' guards.Eigeland, Tor. 1976. "The golden caliphate". Saudi Aramco World, September/October 1976, pp. 12–16. In the 12th century, Slavic piracy in the Baltic increased. The Wendish Crusade was started against the Polabian Slavs in 1147, as a part of the Northern Crusades. Niklot, pagan chief of the Slavic Obodrites, began his open resistance when Lothar III, Holy Roman Emperor, invaded Slavic lands. In August 1160 Niklot was killed, and German colonization (Ostsiedlung) of the Elbe-Oder region began. In Hanoverian Wendland, Mecklenburg-Vorpommern and Lusatia, the invaders started Germanization. Early forms of Germanization were described by German monks: Helmold in the manuscript Chronicon Slavorum and Adam of Bremen in Gesta Hammaburgensis ecclesiae pontificum. The Polabian language survived until the beginning of the 19th century in what is now the German state of Lower Saxony. In Eastern Germany, around 20% of Germans have historic Slavic paternal ancestry, as revealed in Y-DNA testing. Similarly, in Germany, around 20% of foreign surnames are of Slavic origin.
Cossacks, although Slavic-speaking and practising Orthodox Christians, came from a mix of ethnic backgrounds, including Tatars and other Turkic peoples. Many early members of the Terek Cossacks were Ossetians.
The Gorals of southern Poland and northern Slovakia are partially descended from Romance-speaking Vlachs, who migrated into the region from the 14th to 17th centuries and were absorbed into the local population. The population of Moravian Wallachia also descends from this population.
Conversely, some Slavs were assimilated into other populations. Although the majority continued south, attracted by the riches of the territory which would become Bulgaria, a few remained in the Carpathian basin. There they were ultimately assimilated into the Magyar or Romanian peoples. Numerous river and other placenames in Romania are of Slavic origin.Alexandru Xenopol, Istoria românilor din Dacia Traiană, 1888, vol. I, p. 540
See also
Ethnic groups in Europe
Gord (archaeology)
Lech, Čech, and Rus
List of modern ethnic groups
Panethnicity
List of Slavic tribes
Pan-Slavic colors
Slavic names
References
Sources
Curta, Florin. "The early Slavs in Bohemia and Moravia: a response to my critics". http://www.academia.edu/229543/The_early_Slavs_in_Bohemia_and_Moravia_a_response_to_my_critics
Lacey, Robert. 2003. Great Tales from English History. Little, Brown and Company. New York. 2004. ISBN 0-316-10910-X.
Lewis, Bernard. Race and Slavery in the Middle East. Oxford Univ. Press.
Nystazopoulou-Pelekidou, Maria. 1992. The "Macedonian Question": A Historical Review. © Association Internationale d'Etudes du Sud-Est Europeen (AIESEE, International Association of Southeast European Studies), Comité Grec. Corfu: Ionian University. (English translation of a 1988 work written in Greek.)
Rębała, Krzysztof, et al. 2007. "Y-STR variation among Slavs: evidence for the Slavic homeland in the middle Dnieper basin". Journal of Human Genetics, May 2007, 52(5): 408–414.
External links
Mitochondrial DNA Phylogeny in Eastern and Western Slavs, B. Malyarchuk, T. Grzybowski, M. Derenko, M. Perkova, T. Vanecek, J. Lazur, P. Gomolcak and I. Tsybovsky, Oxford Journals
Category:Ethnic groups in Europe
Category:Ethnic groups in Asia
Friedrich Hayek | Friedrich Hayek CH (8 May 1899 – 23 March 1992), born in Austria-Hungary as Friedrich August von Hayek and frequently referred to as F. A. Hayek, was an Austrian and British economist and philosopher best known for his defense of classical liberalism. Hayek shared the 1974 Nobel Memorial Prize in Economic Sciences with Gunnar Myrdal for his "pioneering work in the theory of money and economic fluctuations and ... penetrating analysis of the interdependence of economic, social and institutional phenomena."
Hayek was a major social theorist and political philosopher of the twentieth century, and his account of how changing prices communicate information which enables individuals to co-ordinate their plans is widely regarded as an important achievement in economics, leading to his Nobel Prize.
Hayek served in World War I and said that his experience in the war and his desire to help avoid the mistakes that had led to the war drew him to his career. Hayek lived in Austria, Great Britain, the United States, and Germany and became a British subject in 1938. He spent most of his academic life at the London School of Economics (LSE), the University of Chicago, and the University of Freiburg.
In 1984, he was appointed a member of the Order of the Companions of Honour by Queen Elizabeth II on the advice of Prime Minister Margaret Thatcher for his "services to the study of economics". He was the first recipient of the Hanns Martin Schleyer Prize in 1984.http://www.schleyer-stiftung.de/preise/hms_preis/preise_schleyer_preistraeger_e.html He also received the US Presidential Medal of Freedom in 1991 from President George H. W. Bush. In 2011, his article "The Use of Knowledge in Society" was selected as one of the top 20 articles published in The American Economic Review during its first 100 years.
Life
A Timeline of HayekButler, Eamonn (2012). Friedrich Hayek: The Ideas and Influence of the Libertarian Economist, Introduction
1899: F. A. Hayek born in Vienna.
1917: Hayek joins the Austro-Hungarian Army.
1921: Hayek earns a doctorate in law from the University of Vienna.
1921: Ludwig von Mises hires Hayek in an office dealing with finance issues.
1923: Hayek earns another doctorate in political science.
1927: Mises and Hayek found the Austrian Institute for Business Cycle Research.
1928: Hayek first meets John Maynard Keynes at a conference in London.
1931: Hayek moves to the London School of Economics at the invitation of Lionel Robbins.
1931-2: Hayek becomes a critic of Keynes, writing critical reviews of his books and exchanging letters in The Times on the merits of government spending versus private investment.
1936: Keynes publishes The General Theory of Employment, Interest and Money.
1936: At the London Economic Club, Hayek gives a talk on the key role of information in economics.
1938: Hayek becomes a British citizen.
1944: Hayek publishes The Road to Serfdom.
1945-6: Hayek lectures across the United States and becomes Visiting Professor at Stanford University.
1947: Hayek founds the Mont Pelerin Society, aiming to keep liberty alive in a postwar world.
1952: Hayek publishes The Counter-Revolution of Science and The Sensory Order.
1956: Antony Fisher founds the free-market Institute of Economic Affairs, having been inspired by Hayek.
1960: Publication of The Constitution of Liberty.
1962: Hayek moves to the University of Freiburg, West Germany. His ideas on unplanned orders and other subjects are published in Studies in Philosophy, Politics and Economics (1967). He begins work on Law, Legislation and Liberty.
1972: As prices soar in Europe and the US, Hayek publishes a passionate critique of inflation and the Keynesian policies that cause it in A Tiger by the Tail. He goes on to propose solutions in Choice in Currency (1976) and The Denationalisation of Money (1976).
1973: Death of Mises
1974: Hayek is awarded the Nobel Memorial Prize.
1975: Through an introduction by the Institute of Economic Affairs, the British Conservative leader Margaret Thatcher meets Hayek for the first time, and is greatly impressed.
1988: Publication of The Fatal Conceit: The Errors of Socialism.
1991: Hayek is awarded the US Presidential Medal of Freedom.
1992: Hayek dies in Freiburg.
Early life
thumb|250px|An ethno-linguistic map of Austria–Hungary, 1910
Friedrich August von Hayek was born in Vienna to August von Hayek and Felicitas Hayek (née von Juraschek). Friedrich's father, from whom he received his middle name, was also born in Vienna in 1871. He was a medical doctor employed by the municipal ministry of health, with a passion for botany, on which he wrote a number of monographs. August von Hayek was also a part-time botany lecturer at the University of Vienna. Friedrich's mother was born in 1875 to a wealthy, conservative, land-owning family. As her mother died several years prior to Friedrich's birth, Felicitas gained a significant inheritance, which provided as much as half of her and August's income during the early years of their marriage. Hayek was the oldest of three brothers; the younger two, Heinrich (1900–69) and Erich (1904–86), were one-and-a-half and five years younger than he was.
His father's career as a university professor influenced Friedrich's goals later in life. Both of his grandfathers, who lived long enough for Friedrich to know them, were scholars. Franz von Juraschek was a leading economist in Austria-Hungary and a close friend of Eugen Böhm von Bawerk, one of the founders of the Austrian School of Economics. Von Juraschek was a statistician and was later employed by the Austrian government. Friedrich's paternal grandfather, Gustav Edler von Hayek, taught natural sciences at the Imperial Realobergymnasium (secondary school) in Vienna. He wrote systematic works in biology, some of which are relatively well known.
On his mother's side, Hayek was second cousin to the philosopher Ludwig Wittgenstein. His mother often played with Wittgenstein's sisters, and had known Ludwig well. As a result of their family relationship, Hayek became one of the first to read Wittgenstein's Tractatus Logico-Philosophicus when the book was published in its original German edition in 1921. Although Hayek met Wittgenstein on only a few occasions, Hayek said that Wittgenstein's philosophy and methods of analysis had a profound influence on his own life and thought.Ebenstein, p. 245 In his later years, Hayek recalled a discussion of philosophy with Wittgenstein, when both were officers during World War I.Hayek on Hayek: an autobiographical dialogue, By Friedrich August Hayek, Routledge, 1994, p. 51 After Wittgenstein's death, Hayek had intended to write a biography of Wittgenstein and worked on collecting family materials; and he later assisted biographers of Wittgenstein.Young Ludwig: Wittgenstein's life, 1889–1921, Brian McGuinness, Oxford University Press, 2005 p. xii
Hayek displayed an intellectual and academic bent from a very young age. He read fluently and frequently before going to school. At his father's suggestion, Hayek, as a teenager, read the genetic and evolutionary works of Hugo de Vries and the philosophical works of Ludwig Feuerbach. In school Hayek was much taken by one instructor's lectures on Aristotle's ethics. In his unpublished autobiographical notes, Hayek recalled a division between himself and his younger brothers, who were only a few years younger than he was but whom he believed to be somehow of a different generation. He preferred to associate with adults.
thumb|right|Austro-Hungarian artillery unit appearing in The Illustrated London News in 1914
In 1917, Hayek joined an artillery regiment in the Austro-Hungarian Army and fought on the Italian front. Much of Hayek's combat experience was spent as a spotter in an aeroplane. Hayek suffered damage to his hearing in his left ear during the war,https://mises.org/daily/3458 and was decorated for bravery. During this time Hayek also survived the 1918 flu pandemic.Adam James Tebble, F.A. Hayek (Continuum, 2010), p. 2, ISBN 978-0826435996
Hayek then decided to pursue an academic career, determined to help avoid the mistakes that had led to the war. Hayek said of his experience, "The decisive influence was really World War I. It's bound to draw your attention to the problems of political organization." He vowed to work for a better world.
Education and career
thumb|240px|right|University of Vienna, main building, seen from across the Ringstraße
At the University of Vienna, Hayek earned doctorates in law and political science in 1921 and 1923 respectively; and he also studied philosophy, psychology, and economics. For a short time, when the University of Vienna closed, Hayek studied in Constantin von Monakow's Institute of Brain Anatomy, where Hayek spent much of his time staining brain cells. Hayek's time in Monakow's lab, and his deep interest in the work of Ernst Mach, inspired Hayek's first intellectual project, eventually published as The Sensory Order (1952). It located connective learning at the physical and neurological levels, rejecting the "sense data" associationism of the empiricists and logical positivists.The Sensory Order (1952) on learning
Hayek presented his work to the private seminar he had created with Herbert Furth called the Geistkreis."The Viennese Connection: Alfred Schutz and the Austrian School" by Peter Kurrild-Klitgaard.
During Hayek's years at the University of Vienna, Carl Menger's work on the explanatory strategy of social science and Friedrich von Wieser's commanding presence in the classroom left a lasting influence on him. Upon the completion of his examinations, Hayek was hired by Ludwig von Mises on the recommendation of Wieser as a specialist for the Austrian government working on the legal and economic details of the Treaty of Saint Germain. Between 1923 and 1924 Hayek worked as a research assistant to Prof. Jeremiah Jenks of New York University, compiling macroeconomic data on the American economy and the operations of the US Federal Reserve.A. J. Tebble, F.A. Hayek, Continuum International Publishing Group, 2010, pp. 4–5
Initially sympathetic to Wieser's democratic socialism, Hayek shifted away from socialism and toward the classical liberalism of Carl Menger after reading von Mises' book Socialism. It was sometime after reading Socialism that Hayek began attending von Mises' private seminars, joining several of his university friends, including Fritz Machlup, Alfred Schutz, Felix Kaufmann, and Gottfried Haberler, who were also participating in Hayek's own, more general, private seminar. It was during this time that he also encountered and befriended noted political philosopher Eric Voegelin, with whom he retained a long-standing relationship.Federici, Michael. Eric Voegelin: The Restoration of Order, ISI Books, 2002, p. 1
thumb|right|LSE's Old Building
With the help of Mises, in the late 1920s Hayek founded and served as director of the Austrian Institute for Business Cycle Research, before joining the faculty of the London School of Economics (LSE) in 1931 at the behest of Lionel Robbins. Upon his arrival in London, Hayek was quickly recognised as one of the leading economic theorists in the world, and his development of the economics of processes in time and the co-ordination function of prices inspired the ground-breaking work of John Hicks, Abba Lerner, and many others in the development of modern microeconomics.
In 1932, Hayek suggested that private investment in the public markets was a better road to wealth and economic co-ordination in Britain than government spending programs, as argued in a letter he co-signed with Lionel Robbins and others in an exchange of letters with John Maynard Keynes in The Times.http://thinkmarkets.files.wordpress.com/2010/06/keynes-hayek-1932-cambridgelse.pdfMalcolm Perrine McNair, Richard Stockton Meriam, Problems in business economics, McGraw-Hill, 1941, p. 504 The nearly decade-long deflationary depression in Britain, dating from Churchill's decision in 1925 to return Britain to the gold standard at the old pre-war, pre-inflationary par, was the public-policy backdrop for Hayek's single public engagement with Keynes over British monetary and fiscal policy; otherwise Hayek and Keynes agreed on many theoretical matters, and their economic disagreements were fundamentally theoretical, having to do almost exclusively with the relation of the economics of extending the length of production to the economics of labour inputs.
Economists who studied with Hayek at the LSE in the 1930s and the 1940s include Arthur Lewis, Ronald Coase, John Kenneth Galbraith, Abba Lerner, Nicholas Kaldor, George Shackle, Thomas Balogh, Vera Smith, L. K. Jha, Arthur Seldon, Paul Rosenstein-Rodan, and Oskar Lange. Hayek also taught or tutored many other LSE students, including David Rockefeller.Interview with David Rockefeller
Unwilling to return to Austria after the Anschluss brought it under the control of Nazi Germany in 1938, Hayek remained in Britain. Hayek and his children became British subjects in 1938. He held this status for the remainder of his life, but he did not live in Great Britain after 1950. He lived in the United States from 1950 to 1962 and then mostly in Germany but also briefly in Austria.
The Road to Serfdom
Hayek was concerned about the general view in Britain's academia that fascism was a capitalist reaction to socialism and The Road to Serfdom arose from those concerns. It was written between 1940 and 1943. The title was inspired by the French classical liberal thinker Alexis de Tocqueville's writings on the "road to servitude."Ebenstein, p. 116. It was first published in Britain by Routledge in March 1944 and was quite popular, leading Hayek to call it "that unobtainable book," also due in part to wartime paper rationing.Ebenstein, p. 128. When it was published in the United States by the University of Chicago in September of that year, it achieved greater popularity than in Britain.A. J. Tebble, F.A. Hayek, Continuum International Publishing Group, 2010, p. 8 At the arrangement of editor Max Eastman (an ardent socialist), the American magazine Reader's Digest also published an abridged version in April 1945, enabling The Road to Serfdom to reach a far wider audience than academics. The book is widely popular among those advocating individualism and classical liberalism.
Chicago
In 1950, Hayek left the London School of Economics for the University of Chicago, where he became a professor in the Committee on Social Thought. Hayek's salary was funded not by the university, but by an outside foundation. University of Chicago President Robert Hutchins was in the midst of a battle with the U. of Chicago faculty over departmental autonomy and control, and Hayek got caught in the middle of it. Hutchins had been attempting to force all departments to adopt the neo-Thomist Great Books program of Mortimer Adler, and the U. of Chicago economists were tired of Hutchins' meddling. As a result, the economics department rejected Hutchins' pressure to hire Hayek, and Hayek became part of the new Committee on Social Thought.
Hayek had made contact with many at the U. of Chicago in the 1940s, with Hayek's The Road to Serfdom playing a seminal role in transforming how Milton Friedman and others understood how society works.Milton and Rose Friedman, Two Lucky People: Memoirs (Chicago: U. of Chicago Press, 1998) Hayek conducted a number of influential faculty seminars while at the U. of Chicago, and a number of academics worked on research projects sympathetic to some of Hayek's own, such as Aaron Director, who was active in the Chicago School in helping to fund and establish what became the "Law and Society" program in the University of Chicago Law School. Hayek, Frank Knight, Friedman and George Stigler worked together in forming the Mont Pèlerin Society, an international forum for libertarian economists. Hayek and Friedman cooperated in support of the Intercollegiate Society of Individualists, later renamed the Intercollegiate Studies Institute, an American student organisation devoted to libertarian ideas.Johan Van Overtveldt, The Chicago School: How the University of Chicago Assembled the Thinkers Who Revolutionized Economics and Business(2006) pp. 7, 341–46
thumb|left|University of Chicago from the Midway Plaisance
Hayek's first class at Chicago was a faculty seminar on the philosophy of science attended by many of the University's most notable scientists of the time, including Enrico Fermi, Sewall Wright and Leó Szilárd. During his time at Chicago, Hayek worked on the philosophy of science, economics, political philosophy, and the history of ideas. Hayek's economics notes from this period have yet to be published. Hayek received a Guggenheim Fellowship in 1954.Biography at LibertyStory.net
After editing a book on John Stuart Mill's letters he planned to publish two books on the liberal order, The Constitution of Liberty and "The Creative Powers of a Free Civilization" (eventually the title for the second chapter of The Constitution of Liberty).Ebenstein, p. 195. He completed The Constitution of Liberty in May 1959, with publication in February 1960. Hayek was concerned "with that condition of men in which coercion of some by others is reduced as much as is possible in society".F. A. Hayek, The Constitution of Liberty (London: Routledge & Kegan Paul, 1960), p. 11. Hayek was disappointed that the book did not receive the same enthusiastic general reception as The Road to Serfdom had sixteen years before.Ebenstein, p. 203.
Freiburg, Los Angeles, and Salzburg
thumb|right|260px|Freiburg around 1900
From 1962 until his retirement in 1968, he was a professor at the University of Freiburg, West Germany, where he began work on his next book, Law, Legislation and Liberty. Hayek regarded his years at Freiburg as "very fruitful".Ebenstein, p. 218. Following his retirement, Hayek spent a year as a visiting professor of philosophy at the University of California, Los Angeles, where he continued work on Law, Legislation and Liberty, teaching a graduate seminar by the same name and another on the philosophy of social science. Primary drafts of the book were completed by 1970, but Hayek chose to rework his drafts and finally brought the book to publication in three volumes in 1973, 1976 and 1979.
thumb|right|260px|University of Salzburg (below, foreground) since the mid 1980s as seen from city center
He became a professor at the University of Salzburg from 1969 to 1977; he then returned to Freiburg, where he spent the rest of his days. When Hayek left Salzburg in 1977, he wrote, "I made a mistake in moving to Salzburg." The economics department was small, and the library facilities were inadequate.Ebenstein, p. 254.
Nobel laureate
On 9 October 1974, it was announced that Hayek would be awarded the Nobel Memorial Prize in Economics, along with Swedish economist Gunnar Myrdal. The reasons for the two of them winning the prize are described in the Nobel committee's press release. He was surprised at being given the award and believed that he was given it with Myrdal to balance the award with someone from the opposite side of the political spectrum.Ebenstein, p. 263.
During the Nobel ceremony in December 1974, Hayek met the Russian dissident Aleksandr Solzhenitsyn. Hayek later sent him a Russian translation of The Road to Serfdom. Although he spoke with apprehension at his award speech about the danger which the authority of the prize would lend to an economist, the prize brought much greater public awareness of Hayek and has been described by his biographer as "the great rejuvenating event in his life".Ebenstein, p. 261.
United Kingdom politics
In February 1975, Margaret Thatcher was elected leader of the British Conservative Party. The Institute of Economic Affairs arranged a meeting between Hayek and Thatcher in London soon after.Richard Cockett, Thinking the Unthinkable. Think-Tanks and the Economic Counter-Revolution, 1931–1983 (Fontana, 1995), pp. 174–6. During Thatcher's only visit to the Conservative Research Department in the summer of 1975, a speaker had prepared a paper on why the "middle way" was the pragmatic path the Conservative Party should take, avoiding the extremes of left and right. Before he had finished, Thatcher "reached into her briefcase and took out a book. It was Hayek's The Constitution of Liberty. Interrupting our pragmatist, she held the book up for all of us to see. 'This', she said sternly, 'is what we believe', and banged Hayek down on the table".John Ranelagh, Thatcher's People: An Insider's Account of the Politics, the Power, and the Personalities (Fontana, 1992), p. ix.
In 1977, Hayek was critical of the Lib-Lab pact, in which the British Liberal Party agreed to keep the British Labour government in office. Writing to The Times, Hayek said, "May one who has devoted a large part of his life to the study of the history and the principles of liberalism point out that a party that keeps a socialist government in power has lost all title to the name 'Liberal'. Certainly no liberal can in future vote 'Liberal'"."Letters to the Editor: Liberal pact with Labour", The Times (31 March 1977), p. 15. Hayek was criticised by Liberal politicians Gladwyn Jebb and Andrew Phillips, who both claimed that the purpose of the pact was to discourage socialist legislation.
Lord Gladwyn pointed out that the German Free Democrats were in coalition with the German Social Democrats."Letters to the Editor: Liberal pact with Labour", The Times (2 April 1977), p. 15. Hayek was defended by Professor Antony Flew who stated that the German Social Democrats, unlike the British Labour Party, had, since the late 1950s, abandoned public ownership of the means of production, distribution and exchange and had instead embraced the social market economy."Letters to the Editor: German socialist aims", The Times (13 April 1977), p. 13.
In 1978, Hayek came into conflict with the Liberal Party leader, David Steel, who claimed that liberty was possible only with "social justice and an equitable distribution of wealth and power, which in turn require a degree of active government intervention" and that the Conservative Party were more concerned with the connection between liberty and private enterprise than between liberty and democracy. Hayek claimed that a limited democracy might be better than other forms of limited government at protecting liberty but that an unlimited democracy was worse than other forms of unlimited government because "its government loses the power even to do what it thinks right if any group on which its majority depends thinks otherwise".
Hayek stated that if the Conservative leader had said "that free choice is to be exercised more in the market place than in the ballot box, she has merely uttered the truism that the first is indispensable for individual freedom while the second is not: free choice can at least exist under a dictatorship that can limit itself but not under the government of an unlimited democracy which cannot"."Letters to the Editor: The dangers to personal liberty", The Times (11 July 1978), p. 15.
Influence on central European politics
US President Ronald Reagan listed Hayek as among the two or three people who most influenced his philosophy and welcomed Hayek to the White House as a special guest.Martin Anderson, "Revolution" (Harcourt Brace Jovanovich, 1988), p. 164 In the 1970s and 1980s, the writings of Hayek were also a major influence on many of the leaders of the "velvet" revolution in Central Europe during the collapse of the old Soviet Empire. Here are some supporting examples:
There is no figure who had more of an influence, no person had more of an influence on the intellectuals behind the Iron Curtain than Friedrich Hayek. His books were translated and published by the underground and black market editions, read widely, and undoubtedly influenced the climate of opinion that ultimately brought about the collapse of the Soviet Union.
—Milton Friedman (Hoover Institution)
The most interesting among the courageous dissenters of the 1980s were the classical liberals, disciples of F. A. Hayek, from whom they had learned about the crucial importance of economic freedom and about the often-ignored conceptual difference between liberalism and democracy.Andrzej Walicki, "Liberalism in Poland", Critical Review, Winter, 1988, p. 9.
—Andrzej Walicki (History, Notre Dame)
Estonian Prime Minister Mart Laar came to my office the other day to recount his country's remarkable transformation. He described a nation of people who are harder-working, more virtuous – yes, more virtuous, because the market punishes immorality – and more hopeful about the future than they've ever been in their history. I asked Mr. Laar where his government got the idea for these reforms. Do you know what he replied? He said, "We read Milton Friedman and F. A. Hayek."Dick Armey, "Address at the Dedication of the Hayek Auditorium", Cato Institute, Washington, D.C., 9 May 1995.
—US Representative Dick Armey
I was 25 years old and pursuing my doctorate in economics when I was allowed to spend six months of post-graduate studies in Naples, Italy. I read the Western economic textbooks and also the more general work of people like Hayek. By the time I returned to Czechoslovakia, I had an understanding of the principles of the market. In 1968, I was glad at the political liberalism of the Dubcek Prague Spring, but was very critical of the Third Way they pursued in economics.Vaclav Klaus, "No Third Way Out: Creating a Capitalist Czechoslovakia", Reason, 1990, (June): 28–31.
—Václav Klaus (former President of the Czech Republic)
Recognition
In 1980, Hayek, a non-practising Roman Catholic, was one of twelve Nobel laureates to meet with Pope John Paul II, "to dialogue, discuss views in their fields, communicate regarding the relationship between Catholicism and science, and 'bring to the Pontiff's attention the problems which the Nobel Prize Winners, in their respective fields of study, consider to be the most urgent for contemporary man'".Ebenstein, p. 301.
In 1984, he was appointed as a member of the Order of the Companions of Honour (CH) by Queen Elizabeth II of the United Kingdom on the advice of the British Prime Minister Margaret Thatcher for his "services to the study of economics".Alan O. Ebenstein. (2003) Friedrich Hayek: A biography. p. 305. University of Chicago Press, 2003 Hayek had hoped to receive a baronetcy, and after he was awarded the CH he sent a letter to his friends requesting that he thenceforth be called by the English version of Friedrich (Frederick). After his 20-minute audience with the Queen, he was "absolutely besotted" with her, according to his daughter-in-law, Esca Hayek. Hayek said a year later that he was "amazed by her. That ease and skill, as if she'd known me all my life." The audience with the Queen was followed by a dinner with family and friends at the Institute of Economic Affairs. When, later that evening, Hayek was dropped off at the Reform Club, he commented: "I've just had the happiest day of my life."Ebenstein, p. 305.
In 1991, US President George H. W. Bush awarded Hayek the Presidential Medal of Freedom, one of the two highest civilian awards in the United States, for a "lifetime of looking beyond the horizon". Hayek died on 23 March 1992 in Freiburg, Germany, and was buried on 4 April in the Neustift am Walde cemetery in the northern outskirts of Vienna according to the Catholic rite. In 2011, his article The Use of Knowledge in Society was selected as one of the top 20 articles published in the American Economic Review during its first 100 years.
The New York University Journal of Law and Liberty holds an annual lecture in his honour.New York University Journal of Law and Liberty: About
Work
The business cycle
Hayek's principal investigations in economics concerned capital, money, and the business cycle. Mises had earlier applied the concept of marginal utility to the value of money in his Theory of Money and Credit (1912), in which he also proposed an explanation for "industrial fluctuations" based on the ideas of the old British Currency School and of Swedish economist Knut Wicksell. Hayek used this body of work as a starting point for his own interpretation of the business cycle, elaborating what later became known as the "Austrian Theory of the Business Cycle". Hayek spelled out the Austrian approach in more detail in his book, published in 1929, an English translation of which appeared in 1933 as Monetary Theory and the Trade Cycle. There he argued for a monetary approach to the origins of the cycle. In his Prices and Production (1931), Hayek argued that the business cycle resulted from the central bank's inflationary credit expansion and its transmission over time, leading to a capital misallocation caused by the artificially low interest rates. Hayek claimed that "the past instability of the market economy is the consequence of the exclusion of the most important regulator of the market mechanism, money, from itself being regulated by the market process".
Hayek's analysis was based on Böhm-Bawerk's concept of the "average period of production"See the chapter "The collaboration with Keynes and the controversy with Hayek,", Heinz D. Kurz and Neri Salvadori, "Piero Sraffa's contributions to economics," in Critical Essays on Piero Sraffa's Legacy in Economics, ed. H. D. Kurz, (Cambridge: Cambridge University Press, 2000), pp. 3–24. ISBN 978-0521580892 and on the effects that monetary policy could have upon it. In accordance with the reasoning later outlined in his essay The Use of Knowledge in Society (1945), Hayek argued that a monopolistic governmental agency like a central bank can neither possess the relevant information which should govern supply of money, nor have the ability to use it correctly.
In 1929, Lionel Robbins assumed the helm of the London School of Economics (LSE). Eager to promote alternatives to what he regarded as the narrow approach of the school of economic thought that then dominated the English-speaking academic world (centred at the University of Cambridge and deriving largely from the work of Alfred Marshall), Robbins invited Hayek to join the faculty at LSE, which he did in 1931. According to Nicholas Kaldor, Hayek's theory of the time-structure of capital and of the business cycle initially "fascinated the academic world" and appeared to offer a less "facile and superficial" understanding of macroeconomics than the Cambridge school's.
Also in 1931, Hayek critiqued Keynes's Treatise on Money (1930) in his "Reflections on the pure theory of Mr. J. M. Keynes"F. A. Hayek, "Reflection on the pure theory of money of Mr. J. M. Keynes," Economica, 11, S. 270–95 (1931). and published his lectures at the LSE in book form as Prices and Production.F. A. Hayek, Prices and Production, (London: Routledge, 1931). Unemployment and idle resources are, for Keynes, caused by a lack of effective demand; for Hayek, they stem from a previous, unsustainable episode of easy money and artificially low interest rates. Keynes asked his friend Piero Sraffa to respond. Sraffa elaborated on the effect of inflation-induced "forced savings" on the capital sector and about the definition of a "natural" interest rate in a growing economy. (Sraffa–Hayek debate)P. Sraffa, "Dr. Hayek on Money and Capital," Economic Journal, 42, S. 42–53 (1932). Others who responded negatively to Hayek's work on the business cycle included John Hicks, Frank Knight, and Gunnar Myrdal.Bruce Caldwell, Hayek's Challenge: An Intellectual Biography of F. A. Hayek (Chicago: University of Chicago Press, 2004), p. 179. ISBN 0226091937 Kaldor later wrote that Hayek's Prices and Production had produced "a remarkable crop of critics" and that the total number of pages in British and American journals dedicated to the resulting debate "could rarely have been equalled in the economic controversies of the past."
Hayek continued his research on monetary and capital theory, revising his theories of the relations between credit cycles and capital structure in Profits, Interest and Investment (1939) and The Pure Theory of Capital (1941), but his reputation as an economic theorist had by then fallen so much that those works were largely ignored, except for scathing critiques by Nicholas Kaldor. Lionel Robbins himself, who had embraced the Austrian theory of the business cycle in The Great Depression (1934), later regretted having written the book and accepted many of the Keynesian counter-arguments.R. W. Garrison, "F. A. Hayek as 'Mr. Fluctooations:' In Defense of Hayek's 'Technical Economics'", Hayek Society Journal (LSE), 5(2), 1 (2003).
Hayek never produced the book-length treatment of "the dynamics of capital" that he had promised in the Pure Theory of Capital. After 1941, he continued to publish works on the economics of information, political philosophy, the theory of law, and psychology, but seldom on macroeconomics. At the University of Chicago, Hayek was not part of the economics department and did not influence the rebirth of neoclassical theory which took place there (see Chicago school of economics). When, in 1974, he shared the Nobel Memorial Prize in Economics with Gunnar Myrdal, the latter complained about being paired with an "ideologue". Milton Friedman declared himself "an enormous admirer of Hayek, but not for his economics. I think Prices and Production is a very flawed book. I think his [Pure Theory of Capital] is unreadable. On the other hand, The Road to Serfdom is one of the great books of our time."
The economic calculation problem
Building on the earlier work of Ludwig von Mises and others, Hayek also argued that while in centrally planned economies an individual or a select group of individuals must determine the distribution of resources, these planners will never have enough information to carry out this allocation reliably. This argument, first proposed by Max Weber, says that the efficient exchange and use of resources can be maintained only through the price mechanism in free markets (see economic calculation problem).
In 1935, Hayek published Collectivist Economic Planning, a collection of essays from an earlier debate that had been initiated by Ludwig von Mises. Hayek included Mises's essay, in which Mises argued that rational planning was impossible under socialism.
Some socialists, such as H. D. Dickinson and Oskar Lange, responded by invoking general equilibrium theory, which they argued disproved Mises's thesis. They noted that the difference between a planned and a free-market system lay in who was responsible for solving the equations. They argued that if some of the prices chosen by socialist managers were wrong, gluts or shortages would appear, signalling them to adjust the prices up or down, just as in a free market. Through such trial and error, a socialist economy could mimic the efficiency of a free-market system while avoiding its many problems.
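The trial-and-error adjustment that Lange and Dickinson described can be pictured with a minimal, purely illustrative sketch in Python; the goods, demand and supply schedules, and step size below are invented assumptions, not anything proposed by Lange, Dickinson, or Hayek. A hypothetical planning board raises a price when it observes a shortage and lowers it when it observes a glut.

# Hypothetical sketch of Lange-style trial-and-error price adjustment.
# All curves, goods, and parameters are illustrative assumptions.

def excess_demand(price, demand_intercept, demand_slope, supply_slope):
    """Linear demand minus linear supply at the current price."""
    demanded = demand_intercept - demand_slope * price
    supplied = supply_slope * price
    return demanded - supplied

def adjust_prices(goods, step=0.1, rounds=200):
    """Raise a price when there is a shortage, lower it when there is a glut."""
    prices = {name: 1.0 for name in goods}
    for _ in range(rounds):
        for name, (intercept, d_slope, s_slope) in goods.items():
            gap = excess_demand(prices[name], intercept, d_slope, s_slope)
            prices[name] = max(0.0, prices[name] + step * gap)
    return prices

# Two hypothetical goods: (demand intercept, demand slope, supply slope).
goods = {"bread": (10.0, 1.0, 1.0), "steel": (20.0, 2.0, 3.0)}
print(adjust_prices(goods))  # quoted prices drift toward market-clearing levels

The sketch assumes the board already knows each good's demand and supply conditions; Hayek's rejoinder, set out below, was precisely that this knowledge is dispersed among individuals and unavailable to any central authority.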
Hayek challenged this vision in a series of contributions. In "Economics and Knowledge" (1937), he pointed out that the standard equilibrium theory assumed that all agents have full and correct information. In the real world, however, different individuals have different bits of knowledge, and furthermore, some of what they believe is wrong.
In The Use of Knowledge in Society (1945), Hayek argued that the price mechanism serves to share and synchronise local and personal knowledge, allowing society's members to achieve diverse, complicated ends through a principle of spontaneous self-organization. He contrasted the use of the price mechanism with central planning, arguing that the former allows for more rapid adaptation to changes in particular circumstances of time and place.Hein Schreuder, "Coase, Hayek and Hierarchy", In: S. Lindenberg et Hein Schreuder, dir., Interdisciplinary Perspectives on Organization Studies, Pergamon Press Thus, he set the stage for Oliver Williamson's later contrast between markets and hierarchies as alternative co-ordination mechanisms for economic transactions.Douma, Sytse and Hein Schreuder, 2013. "Economic Approaches to Organizations". 5th edition. London: Pearson [1] ISBN 0273735292 • ISBN 978-0273735298 He used the term catallaxy to describe a "self-organizing system of voluntary co-operation". Hayek's research into this argument was specifically cited by the Nobel Committee in its press release awarding Hayek the Nobel prize.
Against collectivism
180px|thumb|right|Individualism and Economic Order, 1948.
Hayek was one of the leading academic critics of collectivism in the 20th century. Hayek argued that all forms of collectivism (even those theoretically based on voluntary co-operation) could only be maintained by a central authority of some kind. In Hayek's view, the central role of the state should be to maintain the rule of law, with as little arbitrary intervention as possible. In his popular book, The Road to Serfdom (1944) and in subsequent academic works, Hayek argued that socialism required central economic planning and that such planning in turn leads towards totalitarianism.
From The Road to Serfdom:
Hayek posited that a central planning authority would have to be endowed with powers that would impact and ultimately control social life, because the knowledge required for centrally planning an economy is inherently decentralised, and would need to be brought under control.
Though Hayek did argue that the state should provide law centrally, others have pointed out that this contradicts his arguments about the role of judges in "discovering" the law, suggesting that Hayek would have supported decentralized provision of legal services.
Hayek also wrote that the state can play a role in the economy, and specifically, in creating a "safety net". He wrote, "There is no reason why, in a society which has reached the general level of wealth ours has, the first kind of security should not be guaranteed to all without endangering general freedom; that is: some minimum of food, shelter and clothing, sufficient to preserve health. Nor is there any reason why the state should not help to organize a comprehensive system of social insurance in providing for those common hazards of life against which few can make adequate provision."
Investment and choice
Perhaps more fully than any other economist, Hayek investigated the choice theory of investment. He examined the inter-relations between non-permanent production goods and "latent" or potentially economic permanent resources – building on the choice theoretical insight that, "processes that take more time will evidently not be adopted unless they yield a greater return than those that take less time".The Pure Theory of Capital (pdf), Chicago: University of Chicago Press, 1941/2007 (Vol. 12 of the Collected Works): p. 90.
Hayek's work on the microeconomics of the choice theoretics of investment, non-permanent goods, potential permanent resources, and economically-adapted permanent resources mark a central dividing point between his work in areas of macroeconomics and that of almost all other economists. Hayek's work on the macroeconomic subjects of central planning, trade cycle theory, the division of knowledge, and entrepreneurial adaptation especially, differ greatly from the opinions of macroeconomic "Marshallian" economists in the tradition of John Maynard Keynes and the microeconomic "Walrasian" economists in the tradition of Abba Lerner.
Philosophy of science
During World War II, Hayek began the ‘Abuse of Reason’ project. His goal was to show how a number of then-popular doctrines and beliefs had a common origin in some fundamental misconceptions about social science.Caldwell, Bruce. "Hayek, Friedrich August von (1899–1992)." The New Palgrave Dictionary of Economics. Second Edition. Eds. Steven N. Durlauf and Lawrence E. Blume. Palgrave Macmillan, 2008. In his philosophy of science, which has much in common with that of his good friend Karl Popper, Hayek was highly critical of what he termed scientism: a false understanding of the methods of science that has been mistakenly forced upon the social sciences, but that is contrary to the practices of genuine science. Usually, scientism involves combining the philosophers' ancient demand for demonstrative justification with the associationists' false view that all scientific explanations are simple two-variable linear relationships.
Hayek points out that much of science involves the explanation of complex multivariable and nonlinear phenomena, and the social science of economics and undesigned order compares favourably with such complex sciences as Darwinian biology. These ideas were developed in The Counter-Revolution of Science in 1952, and in some of Hayek's later essays in the philosophy of science such as Degrees of Explanation and The Theory of Complex Phenomena.
In Counter-Revolution, for example, Hayek observed that the hard sciences attempt to remove the "human factor" in order to obtain objective, strictly controlled results:
Meanwhile, the soft sciences are attempting to measure human action itself:Book Review: The Counter-revolution of Science: Studies on the Abuse of Reason by F. A. Hayek
He notes that these are mutually exclusive: Social sciences should not attempt to impose positivist methodology, nor to claim objective or definite results:The Moral Foundations of Civil Society
Psychology
In The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (1952), Hayek independently developed a "Hebbian learning" model of learning and memory, an idea which he first conceived in 1920, prior to his study of economics. Hayek's expansion of the "Hebbian synapse" construction into a global brain theory has received continued attentionGerald Edelman, Neural Darwinism, 1987, p. 25Joaquin Fuster, Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate. Cambridge: MIT Press, 1995, p. 87Joaquin Fuster, Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate. Cambridge: MIT Press, 1995, p. 88Joaquin Fuster, "Network Memory", Trends in Neuroscience, 1997. Vol. 20, No. 10. (Oct.): 451–459. in neuroscience, cognitive science, computer science, behavioural science, and evolutionary psychology, by scientists such as Gerald Edelman and Joaquin Fuster.
Hayek posited two orders, the sensory order that we experience, and the natural order that natural science has revealed. Hayek thought that the sensory order is in fact a product of the brain. He characterized the brain as a highly complex but self-ordering, hierarchical classification system, a huge network of connections.
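The generic Hebbian rule with which Hayek's model is usually compared can be stated in a minimal, illustrative Python sketch: connections strengthen between units that are repeatedly active together. The update rule, learning rate, and patterns below are textbook-style assumptions, not Hayek's own formalism or notation.

# Minimal sketch of a Hebbian update: links grow between co-active units.
# Illustrative only; not taken from The Sensory Order.

def hebbian_update(weights, pattern, learning_rate=0.1):
    """Strengthen the connection between every pair of co-active units."""
    n = len(pattern)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += learning_rate * pattern[i] * pattern[j]
    return weights

n_units = 4
weights = [[0.0] * n_units for _ in range(n_units)]
patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]  # two repeatedly experienced stimuli
for p in patterns * 10:
    hebbian_update(weights, p)
print(weights)  # co-active units end up strongly linked; the rest stay at zero

In Hayek's account, it is such an acquired pattern of connections, rather than any pre-given categories, that carries out the classification producing the sensory order.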
Social and political philosophy
In the latter half of his career Hayek made a number of contributions to social and political philosophy, which he based on his views on the limits of human knowledge, and the idea of spontaneous order in social institutions. He argues in favour of a society organised around a market order, in which the apparatus of state is employed almost (though not entirely) exclusively to enforce the legal order (consisting of abstract rules, and not particular commands) necessary for a market of free individuals to function. These ideas were informed by a moral philosophy derived from epistemological concerns regarding the inherent limits of human knowledge. Hayek argued that his ideal individualistic, free-market polity would be self-regulating to such a degree that it would be 'a society which does not depend for its functioning on our finding good men for running it'.Individualism and Economic Order, p. 11
Hayek disapproved of the notion of 'social justice'. He compared the market to a game in which 'there is no point in calling the outcome just or unjust'The Mirage of Social Justice, chap. 10 and argued that 'social justice is an empty phrase with no determinable content';The Mirage of Social Justice, chap. 12 likewise "the results of the individual's efforts are necessarily unpredictable, and the question as to whether the resulting distribution of incomes is just has no meaning".The Constitution of Liberty, chap. 6 He generally regarded government redistribution of income or capital as an unacceptable intrusion upon individual freedom: "the principle of distributive justice, once introduced, would not be fulfilled until the whole of society was organized in accordance with it. This would produce a kind of society which in all essential respects would be the opposite of a free society."
Spontaneous order
Hayek viewed the free price system not as a conscious invention (that which is intentionally designed by man), but as spontaneous order or what he referred to as "that which is the result of human action but not of human design". Thus, Hayek put the price mechanism on the same level as, for example, language.
Hayek attributed the birth of civilisation to private property in his book The Fatal Conceit (1988). He explained that price signals are the only means of enabling each economic decision maker to communicate tacit knowledge or dispersed knowledge to each other, to solve the economic calculation problem.
Alain de Benoist of the Nouvelle Droite (New Right) produced a highly critical essay on Hayek's work in an issue of Telos, citing the flawed assumptions behind Hayek's idea of "spontaneous order" and the authoritarian, totalising implications of his free-market ideology.
The ecosystem as a spontaneous order
Hayek’s concept of the market as a spontaneous order has been recently applied to ecosystems to defend a broadly non-interventionist policy. Like the market, ecosystems contain complex networks of information, involve an ongoing dynamic process, contain orders within orders, and the entire system operates without being directed by a conscious mind. On this analysis, species takes the place of price as a visible element of the system formed by a complex set of largely unknowable elements. Human ignorance about the countless interactions between the organisms of an ecosystem limits our ability to manipulate nature. Since humans rely on the ecosystem to sustain themselves, we have a prima facie obligation to not disrupt such systems. This analysis of ecosystems as spontaneous orders does not rely on markets qualifying as spontaneous orders. As such, one need not endorse Hayek’s analysis of markets to endorse ecosystems as spontaneous orders.
Hayek's views on safety net
With regard to a safety net, Hayek advocated "some provision for those threatened by the extremes of indigence or starvation, be it only in the interest of those who require protection against acts of desperation on the part of the needy."The Constitution of Liberty, chap. 19 As referenced in the section on "The economic calculation problem," Hayek wrote that "there is no reason why... the state should not help to organize a comprehensive system of social insurance." Summarizing on this topic, Wapshott writes that "[Hayek] advocated mandatory universal health care and unemployment insurance, enforced, if not directly provided, by the state."Keynes Hayek, N. Wapshott, Norton, 2011, p. 291. Bernard Harcourt says that "Hayek was adamant about this."Bernard Harcourt (12 September 2012). How Paul Ryan enslaves Friedrich Hayek's The Road to Serfdom. The Guardian. Retrieved 27 December 2014. Hayek made the same argument in the 1973 Law, Legislation, and Liberty and in The Road to Serfdom.
Critiques of his concept of collectivist rationalism
Arthur M. Diamond argues Hayek's problems arise when he goes beyond claims that can be evaluated within economic science. Diamond argued that: “The human mind, Hayek says, is not just limited in its ability to synthesize a vast array of concrete facts, it is also limited in its ability to give a deductively sound ground to ethics. Here is where the tension develops, for he also wants to give a reasoned moral defense of the free market. He is an intellectual skeptic who wants to give political philosophy a secure intellectual foundation. It is thus not too surprising that what results is confused and contradictory.”
Chandran Kukathas argues that Hayek's defence of liberalism is unsuccessful because it rests on presuppositions which are incompatible. The unresolved dilemma of his political philosophy is how to mount a systematic defence of liberalism if one emphasizes the limited capacity of reason.
Norman P. Barry similarly notes that the “critical rationalism” in Hayek’s writings appears incompatible with “a certain kind of fatalism, that we must wait for evolution to pronounce its verdict.”N. P. Barry (1994), "The road to freedom—Hayek’s social and economic philosophy," in Birner, J., and van Zijp, R. (eds) Hayek, Co-ordination and Evolution—His Legacy in Philosophy, Politics, Economics and the History of Ideas, pp. 141–63. London: Routledge.
Milton Friedman and Anna Schwartz argue that an element of paradox exists in Hayek's views.Milton Friedman and Anna J. Schwartz, "Has Government Any Role in Money?" (1986) John N. Gray summarized this view as "his scheme for an ultra-liberal constitution was a prototypical version of the philosophy he had attacked."John Gray, "The Friedrich Hayek I knew, and what he got right - and wrong" (30 July 2015)
Hayek's views on dictatorship
In 1962 Hayek sent António de Oliveira Salazar a copy of his The Constitution of Liberty (1960), with the hope that this "preliminary sketch of new constitutional principles" "may assist" Salazar "in his endeavour to design a constitution which is proof against the abuses of democracy."Farrant, Andrew, Edward McPhail, and Sebastian Berger. "Preventing the "Abuses" of Democracy: Hayek, the "Military Usurper" and Transitional Dictatorship in Chile?." American Journal of Economics and Sociology 71.3 (2012): 513-538.
Hayek visited Chile in the 1970s and 1980s during the Government Junta of General Augusto Pinochet and accepted being named Honorary Chairman of the Centro de Estudios Públicos, the think tank formed by the economists who transformed Chile into a free market economy.
Asked by a Chilean interviewer about liberal, non-democratic rule, Hayek, in remarks translated from German to Spanish to English, said: "As long term institutions, I am totally against dictatorships. But a dictatorship may be a necessary system for a transitional period. [...] Personally I prefer a liberal dictatorship to democratic government devoid of liberalism. My personal impression – and this is valid for South America – is that in Chile, for example, we will witness a transition from a dictatorial government to a liberal government." In a letter to the London Times, he defended the Pinochet regime and said that he had "not been able to find a single person even in much maligned Chile who did not agree that personal freedom was much greater under Pinochet than it had been under Allende."Greg Grandin, professor of history, New York University, [https://books.google.com/books?id=t5itdZ7oycUC&pg=PA172&lpg=PA172&dq=greg+grandin+hayek&source=bl&ots=yJzrb3YduJ&sig=GonAv6w1wQiM5vdwBknys97PZEY&hl=en#v=onepage&q=&f=false Empire's Workshop: Latin America, the United States, and the Rise of the New Imperialism], pp. 172–73, Metropolitan, 2006, ISBN 0805077383.Dan Avnôn, [https://books.google.com/books?id=E6TmN-9qAmQC Liberalism and its Practice], p. 56, Routledge, 1999, ISBN 0415193540. Hayek admitted that "it is not very likely that this will succeed, even if, at a particular point in time, it may be the only hope there is." He explained, however: "It is not certain hope, because it will always depend on the goodwill of an individual, and there are very few individuals one can trust. But if it is the sole opportunity which exists at a particular moment it may be the best solution despite this. And only if and when the dictatorial government is visibly directing its steps towards limited democracy".
For Hayek, the supposedly stark difference between authoritarianism and totalitarianism was of great importance, and he placed heavy weight on this distinction in his defence of transitional dictatorship. For example, when Hayek visited Venezuela in May 1981, he was asked to comment on the prevalence of totalitarian regimes in Latin America. In reply, Hayek warned against confusing "totalitarianism with authoritarianism," and said that he was unaware of "any totalitarian governments in Latin America. The only one was Chile under Allende". For Hayek, however, the word 'totalitarian' signifies something very specific: the desire to “organize the whole of society” to attain a “definite social goal”, which stands in stark contrast to “liberalism and individualism”.Preventing the "Abuses" of Democracy: Hayek, the "Military Usurper" and Transitional Dictatorship in Chile?, The American Journal of Economics and Sociology
Influence and recognition
Hayek's influence on the development of economics is widely acknowledged. Hayek is the second-most frequently cited economist (after Kenneth Arrow) in the Nobel lectures of the prize winners in economics, which is particularly striking since his own lecture was critical of the field of orthodox economics and neoclassical modelling. A number of Nobel Laureates in economics, such as Vernon Smith and Herbert A. Simon, recognise Hayek as the greatest modern economist. Another Nobel winner, Paul Samuelson, believed that Hayek was worthy of his award but nevertheless claimed that "there were good historical reasons for fading memories of Hayek within the mainstream last half of the twentieth century economist fraternity. In 1931, Hayek's Prices and Production had enjoyed an ultra-short Byronic success. In retrospect hindsight tells us that its mumbo-jumbo about the period of production grossly misdiagnosed the macroeconomics of the 1927–1931 (and the 1931–2007) historical scene". Despite this comment, Samuelson spent the last 50 years of his life obsessed with the problems of capital theory identified by Hayek and Böhm-Bawerk, and Samuelson flatly judged Hayek to have been right and his own teacher, Joseph Schumpeter, to have been wrong on the central economic question of the 20th century, the feasibility of socialist economic planning in a production goods dominated economy.The collected scientific papers of Paul A. Samuelson, Volume 5, p. 315.
Hayek is widely recognised for having introduced the time dimension to the equilibrium construction and for his key role in helping inspire the fields of growth theory, information economics, and the theory of spontaneous order. The "informal" economics presented in Milton Friedman's massively influential popular work Free to Choose (1980) is explicitly Hayekian in its account of the price system as a system for transmitting and co-ordinating knowledge. This can be explained by the fact that Friedman taught Hayek's famous paper "The Use of Knowledge in Society" (1945) in his graduate seminars.
In 1944 he was elected as a Fellow of the British Academy,Fritz Machlup, Essays on Hayek, Routledge, 2003. p. 14. after he was nominated for membership by Keynes.Sylvia Nasar, Grand Pursuit: The Story of Economic Genius, Simon and Schuster, 2011, p. 402
Harvard economist and former Harvard University President Lawrence Summers explains Hayek's place in modern economics: "What's the single most important thing to learn from an economics course today? What I tried to leave my students with is the view that the invisible hand is more powerful than the [un]hidden hand. Things will happen in well-organized efforts without direction, controls, plans. That's the consensus among economists. That's the Hayek legacy."Lawrence Summers, quoted in The Commanding Heights: The Battle Between Government and the Marketplace that Is Remaking the Modern World, by Daniel Yergin and Joseph Stanislaw. New York: Simon & Schuster. 1998, pp. 150–51.
By 1947, Hayek was an organiser of the Mont Pelerin Society, a group of classical liberals who sought to oppose what they saw as socialism in various areas. He was also instrumental in the founding of the Institute of Economic Affairs, the free-market think tank that inspired Thatcherism. He was in addition a member of the Philadelphia Society.http://phillysoc.org/DistinguishedMembers.pdf
Hayek had a long-standing and close friendship with philosopher of science Karl Popper, also from Vienna. In a letter to Hayek in 1944, Popper stated, "I think I have learnt more from you than from any other living thinker, except perhaps Alfred Tarski." (See Hacohen, 2000). Popper dedicated his Conjectures and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy, Politics, and Economics, to Popper and, in 1982, said that "ever since his Logik der Forschung first came out in 1934, I have been a complete adherent to his general theory of methodology".See Weimer and Palermo, 1982 Popper also participated in the inaugural meeting of the Mont Pelerin Society. Their friendship and mutual admiration, however, do not change the fact that there are important differences between their ideas.See Birner, 2001, and for the mutual influence they had on each other's ideas on evolution, Birner 2009
Hayek also played a central role in Milton Friedman's intellectual development. Friedman wrote:
"My interest in public policy and political philosophy was rather casual before I joined the faculty of the University of Chicago. Informal discussions with colleagues and friends stimulated a greater interest, which was reinforced by Friedrich Hayek's powerful book The Road to Serfdom, by my attendance at the first meeting of the Mont Pelerin Society in 1947, and by discussions with Hayek after he joined the university faculty in 1950. In addition, Hayek attracted an exceptionally able group of students who were dedicated to a libertarian ideology. They started a student publication, The New Individualist Review, which was the outstanding libertarian journal of opinion for some years. I served as an adviser to the journal and published a number of articles in it...."Milton & Rose Friedman, Two Lucky People: Memoirs (U. of Chicago Press), 1998. p. 333
Hayek's greatest intellectual debt was to Carl Menger, who pioneered an approach to social explanation similar to that developed in Britain by Bernard Mandeville and the Scottish moral philosophers in the Scottish Enlightenment. He had a wide-reaching influence on contemporary economics, politics, philosophy, sociology, psychology and anthropology. For example, Hayek's discussion in The Road to Serfdom (1944) about truth, falsehood and the use of language influenced some later opponents of postmodernism.e.g., Wolin 2004
Hayek and conservatism
Hayek received new attention in the 1980s and 1990s with the rise of conservative governments in the United States, United Kingdom, and Canada. After winning the 1979 United Kingdom general election, Margaret Thatcher appointed Keith Joseph, the director of the Hayekian Centre for Policy Studies, as her secretary of state for industry in an effort to redirect parliament's economic strategies. Likewise, David Stockman, Ronald Reagan's most influential financial official in 1981, was an acknowledged follower of Hayek.Kenneth R. Hoover, Economics as Ideology: Keynes, Laski, Hayek, and the Creation of Contemporary Politics (2003), p. 213
Hayek wrote an essay, "Why I Am Not a Conservative" (included as an appendix to The Constitution of Liberty), in which he disparaged conservatism for its inability to adapt to changing human realities or to offer a positive political program, remarking, "Conservatism is only as good as what it conserves." Although he noted that modern-day conservatism shares many opinions on economics with classical liberals, particularly a belief in the free market, he believed this is because conservatism wants to "stand still," whereas liberalism embraces the free market because it "wants to go somewhere." Hayek identified himself as a classical liberal, but noted that in the United States it had become almost impossible to use "liberal" in its original definition, and the term "libertarian" has been used instead. In this text, Hayek also opposed conservatism for "its hostility to internationalism and its proneness to a strident nationalism" and its frequent association with imperialism.
However, for his part, Hayek found libertarianism a term "singularly unattractive" and offered the term "Old Whig" (a phrase borrowed from Edmund Burke) instead. In his later life, he said, "I am becoming a Burkean Whig." However, Whiggery as a political doctrine had little affinity for classical political economy, the tabernacle of the Manchester School and William Gladstone.E. H. H. Green, Ideologies of Conservatism. Conservative Political Ideas in the Twentieth Century (Oxford: Oxford University Press, 2004), p. 259. His essay has served as an inspiration to other liberal-minded economists wishing to distinguish themselves from conservative thinkers, for example James M. Buchanan's essay "Why I, Too, Am Not a Conservative: The Normative Vision of Classical Liberalism".
His opponents have attacked Hayek as a leading promoter of "neoliberalism". A British journalist, Samuel Brittan, concluded in 2010, "Hayek's book [The Constitution of Liberty] is still probably the most comprehensive statement of the underlying ideas of the moderate free market philosophy espoused by neoliberals."Samuel Brittan, "The many faces of liberalism," ft.com, 22 January 2010 Brittan adds that although Raymond Plant (2009) comes out in the end against Hayek's doctrines, Plant gives The Constitution of Liberty a "more thorough and fair-minded analysis than it has received even from its professed adherents".
In Why F A Hayek is a Conservative,"Why F A Hayek is a Conservative" Eamonn Butler and Madsen Pirie (eds) Hayek on the Fabric of Human Society (Adam Smith Institute, 1987) British policy analyst Madsen Pirie claims Hayek mistakes the nature of the conservative outlook. Conservatives, he says, are not averse to change – but like Hayek, they are highly averse to change being imposed on the social order by people in authority who think they know how to run things better. They wish to allow the market to function smoothly and give it the freedom to change and develop. It is an outlook, says Pirie, that Hayek and conservatives both share.
Hayek’s influence on contemporary macroeconomic policy discussions
Hayek’s ideas on spontaneous order and the importance of prices in dealing with the knowledge problem have inspired a debate on economic development and transition economies after the fall of the Berlin wall. For instance, P. Boettke elaborated in detail on why reforming socialism failed and the Soviet Union broke down.P. Boettke, WHY PERESTROIKA FAILED The Politics and Economics of Socialist Transformation, Routledge (1993): https://books.google.de/books?id=4y6IAgAAQBAJ&pg=PP1&dq=WHY+PERESTROIKA+FAILED+The+Politics+and+Economics+of+Socialist+Transformation&hl=de&sa=X&ved=0ahUKEwjG2b_-8PrOAhUF1RQKHSxXD8sQ6AEIHjAA#v=onepage&q=WHY%20PERESTROIKA%20FAILED%20The%20Politics%20and%20Economics%20of%20Socialist%20Transformation&f=false Ronald McKinnon uses Hayekian ideas to describe the challenges of transition from a centralized state and planned economy to a market economy.McKinnon, Spontaneous Order on the Road Back from Socialism: An Asian Perspective. American Economic Review (1992): http://www.jstor.org/stable/2117371 Former World Bank economist William Easterly emphasizes why foreign aid tends at best to have no effect in bestsellers such as The White Man’s Burden.W. Easterly, The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good: https://books.google.de/books?id=Dcj_Ju1wICkC&redir_esc=y
Since the 2007–08 financial crisis there has been renewed interest in Hayek’s core explanation of boom-and-bust cycles, which serves as an alternative to Bernanke's savings-glut explanation. Economists at the Bank for International Settlements, e.g. William White, emphasize the importance of Hayekian insights and the impact of monetary policies and credit growth as root causes of financial cycles.https://www.imf.org/external/pubs/ft/fandd/2009/12/pdf/white.pdf A. Hoffmann and G. Schnabl provide an international perspective and explain recurring financial cycles in the world economy as a consequence of gradual interest rate cuts led by the central banks in the large advanced economies since the 1980s.Hoffmann and Schnabl, A Vicious Cycle of Manias, Crises and Asymmetric Policy Responses - An Overinvestment View. Published in The World Economy 34, 3, 382-403 (2011) (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1513171). Hoffmann and Schnabl, Monetary Policy, Vagabonding Liquidity and Bursting Bubbles in New and Emerging Markets - An Overinvestment View. Published in The World Economy 31, 9, 1226-1252 (2008) (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1018342). N. Cachanosky outlines the impact of US monetary policy on the production structure in Latin America.Cachanosky 2014, The Effects of U.S. Monetary Policy in Colombia and Panama (2002-2007), The Quarterly Review of Economics and Finance 54.3:428-436. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2170566
In line with Hayek, an increasing number of contemporary researchers see expansionary monetary policies and excessively low interest rates as creating perverse incentives and as main drivers of financial crises in general, and of the subprime market crisis in particular.See John Taylor: https://ideas.repec.org/p/nbr/nberwo/13682.html To prevent problems caused by monetary policy, Hayekian/Austrian economists discuss alternatives to current policies and organizations. For instance, L. White favors free banking in the spirit of Hayek’s “Denationalization of Money”.
Hayek’s ideas have also found their way into the discussion of the post-Great Recession issue of secular stagnation. Monetary policy and mounting regulation are argued to have undermined the innovative forces of the market economies. Quantitative easing following the financial crisis is argued not only to have preserved structural distortions in the economy, leading to a fall in trend growth, but also to have created new distortions and contributed to distributional conflicts.Hoffmann and Schnabl, Adverse Effects of Unconventional Monetary Policy, Cato Journal 2016, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2747865
Personal life
In August 1926, Hayek married Helen Berta Maria von Fritsch (1901–1960), a secretary at the civil service office where Hayek worked. They had two children together.Ebenstein, p. 44. Friedrich and Helen divorced in July 1950, and just a few weeks later he married his cousin Helene Bitterlich (1900–1996),Ebenstein, p. 169. after moving to Arkansas to take advantage of its permissive divorce laws.Ebenstein, p. 155. He had been unhappy in his first marriage, and because his wife would not grant him a divorce, he had to enforce one.
Hayek was an agnostic from age 14.
Legacy and honours
Friedrich Hayek's grave in Neustifter Friedhof, Vienna
Even after his death, Hayek's intellectual presence is noticeable, especially in the universities where he had taught: the London School of Economics, the University of Chicago, and the University of Freiburg. A number of tributes have resulted, many posthumous:
The Hayek Society, a student-run group at the London School of Economics, was established in his honour.London School of Economics: Activities
The Oxford Hayek Society, founded in 1983, is named after Hayek.
The Cato Institute named its lower level auditorium after Hayek, who had been a Distinguished Senior Fellow at Cato during his later years.
The auditorium of the school of economics in Universidad Francisco Marroquín in Guatemala is named after him.
The Hayek Fund for Scholars of the Institute for Humane Studies provides financial awards for academic career activities of graduate students and untenured faculty members.
The Ludwig von Mises Institute holds a lecture named after Hayek every year at its Austrian Scholars Conference and invites notable academics to speak about subjects relating to Hayek's contributions to the Austrian School.
George Mason University has an economics essay award named in honour of Hayek.
The Mont Pelerin Society has a quadrennial economics essay contest named in his honour.
Hayek was awarded honorary degrees from Rikkyo University, University of Vienna, and University of Salzburg.
Hayek has an investment portfolio named after him. The Hayek Fund invests in corporations that financially support free market public policy organisations.
1974: Austrian Decoration for Science and Art
1974: Nobel Memorial Prize in Economic Sciences (Sweden)
1977: Pour le Mérite for Science and Art (Germany)
1983: Honorary Ring of Vienna
1984: Honorary Dean of WHU-Otto Beisheim School of Management
1984: Order of the Companions of Honour (United Kingdom)
1990: Grand Gold Medal with Star for Services to the Republic of Austria
1991: Presidential Medal of Freedom (United States)
Selected bibliography
Monetary Theory and the Trade Cycle, 1929.
Prices and Production, 1931.
Profits, Interest and Investment: And other essays on the theory of industrial fluctuations, 1939.
The Road to Serfdom, 1944.
Individualism and Economic Order, 1948.
"The Transmission of the Ideals of Economic Freedom," 1951. Full Article
The Counter-revolution of Science: Studies on the Abuse of Reason, 1952.
The Constitution of Liberty, 1960, ...: The Definitive Edition, 2011.
Law, Legislation and Liberty (3 volumes)
Volume I. Rules and Order, 1973.
Volume II. The Mirage of Social Justice, 1976.
Volume III. The Political Order of a Free People, 1979.
The Fatal Conceit: The Errors of Socialism, 1988. Note: The authorship of The Fatal Conceit is under scholarly dispute.Alan Ebenstein: Investigation: The Fatal Deceit. Liberty 19:3 (March 2005) The book in its published form may actually have been written entirely by its editor W.W. Bartley, III, not by Hayek.Ian Jarvie (Editor), Karl Milford (Editor), David Miller (Editor) (2006), Karl Popper: a Centenary Assessment Vol. 1: Life and Times, and Values in a World of Facts, pp. 120, 295, ISBN 978-0754653752
See also
Constructivist epistemology
Fear the Boom and Bust – a series of music videos produced by the Mercatus Center in which Keynes and Hayek take part in a rap battle
Global financial system – describes the financial system consisting of institutions and regulators that act on the international level
History of economic thought
Liberalism in Austria
References and further reading
Notes
Publications
Birner, Jack (2001). "The mind-body problem and social evolution," CEEL Working Paper 1-02.
Birner, Jack, and Rudy van Zijp, eds. (1994). Hayek: Co-ordination and Evolution: His legacy in philosophy, politics, economics and the history of ideas
Birner, Jack (2009). "From group selection to ecological niches. Popper's rethinking of evolutionary theory in the light of Hayek's theory of culture", in S. Parusnikova & R.S. Cohen eds. (Spring 2009). "Rethinking Popper", Boston Studies in the Philosophy of Science. Vol. 272,
Caldwell, Bruce (2005). Hayek's Challenge: An Intellectual Biography of F.A. Hayek.
Cohen, Avi J. (2003). "The Hayek/Knight Capital Controversy: the Irrelevance of Roundaboutness, or Purging Processes in Time?" History of Political Economy 35(3): 469–90. Fulltext: online in Project Muse, Swetswise and Ebsco
Doherty, Brian (2007). Radicals for Capitalism: A Freewheeling History of the Modern American Libertarian Movement
Douma, Sytse and Hein Schreuder, (2013). "Economic Approaches to Organizations". 5th edition. London: Pearson [1] ISBN 0273735292 • ISBN 9780273735298
Ebeling, Richard M. (March 2004). "F. A. Hayek and The Road to Serfdom: A Sixtieth Anniversary Appreciation" (The Freeman,
Ebeling, Richard M. (March 2001). "F. A. Hayek: A Biography" Ludwig von Mises Institute
Ebeling, Richard M. (May 1999). "Friedrich A. Hayek: A Centenary Appreciation" The Freeman
Frowen, S. ed. (1997). Hayek: economist and social philosopher
Gamble, Andrew (1996). The Iron Cage of Liberty, an analysis of Hayek's ideas
Goldsworthy, J. D. (1986). "Hayek's Political and Legal Philosophy: An Introduction" [1986] SydLawRw 3; 11(1) Sydney Law Review 44
Gray, John (1998). Hayek on Liberty.
Hacohen, Malachi (2000). Karl Popper: The Formative Years, 1902–1945.
Horwitz, Steven (2005). "Friedrich Hayek, Austrian Economist". Journal of the History of Economic Thought 27(1): 71–85. Fulltext: in Swetswise, Ingenta and Ebsco
Issing, O. (1999). Hayek, currency competition and European monetary union
Jones, Daniel Stedman. (2012) Masters of the Universe: Hayek, Friedman, and the Birth of Neoliberal Politics (Princeton University Press; 424 pages)
Kasper, Sherryl (2002). The Revival of Laissez-Faire in American Macroeconomic Theory: A Case Study of Its Pioneers. Chpt. 4.
Kley, Roland (1994). Hayek's Social and Political Thought. Oxford Univ. Press.
Leeson, Robert, ed. Hayek: A Collaborative Biography, Part I: Influences, from Mises to Bartley (Palgrave MacMillan, 2013), 241 pages
Muller, Jerry Z. (2002). The Mind and the Market: Capitalism in Western Thought. Anchor Books.
Marsh, Leslie (Ed.) (2011). Hayek in Mind: Hayek's Philosophical Psychology. Advances in Austrian Economics. Emerald
Pavlík, Ján (2004). nb.vse.cz. F. A. von Hayek and The Theory of Spontaneous Order. Professional Publishing 2004, Prague, profespubl.cz.
Plant, Raymond (2009). The Neo-liberal State Oxford University Press, 312 pages
Rosenof, Theodore (1974). "Freedom, Planning, and Totalitarianism: The Reception of F. A. Hayek's Road to Serfdom", Canadian Review of American Studies
Samuelson, Paul A. (2009). "A Few Remembrances of Friedrich von Hayek (1899–1992)", Journal of Economic Behavior & Organization, 69(1), pp. 1–4. Reprinted at J. Bradford DeLong's weblog.
Samuelson, Richard A. (1999). "Reaction to the Road to Serfdom." Modern Age 41(4): 309–17. Fulltext: in Ebsco
Schreuder, Hein (1993). "Coase, Hayek and Hierarchy", In: S. Lindenberg & Hein Schreuder, eds., Interdisciplinary Perspectives on Organization Studies, Oxford: Pergamon Press
Shearmur, Jeremy (1996). Hayek and after: Hayekian Liberalism as a Research Programme. Routledge.
Tebble, Adam James (2013). F A Hayek. Bloomsbury Academic. ISBN 978-1441109064.
Touchie, John (2005). Hayek and Human Rights: Foundations for a Minimalist Approach to Law. Edward Elgar.
Vanberg, V. (2001). "Hayek, Friedrich A von (1899–1992)," International Encyclopedia of the Social & Behavioral Sciences, pp. 6482–86.
Wapshott, Nicholas (2011). Keynes Hayek: The Clash That Defined Modern Economics, (W.W. Norton & Company) 382 pages ISBN 978-0393077483; covers the debate with Keynes in letters, articles, conversation, and by the two economists' disciples.
Weimer, W., and Palermo, D., eds. (1982). Cognition and the Symbolic Processes. Lawrence Erlbaum Associates. Contains Hayek's essay, "The Sensory Order after 25 Years" with "Discussion."
Wolin, Richard. (2004). The Seduction of Unreason: The Intellectual Romance with Fascism from Nietzsche to Postmodernism. Princeton University Press, Princeton.
Introduction
Boudreaux, Donald J. (2014). The Essential Hayek
Butler, Eamonn (2012). Friedrich Hayek: The Ideas and Influence of the Libertarian Economist
Primary sources
Hayek, Friedrich. The collected works of F. A. Hayek, ed. W.W. Bartley, III and others (U. of Chicago Press, 1988–); "Plan of the Collected Works of F. A. Hayek" for 19 volumes; vol 2 excerpt and text search; vol 7 2012 excerpt
External links
The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1974: Gunnar Myrdal, Friedrich August von Hayek Press release regarding the award of the Nobel Prize.
The Pretence of Knowledge 1974 lecture at NobelPrize.org
Register of the Friedrich A. von Hayek Papers at the Hoover Institution Archives.
The Mont Pèlerin Society Records at the Hoover Institution Archives
The Hayek Interviews
Taking Hayek Seriously
Mises.org The Road to Serfdom in cartoons – The cartoon-booklet version.
Booknotes interview with Alan Ebenstein on Friedrich Hayek: A Biography, July 8, 2001.
The Liberalism/Conservatism of Edmund Burke and F. A. Hayek: A Critical Comparison, Linda C. Raeder. From Humanitas, Volume X, No. 1, 1997. © National Humanities Institute
Category:1899 births
Category:1992 deaths
Category:Writers from Vienna
Category:20th-century economists
Category:20th-century philosophers
Category:20th-century Austrian writers
Category:20th-century British writers
Category:Academics of the London School of Economics
Category:Austrian economists
Category:Austrian philosophers
Category:Bohemian nobility
Category:Austrian School economists
Category:Austrian libertarians
Category:Austrian anti-communists
Category:Austro-Hungarian military personnel of World War I
Category:British economists
Category:British libertarians
Category:British classical liberals
Category:Libertarian economists
Category:Microeconomists
Category:Macroeconomists
Category:Anti-nationalists
Category:Nobel laureates in Economics
Category:Austrian Nobel laureates
Category:British Nobel laureates
Category:Members of the Order of the Companions of Honour
Category:Recipients of the Austrian Decoration for Science and Art
Category:Recipients of the Pour le Mérite for Arts and Sciences
Category:Recipients of the Grand Decoration with Star for Services to the Republic of Austria
Category:Presidential Medal of Freedom recipients
Category:Mont Pelerin Society members
Category:Philadelphia Society members
Category:Naturalised citizens of the United Kingdom
Category:New York University faculty
Category:Graduate Institute of International and Development Studies faculty
Category:People associated with the London School of Economics
Category:Philosophers of law
Category:University of Chicago faculty
Category:University of Freiburg faculty
Category:University of Vienna alumni
Category:Austrian people of Moravian-German descent
Category:Philosophers of social science
Category:British philosophers
Category:British expatriate academics in the United States
Category:Austrian expatriates in the United States
Category:British expatriates in Germany
Category:Austrian expatriates in Germany
Category:Austrian academics
Category:Austrian male writers
Category:Guggenheim Fellows
Category:Austrian emigrants to England
Category:Cato Institute people
Category:English Roman Catholics
Category:Austrian Roman Catholics
Alaska | Alaska () is a U.S. state located in the northwest extremity of North America. The Canadian administrative divisions of British Columbia and Yukon border the state to the east; its most extreme western part is Attu Island; it has a maritime border with Russia to the west across the Bering Strait. To the north are the Chukchi and Beaufort seas–the southern parts of the Arctic Ocean. The Pacific Ocean lies to the south and southwest. Alaska is the largest state in the United States by area, the 3rd least populous and the least densely populated of the 50 United States. Approximately half of Alaska's residents (the total estimated at 738,432 by the U.S. Census Bureau in 2015) live within the Anchorage metropolitan area. Alaska's economy is dominated by the fishing, natural gas, and oil industries, resources which it has in abundance. Military bases and tourism are also a significant part of the economy.
The United States purchased Alaska from the Russian Empire on March 30, 1867, for 7.2 million U.S. dollars at approximately two cents per acre ($4.74/km2). The area went through several administrative changes before becoming organized as a territory on May 11, 1912. It was admitted as the 49th state of the U.S. on January 3, 1959.
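A rough arithmetic check of the per-unit figures quoted above, taking the commonly cited area figure for the purchase of about 586,000 square miles (roughly 375 million acres, or 1.518 million km²), an assumption not stated in this paragraph:

\[ \frac{\$7{,}200{,}000}{375{,}000{,}000\ \text{acres}} \approx \$0.019 \text{ per acre} \approx 2 \text{ cents per acre}, \qquad \frac{\$7{,}200{,}000}{1{,}518{,}000\ \text{km}^2} \approx \$4.74 \text{ per km}^2. \]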
Etymology
The name "Alaska" (Аляска) was introduced in the Russian colonial period when it was used to refer to the peninsula. It was derived from an Aleut, or Unangam idiom, which figuratively refers to the mainland of Alaska. Literally, it means object to which the action of the sea is directed., at pp. 49 (Alaxsxi-x = mainland Alaska), 50 (alagu-x = sea), 508 (-gi = suffix, object of its action).Ransom, J. Ellis. 1940. "Derivation of the Word "Alaska", " American Anthropologist n.s., 42: pp. 550–551
Geography
Alaska is the northernmost and westernmost state in the United States and has the most easterly longitude in the United States because the Aleutian Islands extend into the Eastern Hemisphere. Alaska is the only non-contiguous U.S. state on continental North America; about of British Columbia (Canada) separates Alaska from Washington. It is technically part of the continental U.S., but is sometimes not included in colloquial use; Alaska is not part of the contiguous U.S., often called "the Lower 48". The capital city, Juneau, is situated on the mainland of the North American continent but is not connected by road to the rest of the North American highway system.
The state is bordered by Yukon and British Columbia in Canada, to the east, the Gulf of Alaska and the Pacific Ocean to the south and southwest, the Bering Sea, Bering Strait, and Chukchi Sea to the west and the Arctic Ocean to the north. Alaska's territorial waters touch Russia's territorial waters in the Bering Strait, as the Russian Big Diomede Island and Alaskan Little Diomede Island are only apart. Alaska has a longer coastline than all the other U.S. states combined.
Alaska's size compared with the 48 contiguous states. (Albers equal-area conic projection)
Alaska is the largest state in the United States in land area at , over twice the size of Texas, the next largest state. Alaska is larger than all but 18 sovereign countries. Counting territorial waters, Alaska is larger than the combined area of the next three largest states: Texas, California, and Montana. It is also larger than the combined area of the 22 smallest U.S. states.
Regions
There are no officially defined borders demarcating the various regions of Alaska, but there are six widely accepted regions:
South Central
The most populous region of Alaska, containing Anchorage, the Matanuska-Susitna Valley and the Kenai Peninsula. Rural, mostly unpopulated areas south of the Alaska Range and west of the Wrangell Mountains also fall within the definition of South Central, as do the Prince William Sound area and the communities of Cordova and Valdez.
Southeast
Also referred to as the Panhandle or Inside Passage, this is the region of Alaska closest to the rest of the United States. As such, this was where most of the initial non-indigenous settlement occurred in the years following the Alaska Purchase. The region is dominated by the Alexander Archipelago as well as the Tongass National Forest, the largest national forest in the United States. It contains the state capital Juneau, the former capital Sitka, and Ketchikan, at one time Alaska's largest city. The Alaska Marine Highway provides a vital surface transportation link throughout the area, as only three communities (Haines, Hyder and Skagway) enjoy direct connections to the contiguous North American road system. Officially designated in 1963.
Interior
Denali is the highest peak in North America.
The Interior is the largest region of Alaska; much of it is uninhabited wilderness. Fairbanks is the only large city in the region. Denali National Park and Preserve is located here. Denali is the highest mountain in North America.
Southwest
Grizzly bear fishing for salmon at Brooks Falls, part of Katmai National Park and Preserve.
Southwest Alaska is a sparsely inhabited region stretching some inland from the Bering Sea. Most of the population lives along the coast. Kodiak Island is also located in Southwest. The massive Yukon–Kuskokwim Delta, one of the largest river deltas in the world, is here. Portions of the Alaska Peninsula are considered part of Southwest, with the remaining portions included with the Aleutian Islands (see below).
North Slope
The North Slope is mostly tundra peppered with small villages. The area is known for its massive reserves of crude oil, and contains both the National Petroleum Reserve–Alaska and the Prudhoe Bay Oil Field. Barrow, the northernmost city in the United States, is located here. The Northwest Arctic area, anchored by Kotzebue and also containing the Kobuk River valley, is often regarded as being part of this region. However, the respective Inupiat of the North Slope and of the Northwest Arctic seldom consider themselves to be one people.
Aleutian Islands
More than 300 small volcanic islands make up this chain, which stretches over into the Pacific Ocean. Some of these islands fall in the Eastern Hemisphere, but the International Date Line was drawn west of 180° to keep the whole state, and thus the entire North American continent, within the same legal day. Two of the islands, Attu and Kiska, were occupied by Japanese forces during World War II.
Natural features
Augustine Volcano erupting on January 12, 2006
With its myriad islands, Alaska has nearly of tidal shoreline. The Aleutian Islands chain extends west from the southern tip of the Alaska Peninsula. Many active volcanoes are found in the Aleutians and in coastal regions. Unimak Island, for example, is home to Mount Shishaldin, which is an occasionally smoldering volcano that rises to above the North Pacific. It is the most perfect volcanic cone on Earth, even more symmetrical than Japan's Mount Fuji. The chain of volcanoes extends to Mount Spurr, west of Anchorage on the mainland. Geologists have identified Alaska as part of Wrangellia, a large region consisting of multiple states and Canadian provinces in the Pacific Northwest, which is actively undergoing continent building.
One of the world's largest tides occurs in Turnagain Arm, just south of Anchorage – tidal differences can be more than .
Alaska has more than three million lakes. Marshlands and wetland permafrost cover (mostly in northern, western and southwest flatlands). Glacier ice covers some of land and of tidal zone. The Bering Glacier complex near the southeastern border with Yukon covers alone. With over 100,000 glaciers, Alaska has half of all glaciers in the world.
Land ownership
Alaska has more public land owned by the federal government than any other state.
According to an October 1998 report by the United States Bureau of Land Management, approximately 65% of Alaska is owned and managed by the U.S. federal government as public lands, including a multitude of national forests, national parks, and national wildlife refuges. Of these, the Bureau of Land Management manages , or 23.8% of the state. The Arctic National Wildlife Refuge is managed by the United States Fish and Wildlife Service. It is the world's largest wildlife refuge, comprising .
Of the remaining land area, the state of Alaska owns , its entitlement under the Alaska Statehood Act. A portion of that acreage is occasionally ceded to organized boroughs, under the statutory provisions pertaining to newly formed boroughs. Smaller portions are set aside for rural subdivisions and other homesteading-related opportunities. These are not very popular due to the often remote and roadless locations. The University of Alaska, as a land grant university, also owns substantial acreage which it manages independently.
Another are owned by 12 regional, and scores of local, Native corporations created under the Alaska Native Claims Settlement Act (ANCSA) of 1971. Regional Native corporation Doyon, Limited often promotes itself as the largest private landowner in Alaska in advertisements and other communications. Provisions of ANCSA allowing the corporations' land holdings to be sold on the open market starting in 1991 were repealed before they could take effect. Effectively, the corporations hold title (including subsurface title in many cases, a privilege denied to individual Alaskans) but cannot sell the land. Individual Native allotments can be and are sold on the open market, however.
Various private interests own the remaining land, totaling about one percent of the state. Alaska is, by a large margin, the state with the smallest percentage of private land ownership when Native corporation holdings are excluded.
Climate
Köppen climate types of Alaska.
Map depicting the climate zones of Alaska.
The climate in Southeast Alaska is a mid-latitude oceanic climate (Köppen climate classification: Cfb) in the southern sections and a subarctic oceanic climate (Köppen Cfc) in the northern parts. On an annual basis, Southeast is both the wettest and warmest part of Alaska with milder temperatures in the winter and high precipitation throughout the year. Juneau averages over of precipitation a year, and Ketchikan averages over . This is also the only region in Alaska in which the average daytime high temperature is above freezing during the winter months.
The climate of Anchorage and south central Alaska is mild by Alaskan standards due to the region's proximity to the seacoast. While the area gets less rain than southeast Alaska, it gets more snow, and days tend to be clearer. On average, Anchorage receives of precipitation a year, with around of snow, although there are areas in the south central which receive far more snow. It is a subarctic climate (Köppen: Dfc) due to its brief, cool summers.
The climate of Western Alaska is determined in large part by the Bering Sea and the Gulf of Alaska. It is a subarctic oceanic climate in the southwest and a continental subarctic climate farther north. The temperature is somewhat moderate considering how far north the area is. This region has a tremendous amount of variety in precipitation. An area stretching from the northern side of the Seward Peninsula to the Kobuk River valley (i. e., the region around Kotzebue Sound) is technically a desert, with portions receiving less than of precipitation annually. On the other extreme, some locations between Dillingham and Bethel average around of precipitation.
The climate of the interior of Alaska is subarctic. Some of the highest and lowest temperatures in Alaska occur around the area near Fairbanks. The summers may have temperatures reaching into the 90s °F (the low-to-mid 30s °C), while in the winter, the temperature can fall below . Precipitation is sparse in the Interior, often less than a year, but what precipitation falls in the winter tends to stay the entire winter.
The highest and lowest recorded temperatures in Alaska are both in the Interior. The highest is in Fort Yukon (which is just inside the Arctic Circle) on June 27, 1915, making Alaska tied with Hawaii as the state with the lowest high temperature in the United States. The lowest official Alaska temperature is in Prospect Creek on January 23, 1971, one degree above the lowest temperature recorded in continental North America (in Snag, Yukon, Canada).
The climate in the extreme north of Alaska is Arctic (Köppen: ET) with long, very cold winters and short, cool summers. Even in July, the average low temperature in Barrow is .History for Barrow, Alaska. Monthly Summary for July 2006. Weather Underground. Retrieved October 23, 2006. Precipitation is light in this part of Alaska, with many places averaging less than per year, mostly as snow which stays on the ground almost the entire year.
Average daily maximum and minimum temperatures for selected locations in Alaska
Location      July (°F)   July (°C)   January (°F)   January (°C)
Anchorage     65/51       18/10       22/11          −5/−11
Juneau        64/50       17/11       32/23          0/−4
Ketchikan     64/51       17/11       38/28          3/−1
Unalaska      57/46       14/8        36/28          2/−2
Fairbanks     72/53       22/11       1/−17          −17/−27
Fort Yukon    73/51       23/10       −11/−27        −23/−33
Nome          58/46       14/8        13/−2          −10/−19
Barrow        47/34       8/1         −7/−19         −21/−28
History
Alaska Natives
A modern Alutiiq dancer in traditional festival garb.
Numerous indigenous peoples occupied Alaska for thousands of years before the arrival of European peoples to the area. Linguistic and DNA studies done here have provided evidence for the settlement of North America by way of the Bering land bridge.National Geographic. "Atlas of the Human Journey." 2005. May 2, 2007 The Tlingit people developed a society with a matrilineal kinship system of property inheritance and descent in what is today Southeast Alaska, along with parts of British Columbia and the Yukon. Also in Southeast were the Haida, now well known for their unique arts. The Tsimshian people came to Alaska from British Columbia in 1887, when President Grover Cleveland, and later the U.S. Congress, granted them permission to settle on Annette Island and found the town of Metlakatla. All three of these peoples, as well as other indigenous peoples of the Pacific Northwest Coast, experienced smallpox outbreaks from the late 18th through the mid-19th century, with the most devastating epidemics occurring in the 1830s and 1860s, resulting in high fatalities and social disruption.Brian C. Hosmer, American Indians in the Marketplace: Persistence and Innovation among the Menominees and Metlakatlans, 1870–1920 (Lawrence, Kansas: University Press of Kansas, 1999), pp. 129–131, 200.
The Aleutian Islands are still home to the Aleut people's seafaring society, although they were the first Native Alaskans to be exploited by Russians. Western and Southwestern Alaska are home to the Yup'ik, while their cousins the Alutiiq (Sugpiaq) live in what is now Southcentral Alaska. The Gwich'in people of the northern Interior region are Athabaskan and primarily known today for their dependence on the caribou within the much-contested Arctic National Wildlife Refuge. The North Slope and Little Diomede Island are occupied by the widespread Inupiat people.
Colonization
Some researchers believe that the first Russian settlement in Alaska was established in the 17th century.Свердлов Л. М. Русское поселение на Аляске в XVII в.? "Природа". М., 1992. № 4. С.67–69. According to this hypothesis, in 1648 several koches of Semyon Dezhnyov's expedition were driven ashore in Alaska by a storm and founded this settlement. This hypothesis is based on the testimony of Chukchi geographer Nikolai Daurkin, who had visited Alaska in 1764–1765 and who had reported on a village on the Kheuveren River, populated by "bearded men" who "pray to the icons". Some modern researchers associate Kheuveren with the Koyuk River.
The Russian settlement of St. Paul's Harbor (present-day Kodiak town), Kodiak Island, 1814.
The first European vessel to reach Alaska is generally held to be the St. Gabriel under the authority of the surveyor M. S. Gvozdev and assistant navigator I. Fyodorov on August 21, 1732, during an expedition of the Siberian Cossack A. F. Shestakov and Belorussian explorer Dmitry Pavlutsky (1729–1735).Аронов В. Н. Патриарх Камчатского мореходства. // "Вопросы истории рыбной промышленности Камчатки": Историко-краеведческий сб. – Вып. 3. – 2000. Вахрин С. Покорители великого океана. Петроп.-Камч.: Камштат, 1993.
Another European contact with Alaska occurred in 1741, when Vitus Bering led an expedition for the Russian Navy aboard the St. Peter. After his crew returned to Russia with sea otter pelts judged to be the finest fur in the world, small associations of fur traders began to sail from the shores of Siberia toward the Aleutian Islands. The first permanent European settlement was founded in 1784.
Between 1774 and 1800, Spain sent several expeditions to Alaska in order to assert its claim over the Pacific Northwest. In 1789 a Spanish settlement and fort were built in Nootka Sound. These expeditions gave names to places such as Valdez, Bucareli Sound, and Cordova. Later, the Russian-American Company carried out an expanded colonization program during the early-to-mid-19th century.
Sitka, renamed New Archangel from 1804 to 1867, on Baranof Island in the Alexander Archipelago in what is now Southeast Alaska, became the capital of Russian America. It remained the capital after the colony was transferred to the United States. The Russians never fully colonized Alaska, and the colony was never very profitable. Evidence of Russian settlement in names and churches survive throughout southeast Alaska.
William H. Seward, the United States Secretary of State, negotiated the Alaska Purchase (also known as Seward's Folly) with the Russians in 1867 for $7.2 million. Alaska was loosely governed by the military initially, and was administered as a district starting in 1884, with a governor appointed by the President of the United States. A federal district court was headquartered in Sitka.
Miners and prospectors climb the Chilkoot Trail during the 1898 Klondike Gold Rush.
For most of Alaska's first decade under the United States flag, Sitka was the only community inhabited by American settlers. They organized a "provisional city government," which was Alaska's first municipal government, but not in a legal sense. Legislation allowing Alaskan communities to legally incorporate as cities did not come about until 1900, and home rule for cities was extremely limited or unavailable until statehood took effect in 1959.
Territory
Starting in the 1890s and stretching in some places to the early 1910s, gold rushes in Alaska and the nearby Yukon Territory brought thousands of miners and settlers to Alaska. Alaska was officially incorporated as an organized territory in 1912. Alaska's capital, which had been in Sitka until 1906, was moved north to Juneau. Construction of the Alaska Governor's Mansion began that same year. European immigrants from Norway and Sweden also settled in southeast Alaska, where they entered the fishing and logging industries.
U.S. troops navigate snow and ice during the Battle of Attu in May 1943.
During World War II, the Aleutian Islands Campaign focused on the three outer Aleutian Islands – Attu, Agattu and Kiska – that were invaded by Japanese troops and occupied between June 1942 and August 1943. (These three outer Aleutian islands are about away from continental USSR, from continental Alaska (U.S.), and from Japan.) During the occupation, one Alaskan civilian was killed by Japanese troops and nearly fifty were interned in Japan, where about half of them died. Unalaska/Dutch Harbor became a significant base for the United States Army Air Forces and Navy submariners.
The United States Lend-Lease program involved the flying of American warplanes through Canada to Fairbanks and thence Nome; Soviet pilots took possession of these aircraft, ferrying them to fight the German invasion of the Soviet Union. The construction of military bases contributed to the population growth of some Alaskan cities.
Statehood
Statehood for Alaska was an important cause of James Wickersham early in his tenure as a congressional delegate. Decades later, the statehood movement gained its first real momentum following a territorial referendum in 1946. The Alaska Statehood Committee and Alaska's Constitutional Convention would soon follow. Statehood supporters also found themselves fighting major battles against political foes, mostly in the U.S. Congress but also within Alaska. Statehood was approved by Congress on July 7, 1958. Alaska was officially proclaimed a state on January 3, 1959.
In 1960, the Census Bureau reported Alaska's population as 77.2% White, 3% Black, and 18.8% American Indian and Alaska Native.
Kodiak, before and after the tsunami which followed the Good Friday earthquake in 1964, destroying much of the townsite.
Good Friday earthquake
On March 27, 1964, the massive Good Friday earthquake killed 133 people and destroyed several villages and portions of large coastal communities, mainly by the resultant tsunamis and landslides. It was the second-most-powerful earthquake in the recorded history of the world, with a moment magnitude of 9.2. It was over one thousand times more powerful than the 1989 San Francisco earthquake. The time of day (5:36 pm), time of year and location of the epicenter were all cited as factors in potentially sparing thousands of lives, particularly in Anchorage.
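The "over one thousand times" comparison can be checked against the standard relation between moment magnitude and radiated seismic energy, taking the 1989 San Francisco (Loma Prieta) earthquake at moment magnitude 6.9, a value assumed here rather than given in the text:

\[ \frac{E_{1964}}{E_{1989}} = 10^{\,1.5\,(9.2 - 6.9)} = 10^{3.45} \approx 2{,}800. \]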
Discovery of oil
The 1968 discovery of oil at Prudhoe Bay and the 1977 completion of the Trans-Alaska Pipeline System led to an oil boom. Royalty revenues from oil have funded large state budgets from 1980 onward. That same year, not coincidentally, Alaska repealed its state income tax.
In 1989, the Exxon Valdez hit a reef in Prince William Sound, spilling over of crude oil over of coastline. Today, the battle between philosophies of development and conservation is seen in the contentious debate over oil drilling in the Arctic National Wildlife Refuge and the proposed Pebble Mine.
Alaska Heritage Resources Survey
The Alaska Heritage Resources Survey (AHRS) is a restricted inventory of all reported historic and prehistoric sites within the state of Alaska; it is maintained by the Office of History and Archaeology. The survey's inventory of cultural resources includes objects, structures, buildings, sites, districts, and travel ways, with a general provision that they are over 50 years old. As of January 31, 2012, over 35,000 sites have been reported.Alaska Heritage Resources Survey, Department of Natural Resources – Alaska.gov (retrieved May 9, 2014)
Demographics
The United States Census Bureau estimates that the population of Alaska was 738,432 on July 1, 2015, a 3.97% increase since the 2010 United States Census.
In 2010, Alaska ranked as the 47th state by population, ahead of North Dakota, Vermont, and Wyoming (and Washington, D.C.). Alaska is the least densely populated state and one of the most sparsely populated areas in the world; Wyoming is the next most sparsely populated state. Alaska is the largest U.S. state by area, and the tenth wealthiest (per capita income). As of November 2014, the state's unemployment rate was 6.6%.
Race and ancestry
According to the 2010 United States Census, Alaska had a population of 710,231. In terms of race and ethnicity, the state was 66.7% White (64.1% Non-Hispanic White), 14.8% American Indian and Alaska Native, 5.4% Asian, 3.3% Black or African American, 1.0% Native Hawaiian and Other Pacific Islander, 1.6% from Some Other Race, and 7.3% from Two or More Races. Hispanics or Latinos of any race made up 5.5% of the population.
Among Alaskans younger than one year of age, 50.7% belonged to minority groups (i.e., did not have two parents of non-Hispanic white ancestry).
{| class="wikitable"
|+ Alaska racial breakdown of population
|-
! Racial composition !! 1970 !! 1990 !! 2000 !! 2010
|-
| White || 78.8% || 75.5% || 69.3% || 66.7%
|-
| Native || 16.9% || 15.6% || 15.6% || 14.8%
|-
| Asian || 0.9% || 3.6% || 4.0% || 5.4%
|-
| Black || 3.0% || 4.1% || 3.5% || 3.3%
|-
| Native Hawaiian and other Pacific Islander || – || – || 0.5% || 1.0%
|-
| Other race || 0.4% || 1.2% || 1.6% || 1.6%
|-
| Two or more races || – || – || 5.5% || 7.3%
|}
Languages
According to the 2011 American Community Survey, 83.4% of people over the age of five speak only English at home. About 3.5% speak Spanish at home. About 2.2% speak another Indo-European language at home and about 4.3% speak an Asian language at home. About 5.3% speak other languages at home.
The Alaska Native Language Center at the University of Alaska Fairbanks claims that at least 20 Alaska Native languages exist, some of them with multiple dialects.Languages, Alaska Native Language Center, http://www.uaf.edu/anlc/languages/ Most of Alaska's native languages belong to either the Eskimo–Aleut or Na-Dene language families; however, some languages are thought to be isolates (e.g. Haida) or have not yet been classified (e.g. Tsimshianic).
Nearly all of Alaska's native languages have been classified as either threatened, shifting, moribund, nearly extinct, or dormant.Languages, Alaska Native Language Center, Ethnologue (classifications), http://www.uaf.edu/anlc/languages/stats/
A total of 5.2% of Alaskans speak one of the state's 20 indigenous languages, known locally as "native languages".Graves K, Rosich R, McBride M, Charles G, and LaBelle J: Health and health care of Alaska Native Older Adults. http://geriatrics.stanford.edu/ethnomed/alaskan/. In Periyakoil VS, ed., eCampus Geriatrics, Stanford, CA, 2010.
In October 2014, the governor of Alaska signed a bill declaring the state's 20 indigenous languages as official languages."Alaska's indigenous languages attain official status", Reuters.com, October 24, 2014. Retrieved October 30, 2014. This bill gave the languages symbolic recognition as official languages, though they have not been adopted for official use within the government. The 20 languages that were included in the bill are:
Inupiaq
Siberian Yupik
Central Alaskan Yup’ik
Alutiiq
Unangax
Dena’ina
Deg Xinag
Holikachuk
Koyukon
Upper Kuskokwim
Gwich’in
Tanana
Upper Tanana
Tanacross
Hän
Ahtna
Eyak
Tlingit
Haida
Tsimshian
Religion
thumb|upright|St. Michael's Russian Orthodox Cathedral in downtown Sitka.
According to statistics collected by the Association of Religion Data Archives from 2010, about 34% of Alaska residents were members of religious congregations. 100,960 people identified as Evangelical Protestants, 50,866 as Roman Catholic, and 32,550 as mainline Protestants. Roughly 4% are Mormon, 0.5% are Jewish, 1% are Muslim, 0.5% are Buddhist, and 0.5% are Hindu. The largest religious denominations in Alaska were the Catholic Church with 50,866 adherents, non-denominational Evangelical Protestants with 38,070 adherents, The Church of Jesus Christ of Latter-day Saints with 32,170 adherents, and the Southern Baptist Convention with 19,891 adherents. Alaska has been identified, along with Pacific Northwest states Washington and Oregon, as being the least religious states of the USA, in terms of church membership.
In 1795, the first Russian Orthodox church in Alaska was established in Kodiak. Intermarriage with Alaska Natives helped the Russian immigrants integrate into society. As a result, an increasing number of Russian Orthodox churches gradually became established within Alaska. Alaska also has the largest Quaker population (by percentage) of any state. In 2009 there were 6,000 Jews in Alaska (for whom observance of halakha may pose special problems).Table 76. Religious Bodies—Selected Data. U.S. Census Bureau, Statistical Abstract of the United States: 2011. Alaskan Hindus often share venues and celebrations with members of other Asian religious communities, including Sikhs and Jains.
Estimates for the number of Muslims in Alaska range from 2,000 to 5,000. The Islamic Community Center of Anchorage began efforts in the late 1990s to construct a mosque in Anchorage. They broke ground on a building in south Anchorage in 2010 and were nearing completion in late 2014. When completed, the mosque will be the first in the state and one of the northernmost mosques in the world.
{| class="wikitable sortable" font-size:80%;"
|+ style="font-size:100%" | Religious affiliation in Alaska (2014)
|-
! Affiliation
! colspan="2"|% of population
|-
| Christian
|align=right| |-
| style="text-align:left; text-indent:15px;"| Protestant
|align=right|
|-
| style="text-align:left; text-indent:30px;"| Evangelical Protestant
|align=right| |-
| style="text-align:left; text-indent:30px;"| Mainline Protestant
|align=right|
|-
| style="text-align:left; text-indent:30px;"| Black church
|align=right| |-
| style="text-align:left; text-indent:15px;"| Catholic
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Mormon
|align=right| |-
| style="text-align:left; text-indent:15px;"| Jehovah's Witnesses
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Eastern Orthodox
|align=right| |-
| style="text-align:left; text-indent:15px;"| Other Christian
|align=right|
|-
| Unaffiliated
|align=right| |-
| style="text-align:left; text-indent:15px;"| Nothing in particular
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Agnostic
|align=right| |-
| style="text-align:left; text-indent:15px;"| Atheist
|align=right|
|-
| Non-Christian faiths
|align=right| |-
| style="text-align:left; text-indent:15px;"| Jewish
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Muslim
|align=right| |-
| style="text-align:left; text-indent:15px;"| Buddhist
|align=right|
|-
| style="text-align:left; text-indent:15px;"| Hindu
|align=right| |-
| style="text-align:left; text-indent:15px;"| Other Non-Christian faiths
|align=right|
|-
| Don't know/refused answer
|align=right| |-
| Total || '|}
Economy
thumb|Aerial view of infrastructure at the Prudhoe Bay Oil Field.
The 2007 gross state product was $44.9 billion, 45th in the nation. Its per capita personal income for 2007 was $40,042, ranking 15th in the nation. According to a 2013 study by Phoenix Marketing International, Alaska had the fifth-largest number of millionaires per capita in the United States, with a ratio of 6.75 percent. The oil and gas industry dominates the Alaskan economy, with more than 80% of the state's revenues derived from petroleum extraction. Alaska's main export product (excluding oil and natural gas) is seafood, primarily salmon, cod, pollock and crab.
Agriculture represents a very small fraction of the Alaskan economy. Agricultural production is primarily for consumption within the state and includes nursery stock, dairy products, vegetables, and livestock. Manufacturing is limited, with most foodstuffs and general goods imported from elsewhere.
Employment is primarily in government and industries such as natural resource extraction, shipping, and transportation. Military bases are a significant component of the economy in the Fairbanks North Star, Anchorage and Kodiak Island boroughs, as well as Kodiak. Federal subsidies are also an important part of the economy, allowing the state to keep taxes low. Its industrial outputs are crude petroleum, natural gas, coal, gold, precious metals, zinc and other mining, seafood processing, timber and wood products. There is also a growing service and tourism sector. Tourists have contributed to the economy by supporting local lodging.
Energy
thumb|upright|The Trans-Alaska Pipeline transports oil, Alaska's most financially important export, from the North Slope to Valdez. Note the heat pipes in the column mounts, which disperse heat upward and prevent the permafrost from melting.
Alaska has vast energy resources, although its oil reserves have been largely depleted. Major oil and gas reserves were found in the Alaska North Slope (ANS) and Cook Inlet basins, but according to the Energy Information Administration, by February 2014 Alaska had fallen to fourth place in the nation in crude oil production after Texas, North Dakota, and California. Prudhoe Bay on Alaska's North Slope is still the second highest-yielding oil field in the United States, although by early 2014 North Dakota's Bakken Formation was producing at a higher daily rate. Prudhoe Bay was the largest conventional oil field ever discovered in North America, but was much smaller than Canada's enormous Athabasca oil sands field, which by 2014 was producing unconventional oil at a far higher rate and had hundreds of years of producible reserves at that rate.
The Trans-Alaska Pipeline can transport and pump up to about 2 million barrels of crude oil per day, more than any other crude oil pipeline in the United States. Additionally, substantial coal deposits are found in Alaska's bituminous, sub-bituminous, and lignite coal basins. The United States Geological Survey estimates that large volumes of undiscovered, technically recoverable gas remain in natural gas hydrates on the Alaskan North Slope. Alaska also offers some of the highest hydroelectric power potential in the country from its numerous rivers. Large swaths of the Alaskan coastline offer wind and geothermal energy potential as well.
Alaska's economy depends heavily on increasingly expensive diesel fuel for heating, transportation, electric power and light. Though wind and hydroelectric power are abundant and underdeveloped, proposals for statewide energy systems (e.g. with special low-cost electric interties) were judged uneconomical (at the time of the report, 2001) due to low (less than 50¢/gal) fuel prices, long distances and low population. The cost of a gallon of gas in urban Alaska today is usually 30–60¢ higher than the national average; prices in rural areas are generally significantly higher but vary widely depending on transportation costs, seasonal usage peaks, nearby petroleum development infrastructure and many other factors.
Permanent Fund
The Alaska Permanent Fund is a constitutionally authorized appropriation of oil revenues, established by voters in 1976 to manage a surplus in state petroleum revenues, largely in anticipation of the Trans-Alaska Pipeline System, then nearing completion. The fund was originally proposed by Governor Keith Miller on the eve of the 1969 Prudhoe Bay lease sale, out of fear that the legislature would spend the entire proceeds of the sale (which amounted to $900 million) at once. It was later championed by Governor Jay Hammond and Kenai state representative Hugh Malone. It has served as an attractive political prospect ever since, diverting revenues which would normally be deposited into the general fund.
The Alaska Constitution was written so as to discourage dedicating state funds for a particular purpose. The Permanent Fund has become the rare exception to this, mostly due to the political climate of distrust existing during the time of its creation. From its initial principal of $734,000, the fund has grown to $50 billion as a result of oil royalties and capital investment programs. Most if not all the principal is invested conservatively outside Alaska. This has led to frequent calls by Alaskan politicians for the Fund to make investments within Alaska, though such a stance has never gained momentum.
Starting in 1982, dividends from the fund's annual growth have been paid out each year to eligible Alaskans, ranging from an initial $1,000 in 1982 (equal to three years' payout, as the distribution of payments was held up in a lawsuit over the distribution scheme) to $3,269 in 2008 (which included a one-time $1,200 "Resource Rebate"). Every year, the state legislature takes out 8% from the earnings, puts 3% back into the principal for inflation proofing, and the remaining 5% is distributed to all qualifying Alaskans. To qualify for the Permanent Fund Dividend, one must have lived in the state for a minimum of 12 months, maintain constant residency subject to allowable absences, and not be subject to court judgments or criminal convictions which fall under various disqualifying classifications or may subject the payment amount to civil garnishment.
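The payout rule described above can be illustrated with a brief arithmetic sketch. The 8%/3%/5% split follows the description in the text; the fund base and number of eligible residents used below are made-up figures for illustration only, not official fund data.

```python
# Minimal sketch of the Permanent Fund payout split described above.
# The fund base and eligible-resident count are hypothetical; the 8% / 3% / 5%
# split follows the text (3 points re-invested in the principal, 5 points distributed).

def dividend_per_person(fund_base: float, eligible_alaskans: int) -> float:
    withdrawal = 0.08 * fund_base          # legislature takes out 8%
    inflation_proofing = 0.03 * fund_base  # 3% returned to the principal
    distributed = withdrawal - inflation_proofing  # remaining 5% paid as dividends
    return distributed / eligible_alaskans

# Example with made-up numbers: a $50 billion base and 640,000 eligible residents.
print(round(dividend_per_person(50e9, 640_000), 2))  # -> 3906.25
```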
The Permanent Fund is often considered to be one of the leading examples of a "Basic Income" policy in the world.
Cost of living
The cost of goods in Alaska has long been higher than in the contiguous 48 states. Federal government employees, particularly United States Postal Service (USPS) workers and active-duty military members, receive a Cost of Living Allowance, usually set at 25% of base pay, because the cost of living, although it has declined somewhat, remains one of the highest in the country.
Rural Alaska suffers from extremely high prices for food and consumer goods compared to the rest of the country, due to the relatively limited transportation infrastructure.
Agriculture and fishing
thumb|right|upright|Halibut is important to the state's economy as both a commercial and sport-caught fish.
Due to the northern climate and short growing season, relatively little farming occurs in Alaska. Most farms are in either the Matanuska Valley, northeast of Anchorage, or on the Kenai Peninsula, southwest of Anchorage. The short 100-day growing season limits the crops that can be grown, but the long sunny summer days make for productive growing seasons. The primary crops are potatoes, carrots, lettuce, and cabbage.
The Tanana Valley is another notable agricultural locus, especially the Delta Junction area, southeast of Fairbanks, with a sizable concentration of farms growing agronomic crops; these farms mostly lie north and east of Fort Greely. This area was largely set aside and developed under a state program spearheaded by Hammond during his second term as governor. Delta-area crops consist predominantly of barley and hay. West of Fairbanks lies another concentration of small farms catering to restaurants, the hotel and tourist industry, and community-supported agriculture.
Alaskan agriculture has experienced a surge in growth of market gardeners, small farms and farmers' markets in recent years; in 2011 the number of farmers' markets grew by 46%, the highest percentage increase in the nation, compared to 17% nationwide. The peony industry has also taken off, as the growing season allows farmers to harvest during a gap in supply elsewhere in the world, thereby filling a niche in the flower market.
Alaska, with no counties, lacks county fairs. However, a small assortment of state and local fairs (the largest being the Alaska State Fair in Palmer) is held, mostly in late summer. The fairs are mostly located in communities with historic or current agricultural activity, and feature local farmers exhibiting produce in addition to more high-profile commercial activities such as carnival rides, concerts and food. "Alaska Grown" is used as an agricultural slogan.
Alaska has an abundance of seafood, with the primary fisheries in the Bering Sea and the North Pacific. Seafood is one of the few food items that is often cheaper within the state than outside it. Many Alaskans take advantage of salmon seasons to harvest portions of their household diet while fishing for subsistence, as well as sport. This includes fish taken by hook, net or wheel.
Hunting for subsistence, primarily caribou, moose, and Dall sheep is still common in the state, particularly in remote Bush communities. An example of a traditional native food is Akutaq, the Eskimo ice cream, which can consist of reindeer fat, seal oil, dried fish meat and local berries.
Alaska's reindeer herding is concentrated on Seward Peninsula, where wild caribou can be prevented from mingling and migrating with the domesticated reindeer.
Most food in Alaska is transported into the state from "Outside", and shipping costs make food in the cities relatively expensive. In rural areas, subsistence hunting and gathering are essential activities because imported food is prohibitively expensive. Although most small towns and villages in Alaska lie along the coastline, the cost of importing food to remote villages can be high because of difficult terrain and road conditions, which change dramatically with the seasons and the weather. The cost of transport can reach 50¢ per pound ($1.10/kg) or more in some remote areas during the most difficult times, if these locations can be reached at all in such conditions. The cost of delivering a gallon of milk is about $3.50 in many villages, where per capita income can be $20,000 or less. Fuel cost per gallon is routinely 20–30¢ higher than the continental United States average, with only Hawaii having higher prices.
Transportation
thumb|The Sterling Highway, near its intersection with the Seward Highway.
Roads
thumb|left|The Susitna River bridge on the Denali Highway.
thumb|Alaska Interstate Highways.
Alaska has few road connections compared to the rest of the U.S. The state's road system covers a relatively small area of the state, linking the central population centers and the Alaska Highway, the principal route out of the state through Canada. The state capital, Juneau, is not accessible by road, only by car ferry, which has spurred several debates over the decades about moving the capital to a city on the road system, or building a road connection from Haines. The western part of Alaska has no road system connecting the communities with the rest of Alaska.
One unique feature of the Alaska Highway system is the Anton Anderson Memorial Tunnel, an active Alaska Railroad tunnel recently upgraded to provide a paved roadway link between the isolated community of Whittier on Prince William Sound and the Seward Highway at Portage, southeast of Anchorage. At about 2.5 miles (4 km) long, the tunnel was the longest road tunnel in North America until the 2007 completion of the Interstate 93 tunnel as part of the "Big Dig" project in Boston, Massachusetts. The tunnel remains the longest combination road and rail tunnel in North America.
Rail
thumb|An Alaska Railroad locomotive and tanker cars crossing the George Parks Highway in 1994.
thumb|The White Pass and Yukon Route traverses rugged terrain north of Skagway near the Canada–US border.
Built around 1915, the Alaska Railroad (ARR) played a key role in the development of Alaska through the 20th century. It links north Pacific shipping to Interior Alaska, with tracks that run from Seward through South Central Alaska, passing through Anchorage, Eklutna, Wasilla, Talkeetna, Denali, and Fairbanks, with spurs to Whittier, Palmer and North Pole. The cities, towns, villages, and region served by ARR tracks are known statewide as "The Railbelt". In recent years, the ever-improving paved highway system began to eclipse the railroad's importance in Alaska's economy.
The railroad played a vital role in Alaska's development, moving freight into Alaska while transporting natural resources southward (i.e., coal from the Usibelli coal mine near Healy to Seward and gravel from the Matanuska Valley to Anchorage). It is well known for its summertime tour passenger service.
The Alaska Railroad was one of the last railroads in North America to use cabooses in regular service and still uses them on some gravel trains. It continues to offer one of the last flag stop routes in the country. A stretch of track north of Talkeetna remains inaccessible by road; the railroad provides the only transportation to rural homes and cabins in the area. Until construction of the Parks Highway in the 1970s, the railroad provided the only land access to most of the region along its entire route.
In northern Southeast Alaska, the White Pass and Yukon Route also partly runs through the state from Skagway northwards into Canada (British Columbia and Yukon Territory), crossing the border at White Pass Summit. This line is now mainly used by tourists, often arriving by cruise liner at Skagway. It was featured in the 1983 BBC television series Great Little Railways. The Alaska rail network is not connected to Outside (the rest of the North American rail network). In 2000, the U.S. Congress authorized $6 million to study the feasibility of a rail link between Alaska, Canada, and the lower 48 states.
Alaska Rail Marine provides car float service between Whittier and Seattle.
Marine transport
Many cities, towns and villages in the state do not have road or highway access; the only modes of access involve travel by air, river, or the sea.
thumb|The MV Tustumena (named after Tustumena Glacier) is one of the state's many ferries, providing service between the Kenai Peninsula, Kodiak Island and the Aleutian Chain.
Alaska's well-developed state-owned ferry system (known as the Alaska Marine Highway) serves the cities of Southeast Alaska, the Gulf Coast and the Alaska Peninsula. The ferries transport vehicles as well as passengers. The system also operates a ferry service from Bellingham, Washington and Prince Rupert, British Columbia in Canada through the Inside Passage to Skagway. The Inter-Island Ferry Authority also serves as an important marine link for many communities in the Prince of Wales Island region of Southeast Alaska and works in concert with the Alaska Marine Highway.
In recent years, cruise lines have created a summertime tourism market, mainly connecting the Pacific Northwest to Southeast Alaska and, to a lesser degree, towns along Alaska's gulf coast. The population of Ketchikan may rise by over 10,000 people on many days during the summer, as up to four large cruise ships at a time can dock, debarking thousands of passengers.
Air transport
Cities not served by road, sea, or river can be reached only by air, foot, dogsled, or snowmachine, accounting for Alaska's extremely well developed bush air services, an Alaskan novelty. Anchorage and, to a lesser extent, Fairbanks are served by many major airlines. Because of limited highway access, air travel remains the most efficient form of transportation in and out of the state. Anchorage recently completed extensive remodeling and construction at Ted Stevens Anchorage International Airport to help accommodate the upsurge in tourism (in 2012–2013, Alaska received almost 2 million visitors).State of Alaska Office of Economic Development. Economic Impact of Alaska's Visitor Industry. January 2014. Retrieved May 21, 2014.
Regular flights to most villages and towns within the state that are commercially viable are challenging to provide, so they are heavily subsidized by the federal government through the Essential Air Service program. Alaska Airlines is the only major airline offering in-state travel with jet service (sometimes in combination cargo and passenger Boeing 737-400s) from Anchorage and Fairbanks to regional hubs like Bethel, Nome, Kotzebue, Dillingham, Kodiak, and other larger communities as well as to major Southeast and Alaska Peninsula communities.
thumb|A Bombardier Dash 8, operated by Era Alaska, on approach to Ted Stevens Anchorage International Airport.
The bulk of remaining commercial flight offerings come from small regional commuter airlines such as Ravn Alaska, PenAir, and Frontier Flying Service. The smallest towns and villages must rely on scheduled or chartered bush flying services using general aviation aircraft such as the Cessna Caravan, the most popular aircraft in use in the state. Much of this service can be attributed to the Alaska bypass mail program which subsidizes bulk mail delivery to Alaskan rural communities. The program requires 70% of that subsidy to go to carriers who offer passenger service to the communities.
Many communities have small air taxi services. These operations originated from the demand for customized transport to remote areas. Perhaps the most quintessentially Alaskan plane is the bush seaplane. The world's busiest seaplane base is Lake Hood, located next to Ted Stevens Anchorage International Airport, where flights bound for remote villages without an airstrip carry passengers, cargo, and many items from stores and warehouse clubs. In 2006 Alaska had the highest number of pilots per capita of any U.S. state: out of an estimated 663,661 residents, 8,550 were pilots, or about one in 78.Federal Aviation Administration. 2005 U.S. Civil Airman Statistics
Other transport
Another Alaskan transportation method is the dogsled. In modern times (that is, any time after the mid- to late 1920s), dog mushing is more of a sport than a true means of transportation. Various races are held around the state, but the best known is the Iditarod Trail Sled Dog Race, a trail from Anchorage to Nome (although the distance varies from year to year, the official distance is set at 1,049 miles). The race commemorates the famous 1925 serum run to Nome, in which mushers and dogs like Togo and Balto took much-needed medicine to the diphtheria-stricken community of Nome when all other means of transportation had failed. Mushers from all over the world come to Anchorage each March to compete for cash, prizes, and prestige. The "Serum Run" is another sled dog race that more accurately follows the route of the famous 1925 relay, running from the community of Nenana (southwest of Fairbanks) to Nome.
In areas not served by road or rail, primary transportation in summer is by all-terrain vehicle and in winter by snowmobile or "snow machine," as it is commonly referred to in Alaska.
Data transport
Alaska's internet and other data transport systems are provided largely through the two major telecommunications companies: GCI and Alaska Communications. GCI owns and operates what it calls the Alaska United Fiber Optic system, and as of late 2011 Alaska Communications advertised that it has "two fiber optic paths to the lower 48 and two more across Alaska."Alaska Communications Coverage Map. Alaska Communications. In January 2011, it was reported that a $1 billion project to connect Asia and rural Alaska was being planned, aided in part by $350 million in stimulus from the federal government.Arctic fiber-optic cable could benefit far-flung Alaskans. Anchorage Daily News.
Law and government
State government
thumb|The center of state government in Juneau. The large buildings in the background are, from left to right: the Court Plaza Building (known colloquially as the "Spam Can"), the State Office Building (behind), the Alaska Office Building, the John H. Dimond State Courthouse, and the Alaska State Capitol. Many of the smaller buildings in the foreground are also occupied by state government agencies.
Like all other U.S. states, Alaska is governed as a republic, with three branches of government: an executive branch consisting of the Governor of Alaska and the other independently elected constitutional officers; a legislative branch consisting of the Alaska House of Representatives and Alaska Senate; and a judicial branch consisting of the Alaska Supreme Court and lower courts.
The state of Alaska employs approximately 16,000 people statewide.
The Alaska Legislature consists of a 40-member House of Representatives and a 20-member Senate. Senators serve four-year terms and House members two. The Governor of Alaska serves four-year terms. The lieutenant governor runs separately from the governor in the primaries, but during the general election, the nominee for governor and nominee for lieutenant governor run together on the same ticket.
Alaska's court system has four levels: the Alaska Supreme Court, the Alaska Court of Appeals, the superior courts and the district courts. The superior and district courts are trial courts. Superior courts are courts of general jurisdiction, while district courts only hear certain types of cases, including misdemeanor criminal cases and civil cases valued up to $100,000.
The Supreme Court and the Court of Appeals are appellate courts. The Court of Appeals is required to hear appeals from certain lower-court decisions, including those regarding criminal prosecutions, juvenile delinquency, and habeas corpus. The Supreme Court hears civil appeals and may in its discretion hear criminal appeals.
State politics
{| class="wikitable"
|+ Gubernatorial election results
|-
! Year !! Republican !! Democratic
|-
| 1958 || 39.4% (19,299) || 59.6% (29,189)
|-
| 1962 || 47.7% (27,054) || 52.3% (29,627)
|-
| 1966 || 50.0% (33,145) || 48.4% (32,065)
|-
| 1970 || 46.1% (37,264) || 52.4% (42,309)
|-
| 1974 || 47.7% (45,840) || 47.4% (45,553)
|-
| 1978 || 39.1% (49,580) || 20.2% (25,656)
|-
| 1982 || 37.1% (72,291) || 46.1% (89,918)
|-
| 1986 || 42.6% (76,515) || 47.3% (84,943)
|-
| 1990 || 26.2% (50,991) || 30.9% (60,201)
|-
| 1994 || 40.8% (87,157) || 41.1% (87,693)
|-
| 1998 || 17.9% (39,331) || 51.3% (112,879)
|-
| 2002 || 55.9% (129,279) || 40.7% (94,216)
|-
| 2006 || 48.3% (114,697) || 41.0% (97,238)
|-
| 2010 || 59.1% (151,318) || 37.7% (96,519)
|-
| 2014 || 45.9% (128,435) ||
|}
Although in its early years of statehood Alaska was a Democratic state, since the early 1970s it has been characterized as Republican-leaning. Local political communities have often worked on issues related to land use development, fishing, tourism, and individual rights. Alaska Natives, while organized in and around their communities, have been active within the Native corporations. These have been given ownership over large tracts of land, which require stewardship.
Alaska was formerly the only state in which possession of one ounce or less of marijuana in one's home was completely legal under state law, though the federal law remains in force.
The state has an independence movement, led by the Alaskan Independence Party, favoring a vote on secession from the United States.
Six Republicans and four Democrats have served as governor of Alaska. In addition, Republican Governor Wally Hickel was elected to the office for a second term in 1990 after leaving the Republican party and briefly joining the Alaskan Independence Party ticket just long enough to be reelected. He subsequently officially rejoined the Republican party in 1994.
Alaska's voter initiative making marijuana legal took effect on February 24, 2015, placing Alaska alongside Colorado and Washington as the first three U.S. states where recreational marijuana is legal. The new law allows people over age 21 to possess and consume small amounts of marijuana, although commercial sales were slower to begin, as Alaska Measure 2 (2014) established a lengthy and involved licensing process. The first legal marijuana store opened in Valdez in October 2016.Andrews, Laurel, "Marijuana milestone: Alaska's first pot shop opens to the public in Valdez", Alaska Dispatch News, October 29, 2016.
Taxes
To finance state government operations, Alaska depends primarily on petroleum revenues and federal subsidies. This allows it to have the lowest individual tax burden in the United States.CNN Money (2005). "How tax friendly is your state?" Retrieved from CNN website. It is one of five states with no state sales tax, one of seven states that do not levy an individual income tax, and one of only two states that have neither. The Department of Revenue Tax Division reports regularly on the state's revenue sources. The Department also issues an annual summary of its operations, including new state laws that directly affect the tax division.
While Alaska has no state sales tax, 89 municipalities collect a local sales tax, from 1.0–7.5%, typically 3–5%. Other local taxes levied include raw fish taxes, hotel, motel, and bed-and-breakfast 'bed' taxes, severance taxes, liquor and tobacco taxes, gaming (pull tabs) taxes, tire taxes and fuel transfer taxes. A part of the revenue collected from certain state taxes and license fees (such as petroleum, aviation motor fuel, telephone cooperative) is shared with municipalities in Alaska.
Fairbanks has one of the highest property taxes in the state as no sales or income taxes are assessed in the Fairbanks North Star Borough (FNSB). A sales tax for the FNSB has been voted on many times, but has yet to be approved, leading lawmakers to increase taxes dramatically on goods such as liquor and tobacco.
In 2014 the Tax Foundation ranked Alaska as having the fourth most "business friendly" tax policy, behind only Wyoming, South Dakota, and Nevada.
Federal politics
{| class="wikitable" style="float:right; margin:1em; font-size:95%;"
|+ Washington, D.C vote|Presidential election results
|- style="background:lightgrey;"
! Year
! Republican
! Democratic
|-
|align="center" |1960
|align="center" |50.9% 30,95349.1% 29,809
|-
|align="center" |1964
|align="center" |34.1% 22,93065.9% 44,329
|-
|align="center" |1968
|align="center" |45.3% 37,60042.7% 35,411
|-
|align="center" |1972
|align="center" |58.1% 55,34934.6% 32,967
|-
|align="center" |1976
|align="center" |57.9% 71,55535.7% 44,058
|-
|align="center" |1980
|align="center" |54.4% 86,11226.4% 41,842
|-
|align="center" |1984
|align="center" |66.7% 138,37729.9% 62,007
|-
|align="center" |1988
|align="center" |59.6% 119,25136.3% 72,584
|-
|align="center" |1992
|align="center" |39.5% 102,00030.3% 78,294199650.8% 122,746
|align="center" |33.3% 80,380200058.6% 167,398
|align="center" |27.7% 79,004200461.1% 190,889
|align="center" |35.5% 111,025200859.4% 193,841
|align="center" |37.8% 123,594201254.8% 164,676
|align="center" |40.8% 122,640201651.3% 163,387
|align="center" |36.6% 116,454
Alaska regularly supports Republicans in presidential elections and has done so since statehood. Republicans have won the state's electoral college votes in all but one election that it has participated in (1964). No state has voted for a Democratic presidential candidate fewer times. Alaska was carried by Democratic nominee Lyndon B. Johnson during his landslide election in 1964, while the 1960 and 1968 elections were close. Since 1972, however, Republicans have carried the state by large margins. In 2008, Republican John McCain defeated Democrat Barack Obama in Alaska, 59.49% to 37.83%. McCain's running mate was Sarah Palin, the state's governor and the first Alaskan on a major party ticket. Obama lost Alaska again in 2012, but he captured 40% of the state's vote in that election, making him the first Democrat to do so since 1968.
The Alaska Bush, central Juneau, midtown and downtown Anchorage, and the areas surrounding the University of Alaska Fairbanks campus and Ester have been strongholds of the Democratic Party. The Matanuska-Susitna Borough, the majority of Fairbanks (including North Pole and the military base), and South Anchorage typically have the strongest Republican showing. Well over half of all registered voters have chosen "Non-Partisan" or "Undeclared" as their affiliation, despite recent attempts to close primaries to unaffiliated voters.
Because of its population relative to other U.S. states, Alaska has only one member in the U.S. House of Representatives. This seat is held by Republican Don Young, who was re-elected to his 21st consecutive term in 2012. Alaska's at-large congressional district is one of the largest parliamentary constituencies in the world.
In 2008, Governor Sarah Palin became the first Republican woman to run on a national ticket when she became John McCain's running mate. She continued to be a prominent national figure even after resigning from the governor's job in July 2009.
Alaska's United States Senators belong to Class 2 and Class 3. In 2008, Democrat Mark Begich, mayor of Anchorage, defeated long-time Republican senator Ted Stevens. Stevens had been convicted on seven felony counts of failing to report gifts on Senate financial disclosure forms one week before the election. The conviction was set aside in April 2009 after evidence of prosecutorial misconduct emerged.
Republican Frank Murkowski held the state's other senatorial position. After being elected governor in 2002, he resigned from the Senate and appointed his daughter, State Representative Lisa Murkowski, as his successor. She won full six-year terms in 2004 and 2010.
Cities, towns and boroughs
thumb|180px|Anchorage, Alaska, Alaska's largest city.
thumb|180px|Fairbanks, Alaska's second-largest city and by a significant margin the largest city in Alaska's interior.
thumb|180px|Juneau, Alaska's third-largest city and its capital.
thumb|180px|Bethel, the largest city in the Unorganized Borough and in rural Alaska.
thumb|180px|Homer, showing (from bottom to top) the edge of downtown, its airport and the Spit.
thumb|180px|Barrow (Browerville neighborhood near Eben Hopson Middle School shown), known colloquially for many years by the nickname "Top of the World", is the northernmost city in the United States.
thumb|180px|Cordova, built in the early 20th century to support the Kennecott Mines and the Copper River and Northwestern Railway, has persevered as a fishing community since their closure.
thumb|180px|Main Street in Talkeetna.
Unlike most other U.S. states, Alaska is not divided into counties but into boroughs. Many of the more densely populated parts of the state are part of Alaska's 16 boroughs, which function somewhat similarly to counties in other states. However, unlike county-equivalents in the other 49 states, the boroughs do not cover the entire land area of the state. The area not part of any borough is referred to as the Unorganized Borough.
The Unorganized Borough has no government of its own, but the U.S. Census Bureau in cooperation with the state divided the Unorganized Borough into 11 census areas solely for the purposes of statistical analysis and presentation. A recording district is a mechanism for administration of the public record in Alaska. The state is divided into 34 recording districts which are centrally administered under a State Recorder. All recording districts use the same acceptance criteria, fee schedule, etc., for accepting documents into the public record.
Whereas many U.S. states use a three-tiered system of decentralization—state/county/township—most of Alaska uses only two tiers—state/borough. Owing to the low population density, most of the land is located in the Unorganized Borough. As the name implies, it has no intermediate borough government but is administered directly by the state government. In 2000, 57.71% of Alaska's area had this status, with 13.05% of the population.
Anchorage merged the city government with the Greater Anchorage Area Borough in 1975 to form the Municipality of Anchorage, containing the city proper and the communities of Eagle River, Chugiak, Peters Creek, Girdwood, Bird, and Indian. Fairbanks has a separate borough (the Fairbanks North Star Borough) and municipality (the City of Fairbanks).
The state's most populous city is Anchorage, home to 278,700 people in 2006, 225,744 of whom lived in the urbanized area. The richest location in Alaska by per capita income is Halibut Cove ($89,895). Yakutat City, Sitka, Juneau, and Anchorage are the four largest cities in the U.S. by area.
Cities and census-designated places (by population)
As reflected in the 2010 United States Census, Alaska has a total of 355 incorporated cities and census-designated places (CDPs). The tally of cities includes four unified municipalities, essentially the equivalent of a consolidated city–county. The majority of these communities are located in the rural expanse of Alaska known as "The Bush" and are unconnected to the contiguous North American road network. The table at the bottom of this section lists the 100 largest cities and census-designated places in Alaska, in population order.
Of Alaska's 2010 Census population figure of 710,231, 20,429 people, or 2.88% of the population, did not live in an incorporated city or census-designated place. Approximately three-quarters of that figure were people who live in urban and suburban neighborhoods on the outskirts of the city limits of Ketchikan, Kodiak, Palmer and Wasilla. CDPs have not been established for these areas by the United States Census Bureau, except that seven CDPs were established for the Ketchikan-area neighborhoods in the 1980 Census (Clover Pass, Herring Cove, Ketchikan East, Mountain Point, North Tongass Highway, Pennock Island and Saxman East), but have not been used since. The remaining population was scattered throughout Alaska, both within organized boroughs and in the Unorganized Borough, in largely remote areas.
№ Community name Type 2010 Pop. 1 Anchorage City 291,826 2 Fairbanks City 31,535 3 Juneau City 31,275 4 Badger CDP 19,482 5 Knik-Fairview CDP 14,923 6 College CDP 12,964 7 Sitka City 8,881 8 Lakes CDP 8,364 9 Tanaina CDP 8,197 10 Ketchikan City 8,050 11 Kalifornsky CDP 7,850 12 Wasilla City 7,831 13 Meadow Lakes CDP 7,570 14 Kenai City 7,100 15 Steele Creek CDP 6,662 16 Kodiak City 6,130 17 Bethel City 6,080 18 Palmer City 5,937 19 Chena Ridge CDP 5,791 20 Sterling CDP 5,617 21 Gateway CDP 5,552 22 Homer City 5,003 23 Farmers Loop CDP 4,853 24 Fishhook CDP 4,679 25 Nikiski CDP 4,493 26 Unalaska City 4,376 27 Barrow City 4,212 28 Soldotna City 4,163 29 Valdez City 3,976 30 Nome City 3,598 31 Goldstream CDP 3,557 32 Big Lake CDP 3,350 33 Butte CDP 3,246 34 Kotzebue City 3,201 35 Petersburg City 2,948 36 Seward City 2,693 37 Eielson AFB CDP 2,647 38 Ester CDP 2,422 39 Wrangell City 2,369 40 Dillingham City 2,329 41 Deltana CDP 2,251 42 Cordova City 2,239 43 Prudhoe Bay CDP 2,174 44 North Pole City 2,117 45 Willow CDP 2,102 46 Ridgeway CDP 2,022 47 Bear Creek CDP 1,956 48 Fritz Creek CDP 1,932 49 Anchor Point CDP 1,930 50 Houston City 1,912 № Community name Type 2010 Pop. 51 Haines CDP 1,713 52 Lazy Mountain CDP 1,479 53 Sutton-Alpine CDP 1,447 54 Metlakatla CDP 1,405 55 Cohoe CDP 1,364 56 Kodiak Station CDP 1,301 57 Susitna North CDP 1,260 58 Tok CDP 1,258 59 Craig City 1,201 60 Diamond Ridge CDP 1,156 61 Salcha CDP 1,095 62 Hooper Bay City 1,093 63 Farm Loop CDP 1,028 64 Akutan City 1,027 65 Healy CDP 1,021 66 Salamatof CDP 980 67 Sand Point City 976 68 Delta Junction City 958 69 Chevak City 938 King Cove City 71 Skagway CDP 920 72 Ninilchik CDP 883 73 Funny River CDP 877 74 Talkeetna CDP 876 75 Buffalo Soapstone CDP 855 76 Selawik City 829 77 Togiak City 817 78 Mountain Village City 813 79 Emmonak City 762 80 Hoonah City 760 81 Klawock City 755 82 Moose Creek CDP 747 83 Knik River CDP 744 84 Pleasant Valley CDP 725 85 Kwethluk City 72186 Two Rivers CDP 719 Women's Bay CDP 88 Unalakleet City 688 89 Fox River CDP 685 90 Gambell City 681 91 Alakanuk City 677 92 Point Hope City 674 93 Savoonga City 671 94 Quinhagak City 669 95 Noorvik City 668 96 Yakutat CDP 662 97 Kipnuk CDP 639 98 Akiachak CDP 627 99 Happy Valley CDP 593 100 Big Delta CDP 591
Education
thumb|The Kachemak Bay Campus of the University of Alaska Anchorage, located in downtown Homer.
The Alaska Department of Education and Early Development administers many school districts in Alaska. In addition, the state operates a boarding school, Mt. Edgecumbe High School in Sitka, and provides partial funding for other boarding schools, including Nenana Student Living Center in Nenana and The Galena Interior Learning Academy in Galena.
There are more than a dozen colleges and universities in Alaska. Accredited universities in Alaska include the University of Alaska Anchorage, University of Alaska Fairbanks, University of Alaska Southeast, and Alaska Pacific University.These are the only three universities in the state ranked by U.S. News & World Report. Alaska is the only state with no institutions that are part of NCAA Division I.
The Alaska Department of Labor and Workforce Development operates AVTEC, Alaska's Institute of Technology. Campuses in Seward and Anchorage offer training programs ranging from one week to 11 months in areas as diverse as information technology, welding, nursing, and mechanics.
Alaska has had a problem with a "brain drain". Many of its young people, including most of the highest academic achievers, leave the state after high school graduation and do not return. Alaska has no law school or medical school. The University of Alaska has attempted to combat this by offering partial four-year scholarships to the top 10% of Alaska high school graduates, via the Alaska Scholars Program.
Public health and public safety
The Alaska State Troopers are Alaska's statewide police force. They have a long and storied history, but were not an official organization until 1941. Before the force was officially organized, law enforcement in Alaska was handled by various federal agencies. Larger towns usually have their own local police and some villages rely on "Public Safety Officers" who have police training but do not carry firearms. In much of the state, the troopers serve as the only police force available. In addition to enforcing traffic and criminal law, wildlife Troopers enforce hunting and fishing regulations. Due to the varied terrain and wide scope of the Troopers' duties, they employ a wide variety of land, air, and water patrol vehicles.
Many rural communities in Alaska are considered "dry", having outlawed the importation of alcoholic beverages. Suicide rates for rural residents are higher than those for urban residents.
Domestic abuse and other violent crimes are also at high levels in the state; this is in part linked to alcohol abuse. Alaska has the highest rate of sexual assault in the nation, especially in rural areas. The average age of sexual assault victims is 16 years old. In four out of five cases, the suspects were relatives, friends or acquaintances.
Culture
thumb|A dog team in the Iditarod Trail Sled Dog Race, arguably the most popular winter event in Alaska.
Some of Alaska's popular annual events are the Iditarod Trail Sled Dog Race that starts in Anchorage and ends in Nome, World Ice Art Championships in Fairbanks, the Blueberry Festival and Alaska Hummingbird Festival in Ketchikan, the Sitka Whale Fest, and the Stikine River Garnet Fest in Wrangell. The Stikine River attracts the largest springtime concentration of American bald eagles in the world.
The Alaska Native Heritage Center celebrates the rich heritage of Alaska's 11 cultural groups. Their purpose is to encourage cross-cultural exchanges among all people and enhance self-esteem among Native people. The Alaska Native Arts Foundation promotes and markets Native art from all regions and cultures in the State, using the internet.
Music
Influences on music in Alaska include the traditional music of Alaska Natives as well as folk music brought by later immigrants from Russia and Europe. Prominent musicians from Alaska include singer Jewel, traditional Aleut flautist Mary Youngblood, folk singer-songwriter Libby Roderick, Christian music singer/songwriter Lincoln Brewster, metal/post hardcore band 36 Crazyfists and the groups Pamyua and Portugal. The Man.
There are many established music festivals in Alaska, including the Alaska Folk Festival, the Fairbanks Summer Arts Festival, the Anchorage Folk Festival, the Athabascan Old-Time Fiddling Festival, the Sitka Jazz Festival, and the Sitka Summer Music Festival. The most prominent orchestra in Alaska is the Anchorage Symphony Orchestra, though the Fairbanks Symphony Orchestra and Juneau Symphony are also notable. The Anchorage Opera is currently the state's only professional opera company, though there are several volunteer and semi-professional organizations in the state as well.
The official state song of Alaska is "Alaska's Flag", which was adopted in 1955; it celebrates the flag of Alaska.
Alaska in film and on television
thumb|right|upright|Films featuring Alaskan wolves usually employ domesticated wolf-dog hybrids to stand in for wild wolves.
Alaska's first independent picture entirely made in Alaska was The Chechahcos, produced by Alaskan businessman Austin E. Lathrop and filmed in and around Anchorage. Released in 1924 by the Alaska Moving Picture Corporation, it was the only film the company made.
One of the most prominent movies filmed in Alaska is MGM's Eskimo/Mala The Magnificent, starring Alaska Native Ray Mala. In 1932 an expedition set out from MGM's studios in Hollywood to Alaska to film what was then billed as "The Biggest Picture Ever Made." Upon arriving in Alaska, they set up "Camp Hollywood" in Northwest Alaska, where they lived for the duration of the filming. Louis B. Mayer spared no expense in spite of the remote location, going so far as to hire the chef from the Hotel Roosevelt in Hollywood to prepare meals.
When Eskimo premiered at the Astor Theatre in New York City, the studio received the largest amount of feedback in its history to that point. Eskimo was critically acclaimed and released worldwide; as a result, Mala became an international movie star. Eskimo won the first Oscar for Best Film Editing at the Academy Awards, and showcased and preserved aspects of Inupiat culture on film.
The 1983 Disney movie Never Cry Wolf was at least partially shot in Alaska. The 1991 film White Fang, based on Jack London's novel and starring Ethan Hawke, was filmed in and around Haines. Steven Seagal's 1994 On Deadly Ground, starring Michael Caine, was filmed in part at the Worthington Glacier near Valdez. The 1999 John Sayles film Limbo, starring David Strathairn, Mary Elizabeth Mastrantonio, and Kris Kristofferson, was filmed in Juneau.
The psychological thriller Insomnia, starring Al Pacino and Robin Williams, was shot in Canada, but was set in Alaska. Into the Wild, the 2007 film directed by Sean Penn, was partially filmed and set in Alaska. The film, which is based on the novel of the same name, follows the adventures of Christopher McCandless, who died in a remote abandoned bus along the Stampede Trail west of Healy in 1992.
Many films and television shows set in Alaska are not filmed there; for example, Northern Exposure, set in the fictional town of Cicely, Alaska, was filmed in Roslyn, Washington. The 2007 horror feature 30 Days of Night is set in Barrow, but was filmed in New Zealand.
Many reality television shows are filmed in Alaska. In 2011, the Anchorage Daily News found ten set in the state.
State symbols
thumb|The forget-me-not is the state's official flower and bears the same blue and gold as the state flag.
State motto: North to the Future
Nicknames: "The Last Frontier" or "Land of the Midnight Sun" or "Seward's Icebox"
State bird: willow ptarmigan, adopted by the Territorial Legislature in 1955. It is a small Arctic grouse that lives among willows and on open tundra and muskeg. Plumage is brown in summer, changing to white in winter. The willow ptarmigan is common in much of Alaska.
State fish: king salmon, adopted 1962.
State flower: wild/native forget-me-not, adopted by the Territorial Legislature in 1917. It is a perennial that is found throughout Alaska, from Hyder to the Arctic Coast, and west to the Aleutians.
State fossil: woolly mammoth, adopted 1986.
State gem: jade, adopted 1968.
State insect: four-spot skimmer dragonfly, adopted 1995.
State land mammal: moose, adopted 1998.
State marine mammal: bowhead whale, adopted 1983.
State mineral: gold, adopted 1968.
State song: "Alaska's Flag"
State sport: dog mushing, adopted 1972.
State tree: Sitka spruce, adopted 1962.
State dog: Alaskan Malamute, adopted 2010.
State soil: Tanana, adoption date unknown.TANANA – ALASKA STATE SOIL, U.S. Department of Agriculture
See also
Index of Alaska-related articles
Outline of Alaska – organized list of topics about Alaska
Sports in Alaska
Notes
References
External links
Alaska's Digital Archives
Alaska Inter-Tribal Council
US federal government
Alaska State Guide from the Library of Congress
Energy & Environmental Data for Alaska
USGS real-time, geographic, and other scientific resources of Alaska
US Census Bureau
Alaska State Facts
Alaska Statehood Subject Guide from the Eisenhower Presidential Library
Alaska Statehood documents, Dwight D. Eisenhower Presidential Library
Alaska state government
State of Alaska website
Alaska State Databases – Annotated list of searchable databases produced by Alaska state agencies and compiled by the Government Documents Roundtable of the American Library Association.
Alaska Department of Natural Resources, Recorder's Office
Category:Arctic Ocean
Category:Former Russian colonies
Category:States and territories established in 1959
Category:States of the United States
Category:U.S. states with multiple time zones
Category:1959 establishments in the United States
Category:Western United States | 624 | 2017-01 |
Buddhism
thumb|alt=standing Buddha statue with draped garment and halo|Standing Buddha statue at the Tokyo National Museum. One of the earliest known representations of the Buddha, 1st–2nd century CE.
Buddhism is a religion"Buddhism". (2009). In Encyclopædia Britannica. Retrieved November 26, 2009, from Encyclopædia Britannica Online Library Edition and dharma that encompasses a variety of traditions, beliefs and spiritual practices largely based on teachings attributed to the Buddha. Buddhism originated in India sometime between the 6th and 4th centuries BCE, from where it spread through much of Asia; it later declined in India during the Middle Ages. Two major extant branches of Buddhism are generally recognized by scholars: Theravada (Pali: "The School of the Elders") and Mahayana (Sanskrit: "The Great Vehicle"). Buddhism is the world's fourth-largest religion, with over 500 million followers, or 7% of the global population, known as Buddhists.
Buddhist schools vary on the exact nature of the path to liberation, the importance and canonicity of various teachings and scriptures, and especially their respective practices. Practices of Buddhism include taking refuge in the Buddha, the Dharma and the Sangha, study of scriptures, observance of moral precepts, renunciation of craving and attachment, the practice of meditation (including calm and insight), the cultivation of wisdom, loving-kindness and compassion, the Mahayana practice of bodhicitta and the Vajrayana practices of generation stage and completion stage.
In Theravada the ultimate goal is the attainment of the sublime state of Nirvana, achieved by practicing the Noble Eightfold Path (also known as the Middle Way), thus escaping what is seen as a cycle of suffering and rebirth. Theravada has a widespread following in Sri Lanka and Southeast Asia.
Mahayana, which includes the traditions of Pure Land, Zen, Nichiren Buddhism, Shingon and Tiantai (Tendai), is found throughout East Asia. Rather than Nirvana, Mahayana instead aspires to Buddhahood via the bodhisattva path, a state wherein one remains in the cycle of rebirth to help other beings reach awakening. Vajrayana, a body of teachings attributed to Indian siddhas, may be viewed as a third branch or merely a part of Mahayana. Tibetan Buddhism, which preserves the Vajrayana teachings of eighth century India, is practiced in regions surrounding the Himalayas, Mongolia and Kalmykia."Candles in the Dark: A New Spirit for a Plural World" by Barbara Sundberg Baudot, p305 Tibetan Buddhism aspires to Buddhahood or rainbow body.
Life of the Buddha
thumb|left|alt=stone relief sculpture of horse and men |"The Great Departure", relic depicting Gautama leaving home, first or second century (Musée Guimet).
Buddhism is an Indian religion attributed to the teachings of the Buddha. The details of the Buddha's life are mentioned in many early Buddhist texts but are inconsistent; his social background and life details are difficult to prove, and the precise dates are uncertain.
The evidence of the early texts suggests that he was born as Siddhārtha Gautama in Lumbini and grew up in Kapilavatthu, a town in the plains region of the modern Nepal-India border, and that he spent his life in what is now modern Bihar and Uttar Pradesh. Some hagiographic legends state that his father was a king named Suddhodana, his mother was queen Maya, and he was born in the Lumbini gardens. However, scholars such as Richard Gombrich consider this a dubious claim, because a combination of evidence suggests he was born in the Shakya community, which later gave him the title Shakyamuni and which was governed by a small oligarchy or republic-like council where there were no ranks but where seniority mattered instead. Some of the stories about the Buddha, his life, his teachings, and claims about the society he grew up in may have been invented and interpolated at a later time into the Buddhist texts.
thumb|alt=Dhamek Stupa shrine in Sarnath, India, built by Ashoka where the Buddha gave his first sermon|Dhamek Stupa in Sarnath, India, where the Buddha gave his first sermon. It was built by Ashoka.
Early Buddhist canonical texts and early biographies of Buddha state that Gautama studied under Vedic teachers, such as Alara Kalama (Sanskrit: Arada Kalama) and Uddaka Ramaputta (Sanskrit: Udraka Ramaputra), learning meditation and ancient philosophies, particularly the concept of "nothingness, emptiness" from the former, and "what is neither seen nor unseen" from the latter.
thumb|left|alt=Gold colored statue of Buddha reclining on his right side|Buddha statue depicting Parinirvana (Mahaparinirvana Temple, Kushinagar, Uttar Pradesh, India).
The Buddha was moved by the innate suffering of humanity and meditated alone, for an extended period of time and in various ways including asceticism, on the nature of suffering and the means to overcome it. He famously sat in meditation under a Ficus religiosa tree, now called the Bodhi Tree, in the town of Bodh Gaya in the Gangetic plains region of South Asia. He reached enlightenment, discovering what Buddhists call the Middle Way (Skt. madhyamā-pratipad), a path of spiritual practice to end suffering (dukkha) from rebirths in Saṃsāra. As an enlightened being, he attracted followers and founded a Sangha (monastic order). As the Buddha, he spent the rest of his life teaching the Dharma he had discovered, and died at the age of 80 in Kushinagar, India.
Buddha's teachings were propagated by his followers, which in the last centuries of the 1st millennium BCE became over 18 Buddhist sub-schools of thought, each with its own basket of texts containing different interpretations and authentic teachings of the Buddha; these over time evolved into many traditions of which the more well known and widespread in the modern era are Theravada, Mahayana and Vajrayana Buddhism.
Buddhist concepts
Dukkha
thumb|alt=color manuscript illustration of Buddha teaching the Four Noble Truths, Nalanda, Bihar, India|The Buddha teaching the Four Noble Truths. Sanskrit manuscript. Nalanda, Bihar, India.
Dukkha is a central concept of Buddhism, part of its Four Noble Truths doctrine, and a central characteristic of life in this world. It can be translated as "incapable of satisfying,"Ajahn Sumedho, The First Noble Truth "the unsatisfactory nature and the general insecurity of all conditioned phenomena", or "painful." Dukkha is most commonly translated as "suffering," which is an incorrect translation, since it refers not to literal suffering but to the ultimately unsatisfactory nature of temporary states and things, including pleasant but temporary experiences.
The Four Truths express the basic orientation of Buddhism: we crave and cling to impermanent states and things, which is dukkha, "incapable of satisfying"Ajahn Sumedho, The First Noble Truth and painful. This keeps us caught in saṃsāra, the endless cycle of repeated rebirth, dukkha and dying again.
But there is a way to liberation from this endless cycle to the state of nirvana, namely following the Noble Eightfold Path.
The truth of dukkha is the basic insight that life in this "mundane world," with its clinging and craving to impermanent states and things, is dukkha and unsatisfactory. We expect happiness from states and things which are impermanent, and therefore cannot attain real happiness.
Dukkha arises when we crave (Pali: tanha) and cling to these changing phenomena. The clinging and craving produces karma, which ties us to samsara, the round of death and rebirth.The Four Noble Truths – By Bhikkhu Bodhi Craving includes kama-tanha, craving for sense-pleasures; bhava-tanha, craving to continue the cycle of life and death, including rebirth; and vibhava-tanha, craving to not experience the world and painful feelings.
Dukkha ceases, or can be confined, when craving and clinging cease or are confined. This also means that no more karma is being produced, and rebirth ends. Cessation is nirvana, "blowing out," and peace of mind.
By following the Buddhist path to moksha, liberation, one starts to disengage from craving and clinging to impermanent states and things. The term "path" is usually taken to mean the Noble Eightfold Path, but other versions of "the path" can also be found in the Nikayas. The Theravada tradition regards insight into the four truths as liberating in itself.
In Buddhism, dukkha is one of the three marks of existence, along with impermanence and anattā (non-self). Buddhism, like other major Indian religions, asserts that everything is impermanent (anicca), but, unlike them, also asserts that there is no permanent self or soul in living beings (anattā).Anatta Buddhism, Encyclopedia Britannica (2013)[a] [b] Gombrich (2006), page 47, Quote: "(...) Buddha's teaching that beings have no soul, no abiding essence. This 'no-soul doctrine' (anatta-vada) he expounded in his second sermon."[a] Anatta, Encyclopedia Britannica (2013), Quote: "Anatta in Buddhism, the doctrine that there is in humans no permanent, underlying soul. The concept of anatta, or anatman, is a departure from the Hindu belief in atman ("the self").";[b] Steven Collins (1994), Religion and Practical Reason (Editors: Frank Reynolds, David Tracy), State Univ of New York Press, ISBN 978-0791422175, page 64; "Central to Buddhist soteriology is the doctrine of not-self (Pali: anattā, Sanskrit: anātman, the opposed doctrine of ātman is central to Brahmanical thought). Put very briefly, this is the [Buddhist] doctrine that human beings have no soul, no self, no unchanging essence.";[c] John C. Plott et al (2000), Global History of Philosophy: The Axial Age, Volume 1, Motilal Banarsidass, ISBN 978-8120801585, page 63, Quote: "The Buddhist schools reject any Ātman concept. As we have already observed, this is the basic and ineradicable distinction between Hinduism and Buddhism";[d] Katie Javanaud (2013), Is The Buddhist 'No-Self' Doctrine Compatible With Pursuing Nirvana?, Philosophy Now;[e] David Loy (1982), Enlightenment in Buddhism and Advaita Vedanta: Are Nirvana and Moksha the Same?, International Philosophical Quarterly, Volume 23, Issue 1, pages 65–74 The ignorance or misperception (avijjā) that anything is permanent or that there is self in any being is considered a wrong understanding, and the primary source of clinging and dukkha., Quote: "(...) anatta is the doctrine of non-self, and is an exteme empiricist doctrine that holds that the notion of an unchanging permanent self is a fiction and has no reality. According to Buddhist doctrine, the individual person consists of five skandhas or heaps – the body, feelings, perceptions, impulses and consciousness. The belief in a self or soul, over these five skandhas, is illusory and the cause of suffering."
Rebirth
left|thumb|alt=Traditional Tibetan Buddhist Thangka depicting the Wheel of Life|Traditional Tibetan Buddhist Thangka depicting the Wheel of Life with its six realms
Saṃsāra
Saṃsāra means "wandering" or "world", with the connotation of cyclic, circuitous change. It refers to the theory of rebirth and "cyclicality of all life, matter, existence", a fundamental assumption of Buddhism, as with all major Indian religions. Samsara in Buddhism is considered to be dukkha, unsatisfactory and painful, perpetuated by desire and avidya (ignorance), and the resulting karma.
The theory of rebirths, and realms in which these rebirths can occur, is extensively developed in Buddhism, in particular Tibetan Buddhism with its wheel of existence (Bhavacakra) doctrine. Liberation from this cycle of existence, Nirvana, has been the foundation and the most important historical justification of Buddhism.
The later Buddhist texts assert that rebirth can occur in six realms of existence, namely three good realms (heavenly, demi-god, human) and three evil realms (animal, hungry ghosts, hellish). Samsara ends if a person attains nirvana, the "blowing out" of the desires and the gaining of true insight into impermanence and non-self reality.
Rebirth
thumb|alt=A very large hill behind two palm trees and a boulevard, people walking are about one fifth the hill's height|Gautama's cremation site, Ramabhar Stupa in Kushinagar, Uttar Pradesh, India
Rebirth refers to a process whereby beings go through a succession of lifetimes as one of many possible forms of sentient life, each running from conception to death. In Buddhist thought, this rebirth does not involve any soul, because of its doctrine of anattā (Sanskrit: anātman, no-self doctrine) which rejects the concepts of a permanent self or an unchanging, eternal soul, as it is called in Hinduism and Christianity. According to Buddhism there ultimately is no such thing as a self in any being or any essence in any thing.[a] [b] , Quote: "(...) anatta is the doctrine of non-self, and is an exteme empiricist doctrine that holds that the notion of an unchanging permanent self is a fiction and has no reality. According to Buddhist doctrine, the individual person consists of five skandhas or heaps – the body, feelings, perceptions, impulses and consciousness. The belief in a self or soul, over these five skandhas, is illusory and the cause of suffering."[c] Gombrich (2006), page 47, Quote: "(...) Buddha's teaching that beings have no soul, no abiding essence. This 'no-soul doctrine' (anatta-vada) he expounded in his second sermon."
The Buddhist traditions have traditionally disagreed on what it is in a person that is reborn, as well as how quickly the rebirth occurs after each death. Some Buddhist traditions assert that "no self" doctrine means that there is no perduring self, but there is avacya (inexpressible) self which migrates from one life to another. The majority of Buddhist traditions, in contrast, assert that vijñāna (a person's consciousness) though evolving, exists as a continuum and is the mechanistic basis of what undergoes rebirth, rebecoming and redeath. The rebirth depends on the merit or demerit gained by one's karma, as well as those accrued on one's behalf by a family member.
Each rebirth takes place within one of five realms according to Theravadins, or six according to other schools – heavenly, demi-gods, humans, animals, hungry ghosts and hellish.
In East Asian and Tibetan Buddhism, rebirth is not instantaneous, and there is an intermediate state (Tibetan "bardo") between one life and the next. The orthodox Theravada position rejects such an intermediate state, and asserts that rebirth of a being is immediate. However, there are passages in the Samyutta Nikaya of the Pali Canon that seem to lend support to the idea that the Buddha taught of an intermediate stage between one life and the next.
Karma
In Buddhism, karma (from Sanskrit: "action, work") drives saṃsāra, the endless cycle of suffering and rebirth for each being. Good, skilful deeds (Pali: "kusala") and bad, unskilful deeds (Pāli: "akusala") produce "seeds" in the unconscious receptacle (ālaya) that mature later, either in this life or in a subsequent rebirth. The existence of karma is a core belief in Buddhism, as in all major Indian religions, but it implies neither fatalism nor that everything that happens to a person is caused by karma.
A central aspect of the Buddhist theory of karma is that intent (cetanā) matters and is essential to bring about a consequence or phala "fruit" or vipāka "result". However, good or bad karma accumulates even if there is no physical action, and just having ill or good thoughts creates karmic seeds; thus, actions of body, speech or mind all lead to karmic seeds. In the Buddhist traditions, the aspects of life affected by the law of karma in the past and current births of a being include the form of rebirth, the realm of rebirth, social class, character and the major circumstances of a lifetime. Karma operates like the laws of physics, without external intervention, on every being in all six realms of existence, including human beings and gods.
A notable aspect of the karma theory in Buddhism is merit transfer. A person accumulates merit not only through intentions and ethical living, but can also gain merit from others by exchanging goods and services, such as through dāna (charity to monks or nuns). Further, a person can transfer his or her own good karma to living family members and ancestors.
Liberation
thumb|right|alt=stone Mahabodhi temple in Bodh Gaya, India, where Gautama Buddha attained Nirvana under the Bodhi Tree|Mahabodhi Temple in Bodh Gaya, India, where Gautama Buddha attained nirvana under the Bodhi Tree (left)
Nirvana (nibbāna) has been the primary and soteriological goal of the Buddhist path for monastic life since the time of the Buddha. The term "path" is usually taken to mean the Noble Eightfold Path, but other versions of "the path" can also be found in the Nikayas. For example, in some Pali canonical texts, the Buddha explains that the cultivation of the Noble Eightfold Path by a learner monk leads to the development of two further paths of the Arhats, which are right knowledge or insight (sammā-ñāṇa), and right liberation or release (sammā-vimutti).
Nirvana literally means "blowing out, quenching, becoming extinguished". In early Buddhist texts, it is the state of restraint and self-control that leads to the "blowing out" and the ending of the cycles of sufferings associated with rebirths and redeaths., Quote: "This general scheme remained basic to later Hinduism, to Jainism, and to Buddhism. Eternal salvation, to use the Christian term, is not conceived of as world without end; we have already got that, called samsara, the world of rebirth and redeath: that is the problem, not the solution. The ultimate aim is the timeless state of moksha, or as the Buddhists seem to have been the first to call it, nirvana." Many later Buddhist texts describe nirvana as identical with Anatta with complete "Emptiness, Nothingness". In some texts, the state is described with greater detail, such as passing through the gate of Emptiness (sunyata) – realizing that there is no soul or self in any living being, then passing through the gate of signlessness (animitta) – realizing that nirvana cannot be perceived, and finally passing through the gate of wishlessness (apranihita) – realizing that nirvana is the state of not even wishing for nirvana.
The nirvana state has been described in Buddhist texts partly in a manner similar to other Indian religions, as the state of complete liberation, enlightenment, highest happiness, bliss, fearlessness, freedom, permanence, non-dependent origination, unfathomable, indescribable. It has also been described in part differently, as a state of spiritual release marked by "emptiness" and realization of non-Self.
While Buddhism considers the liberation from Saṃsāra as the ultimate spiritual goal, in traditional practice, the primary focus of a vast majority of lay Buddhists has been to seek and accumulate merit through good deeds, donations to monks and various Buddhist rituals in order to gain better rebirths rather than nirvana.
Bhavana (practice, cultivation)
Basic practices include sila (ethics), samadhi (meditation, dhyana) and prajna (wisdom), as described in the Noble Eightfold Path. An important additional practice is a kind and compassionate attitude toward every living being and the world. Devotion is also important in some Buddhist traditions, and in the Tibetan traditions visualizations of deities and mandalas are important. The value of textual study is regarded differently in the various Buddhist traditions. It is central to Theravada and highly important to Tibetan Buddhism, while the Zen tradition takes an ambiguous stance.
The Buddhist path
While the Noble Eightfold Path is best-known in the west, a wide variety of practices and stages have been used and described in the Buddhist traditions. Even in the Theravada canon, the Pali-suttas, various often irreconcilable sequences can be found. According to Carol Anderson, the Theravada-canon lacks "an overriding and comprehensive structure of the path to nibbana."
Middle Way
An important guiding principle of Buddhist practice is the Middle Way (madhyamapratipad). It was part of the Buddha's first sermon, where he presented the Noble Eightfold Path as a 'middle way' between the extremes of asceticism and hedonistic sense pleasures. In Buddhism, states Harvey, the doctrine of "dependent arising" (conditioned arising, pratītyasamutpāda), which explains rebirth, is viewed as the 'middle way' between the doctrine that a being has a "permanent soul" involved in rebirth (eternalism) and the doctrine that "death is final and there is no rebirth" (annihilationism).
Theravada
thumb|160px|alt=ships wheel with eight spokes represents the Noble Eightfold Path|The Dharmachakra represents the Noble Eightfold Path
Noble Eightfold Path
The Noble Eightfold Path, or "Eightfold Path of the Noble Ones", consists of a set of eight interconnected factors or conditions, that when developed together, lead to the cessation of dukkha. These eight factors are: Right View (or Right Understanding), Right Intention (or Right Thought), Right Speech, Right Action, Right Livelihood, Right Effort, Right Mindfulness, and Right Concentration.
This Eightfold Path is the fourth of the Buddha's Four Noble Truths, and asserts the path to the cessation of dukkha (suffering, pain, unsatisfactoriness). The path teaches that the enlightened ones stopped their craving, clinging and karmic accumulations, and thus ended their endless cycles of rebirth and suffering.
The Noble Eightfold Path is grouped into three basic divisions, as follows:
Wisdom (Sanskrit: prajñā, Pāli: paññā):
1. Right view (samyag dṛṣṭi, sammā ditthi): the belief that there is an afterlife and not everything ends with death, and that the Buddha taught and followed a successful path to nirvana. According to Peter Harvey, right view is held in Buddhism as a belief in the Buddhist principles of karma and rebirth, and the importance of the Four Noble Truths and the True Realities.
2. Right intention (samyag saṃkalpa, sammā saṅkappa): the giving up of home and adopting the life of a religious mendicant in order to follow the path; this concept, states Harvey, aims at peaceful renunciation, into an environment of non-sensuality, non-ill-will (to lovingkindness), away from cruelty (to compassion).
Moral virtues (Sanskrit: śīla, Pāli: sīla):
3. Right speech (samyag vāc, sammā vāca): no lying, no rude speech, no telling one person what another says about him, speaking that which leads to salvation.
4. Right action (samyag karman, sammā kammanta): no killing or injuring, no taking what is not given; no sexual acts in monastic pursuit; for lay Buddhists, no sensual misconduct such as sexual involvement with someone married, or with an unmarried woman protected by her parents or relatives.
5. Right livelihood (samyag ājīvana, sammā ājīva): for monks, begging for food and possessing only what is essential to sustain life; for lay Buddhists, the canonical texts state right livelihood as abstaining from wrong livelihood, explained as not becoming a source or means of suffering to sentient beings by cheating them, or harming or killing them in any way.; Quote: "These five trades, O monks, should not be taken up by a lay follower: trading with weapons, trading in living beings, trading in meat, trading in intoxicants, trading in poison."
Meditation (Sanskrit and Pāli: samādhi):
6. Right effort (samyag vyāyāma, sammā vāyāma): guard against sensual thoughts; this concept, states Harvey, aims at preventing unwholesome states that disrupt meditation.
7. Right mindfulness (samyag smṛti, sammā sati): never be absent-minded, being conscious of what one is doing; this, states Harvey, encourages mindfulness of the impermanence of body, feeling and mind, as well as experiencing the five skandhas, the five hindrances, the four True Realities and the seven factors of awakening.
8. Right concentration (samyag samādhi, sammā samādhi): correct meditation or concentration, explained as the four jhānas.
Mahayana
Six paramitas
thumb|Dāna or charitable giving to monks is a virtue in Buddhism, leading to merit accumulation and better rebirths.
Mahāyāna Buddhism is based principally upon the path of a Bodhisattva. A Bodhisattva refers to one who is on the path to buddhahood. The term Mahāyāna was originally a synonym for Bodhisattvayāna or "Bodhisattva Vehicle."
In the earliest texts of Mahayana Buddhism, the path of a bodhisattva was to awaken the bodhicitta. Between the 1st and 3rd centuries CE, this tradition introduced the Ten Bhumi doctrine, which describes ten levels or stages of awakening. This development was followed by the acceptance that it is impossible to achieve Buddhahood in one (current) lifetime, and that the best goal is not nirvana for oneself, but Buddhahood after climbing through the ten levels during multiple rebirths. Mahayana scholars then outlined an elaborate path, for monks and laypeople, which includes the vow to help teach Buddhist knowledge to other beings, so as to help them cross samsara and liberate themselves, once one reaches Buddhahood in a future rebirth. One part of this path is the Pāramitā (perfections, to cross over), derived from the Jataka tales of the Buddha's numerous rebirths.
The Mahayana texts are inconsistent in their discussion of the Paramitas; some texts include lists of two, others of four, six, ten or fifty-two. The six paramitas have been the most studied, and these are:
Dāna pāramitā: perfection of giving; primarily to monks, nuns and the Buddhist monastic establishment dependent on the alms and gifts of the lay householders, in return for generating religious merit; some texts recommend ritually transferring the merit so accumulated for better rebirth to someone else
Śīla pāramitā: perfection of morality; it outlines ethical behaviour for both the laity and the Mahayana monastic community; this list is similar to Śīla in the Eightfold Path (i.e. Right Speech, Right Action, Right Livelihood)
Kṣānti pāramitā: perfection of patience, willingness to endure hardship
Vīrya pāramitā: perfection of vigour; this is similar to Right Effort in the Eightfold Path
Dhyāna pāramitā: perfection of meditation; this is similar to Right Concentration in the Eightfold Path
Prajñā pāramitā: perfection of insight (wisdom), awakening to the characteristics of existence such as karma, rebirths, impermanence, no-self, dependent origination and emptiness; this is complete acceptance of the Buddha's teaching, then conviction, followed by ultimate realization that "dharmas are non-arising".
In Mahayana Sutras that include ten Paramitas, the additional four perfections are "skillful means, vow, power and knowledge". The most discussed Paramita and the highest rated perfection in Mahayana texts is the "Prajna-paramita", or the "perfection of insight". This insight in the Mahayana tradition, states Shōhei Ichimura, has been the "insight of non-duality or the absence of reality in all things".
Refuge in the Three Jewels
thumb|alt=stone footprint Gautama Buddha with Dharmachakra and Three Jewels|Relic depicting footprint of the Buddha with Dharmachakra and triratna, 1st century CE, Gandhāra.
Traditionally, the first step in most Buddhist schools requires taking Three Refuges, also called the Three Jewels (Sanskrit: triratna, Pali: tiratana) as the foundation of one's religious practice. Pali texts employ the Brahmanical motif of the triple refuge, found in the Rigveda 9.97.47, Rigveda 6.46.9 and Chandogya Upanishad 2.22.3–4. Tibetan Buddhism sometimes adds a fourth refuge, in the lama. The three refuges are believed by Buddhists to be protective and a form of reverence.
The Three Jewels are:
The Buddha, the Gotama, the Blessed One, the Awakened with true knowledge
The Dharma, the precepts, the practice, the Four Truths, the Eightfold Path
The Sangha, order of monks, the community of Buddha's disciples
Reciting the three refuges is considered in Buddhism not as a place to hide, but rather as a thought that purifies, uplifts and strengthens.
Śīla – Buddhist ethics
thumb|alt=stone statue of Gautama Buddha, 1st century CE, Gandhara|Statue of Gautama Buddha, first century CE, Gandhara, present-day Pakistan. (Guimet Museum)
Śīla (Sanskrit) or sīla (Pāli) is the concept of "moral virtues", which forms the second group and an integral part of the Noble Eightfold Path. It consists of right speech, right action and right livelihood.
Śīla appears as ethical precepts for both lay and ordained Buddhist devotees. It includes the Five Precepts for laypeople, the Eight or Ten Precepts for monastic life, as well as rules of Dhamma (Vinaya or Patimokkha) adopted by a monastery.
Precepts
The five precepts (panca-sila) are moral, behavioural and ritual guidelines for lay devotees in Buddhism, while those following a monastic life have rules of conduct (patimokkha). The five precepts apply to both male and female devotees, and these are:
Abstain from killing (Ahimsa);
Abstain from stealing;
Abstain from sensual (including sexual) misconduct;
Abstain from lying;
Abstain from intoxicants.
These precepts are not commandments, and transgressions do not invite religious sanctions; rather, their power has been in the Buddhist belief in karmic consequences and their impact in the afterlife during rebirth. Killing, in Buddhist belief, leads to rebirth in the hellish realm, and for a longer time in more severe conditions if the murder victim was a monk. Adultery, similarly, invites a rebirth as a prostitute or in hell, depending on whether the partner was unmarried or married. Saving animals from slaughter for meat is believed to be a way to acquire merit for a better rebirth. These moral precepts have been voluntarily self-enforced in lay Buddhist culture through the associated belief in karma and rebirth.
The monastic life in Buddhism has additional precepts as part of the patimokkha, and unlike for lay people, transgressions by monks do invite sanctions. Full expulsion from the sangha follows any instance of killing, engaging in sexual intercourse, theft or false claims about one's knowledge. Temporary expulsion follows a lesser offence. The sanctions vary by the monastic fraternity (nikaya).
The precepts for monks in many Buddhist fraternities are eight (asta shila) or ten (das shila). Four of these are the same as for the lay devotee: no killing, no stealing, no lying, and no intoxicants. The other four precepts are:
No sexual activity;
Abstain from eating at the wrong time (e.g. only eat solid food before 12 noon);
Abstain from jewelry, perfume, adornment, entertainment;
Abstain from sleeping on high beds;
Some sangha add two more precepts: abstain from dancing and singing, abstain from accepting money. In addition to these precepts, Buddhist monasteries have hundreds of rules of dhamma conduct, which are a part of its patimokkha.The Ten Precepts, Dasa Sila, The Buddhist Monastic Code, Volume I, Thanissaro Bhikkhu
Vinaya
thumb|alt=Buddhist monks in saffron robes standing performing a ceremony in Hangzhou, China|Monks performing a ceremony in Hangzhou, China
Vinaya is the specific code of conduct for a sangha of monks or nuns. It includes the Patimokkha, a set of 227 offences including 75 rules of decorum for monks, along with penalties for transgression, in the Theravadin tradition. The precise content of the Vinaya Pitaka (scriptures on the Vinaya) differs between schools and traditions, and different monasteries set their own standards on its implementation. The list of patimokkha is recited every fortnight in a ritual gathering of all monks. Buddhist texts with vinaya rules for monasteries have been traced in all Buddhist traditions, with the oldest surviving being the ancient Chinese translations.
Monastic communities in the Buddhist tradition cut normal social ties to family and community, and live as "islands unto themselves". Within a monastic fraternity, a sangha has its own rules. A monk abides by these institutionalized rules, and living life as the vinaya prescribes is not merely a means but very nearly the end in itself. Transgressions by a monk against sangha vinaya rules invite enforcement, which can include temporary or permanent expulsion.
Meditation and insight
thumb|right|alt=bronze Statue of the Buddha in meditation position, Haw Phra Kaew, Vientiane Laos|Statue of the Buddha in meditation position, Haw Phra Kaew, Vientiane, Laos
The Buddhist tradition has incorporated two traditions regarding the use of dhyāna (meditation, Pali jhāna). There is a tradition that stresses attaining prajñā (insight, bodhi, kenshō, vipassana) as the means to awakening and liberation. But it has also incorporated the yogic tradition, as reflected in the use of jhana, which some sutras reject as not leading to the final result of liberation. Schmithausen discerns three possible roads to liberation as described in the suttas, to which Vetter adds the sole practice of dhyana itself, which he sees as the original "liberating practice":
The four Rupa Jhanas themselves constituted the core liberating practice of early Buddhism, or of the Buddha himself;
Mastering the four Rupa Jhanas, where-after "liberating insight" is attained;
Mastering the four Rupa Jhanas and the four Arupa Jhanas, where-after "liberating insight" is attained;
Liberating insight itself suffices.
Dhyana – meditation
left|thumb|alt=Bhikkhus in saffron robes kneeling in Thailand|Bhikkhus in Thailand
A wide range of meditation practices has developed in the Buddhist traditions, but "meditation" primarily refers to the practice of dhyana (jhana). It is a practice in which the attention of the mind is first narrowed to focus on one specific object, such as the breath, a concrete object, or a specific thought, mental image or mantra. After this initial focusing of the mind, the focus is coupled to mindfulness, maintaining a calm mind while being aware of one's surroundings. The practice of dhyana aids in maintaining a calm mind, and in avoiding disturbance of this calm mind through mindfulness of disturbing thoughts and feelings.
Origins
The earliest evidence of yogis and their meditative tradition, states Karel Werner, is found in the Keśin hymn 10.136 of the Rigveda. While evidence suggests meditation was practiced in the centuries preceding the Buddha, the meditative methodologies described in the Buddhist texts are some of the earliest among texts that have survived into the modern era. These methodologies likely incorporate what existed before the Buddha as well as those first developed within Buddhism.
According to Bronkhorst, the Four Dhyanas were a Buddhist invention. Bronkhorst notes that the Buddhist canon has a mass of contradictory statements, that little is known about their relative chronology, and that "there can be no doubt that the canon – including the older parts, the Sutra and Vinaya Pitaka – was composed over a long period of time". Meditative practices were incorporated from other sramanic movements; the Buddhist texts describe the Buddha as having learnt the practice of the formless dhyana from Brahmanical practices, ascribed in the Nikayas to Alara Kalama and Uddaka Ramaputta. The Buddhist canon also describes and criticizes alternative dhyana practices, which likely refer to the pre-existing mainstream meditation practices of Jainism and Hinduism.
Buddha added a new focus and interpretation, particularly through the Four Dhyanas methodology, in which mindfulness is maintained. Further, the focus of meditation and the underlying theory of liberation guiding the meditation has been different in Buddhism. For example, states Bronkhorst, the verse 4.4.23 of the Brihadaranyaka Upanishad with its "become calm, subdued, quiet, patiently enduring, concentrated, one sees soul in oneself" is most probably a meditative state. The Buddhist discussion of meditation is without the concept of soul and the discussion criticizes both the ascetic meditation of Jainism and the "real self, soul" meditation of Hinduism.
Four rupa-jhāna and four arupa-jhāna
For Nirvana, Buddhist texts teach various meditation methodologies, of which rupa-jhana (four meditations in the realm of form) and arupa-jhana (four meditations in the formless realm) have been the most studied. These are described in the Pali Canon as trance-like states in the world of desirelessness. The four dhyanas under rupa-jhanas are:
First dhyana: detach from all sensory desires and sinful states that are a source of unwholesome karma. Success here is described in Buddhist texts as leading to discursive thinking, deliberation, detachment, sukha (pleasure) and priti (rapture).
Second dhyana: cease deliberation and all discursive thoughts. Success leads to one-pointed thinking, serenity, pleasure and rapture.
Third dhyana: lose feeling of rapture. Success leads to equanimity, mindfulness and pleasure, without rapture.
Fourth dhyana: cease all affects, lose all happiness and sadness. Success in the fourth meditation stage leads to pure equanimity and mindfulness, without any pleasure or pain.
The arupa-jhanas (formless realm meditations) are also four, and are entered by those who have mastered the rupa-jhanas (Arhats). The first formless dhyana reaches infinite space without form, colour or shape; the second reaches the infinity of the perception base of that infinite space; the third transcends the object-subject perception base; while in the fourth the meditator dwells in nothing-at-all, where there are no feelings, no ideas, nor non-ideas, unto total cessation. The four rupa-dhyanas in Buddhist practice lead to rebirth in successively better rupa Brahma heavenly realms, while the arupa-dhyanas lead into the arupa heavens.
Richard Gombrich notes that the sequence of the four rupa-jhanas describes two different cognitive states. The first two describe a narrowing of attention, while in the third and fourth jhana attention is expanded again. Alexander Wynne further explains that the dhyana-scheme is poorly understood. According to Wynne, words expressing the inculcation of awareness, such as sati, sampajāno, and upekkhā, are mistranslated or understood as particular factors of meditative states, whereas they refer to a particular way of perceiving the sense objects.
The Brahma-vihara
thumb|alt=gilded statue of Buddha in Wat Phra Si Rattana Mahathat, Thailand|Statue of Buddha in Wat Phra Si Rattana Mahathat, Phitsanulok, Thailand
The four immeasurables or four abodes, also called Brahma-viharas, are virtues or directions for meditation in Buddhist traditions which help a person be reborn in the heavenly (Brahma) realm. These are traditionally believed to be a characteristic of the deity Brahma and the heavenly abode in which he resides.
The four Brahma-vihara are:
Loving-kindness (Pāli: mettā, Sanskrit: maitrī) is active good will towards all;
Compassion (Pāli and Sanskrit: karuṇā) results from metta; it is identifying the suffering of others as one's own;
Empathetic joy (Pāli and Sanskrit: muditā): is the feeling of joy because others are happy, even if one did not contribute to it, it is a form of sympathetic joy;
Equanimity (Pāli: upekkhā, Sanskrit: upekṣā): is even-mindedness and serenity, treating everyone impartially.
According to Peter Harvey, the Buddhist scriptures acknowledge that the four Brahmavihara meditation practices "did not originate within the Buddhist tradition". The Brahmavihara (sometimes called Brahmaloka), along with the tradition of meditation and the above four immeasurables, are found in pre-Buddha and post-Buddha Vedic and Sramanic literature. Aspects of the Brahmavihara practice for rebirth into the heavenly realm have been an important part of the Buddhist meditation tradition.
According to Gombrich, the Buddhist usage of the brahma-vihāra originally referred to an awakened state of mind, and a concrete attitude toward other beings which was equal to "living with Brahman" here and now. The later tradition took those descriptions too literally, linking them to cosmology and understanding them as "living with Brahman" by rebirth in the Brahma-world. According to Gombrich, "the Buddha taught that kindness – what Christians tend to call love – was a way to salvation."
Visualizations: deities, mandalas
thumb|Mandala are used in Buddhism for initiation ceremonies and visualization.
Deity idols and icons have been a part of historic Buddhist practice, and texts such as the 11th-century Sadanamala describe a devotee visualizing and identifying himself or herself with the imagined deity as part of meditation. This has been particularly popular in Vajrayana meditative traditions, but is also found in Mahayana and Theravada traditions, particularly in temples and with Buddha images.
In the Tibetan Buddhist tradition, mandalas are mystical maps for the visualization process with cosmic symbolism. There are numerous deities, each with a mandala, and they are used during initiation ceremonies and meditation. The mandalas are concentric geometric shapes symbolizing layers of the external world, gates and sacred space. The meditation deity is in the centre, sometimes surrounded by protective gods and goddesses. Visualization with deities and mandalas in Buddhism is a tradition traceable to ancient times, and was likely well established by the time the 5th-century text Visuddhimagga was composed.
Practice: monks, laity
According to Peter Harvey, whenever Buddhism has been healthy, not only the ordained but also more committed lay people have practiced formal meditation. Loud devotional chanting, however, adds Harvey, has been the most prevalent Buddhist practice and is considered a form of meditation that produces "energy, joy, lovingkindness and calm", purifies the mind and benefits the chanter.
Throughout most of Buddhist history, meditation has been primarily practiced in Buddhist monastic tradition, and historical evidence suggests that serious meditation by lay people has been an exception. In recent history, sustained meditation has been pursued by a minority of monks in Buddhist monasteries. Western interest in meditation has led to a revival where ancient Buddhist ideas and precepts are adapted to Western mores and interpreted liberally, presenting Buddhism as a meditation-based form of spirituality.
Prajñā – insight
thumb|alt=monks wearing crimson robes debating at Sera Monastery, Tibet|Monks debating at Sera Monastery, Tibet
Prajñā (Sanskrit) or paññā (Pāli) is insight or knowledge of the true nature of existence. The Buddhist tradition regards ignorance (avidyā), a fundamental ignorance, misunderstanding or mis-perception of the nature of reality, as one of the basic causes of dukkha and samsara. By overcoming ignorance or misunderstanding one is enlightened and liberated. This overcoming includes awakening to impermanence and non-self nature of reality, and this develops dispassion for the objects of clinging, and liberates a being from dukkha and saṃsāra., Quote: Suffering describes the condition of samsaric (this worldly) existence that arises from actions generated by ignorance of anatta and anicca. The doctrines of no-self and impermanence are thus the keystones of dhammic order." Prajñā is important in all Buddhist traditions, and is the wisdom about the dharmas, functioning of karma and rebirths, realms of samsara, impermanence of everything, no-self in anyone or anything, and dependent origination.
Origins
The origins of "liberating insight" is unclear. Buddhist texts, states Bronkhorst, do not describe it explicitly, and the content of "liberating insight" is likely not original to Buddhism and was "added under the influence of mainstream meditation".
Bronkhorst suggests that the conception of what exactly constituted "liberating insight" for Buddhists developed over time. Whereas originally it may not have been specified as an insight, later on the Four Noble Truths served as such, to be superseded by pratityasamutpada, and still later, in the Hinayana schools, by the doctrine of the non-existence of a substantial self or person.
In the Pali Canon liberating insight is attained in the fourth dhyana. However, states Vetter, modern scholarship on the Pali Canon has uncovered a "whole series of inconsistencies in the transmission of the Buddha's word", and there are many conflicting versions of what constitutes higher knowledge and samadhi that leads to the liberation from rebirth and suffering. Even within the Four Dhyana methodology of meditation, Vetter notes that "penetrating abstract truths and penetrating them successively does not seem possible in a state of mind which is without contemplation and reflection." According to Vetter, dhyāna itself constituted the original "liberating practice".
Carol Anderson notes that insight is often depicted in the Vinaya as the opening of the Dhamma eye, which sets one on the Buddhist path to liberation.
Theravada
thumb|alt=color monument of Buddha in lotus position, Shwezigon Paya near Bagan, Myanmar|Shwezigon Pagoda near Bagan, Myanmar
Vipassanā
In Theravada Buddhism, but also in Tibetan Buddhism, two types of Buddhist meditation practices are followed, namely samatha (Pāli; Sanskrit: śamatha; "calm") and vipassana (insight). Samatha is also called "calming meditation", and was adopted into Buddhism from pre-Buddha Indian traditions. Vipassanā meditation was added by the Buddha, and refers to "insight meditation". Vipassana does not aim at peace and tranquillity, states Damien Keown, but at "the generation of penetrating and critical insight (panna)".
The focus of Vipassana meditation is to continuously and thoroughly know the impermanence of everything (anicca), the no-Self in anything (anatta) and the dukkha teachings of Buddhism.
Contemporary Theravada orthodoxy regards samatha as a preparation for vipassanā, pacifying the mind and strengthening concentration in order to allow the work of insight, which leads to liberation. In contrast, the Vipassana Movement argues that insight levels can be discerned without developing samatha further, because of the risks of going off course when strong samatha is developed.
Dependent arising
Pratityasamutpada, also called "dependent arising" or "dependent origination", is the Buddhist theory that explains the nature and relations of being, becoming, existence and ultimate reality. Buddhism asserts that there is nothing independent, except the state of nirvana. All physical and mental states depend on and arise from other pre-existing states, and in turn other dependent states arise from them as they cease.
The 'dependent arisings' have a causal conditioning, and thus Pratityasamutpada is the Buddhist belief that causality is the basis of ontology, not a creator God nor the ontological Vedic concept called universal Self (Brahman) nor any other 'transcendent creative principle'., Quote: "[Buddhism's ontological hypotheses] that nothing in reality has its own-being and that all phenomena reduce to the relativities of pratitya samutpada. The Buddhist ontological hypothesese deny that there is any ontologically ultimate object such a God, Brahman, the Dao, or any transcendent creative source or principle." However, Buddhist thought does not understand causality in terms of Newtonian mechanics; rather, it understands it as conditioned arising. In Buddhism, dependent arising refers to conditions created by a plurality of causes that necessarily co-originate a phenomenon within and across lifetimes, such as karma in one life creating conditions that lead to rebirth in one of the realms of existence for another lifetime.
Buddhism applies the dependent arising theory to explain the origination of endless cycles of dukkha and rebirth, through its Twelve Nidānas or "twelve links" doctrine. It states that because Avidyā (ignorance) exists, Saṃskāras (karmic formations) exist; because Saṃskāras exist, Vijñāna (consciousness) exists; and in a similar manner it links Nāmarūpa (sentient body), Ṣaḍāyatana (six senses), Sparśa (sensory stimulation), Vedanā (feeling), Taṇhā (craving), Upādāna (grasping), Bhava (becoming), Jāti (birth) and Jarāmaraṇa (old age, death, sorrow, pain).
By breaking the circuitous links of the Twelve Nidanas, Buddhism asserts that liberation from these endless cycles of rebirth and dukkha can be attained.
Mahayana
left|thumb|alt=bronze Great Statue of Amitābha in Kamakura, Japan|The Great Statue of Amitābha in Kamakura, Japan
Emptiness
Śūnyatā, or "emptiness", is a central concept in Nagarjuna's Madhyamaka school, and widely attested in the Prajñāpāramitā sutras. It brings together key Buddhist doctrines, particularly anatta and dependent origination, to refute the metaphysics of Sarvastivada and Sautrāntika (extinct non-Mahayana schools). Not only sentient beings are empty of ātman; all phenomena (dharmas) are without any svabhava (literally "own-nature" or "self-nature"), and thus without any underlying essence, and "empty" of being independent; thus the heterodox theories of svabhava circulating at the time were refuted on the basis of the doctrines of early Buddhism.
Mind-only
Sarvastivada teachings, which were criticized by Nāgārjuna, were reformulated by scholars such as Vasubandhu and Asanga and were adapted into the Yogachara school. While the Mādhyamaka school held that asserting the existence or non-existence of any ultimately real thing was inappropriate, some exponents of Yogachara asserted that the mind and only the mind is ultimately real (a doctrine known as cittamatra). Not all Yogacharins asserted that mind was truly existent; Vasubandhu and Asanga in particular did not. These two schools of thought, in opposition or synthesis, form the basis of subsequent Mahayana metaphysics in the Indo-Tibetan tradition.
Buddha-nature
Buddha-nature is a concept found in some 1st-millennium CE Buddhist texts, such as the Tathāgatagarbha sūtras. This concept has been controversial in Buddhism, but has a following in East Asian Buddhism. These Sutras suggest, states Paul Williams, that 'all sentient beings contain a Tathagata' as their 'essence, core inner nature, Self'. The Tathagatagarbha doctrine probably appeared, at its earliest, in the later part of the 3rd century CE, and it contradicts the Anatta doctrine (non-Self) found in the vast majority of Buddhist texts, leading scholars to posit that the Tathagatagarbha Sutras were written to promote Buddhism to non-Buddhists., Quote: "Some texts of the tathagatagarbha literature, such as the Mahaparinirvana Sutra actually refer to an atman, though other texts are careful to avoid the term. This would be in direct opposition to the general teachings of Buddhism on anatta. Indeed, the distinctions between the general Indian concept of atman and the popular Buddhist concept of Buddha-nature are often blurred to the point that writers consider them to be synonymous." However, the Buddhist text Ratnagotravibhāga states that the "Self" implied in the Tathagatagarbha doctrine is actually "not-Self".
Devotion
thumb|Bhatti (devotion) at a Buddhist temple, Tibet. Chanting during Bhatti Puja (devotional worship) is often a part of the Theravada Buddhist tradition.
Devotion is an important part of the practice of most Buddhists. Devotional practices include ritual prayer, prostration, offerings, pilgrimage, and chanting. In Pure Land Buddhism, devotion to the Buddha Amitabha is the main practice. In Nichiren Buddhism, devotion to the Lotus Sutra is the main practice. Bhakti (called Bhatti in Pali) has been a common practice in Theravada Buddhism, where offerings and group prayers are made to deities and particularly images of Buddha.Donald Swearer (2003), Buddhism in the Modern World: Adaptations of an Ancient Tradition (Editors: Heine and Prebish), Oxford University Press, ISBN 978-0195146981, pages 9–25 According to Karel Werner and other scholars, devotional worship has been a significant practice in Theravada Buddhism, and deep devotion is part of Buddhist traditions starting from the earliest days.Karel Werner (1995), Love Divine: Studies in Bhakti and Devotional Mysticism, Routledge, ISBN 978-0700702350, pages 45–46
Guru devotion is a central practice of Tibetan Buddhism. The guru is considered essential; to the Buddhist devotee, the guru is the "enlightened teacher and ritual master" in Vajrayana spiritual pursuits.
For someone seeking Buddhahood, the guru is the Buddha, the Dhamma and the Sangha, states the 12th-century Buddhist text Sadhanamala. The veneration of and obedience to teachers is also important in Theravada and Zen Buddhism.
Buddhist texts
thumb|alt=Buddhist monk Geshe Konchog Wangdu in red robe reads Mahayana sutras on stand|Buddhist monk Geshe Konchog Wangdu reads Mahayana sutras from an old woodblock copy of the Tibetan Kanjur.
Buddhism, like all Indian religions, was an oral tradition in ancient times. The Buddha's words, the early doctrines and concepts, and the interpretations were transmitted from one generation to the next by word of mouth in monasteries, and not through written texts. The first Buddhist canonical texts were likely written down in Sri Lanka, about 400 years after the Buddha died. The texts were part of the Tripitakas, and many versions appeared thereafter claiming to be the words of the Buddha. Scholarly Buddhist commentary texts, with named authors, appeared in India around the 2nd century CE. These texts were written in Pali or Sanskrit, and sometimes in regional languages, on palm-leaf manuscripts, birch bark and painted scrolls, carved into temple walls, and later on paper.
Unlike the Bible in Christianity and the Quran in Islam, but as in all major ancient Indian religions, there is no consensus among the different Buddhist traditions as to what constitutes the scriptures or a common canon in Buddhism. The general belief among Buddhists is that the canonical corpus is vast. This corpus includes the ancient Sutras organized into Nikayas, which are themselves part of the three baskets of texts called the Tripitakas. Each Buddhist tradition has its own collection of texts, much of which consists of translations of ancient Pali and Sanskrit Buddhist texts of India. The Chinese Buddhist canon, for example, includes 2184 texts in 55 volumes, while the Tibetan canon comprises 1108 texts – all claimed to have been spoken by the Buddha – and another 3461 texts composed by Indian scholars revered in the Tibetan tradition. The Buddhist textual history has been vast; over 40,000 manuscripts, mostly Buddhist but some non-Buddhist, were discovered in 1900 in the Dunhuang caves in China alone.
Pāli Tipitaka
The Pāli Tipitaka (Sanskrit: Tripiṭaka, three pitakas), which means "three baskets", refers to the Vinaya Pitaka, the Sutta Pitaka, and the Abhidhamma Pitaka. These constitute the oldest known canonical works of Buddhism. The Vinaya Pitaka contains disciplinary rules for the Buddhist monasteries. The Sutta Pitaka contains words attributed to the Buddha. The Abhidhamma Pitaka contains expositions and commentaries on the Sutta, and these vary significantly between Buddhist schools.
The Pāli Tipitaka is the only surviving early Tipitaka. According to some sources, some early schools of Buddhism had five or seven pitakas. Much of the material in the Canon is not specifically "Theravadin", but is instead the collection of teachings that this school preserved from the early, non-sectarian body of teachings. According to Peter Harvey, it contains material at odds with later Theravadin orthodoxy. He states: "The Theravadins, then, may have added texts to the Canon for some time, but they do not appear to have tampered with what they already had from an earlier period."
Theravada texts
In addition to the Pali Canon, the important commentary texts of the Theravada tradition include the 5th-century Visuddhimagga by Buddhaghosa of the Mahavihara school. It includes sections on shila (virtues), samadhi (concentration), panna (wisdom) as well as Theravada tradition's meditation methodology.Visuddhimagga, Encyclopedia Britannica (2015)
Mahayana sutras
thumb|alt=Tripiṭaka Koreana in South Korea, over 81,000 wood printing blocks stored in racks|The Tripiṭaka Koreana in South Korea, an edition of the Chinese Buddhist canon carved and preserved in over 81,000 wood printing blocks.
The Mahayana sutras are a very broad genre of Buddhist scriptures that the Mahayana Buddhist tradition holds are original teachings of the Buddha. Some adherents of Mahayana accept both the early teachings (including in this the Sarvastivada Abhidharma, which was criticized by Nagarjuna and is in fact opposed to early Buddhist thought) and the Mahayana sutras as authentic teachings of Gautama Buddha, and claim they were designed for different types of persons and different levels of spiritual understanding.
The Mahayana sutras often claim to articulate the Buddha's deeper, more advanced doctrines, reserved for those who follow the bodhisattva path. That path is explained as being built upon the motivation to liberate all living beings from unhappiness. Hence the name Mahāyāna (lit., the Great Vehicle). The Theravada school does not treat the Mahayana Sutras as authoritative or authentic teachings of the Buddha.
Generally, scholars conclude that the Mahayana scriptures were composed from the 1st century CE onwards: "Large numbers of Mahayana sutras were being composed in the period between the beginning of the common era and the fifth century".
Tibetan texts: Śālistamba Sutra
Many ancient Indian texts have not survived into the modern era, creating a challenge in establishing the historic commonalities between Theravada and Mahayana. The texts preserved in the Tibetan Buddhist monasteries, with parallel Chinese translations, have provided a breakthrough. Among these is the Mahayana text Śālistamba Sutra, which no longer exists in a Sanskrit version, but does in Tibetan and Chinese versions. This Mahayana text contains numerous sections which are remarkably similar to the Theravada Pali Canon and Nikaya Buddhism. The Śālistamba Sutra was cited by Mahayana scholars such as the 8th-century Yasomitra as authoritative. This suggests that the Buddhist literature of different traditions shared a common core of Buddhist texts in the early centuries of its history, until Mahayana literature diverged around and after the 1st century CE.
History
Historical roots
thumb|alt=people sitting before stone shrine the Buddhist "Carpenter's Cave" at Ellora in Maharashtra, India|The Buddhist "Carpenter's Cave" at Ellora in Maharashtra, India
Historically, the roots of Buddhism lie in the religious thought of Iron Age India around the middle of the first millennium BCE. That was a period, states Abraham Eraly, of great intellectual ferment, when the Upanishads were composed marking a change in the historical Vedic religion, as well as the emergence of great Sramanic traditions. According to Richard Gombrich, this was not only a period of intellectual ferment but also socio-cultural change quite distinct from the early Vedic period.
New ideas developed both in the Vedic tradition in the form of the Upanishads, and outside of the Vedic tradition through the Śramaṇa movements.; Quote: "But the Upanishadic ultimate meaning of the Vedas, was, from the viewpoint of the Vedic canon in general, clearly a new idea.."; p.95: The [oldest] Upanishads in particular were part of the Vedic corpus (...) When these various new ideas were brought together and edited, they were added on to the already existing Vedic..."; p.294: "When early Jainism came into existence, various ideas mentioned in the extant older Upanishads were current,....".;Quote: "In the Aranyakas therefore, thought and inner spiritual awareness started to separate subtler, deeper aspects from the context of ritual performance and myth with which they had been united up to then. This process was then carried further and brought to completion in the Upanishads. (...) The knowledge and attainment of the Highest Goal had been there from the Vedic times. But in the Upanishads inner awareness, aided by major intellectual breakthroughs, arrived at a language in which Highest Goal could be dealt with directly, independent of ritual and sacred lore".;; Quote: "But he [Bronkhorst] talks about the simultaneous emergence of a Vedic and a non-Vedic asceticism. (...) [On Olivelle] Thus, the challenge for old Vedic views consisted of a new theology, written down in the early Upanishads like the Brhadaranyaka and the Mundaka Upanishad. The new set of ideas contained the...." The term Śramaṇa refers to several Indian religious movements parallel to but separate from the historical Vedic religion, including Buddhism, Jainism and others such as Ājīvika.AL Basham (1951), History and Doctrines of the Ajivikas – a Vanished Indian Religion, Motilal Banarsidass, ISBN 978-8120812048, pages 94–103
Several Śramaṇa movements are known to have existed in India before the 6th century BCE (pre-Buddha, pre-Mahavira), and these influenced both the āstika and nāstika traditions of Indian philosophy.Reginald Ray (1999), Buddhist Saints in India, Oxford University Press, ISBN 978-0195134834, pages 237–240, 247–249 According to Martin Wiltshire, the Sramana tradition evolved in India over two phases, namely the Paccekabuddha and Savaka phases, the former being the tradition of the individual ascetic and the latter that of disciples, and Buddhism and Jainism ultimately emerged from these.Martin Wiltshire (1990), Ascetic Figures Before and in Early Buddhism, De Gruyter, ISBN 978-3110098969, page 293 Brahmanical and non-Brahmanical ascetic groups shared and used several similar ideas, but the Śramaṇa traditions also drew upon already established Brahmanical concepts and philosophical roots, states Wiltshire, to formulate their own doctrines.Martin Wiltshire (1990), Ascetic Figures Before and in Early Buddhism, De Gruyter, ISBN 978-3110098969, pages 226–227 Brahmanical motifs can be found in the oldest Buddhist texts, which use them to introduce and explain Buddhist ideas. For example, prior to Buddhist developments, the Brahmanical tradition internalized and variously reinterpreted the three Vedic sacrificial fires as concepts such as Truth, Rite, Tranquility or Restraint. Buddhist texts also refer to the three Vedic sacrificial fires, reinterpreting and explaining them as ethical conduct.
The Sramanic religions challenged and broke with the Brahmanic tradition on core assumptions such as Atman (soul, self), Brahman and the nature of the afterlife, and they rejected the authority of the Vedas and Upanishads.P. Billimoria (1988), Śabdapramāṇa: Word and Knowledge, Studies of Classical India Volume 10, Springer, ISBN 978-94-010-7810-8, pages 1–30 Buddhism was one among several Indian religions that did so.
thumb|alt=Rock-cut Lord Buddha statue at Bojjanakonda near Anakapalle India|Rock-cut Lord Buddha statue at Bojjanakonda near Anakapalle in the Visakhapatnam district of Andhra Pradesh, India
Indian Buddhism
The history of Indian Buddhism may be divided into five periods: Early Buddhism (occasionally called pre-sectarian Buddhism), Nikaya Buddhism or Sectarian Buddhism (the period of the early Buddhist schools), Early Mahayana Buddhism, Later Mahayana Buddhism, and Vajrayana Buddhism.
thumb|Sanchi Stupa
Pre-sectarian Buddhism
Pre-sectarian Buddhism is the earliest phase of Buddhism, recognized by nearly all scholars. Its main scriptures are the Vinaya Pitaka and the four principal Nikāyas or Agamas.
Tracing the oldest teachings
Information about the oldest teachings may be obtained by analysis of the oldest texts. One method to obtain information on the oldest core of Buddhism is to compare the oldest extant versions of the Theravadin Pāli Canon and other texts. The reliability of these sources, and the possibility of drawing out a core of the oldest teachings, is a matter of dispute. According to Vetter, inconsistencies remain, and other methods must be applied to resolve those inconsistencies.
According to Schmithausen, three positions held by scholars of Buddhism can be distinguished:
"Stress on the fundamental homogeneity and substantial authenticity of at least a considerable part of the Nikayic materials;"
"Scepticism with regard to the possibility of retrieving the doctrine of earliest Buddhism;"
"Cautious optimism in this respect."
Core teachings
According to Mitchell, certain basic teachings appear in many places throughout the early texts, which has led most scholars to conclude that Gautama Buddha must have taught something similar to the Four Noble Truths, the Noble Eightfold Path, Nirvana, the three marks of existence, the five aggregates, dependent origination, karma and rebirth. Yet critical analysis reveals discrepancies, which point to alternative possibilities.
Bruce Matthews notes that there is no cohesive presentation of karma in the Sutta Pitaka, which may mean that the doctrine was incidental to the main perspective of early Buddhist soteriology. Schmithausen has questioned whether karma already played a role in the theory of rebirth of earliest Buddhism. According to Vetter, "the Buddha at first sought "the deathless" (amata/amrta), which is concerned with the here and now. Only later did he become acquainted with the doctrine of rebirth." Bronkhorst disagrees, and concludes that the Buddha "introduced a concept of karma that differed considerably from the commonly held views of his time." According to Bronkhorst, it was not physical and mental activities as such that were seen as responsible for rebirth, but intentions and desire.
Another core problem in the study of early Buddhism is the relation between dhyana and insight. Schmithausen states that the presentation of the four noble truths as "liberating insight" may be a later addition to texts such as Majjhima Nikaya 36.
According to both Bronkhorst and Anderson, the Four Noble Truths became a substitution for prajna, or "liberating insight", in the suttas in those texts where "liberating insight" was preceded by the four jhānas. The four truths may not have been formulated in earliest Buddhism, and did not serve in earliest Buddhism as a description of "liberating insight". Gotama's teachings may have been personal, "adjusted to the need of each person."
The three marks of existence – Dukkha, Anicca, Anatta – may reflect Upanishadic or other influences. K.R. Norman supposes that these terms were already in use at the Buddha's time, and were familiar to his hearers. According to Vetter, the description of the Buddhist path may initially have been as simple as the term "the middle way". In time, this short description was elaborated, resulting in the description of the eightfold path. Similarly, nibbāna is the common term for the desired goal of this practice, yet many other terms can be found throughout the Nikāyas, which are not specified.
Early Buddhist schools
thumb|Buddha at Xumishan Grottoes, ca. 6th century CE.Nancy Steinhardt (2011), The Sixth Century in East Asian Architecture, Ars Orientalis, Vol. 41, pages 27–71
According to the scriptures, soon after the parinirvāṇa (from Sanskrit: "highest extinguishment") of Gautama Buddha, the first Buddhist council was held. As with any ancient Indian tradition, transmission of teaching was done orally. The primary purpose of the assembly was to collectively recite the teachings to ensure that no errors occurred in oral transmission. Richard Gombrich states that monastic assembly recitations of the Buddha's teaching likely began during the Buddha's lifetime, and that, like the First Council, they helped compose the Buddhist scriptures.
The Second Buddhist council resulted in the first schism in the Sangha, probably caused by a group of reformists called Sthaviras who split from the conservative majority Mahāsāṃghikas. After unsuccessfully trying to modify the Vinaya, a small group of "elderly members", i.e. sthaviras, broke away from the majority Mahāsāṃghika during the Second Buddhist council, giving rise to the Sthavira sect.Skilton, Andrew. A Concise History of Buddhism. 2004. p. 49, 64
The Sthaviras gave rise to several schools, one of which was the Theravada school. Originally, these schisms were caused by disputes over the monastic disciplinary codes of various fraternities, but eventually, by about 100 CE if not earlier, schisms were being caused by doctrinal disagreements too. Monks of the different fraternities developed into distinct schools that stopped conducting official Sangha business together but continued to study each other's doctrines.
Following (or leading up to) the schisms, each Saṅgha started to accumulate its own version of the Tripiṭaka, the "triple basket" of texts (Pali: Tipiṭaka).Tipitaka Encyclopedia Britannica (2015) In their Tripiṭaka, each school included the Suttas of the Buddha, a Vinaya basket (disciplinary code) and added an Abhidharma basket, which contained texts on detailed scholastic classification, summary and interpretation of the Suttas. The doctrinal details in the Abhidharmas of various Buddhist schools differ significantly, and these were composed starting about the 3rd century BCE and through the 1st millennium CE. Eighteen early Buddhist schools are known, each with its own Tripitaka, but only one collection from Sri Lanka has survived, in nearly complete state, into the modern era.
Early Mahayana Buddhism
thumb|alt=stone statue group, a Buddhist triad depicting, left to right, a Kushan, the future buddha Maitreya, Gautama Buddha, the bodhisattva Avalokiteśvara, and a Buddhist monk. 2nd—3rd century. Guimet Museum|A Buddhist triad depicting, left to right, a Kushan, the future buddha Maitreya, Gautama Buddha, the bodhisattva Avalokiteśvara, and a monk. Second—third century. Guimet Museum
Several scholars have suggested that the Mahayana Buddhism tradition started in south India (modern Andhra Pradesh), and it is there that Prajnaparamita sutras, among the earliest Mahayana sutras,Williams, Paul. Buddhist Thought. Routledge, 2000, pages 131.Williams, Paul. Mahayana Buddhism: The Doctrinal Foundations 2nd edition. Routledge, 2009, pg. 47. developed among the Mahāsāṃghika along the Kṛṣṇa River region about the 1st century BCE.Guang Xing. The Concept of the Buddha: Its Evolution from Early Buddhism to the Trikaya Theory. 2004. pp. 65–66 "Several scholars have suggested that the Prajñāpāramitā probably developed among the Mahasamghikas in Southern India, in the Andhra country, on the Krsna River."Akira, Hirakawa (translated and edited by Paul Groner) (1993). A History of Indian Buddhism. Delhi: Motilal Banarsidass: pp. 252–253, 263, 268Warder, A.K. Indian Buddhism. 2000. p. 313
There is no evidence that Mahayana ever referred to a separate formal school or sect of Buddhism, but rather that it existed as a certain set of ideals, and later doctrines, for bodhisattvas. Initially it was known as Bodhisattvayāna (the "Vehicle of the Bodhisattvas"). Paul Williams states that the Mahāyāna never had nor ever attempted to have a separate Vinaya or ordination codes from the early schools of Buddhism. Records written by Chinese monks visiting India indicate that both Mahāyāna and non-Mahāyāna monks could be found in the same monasteries, with the difference that Mahayana monks worshipped figures of Bodhisattvas, while non-Mahayana monks did not.
Much of the early extant evidence for the origins of Mahāyāna comes from early Chinese translations of Mahāyāna texts. These Mahayana teachings were first propagated into China by Lokakṣema, the first translator of Mahayana sutras into Chinese during the 2nd century CE. Some scholars have traditionally considered the earliest Mahāyāna sūtras to include the very first versions of the Prajnaparamita series, along with texts concerning Akṣobhya, which were probably composed in the 1st century BCE in the south of India.
Late Mahayana Buddhism
During the period of Late Mahayana Buddhism, four major types of thought developed: Madhyamaka, Yogachara, Tathagatagarbha, and Buddhist logic, the last being the most recent. In India, the two main philosophical schools of the Mahayana were the Madhyamaka and the later Yogachara. According to Dan Lusthaus, Madhyamaka and Yogachara have a great deal in common, and the commonality stems from early Buddhism. There were no great Indian teachers associated with tathagatagarbha thought.
Vajrayana (Esoteric Buddhism)
Scholarly research concerning Esoteric Buddhism is still in its early stages and has a number of problems that make research difficult:
Vajrayana Buddhism was influenced by Hinduism, and therefore research must include exploring Hinduism as well.
The scriptures of Vajrayana have not yet been put in any kind of order.
Ritual must be examined as well, not just doctrine.
Spread of Buddhism
thumb|left|alt=map showing diffusion of Buddhism at the time of emperor Ashoka from India|The spread of Buddhism at the time of emperor Ashoka (260–218 BCE).
Buddhism may have spread only slowly in India until the time of the Mauryan emperor Ashoka, who was a public supporter of the religion. The support of Aśoka and his descendants led to the construction of more stūpas (Buddhist religious memorials) and to efforts to spread Buddhism throughout the enlarged Maurya empire and into neighbouring lands such as Central Asia, beyond the Mauryas' northwest border, and to the island of Sri Lanka south of India. These two missions, in opposite directions, would ultimately lead, in the first case to the spread of Buddhism into China, and in the second case, to the emergence of Theravāda Buddhism and its spread from Sri Lanka to the coastal lands of Southeast Asia.
This period marks the first known spread of Buddhism beyond India. According to the edicts of Aśoka, emissaries were sent to various countries west of India to spread Buddhism (Dharma), particularly in eastern provinces of the neighbouring Seleucid Empire, and even farther to Hellenistic kingdoms of the Mediterranean. It is a matter of disagreement among scholars whether or not these emissaries were accompanied by Buddhist missionaries.
thumb|alt=Coin depicting Indo-Greek king Menander facing right with headband|Coin depicting the Indo-Greek king Menander, who, according to Buddhist tradition as recorded in the Milinda Panha, converted to the Buddhist faith and became an arhat in the 2nd century BCE. (British Museum)
In central and west Asia, Buddhist influence grew through Greek-speaking Buddhist monarchs and the ancient Asian trade routes. An example of this is evidenced in Chinese and Pali Buddhist records, such as the Milindapanha, and in the Greco-Buddhist art of Gandhāra. The Milindapanha describes a conversation between a Buddhist monk and the 2nd-century BCE Greek king Menander, after which Menander abdicates and himself goes into monastic life in pursuit of nirvana. Modern scholarship has questioned the Milindapanha version, expressing doubts over whether Menander was Buddhist or just favourably disposed to Buddhist monks.
The Theravada school spread south from India in the 3rd century BCE, to Sri Lanka, later to southeast Asia (Myanmar, Malaysia, Indonesia, Thailand, Cambodia and coastal Vietnam). The Dharmagupta school spread (also in the 3rd century BCE) north to Kashmir, Gandhara and Bactria (Afghanistan).
The Silk Road transmission of Buddhism to China is most commonly thought to have started in the late 2nd or the 1st century CE, though the literary sources are all open to question.
The first documented translation efforts by foreign Buddhist monks in China were in the 2nd century CE, probably as a consequence of the expansion of the Kushan Empire into the Chinese territory of the Tarim Basin.
In the 2nd century CE, Mahayana Sutras spread to China, and then to Korea and Japan, and were translated into Chinese. During the Indian period of Esoteric Buddhism (from the 8th century onwards), Buddhism spread from India to Tibet and Mongolia. Johannes Bronkhorst states that the esoteric form was attractive because it allowed both a secluded monastic community as well as the social rites and rituals important to laypersons and to kings for the maintenance of a political state during succession and wars to resist invasion. During the Middle Ages, Buddhism slowly declined in India, while it vanished from Persia and Central Asia as Islam became the state religion.
Schools and traditions
thumb|alt=color map showing Buddhism is a major religion worldwide|300px|Distribution of major Buddhist traditions
Buddhists generally classify themselves as either Theravada or Mahayana. This classification is also used by some scholars and is the one ordinarily used in the English language. An alternative scheme used by some scholars divides Buddhism into the following three traditions or geographical or cultural areas: Theravada, East Asian Buddhism and Tibetan Buddhism.
thumb|alt=monks in orange robes on stone steps in Cambodia|Young monks in Cambodia
Some scholars use other schemes. Buddhists themselves have a variety of other schemes. Hinayana (literally "lesser or inferior vehicle") is used by Mahayana followers to name the family of early philosophical schools and traditions from which contemporary Theravada emerged, but as the Hinayana term is considered derogatory, a variety of other terms are used instead, including Śrāvakayāna, Nikaya Buddhism, early Buddhist schools, sectarian Buddhism and conservative Buddhism.
Not all traditions of Buddhism share the same philosophical outlook, or treat the same concepts as central. Each tradition, however, does have its own core concepts, and some comparisons can be drawn between them:
Both Theravada and Mahayana traditions accept the Buddha as the founder; Theravada considers him unique, while Mahayana considers him one of many Buddhas
Both accept the Middle way, dependent origination, the Four Noble Truths, the Noble Eightfold Path and the Three marks of existence
Nirvana is attainable by monks in the Theravada tradition, while Mahayana considers it broadly attainable; the arhat state is aimed for in Theravada, while Buddhahood is aimed for in Mahayana
Religious practice consists of meditation for monks and prayer for laypersons in Theravada, while Mahayana includes prayer, chanting and meditation for both
Theravada has been a more rationalist, historical form of Buddhism, while Mahayana has included more rituals, mysticism and worldly flexibility in its scope.
Timeline
This is a rough timeline of the development of the different schools/traditions:
Theravada school
thumb|alt=A young monk in saffron robes standing in Sri Lanka temple|A young bhikkhu in Sri Lanka
The Theravada tradition traces its roots to the words of the Buddha preserved in the Pali Canon, and considers itself to be the more orthodox form of Buddhism; Quote: "Orthodox forms of Buddhism are collectively called Hinayana (...). Present-day practitioners of orthodox Buddhism prefer to use the name Theravada (Buddhism of the Elders)."; Quote: "Theravadins claim that they alone represent true Buddhist orthodoxy, and that other sects are heretics".
Theravada flourished in south India and Sri Lanka in ancient times; from there it spread for the first time into mainland southeast Asia about the 11th century, into its elite urban centres. By the 13th century, Theravada had spread widely into the rural areas of mainland southeast Asia, displacing Mahayana Buddhism and some traditions of Hinduism that had arrived in places such as Thailand, Cambodia, Vietnam, Indonesia and Malaysia around the mid 1st millennium CE. The latter traditions were well established in south Thailand and Java by the 7th century, under the sponsorship of the Srivijaya dynasty. The political separation of Sukhothai from the Khmer led the Sukhothai king to welcome Sri Lankan emissaries, helping them establish the first Theravada Buddhist sangha there in the 13th century, in contrast to the earlier Mahayana tradition of the Khmer.
Sinhalese Buddhist reformers in the late nineteenth and early twentieth centuries portrayed the Pali Canon as the original version of scripture. They also emphasized Theravada being rational and scientific.
Theravāda is primarily practiced today in Sri Lanka, Burma, Laos, Thailand, Cambodia as well as small portions of China, Vietnam, Malaysia and Bangladesh. It has a growing presence in the west.
Mahayana traditions
thumb|left|alt=Nagarjuna, a Mahayana scholar|The ideas of the 2nd century scholar Nagarjuna helped shape the Mahayana traditions.
Mahayana schools consider the Mahayana Sutras to be authoritative scriptures and an accurate rendering of the Buddha's words. These traditions have been the more liberal form of Buddhism, allowing different and new interpretations that emerged over time.
Mahayana flourished in India from the time of Ashoka through to the dynasty of the Guptas (4th to 6th century). Mahāyāna monastic foundations and centres of learning were established by Buddhist kings and by the Hindu kings of the Gupta dynasty, as evidenced by records left by three Chinese visitors to India. The Gupta dynasty, for example, helped establish the famed Nālandā University in Bihar. These monasteries and foundations helped develop Buddhist scholarship as well as studies of non-Buddhist traditions and secular subjects such as medicine, hosted visitors, and spread Buddhism into East and Central Asia.
Native Mahayana Buddhism is practiced today in China, Japan, Korea, Singapore, parts of Russia and most of Vietnam (also commonly referred to as "Eastern Buddhism"). The Buddhism practiced in Tibet, the Himalayan regions, and Mongolia is also Mahayana in origin, but is discussed below under the heading of Vajrayana (also commonly referred to as "Northern Buddhism"). There are a variety of strands in Eastern Buddhism, of which "the Pure Land school of Mahayana is the most widely practised today". In most of this area, however, they are fused into a single unified form of Buddhism. In Japan in particular, they form separate denominations, the five major ones being: Nichiren, peculiar to Japan; Pure Land; Shingon, a form of Vajrayana; Tendai; and Zen. In Korea, nearly all Buddhists belong to the Chogye school, which is officially Son (Zen), but with substantial elements from other traditions.
Vajrayana traditions
thumb|alt=7th century Buddhist monastery|7th-century Potala Palace in Lhasa valley symbolizes Tibetan Buddhism and is a UNESCO world heritage site.
The goal and philosophy of the Vajrayāna remains Mahāyānist, but its methods are seen as far more powerful, so as to lead to Buddhahood in just one lifetime. The practice of using mantras was adopted from Hinduism, where they were first used in the Vedas. Tantric Buddhism is largely concerned with ritual and meditative practices.
Various classes of Vajrayana literature developed as a result of royal courts sponsoring both Buddhism and Saivism.Sanderson, Alexis. "The Śaiva Age: The Rise and Dominance of Śaivism during the Early Medieval Period." In: Genesis and Development of Tantrism, edited by Shingo Einoo. Tokyo: Institute of Oriental Culture, University of Tokyo, 2009. Institute of Oriental Culture Special Series, 23, pp. 124. The Mañjusrimulakalpa, which later came to be classified under Kriyatantra, states that mantras taught in the Saiva, Garuda and Vaisnava tantras will be effective if applied by Buddhists, since they were all taught originally by Manjushri.Sanderson, Alexis. "The Śaiva Age: The Rise and Dominance of Śaivism during the Early Medieval Period." In: Genesis and Development of Tantrism, edited by Shingo Einoo. Tokyo: Institute of Oriental Culture, University of Tokyo, 2009. Institute of Oriental Culture Special Series, 23, pp. 129–131. The Guhyasiddhi of Padmavajra, a work associated with the Guhyasamaja tradition, prescribes acting as a Saiva guru and initiating members into Saiva Siddhanta scriptures and mandalas.Sanderson, Alexis. "The Śaiva Age: The Rise and Dominance of Śaivism during the Early Medieval Period." In: Genesis and Development of Tantrism, edited by Shingo Einoo. Tokyo: Institute of Oriental Culture, University of Tokyo, 2009. Institute of Oriental Culture Special Series, 23, pp. 144–145. The Samvara tantra texts adopted the pitha list from the Saiva text Tantrasadbhava, introducing a copying error where a deity was mistaken for a place.
Tibetan Buddhism preserves the Vajrayana teachings of eighth century India. In the Tibetan tradition, practices can include sexual yoga, though only for some very advanced practitioners.
Zen
thumb|alt=Ginkaku-ji, a Zen temple in Kyoto, Japan with stone slab bridge over stream|Ginkaku-ji, a Zen temple in Kyoto, Japan
Zen Buddhism (禅), pronounced Chán in Chinese, seon in Korean or zen in Japanese (derived from the Sanskrit term dhyāna, meaning "meditation") is a form of Mahayana Buddhism found in China, Korea and Japan. It lays special emphasis on meditation, and direct discovery of the Buddha-nature.
Zen Buddhism is divided into two main schools: Rinzai (臨済宗) and Sōtō (曹洞宗), the former greatly favouring the use of the koan (公案, a meditative riddle or puzzle) in meditation as a device for spiritual breakthrough, and the latter (while certainly employing koans) focusing more on shikantaza or "just sitting".
Zen Buddhism is primarily found in Japan, with some presence in South Korea and Vietnam. Scholars of the Japanese Soto Zen tradition have in recent times critiqued mainstream Japanese Buddhism for dhatu-vada, that is, for assuming things have substantiality, a view they assert to be non-Buddhist and "out of tune with the teachings of non-Self and conditioned arising", states Peter Harvey.
Buddhism today
thumb|alt=Buddhist monk in Siberia in robes leaning on railing looking at temple|left|Buryat Buddhist monk in Siberia
There is growing worldwide interest in Buddhism.
Buddhism has spread across the world, and Buddhist texts are increasingly translated into local languages. While in the West Buddhism is often seen as exotic and progressive, in the East it is regarded as familiar and traditional. In countries such as Cambodia and Bhutan, it is recognized as the state religion and receives government support. In certain regions such as Afghanistan and Pakistan, Buddhist monuments have been targets of violence and destruction.
Modern influences increasingly lead to new forms of Buddhism that are diverse and that significantly depart from traditional beliefs and practices. A number of modern movements or tendencies in Buddhism emerged during the second half of the 20th Century, including the Dalit Buddhist movement, Engaged Buddhism, and the further development of various Western Buddhist traditions.
Modern Buddhist movements include Won Buddhism in Korea, the Dhammakaya movement in Thailand and several Japanese organizations, such as Shinnyo-en, Risshō Kōsei-kai or Soka Gakkai.
Demographics
Buddhism is practiced by an estimated 488 million, 495 million, or 535 million people as of the 2010s, representing 7% to 8% of the world's total population.
thumb|340px|alt=purple Percentage of Buddhists by country, showing high in Burma to low in United States|Percentage of Buddhists by country, according to the Pew Research Center, as of 2010.
China is the country with the largest population of Buddhists, approximately 244 million or 18.2% of its total population. They are mostly followers of Chinese schools of Mahayana, making this the largest body of Buddhist traditions. Mahayana, also practiced in broader East Asia, is followed by over half of world Buddhists.
According to a demographic analysis reported by Peter Harvey (2013): Mahayana has 360 million adherents; Theravada has 150 million adherents; and Vajrayana has 18.2 million adherents.
According to Johnson and Grim (2013), Buddhism has grown from a total of 138 million adherents in 1910, of which 137 million were in Asia, to 495 million in 2010, of which 487 million are in Asia. Over 98% of all Buddhists live in the Asia-Pacific and South Asia region. North America had about 3.9 million Buddhists, Europe 1.3 million, while South America, Africa and the Middle East had an estimated combined total of about 1 million Buddhists in 2010.
After China, where nearly half of the world's Buddhists live, the ten countries with the highest proportions of Buddhists in their populations are:
Buddhism by percentage as of 2010
Country | Estimated Buddhist population | Buddhists as % of total population
Cambodia | 13,701,660 | 96.90%
Thailand | 64,419,840 | 93.20%
Burma (Myanmar) | 38,415,960 | 80.10%
Bhutan | 563,000 | 74.70%
Sri Lanka | 14,455,980 | 69.30%
Laos | 4,092,000 | 66.00%
Mongolia | 1,520,760 | 55.10%
Japan | 45,807,480 or 84,653,000 | 36.20% or 67%
Singapore | 1,725,510 | 33.90%
Taiwan | 4,945,600 or 8,000,000 | 21.10% or 35% (Taiwan, US State Department)
See also
Outline of Buddhism
Buddhism by country
Buddhism and science
Chinese folk religion
Easily confused Buddhist representations
Iconography of Gautama Buddha in Laos and Thailand
Index of Buddhism-related articles
Indian religions
List of books related to Buddhism
List of Buddhist temples
Nonviolence
Criticism of Buddhism
Notes
Subnotes
References
Sources
Printed sources
; reprinted in Williams, Buddhism, volume I; NB in the online transcript a little text has been accidentally omitted: in section 4, between "... none of the other contributions in this section envisage a date before 420 B.C." and "to 350 B.C." insert "Akira Hirakawa defends the short chronology and Heinz Bechert himself sets a range from 400 B.C."
Goleman, Daniel (2008). Destructive Emotions: A Scientific Dialogue with the Dalai Lama. Bantam. Kindle Edition.
Online sources
External links
Worldwide Buddhist Information and Education Network, BuddhaNet
Early Buddhist texts, translations, and parallels, SuttaCentral
East Asian Buddhist Studies: A Reference Guide, Robert Buswell and William Bodiford, UCLA
Buddhist Bibliography (China and Tibet), East West Center
Ten Philosophical Questions: Buddhism, Richard Hayes, Leiden University
Readings in Theravada Buddhism, Access to Insight
Readings in Zen Buddhism, Hakuin Ekaku (Ed: Monika Bincsik)
Readings in Sanskrit Buddhist Canon, Nagarjuna Institute – UWest
Readings in Buddhism, Vipassana Research Institute (English, Southeast Asian and Indian Languages)
Religion and Spirituality: Buddhism at Open Directory Project
The Future of Buddhism series, from Patheos
Buddhist Art, Smithsonian
Buddhism — objects, art and history, V&A Museum
Category:Transtheism
Category:Gautama Buddha
Category:Indian religions | 3,267,529 | 2017-01 |
Kathmandu | Kathmandu (; , Nepali pronunciation: ) is the capital city of the Federal Democratic Republic of Nepal, the largest Himalayan state in Asia. It is the largest metropolis in Nepal, with a population of 1.4 million in the city proper, and 5 million in its urban agglomeration across the Kathmandu Valley, which includes the towns of Lalitpur, Kirtipur, Madhyapur Thimi and Bhaktapur. Kathmandu is also the largest metropolis in the Himalayan hill region.
The city stands at an elevation of approximately above sea level in the bowl-shaped Kathmandu Valley of central Nepal. The valley is historically termed as "Nepal Proper" and has been the home of Newar culture, a cosmopolitan urban civilization in the Himalayan foothills. The city was the royal capital of the Kingdom of Nepal and hosts palaces, mansions and gardens of the Nepalese aristocracy. It has been home to the headquarters of the South Asian Association for Regional Cooperation (SAARC) since 1985. Today, it is the seat of government of the Nepalese republic established in 2008; and is part of the Bagmati Zone in Nepalese administrative geography.
Kathmandu has been the center of Nepal's history, art, culture and economy. It has a multiethnic population within a Hindu and Buddhist majority. Religious and cultural festivities form a major part of the lives of people residing in Kathmandu. Tourism is an important part of the economy as the city is the gateway to the Nepalese Himalayas. There are also seven casinos in the city. In 2013, Kathmandu was ranked third among the top ten upcoming travel destinations in the world by TripAdvisor, and ranked first in Asia. Historic areas of Kathmandu were devastated by a 7.8 magnitude earthquake on 25 April 2015. Nepali is the most spoken language in the city, while English is understood by the city's educated residents.
Etymology
The city of Kathmandu is named after the Kasthamandap temple, which stood in Durbar Square. In Sanskrit, Kāṣṭha () means "wood" and Maṇḍap () means "covered shelter". This temple, also known as Maru Satal in the Newar language, was built in 1596 by Biseth in the period of King Laxmi Narsingh Malla. The two-story structure was made entirely of wood, and used no iron nails or supports. According to legend, all the timber used to build the pagoda was obtained from a single tree. The structure collapsed during the major earthquake on 25 April 2015.
The colophons of ancient manuscripts, dated as late as the 20th century, refer to Kathmandu as Kāṣṭhamaṇḍap Mahānagar in Nepal Mandala. Mahānagar means "great city". The city is called "Kāṣṭhamaṇḍap" in a vow that Buddhist priests still recite to this day. Thus, Kathmandu is also known as Kāṣṭhamaṇḍap. During medieval times, the city was sometimes called Kāntipur (कान्तिपुर). This name is derived from two Sanskrit words – Kānti and pur. Kānti means "beauty" and is mostly associated with light, while pur means "place"; the name thus means "city of light".
Among the indigenous Newar people, Kathmandu is known as Yeṃ Deśa (येँ देश), and Patan and Bhaktapur are known as Yala Deśa (यल देश) and Khwopa Deśa (ख्वप देश). pages 162–163 "Yen" is the shorter form of Yambu (यम्बु), which originally referred to the northern half of Kathmandu.
History
thumb|right|Manjusri|Manjushree, with Chandrahrasa, the Buddhist deity said to have created the valley
Archaeological excavations in parts of Kathmandu have found evidence of ancient civilizations. The oldest of these findings is a statue, found in Maligaon, that was dated at 185 AD. The excavation of Dhando Chaitya uncovered a brick with an inscription in Brahmi script. Archaeologists believe it is two thousand years old. Stone inscriptions are a ubiquitous element at heritage sites and are key sources for the history of Nepal.
The earliest Western reference to Kathmandu appears in an account of Jesuit Fathers Johann Grueber and Albert d'Orville. In 1661, they passed through Nepal on their way from Tibet to India, and reported that they reached "Cadmendu", the capital of Nepal kingdom.
Ancient history
The ancient history of Kathmandu is described in its traditional myths and legends. According to the Swayambhu Purana, present-day Kathmandu was once a huge and deep lake named "Nagdaha", as it was full of snakes. The lake was cut open and drained by the Bodhisattva Manjusri with his sword, and the water was evacuated from there. He then established a city called Manjupattan, and made Dharmakar the ruler of the valley land. After some time, a demon named Banasur closed the outlet, and the valley again became a lake. Then lord Krishna came to Nepal, killed Banasur, and again drained out the water. He brought some Gopas with him and made Bhuktaman the king of Nepal.
Kotirudra Samhita of Shiva Purana, Chapter 11, shloka 18 refers to the place as Nayapala city, which was famous for its Pashupati Shivalinga. The name Nepal probably originates from this city Nayapala.
Very few historical records exist of the period before the medieval Licchavis rulers. According to Gopalraj Vansawali, a genealogy of Nepali monarchs, the rulers of Kathmandu Valley before the Licchavis were Gopalas, Mahispalas, Aabhirs, Kirants, and Somavanshi.Article:गोपालराज वंशावली Language: Nepalbhasa, Journal:नेपालभाषा केन्द्रीय विभागया जर्नल, Edition:1, Date: 1998, Page: 18-25, 44 The Kirata dynasty was established by Yalamber. During the Kirata era, a settlement called Yambu existed in the northern half of old Kathmandu. In some of the Sino-Tibetan languages, Kathmandu is still called Yambu. Another smaller settlement called Yengal was present in the southern half of old Kathmandu, near Manjupattan. During the reign of the seventh Kirata ruler, Jitedasti, Buddhist monks entered Kathmandu valley and established a forest monastery at Sankhu.
thumb|right|Map of Kathmandu, 1802
Licchavi era
The Licchavis from the Indo-Gangetic plain migrated north and defeated the Kiratas, establishing the Licchavi dynasty, circa 400 AD. During this era, following the genocide of Shakyas in Lumbini by Virudhaka, the survivors migrated north and entered the forest monastery in Sankhu masquerading as Koliyas. From Sankhu, they migrated to Yambu and Yengal (Lanjagwal and Manjupattan) and established the first permanent Buddhist monasteries of Kathmandu. This created the basis of Newar Buddhism, which is the only surviving Sanskrit-based Buddhist tradition in the world. With their migration, Yambu was called Koligram and Yengal was called Dakshin Koligram during most of the Licchavi era.
Eventually, the Licchavi ruler Gunakamadeva merged Koligram and Dakshin Koligram, founding the city of Kathmandu. The city was designed in the shape of Chandrahrasa, the sword of Manjushri. The city was surrounded by eight barracks guarded by Ajimas. One of these barracks is still in use at Bhadrakali (in front of Singha Durbar). The city served as an important transit point in the trade between India and Tibet, leading to tremendous growth in architecture. Descriptions of buildings such as Managriha, Kailaskut Bhawan, and Bhadradiwas Bhawan have been found in the surviving journals of travelers and monks who lived during this era. For example, the famous 7th-century Chinese traveller Xuanzang described Kailaskut Bhawan, the palace of the Licchavi king Amshuverma. The trade route also led to cultural exchange as well. The artistry of the Newar people—the indigenous inhabitants of the Kathmandu Valley—became highly sought after during this era, both within the Valley and throughout the greater Himalayas. Newar artists travelled extensively throughout Asia, creating religious art for their neighbors. For example, Araniko led a group of his compatriot artists through Tibet and China. Bhrikuti, the princess of Nepal who married Tibetan monarch Songtsän Gampo, was instrumental in introducing Buddhism to Tibet.
Malla era
thumb|Skyline of Kathmandu, circa 1793
thumb|Kathmandu Durbar Square, 1852
The Licchavi era was followed by the Malla era. Rulers from Tirhut, upon being attacked by Muslims, fled north to the Kathmandu valley. They intermarried with Nepali royalty, and this led to the Malla era. The early years of the Malla era were turbulent, with raids and attacks from Khas and Turk Muslims. There was also a devastating earthquake which claimed the lives of a third of Kathmandu's population, including the king Abhaya Malla. These disasters led to the destruction of most of the architecture of the Licchavi era (such as Mangriha and Kailashkut Bhawan), and the loss of literature collected in various monasteries within the city. Despite the initial hardships, Kathmandu rose to prominence again and, during most of the Malla era, dominated the trade between India and Tibet. Nepali currency became the standard currency in trans-Himalayan trade.
During the later part of the Malla era, Kathmandu Valley comprised four fortified cities: Kantipur, Lalitpur, Bhaktapur, and Kirtipur. These served as the capitals of the Malla confederation of Nepal. These states competed with each other in the arts, architecture, aesthetics, and trade, resulting in tremendous development. The kings of this period directly influenced or involved themselves in the construction of public buildings, squares, and temples, as well as the development of water spouts, the institutionalization of trusts (called guthis), the codification of laws, the writing of dramas, and the performance of plays in city squares. Evidence of an influx of ideas from India, Tibet, China, Persia, and Europe among other places can be found in a stone inscription from the time of king Pratap Malla. Books have been found from this era that describe their tantric tradition (e.g. Tantrakhyan), medicine (e.g. Haramekhala), religion (e.g. Mooldevshashidev), law, morals, and history. Amarkosh, a Sanskrit-Nepal Bhasa dictionary from 1381 AD, was also found. Architecturally notable buildings from this era include Kathmandu Durbar Square, Patan Durbar Square, Bhaktapur Durbar Square, the former durbar of Kirtipur, Nyatapola, Kumbheshwar, the Krishna temple, and others.
Modern era
thumb|The now demolished old royal palace in 1920
Early Shah rule
The Gorkha Kingdom ended the Malla confederation after the Battle of Kathmandu in 1768. This marked the beginning of the modern era in Kathmandu. The Battle of Kirtipur was the start of the Gorkha conquest of the Kathmandu Valley. Kathmandu was adopted as the capital of the Gorkha empire, and the empire itself was dubbed Nepal. During the early part of this era, Kathmandu maintained its distinctive culture. Buildings with characteristic Nepali architecture, such as the nine-story tower of Basantapur, were built during this era. However, trade declined because of continual war with neighboring nations. Bhimsen Thapa supported France against Great Britain; this led to the development of modern military structures, such as modern barracks in Kathmandu. The nine-storey tower Dharahara was originally built during this era.
Rana rule
Rana rule over Nepal started with the Kot Massacre, which occurred near Hanuman Dhoka Durbar. During this massacre, most of Nepal's high-ranking officials were massacred by Jang Bahadur Rana and his supporters. Another massacre, the Bhandarkhal Massacre, was also conducted by Kunwar and his supporters in Kathmandu. During the Rana regime, Kathmandu's alliance shifted from anti-British to pro-British; this led to the construction of the first buildings in the style of Western European architecture. The most well-known of these buildings include Singha Durbar, Garden of Dreams, Shital Niwas, and the old Narayanhiti palace. The first modern commercial road in the Kathmandu Valley, the New Road, was also built during this era. Trichandra College (the first college of Nepal), Durbar School (the first modern school of Nepal), and Bir Hospital (the first hospital of Nepal) were built in Kathmandu during this era. Rana rule was marked by despotism, economic exploitation and religious persecution.
Geography
thumb|center|700px|View of Himalayan peaks from the Kathmandu Valley
left|thumb|Map of central Kathmandu
right|thumb|Urban expansion in Kathmandu (March 2015)
Kathmandu is in the northwestern part of the Kathmandu Valley to the north of the Bagmati River and covers an area of . The average elevation is above sea level. The city is bounded by several other municipalities of the Kathmandu valley: south of the Bagmati by Lalitpur Sub-Metropolitan City (Patan), with which it forms one urban area surrounded by a ring road, to the southwest by Kirtipur Municipality and to the east by Madyapur Thimi Municipality. To the north the urban area extends into several Village Development Committees. However, the urban agglomeration extends well beyond the neighboring municipalities, e.g. to Bhaktapur, and nearly covers the entire Kathmandu valley.
Kathmandu is dissected by eight rivers, the main river of the valley, the Bagmati and its tributaries, of which the Bishnumati, Dhobi Khola, Manohara Khola, Hanumant Khola, and Tukucha Khola are predominant. The mountains from where these rivers originate are in the elevation range of , and have passes which provide access to and from Kathmandu and its valley. An ancient canal once flowed from Nagarjuna hill through Balaju to Kathmandu; this canal is now extinct.
Kathmandu and its valley are in the Deciduous Monsoon Forest Zone (altitude range of ), one of five vegetation zones defined for Nepal. The dominant tree species in this zone are oak, elm, beech, maple and others, with coniferous trees at higher altitude.Shrestha S.H. p. 35
right|thumb|The green, vegetated slopes that surround the Kathmandu metro area (light gray, image centre) include both forest reserves and national parks
Kathmandu administration
Kathmandu and adjacent cities are composed of neighborhoods, which are used quite extensively and are more familiar among locals. Administratively, however, the city is divided into 35 wards, numbered from 1 to 35.
Kathmandu agglomeration
There is no officially defined agglomeration of Kathmandu. The urban area of the Kathmandu valley is split among three different districts (collections of local government units within a zone), which extend very little beyond the valley fringe, except towards the southern ranges, which have a comparatively small population. These districts have the three highest population densities in the country. Within these three districts lie a number of VDCs (villages), 20 municipalities (nagarpalika), 1 sub-metropolitan municipality (up-maha nagarpalika: Lalitpur), and 1 metropolitan municipality (maha-nagarpalika: Kathmandu). The following table describes the districts that would likely be considered the agglomeration:
Administrative district (Nepali: जिल्ला; jillā) | Area (km²) | Population (2001 Census) | Population (2011 Census) | Population density (/km²)
Kathmandu District | 395 | 1,081,845 | 1,740,977 | 4408
Lalitpur District | 385 | 337,785 | 466,784 | 1212
Bhaktapur District | 119 | 225,461 | 303,027 | 2546
Kathmandu agglomeration | 899 | 1,645,091 | 2,510,788 | 2793
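The density column follows directly from the 2011 census population divided by the district area in km², rounded to the nearest whole number. A minimal Python sketch of that check (the variable names are illustrative and not taken from any official source):

# Check that stated density = 2011 population / area (km²), rounded.
districts = {
    "Kathmandu District": (395, 1_740_977, 4408),
    "Lalitpur District": (385, 466_784, 1212),
    "Bhaktapur District": (119, 303_027, 2546),
    "Kathmandu agglomeration": (899, 2_510_788, 2793),
}

for name, (area_km2, pop_2011, stated_density) in districts.items():
    computed = round(pop_2011 / area_km2)
    print(f"{name}: computed {computed}/km², stated {stated_density}/km²")

Running this reproduces each stated density figure, confirming the table's columns are internally consistent.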
Climate
Five major climatic regions are found in Nepal. Of these, the Kathmandu Valley is in the Warm Temperate Zone (elevation ranging from ), where the climate is fairly temperate, atypical for the region. This zone is followed by the Cool Temperate Zone, with elevation varying between . Under Köppen's climate classification, portions of the city with lower elevations have a humid subtropical climate (Cwa), while portions with higher elevations generally have a subtropical highland climate. In Kathmandu, which is representative of its valley's climate, the average summer temperature varies from . The average winter temperature is .
The city generally has a climate with warm days followed by cool nights and mornings. Unpredictable weather is expected, given that temperatures can drop to or less during the winter. During a 2013 cold front, the winter temperatures of Kathmandu dropped to , and the lowest temperature was recorded on 10 January 2013, at . Rainfall is mostly monsoon-based (about 65% of the total concentrated during the monsoon months of June to August), and decreases substantially () from eastern Nepal to western Nepal. Rainfall has been recorded at about for the Kathmandu valley, and averages for the city of Kathmandu. On average humidity is 75%.
A climate chart based on data from the Nepal Bureau of Standards & Meteorology ("Weather Meteorology", 2005) provides minimum and maximum temperatures for each month, along with the annual amount of precipitation recorded for 2005.
The decade of 2000–2010 saw highly variable and unprecedented precipitation anomalies in Kathmandu. This was mostly due to the annual variation of the southwest monsoon. For example, 2003 was the wettest year ever in Kathmandu, totalling over of precipitation due to an exceptionally strong monsoon season. In contrast, 2001 recorded only of precipitation due to an extraordinarily weak monsoon season.
Additional sources for the climate data: Danish Meteorological Institute (sunshine and relative humidity); Sistema de Clasificación Bioclimática Mundial (extremes).
Economy
thumb|left|Hotel Shanker is one of the city's popular heritage hotels
thumb|Central Bank of Nepal
thumb|The Kathmandu-based billionaire Binod Chaudhary is listed by Forbes as Nepal's richest man
The location and terrain of Kathmandu have played a significant role in the development of a stable economy which spans millennia. The city is in an ancient lake basin, with fertile soil and flat terrain. This geography helped form a society based on agriculture. This, combined with its location between India and China, helped establish Kathmandu as an important trading center over the centuries. Kathmandu's trade is an ancient profession that flourished along an offshoot of the Silk Road which linked India and Tibet. From centuries past, Lhasa Newar merchants of Kathmandu have conducted trade across the Himalaya and contributed to spreading art styles and Buddhism across Central Asia. Other traditional occupations are farming, metal casting, woodcarving, painting, weaving, and pottery.
Kathmandu is the most important industrial and commercial center in Nepal. The Nepal Stock Exchange, the head office of the national bank, the chamber of commerce, as well as head-offices of national and international banks, tele-communication companies, the electricity authority, and various other national and international organizations are in Kathmandu. The major economic hubs are the New Road, Durbar Marg, Ason and Putalisadak.
The economic output of the metropolitan area alone is worth more than one third of the national GDP: around $6.5 billion in nominal terms (approximately NRs 550 billion per year), with per capita income of roughly $2,200, about three times the national average. Kathmandu exports handicrafts, artworks, garments, carpets, pashmina and paper; trade accounts for 21% of its finances. Manufacturing is also important and accounts for 19% of the revenue that Kathmandu generates. Garments and woolen carpets are the most notable manufactured products. Other economic sectors in Kathmandu include agriculture (9%), education (6%), transport (6%), and hotels and restaurants (5%). Kathmandu is famous for lokta paper and pashmina shawls.
Tourism
thumb|Hyatt Regency, Kathmandu
Tourism is considered another important industry in Nepal. This industry started around 1950, as the country's political makeup changed and ended the country's isolation from the rest of the world. In 1956, air transportation was established and the Tribhuvan Highway, between Kathmandu and Raxaul (at India's border), was started. Separate organizations were created in Kathmandu to promote this activity; some of these include the Tourism Development Board, the Department of Tourism and the Civil Aviation Department. Furthermore, Nepal became a member of several international tourist associations. Establishing diplomatic relations with other nations further accentuated this activity. The hotel industry, travel agencies, training of tourist guides, and targeted publicity campaigns are the chief reasons for the remarkable growth of this industry in Nepal, and in Kathmandu in particular.Shrestha pp.86–89
Since then, tourism in Nepal has thrived; it is the country's most important industry. Tourism is a major source of income for most of the people in the city, with several hundred thousand visitors annually. Hindu and Buddhist pilgrims from all over the world visit Kathmandu's religious sites such as Pashupatinath, Swayambhunath, Boudhanath and Budhanilkantha. From a mere 6,179 tourists in 1961/62, the number increased to 491,504 in 1999/2000. Following the end of the Maoist insurgency, there was a significant rise, with 509,956 tourist arrivals in 2009. Tourism has since improved further as the country became a democratic republic. In economic terms, foreign exchange from tourism registered 3.8% of the GDP in 1995/96 but then started declining. The high level of tourism is attributed to the natural grandeur of the Himalayas and the rich cultural heritage of the country.
The neighbourhood of Thamel is Kathmandu's primary "traveller's ghetto", packed with guest houses, restaurants, shops, and bookstores, catering to tourists. Another neighbourhood of growing popularity is Jhamel, a name for Jhamsikhel coined to rhyme with Thamel. Jhochhen Tol, also known as Freak Street, is Kathmandu's original traveler's haunt, made popular by the hippies of the 1960s and 1970s; it remains a popular alternative to Thamel. Asan is a bazaar and ceremonial square on the old trade route to Tibet, and provides a fine example of a traditional neighbourhood.
With the opening of the tourist industry after the change in the political scenario of Nepal in 1950, the hotel industry drastically improved.Shrestha pp.86–87 Kathmandu now boasts several luxury hotels such as the Hyatt Regency, Dwarika's, the Yak & Yeti, The Everest Hotel, Hotel Radisson, Hotel De L'Annapurna, The Malla Hotel, Shangri-La Hotel (which is not operated by the Shangri-La Hotel Group) and The Shanker Hotel. There are several four-star hotels such as Hotel Vaishali, Hotel Narayani, The Blue Star and Grand Hotel. The Garden Hotel, Hotel Ambassador, and Aloha Inn are among the three-star hotels in Kathmandu. Hotels like the Hyatt Regency, De L'Annapurna and Hotel Yak & Yeti are among the five-star hotels providing casinos as well.
Government and public services
thumb|Office of the Prime Minister of Nepal
Civic administration
Kathmandu Municipal Corporation, abbreviated KMC, is the chief nodal agency for the administration of Kathmandu. The Municipality of Kathmandu was upgraded to a metropolitan city in 1994.
thumb|SAARC Secretariat in Kathmandu
Metropolitan Kathmandu is divided into five sectors: the Central Sector, the East Sector, the North Sector, the City Core and the West Sector. For civic administration, the city is further divided into 35 administrative wards. The Council administers the Metropolitan area of Kathmandu city through its 177 elected representatives and 20 nominated members. It holds biannual meetings to review, process and approve the annual budget and make major policy decisions. The ward's profile documents for the 35 wards prepared by the Kathmandu Metropolitan Council is detailed and provides information for each ward on population, the structure and condition of houses, the type of roads, educational, health and financial institutions, entertainment facilities, parking space, security provisions, etc. It also includes lists of development projects completed, on-going and planned, along with informative data about the cultural heritage, festivals, historical sites and the local inhabitants. Ward 16 is the largest, with an area of 437.4 ha; ward 26 is the smallest, with an area of 4 ha.
Kathmandu is headquarters of the surrounding Kathmandu District. The city of Kathmandu forms this district with Kirtipur Municipality and some 57 Village Development Committees. According to the 2001 census, there are 235,387 households in the metropolitan city.
Law and order
The Metropolitan Police is the main law enforcement agency in the city. It is headed by a commissioner of police. The Metropolitan Police is a division of the Nepal Police, and the administrative control lies with the National Home Ministry.
thumb|Royal Netherlands Embassy. Kathmandu hosts 28 diplomatic missions
Fire service
The fire service, known as the Barun Yantra Karyalaya, opened its first station in Kathmandu in 1937 with a single vehicle. An iron tower was erected to monitor the city and watch for fire. As a precautionary measure, firemen were sent to the areas which were designated as accident-prone areas. In 1944, the fire service was extended to the neighboring cities of Lalitpur and Bhaktapur. In 1966, a fire service was established in Kathmandu airport. In 1975, a West German government donation added seven fire engines to Kathmandu's fire service. The fire service in the city is also overseen by an international non-governmental organization, the Firefighters Volunteer Association of Nepal (FAN), which was established in 2000 with the purpose of raising public awareness about fire and improving safety.
Electricity and water supply
Electricity in Kathmandu is regulated and distributed by the Nepal Electricity Authority (NEA), while water supply and sanitation facilities are provided by Kathmandu Upatyaka Khanepani Limited (KUKL).
There is a severe shortage of water for household purposes such as drinking, bathing, cooking and washing. People have been using bottled mineral water and mineral water tanks for all purposes related to water.
Waste management
There is no proper waste management in Kathmandu, so rubbish piles up on roads, pavements and in waterways.
Waste management may be through composting in municipal waste management units, and at houses with home composting units. Both systems are common and established in India and neighbouring countries.
Demographics
Kathmandu's urban cosmopolitan character has made it the most populous city in Nepal, recording a population of 671,846 residents living in 235,387 households in the metropolitan area, according to the 2001 census. According to the National Population Census of 2011, the total population of Kathmandu city was 975,543 with an annual growth rate of 6.12% with respect to the population figure of 2001. 70% of the total population residing in Kathmandu are aged between 15 and 59.
Over the years the city has been home to people of various ethnicities, resulting in a range of different traditions and cultural practices. In one decade, the population increased from 427,045 in 1991 to 671,805 in 2001. The population was projected to reach 915,071 in 2011 and 1,319,597 by 2021. To keep up with this population growth, the KMC-controlled area of has expanded to in 2001. With this new area, the population density, which was 85 in 1991, was still 85 in 2001; it is likely to jump to 111 in 2011 and 161 in 2021.
Ethnic groups
The largest ethnic groups are Newar (29.6%), Mongoloid (25.1% Kirat, Gurung, Magars, Tamang, Sherpa etc.), Khas Brahmins (20.51%), and Chettris (18.5%). Tamangs originating from surrounding hill districts can be seen in Kathmandu. More recently, other hill ethnic groups and Caste groups from Terai have come to represent a substantial proportion of the city's population. The major languages are Nepali and Nepal Bhasa, while English is understood by many, particularly in the service industry. The major religions are Hinduism and Buddhism.
The linguistic profile of Kathmandu underwent drastic changes during the Shah dynasty's rule because of its strong bias towards Brahminic culture. The Sanskrit language was therefore preferred, and people were encouraged to learn it, even by attending Sanskrit learning centers in the Terai. Sanskrit schools were specially set up in Kathmandu and in the Terai region to inculcate traditional Hindu culture and practices that originated in Nepal.Jha p.21
Architecture and cityscape
The ancient trade route between India and Tibet that passed through Kathmandu enabled a fusion of artistic and architectural traditions from other cultures to be amalgamated with local art and architecture. The monuments of Kathmandu City have been influenced over the centuries by Hindu and Buddhist religious practices. The architectural treasure of the Kathmandu valley has been categorized under the well-known seven groups of heritage monuments and buildings. UNESCO declared these seven groups of monuments a World Heritage Site (WHS) in 1979. The seven monument zones cover an area of , with the buffer zone extending to . The Seven Monument Zones (MZs), inscribed originally in 1979 and with a minor modification in 2006, are the Durbar squares of Hanuman Dhoka, Patan and Bhaktapur, the Hindu temples of Pashupatinath and Changunarayan, and the Buddhist stupas of Swayambhu and Boudhanath.
Durbar squares
The literal meaning of Durbar Square is a "place of palaces". There are three preserved Durbar Squares in Kathmandu valley and one unpreserved in Kirtipur. The Durbar Square of Kathmandu is in the old city and has heritage buildings representing four kingdoms (Kantipur, Lalitpur, Bhaktapur, Kirtipur), the earliest dating to the Licchavi dynasty. The complex has 50 temples distributed across the two quadrangles of the Durbar Square. The outer quadrangle has the Kasthamandap, Kumari Ghar, and Shiva-Parvati Temple; the inner quadrangle has the Hanuman Dhoka palace. The squares were severely damaged in the April 2015 Nepal earthquake.
Hanuman Dhoka is a complex of structures with the Royal Palace of the Malla kings and of the Shah dynasty. It is spread over five acres. The eastern wing, with ten courtyards, is the oldest part, dating to the mid-16th century. It was expanded by King Pratap Malla in the 17th century with many temples. The royal family lived in this palace until 1886 when they moved to Narayanhiti Palace. The stone inscription outside is in fifteen languages.
Kumari Ghar is a palace in the center of the Kathmandu city, next to the Durbar square where a Royal Kumari selected from several Kumaris resides. Kumari, or Kumari Devi, is the tradition of worshipping young pre-pubescent girls as manifestations of the divine female energy or devi in South Asian countries. In Nepal the selection process is very rigorous. Kumari is believed to be the bodily incarnation of the goddess Taleju (the Nepali name for Durga) until she menstruates, after which it is believed that the goddess vacates her body. Serious illness or a major loss of blood from an injury are also causes for her to revert to common status. The current Royal Kumari, Matina Shakya, age four, was installed in October 2008 by the Maoist government that replaced the monarchy.
Kasthamandap is a three-storeyed temple enshrining an image of Gorakhnath. It was built in pagoda style in the 16th century, under the reign of King Laxmi Narsingha Malla, and the name of Kathmandu is a derivative of the word Kasthamandap. Kasthamandap stands at the intersection of two ancient trade routes linking India and Tibet at Maru square. It was originally built as a rest house for travelers.
Pashupatinath temple
thumb|center|upright=4.3|Panorama of the Pashupatinath Temple from the other bank of Bagmati river
The Pashupatinath Temple is a famous 5th century Hindu temple dedicated to Lord Shiva (Pashupati). On the banks of the Bagmati River in the eastern part of Kathmandu, Pashupatinath Temple is the oldest Hindu temple in Kathmandu. It served as the seat of national deity, Lord Pashupatinath, until Nepal was secularized. However, a significant part of the temple was destroyed by Mughal invaders in the 14th century and little or nothing remains of the original 5th-century temple exterior. The temple as it stands today was built in the 19th century, although the image of the bull and the black four-headed image of Pashupati are at least 300 years old. The temple is a UNESCO World Heritage Site. Shivaratri, or the night of Lord Shiva, is the most important festival that takes place here, attracting thousands of devotees and sadhus.
Believers in Pashupatinath (mainly Hindus) are allowed to enter the temple premises, but non-Hindu visitors are allowed to view the temple only from across the Bagmati River. The priests who perform the services at this temple have been Brahmins from Karnataka, South India, since the time of the Malla king Yaksha Malla. This tradition is believed to have been started at the request of Adi Shankaracharya, who sought to unify the states of Bharatam (Unified India) by encouraging cultural exchange. This procedure is followed in other temples around India, which were sanctified by Adi Shankaracharya.
The temple is built in the pagoda style of architecture, with cubic constructions, carved wooden rafters (tundal) on which they rest, and two-level roofs made of copper and gold.
Boudhanath
thumb|center|upright=3.64|Buildings around Boudha Stupa
The Boudhanath, (also written Bouddhanath, Bodhnath, Baudhanath or the Khāsa Chaitya), is one of the holiest Buddhist sites in Nepal, along with Swayambhu. It is a very popular tourist site. Boudhanath is known as Khāsti by Newars and as Bauddha or Bodhnāth by speakers of Nepali.Snellgrove (1987), p. 365. About from the center and northeastern outskirts of Kathmandu, the stupa's massive mandala makes it one of the largest spherical stupas in Nepal. Boudhanath became a UNESCO World Heritage Site in 1979.
thumb|right|Boudhanath Stupa, one of the largest in Nepal
The base of the stupa has 108 small depictions of the Dhyani Buddha Amitabha. It is surrounded with a brick wall with 147 niches, each with four or five prayer wheels engraved with the mantra, om mani padme hum. At the northern entrance where visitors must pass is a shrine dedicated to Ajima, the goddess of smallpox. Every year the stupa attracts many Tibetan Buddhist pilgrims who perform full body prostrations in the inner lower enclosure, walk around the stupa with prayer wheels, chant, and pray. Thousands of prayer flags are hoisted up from the top of the stupa downwards and dot the perimeter of the complex. The influx of many Tibetan refugees from China has seen the construction of over 50 Tibetan gompas (monasteries) around Boudhanath.
Swayambhu
Swayambhu is a Buddhist stupa atop a hillock in the northwestern part of the city. It is among the oldest religious sites in Nepal. Although the site is considered Buddhist, it is revered by both Buddhists and Hindus. The stupa consists of a dome at the base; above the dome, there is a cubic structure with the eyes of Buddha looking in all four directions. There are pentagonal toranas above each of the four sides, with statues engraved on them. Behind and above the toranas there are thirteen tiers. Above all the tiers, there is a small space above which lies a gajur.
Rani Pokhari
Rani Pokhari is a historic artificial pond in the heart of Kathmandu. It was built by King Pratap Malla in 1670 AD. A large stone statue of an elephant on the south side represents Pratap Malla and his two sons. Rani Pokhari is opened to the public once a year, on the final day of Tihar (Bhai Tika), and during the Chhath festival. The world's largest Chhath celebration takes place every year at Rani Pokhari. The pond is one of Kathmandu's most famous landmarks and is known for its religious and aesthetic significance.
Culture
thumb|left|A man in the Nepalese national dress
thumb|right|Stone carvings, called Chaityas, seen in street corners and courtyards
Arts
Kathmandu valley is described as "an enormous treasure house of art and sculptures", which are made of wood, stone, metal, and terracotta, and found in profusion in temples, shrines, stupas, gompas, chaityas and palaces. The art objects are also seen in street corners, lanes, private courtyards and in open ground. Most art is in the form of icons of gods and goddesses. Kathmandu valley has had this art treasure for a very long time, but received worldwide recognition only after the country opened to the outside world in 1950.
The religious art of Nepal, and of Kathmandu in particular, consists of an iconic symbolism of the Mother Goddesses such as Bhavani, Durga, Gaja-Lakshmi, Hariti-Sitala, Mahishamardini, Saptamatrika (seven mother goddesses), and Sri-Lakshmi (wealth-goddess). From the 3rd century BC, apart from the Hindu gods and goddesses, Buddhist monuments from the Ashokan period (it is said that Ashoka visited Nepal in 250 BC) have embellished Nepal in general and the valley in particular. These art and architectural edifices encompass three major periods of evolution: the Licchavi or classical period (500 to 900 AD); the post-classical period (1000 to 1400 AD), with strong influence of the Pala art form; and the Malla period (1400 onwards), which exhibited explicitly tantric influences coupled with the art of Tibetan demonology.Jha p.23
A broad typology has been ascribed to the decorative designs and carvings created by the people of Nepal. These artists have maintained a blend of Hinduism and Buddhism. The typology, based on the type of material used, comprises stone art, metal art, wood art, terracotta art, and painting.Jha pp.23–24
Museums
Kathmandu is home to a number of museums and art galleries, including the National Museum of Nepal and the Natural History Museum of Nepal. Nepal's art and architecture is an amalgamation of two ancient religions, Hinduism and Buddhism. This is amply reflected in the many temples, shrines, stupas, monasteries, and palaces in the seven well-defined Monument Zones of the Kathmandu valley, which are part of a UNESCO World Heritage Site. This amalgamation is also reflected in the planning and exhibitions in museums and art galleries throughout Kathmandu and its sister cities of Patan and Bhaktapur. The museums display unique artifacts and paintings from the 5th century CE to the present day, including finds from archaeological exploration.
Kathmandu museums and art galleries include:
The National Museum
The Natural History Museum
Hanumandhoka Palace Complex
The Kaiser Library
The National Art Gallery
The NEF-ART (Nepal Fine Art) Gallery
The Nepal Art Council Gallery
Narayanhity Palace Museum
The Taragaon Museum
thumb|right|National Museum of Nepal
The National Museum is in the western part of Kathmandu, near the Swayambhunath stupa in an historical building. This building was constructed in the early 19th century by General Bhimsen Thapa. It is the most important museum in the country, housing an extensive collection of weapons, art and antiquities of historic and cultural importance. The museum was established in 1928 as a collection house of war trophies and weapons, and the initial name of this museum was Chhauni Silkhana, meaning "the stone house of arms and ammunition". Given its focus, the museum contains many weapons, including locally made firearms used in wars, leather cannons from the 18th–19th century, and medieval and modern works in wood, bronze, stone and paintings.
The Natural History Museum is in the southern foothills of Swayambhunath hill and has a sizeable collection of different species of animals, butterflies, and plants. The museum is noted for its display of species, from prehistoric shells to stuffed animals.
The Tribhuvan Museum contains artifacts related to King Tribhuvan (1906–1955). It has a variety of pieces including his personal belongings, letters and papers, memorabilia related to events he was involved in, and a rare collection of photos and paintings of members of the royal family. The Mahendra Museum is dedicated to King Mahendra of Nepal (1920–1972). Like the Tribhuvan Museum, it includes his personal belongings such as decorations, stamps, coins, and personal notes and manuscripts, but it also has structural reconstructions of his cabinet room and office chamber. The Hanumandhoka Palace, a lavish medieval palace complex in the Durbar Square, contains three separate museums of historic importance. These museums include the Birendra Museum, which contains items related to the second-last monarch, Birendra of Nepal.
The enclosed compound of the Narayanhity Palace Museum is in the north-central part of Kathmandu. "Narayanhity" comes from Narayana, a form of the Hindu god Lord Vishnu, and Hiti, meaning "water spout" (Vishnu's temple is opposite the palace, and the water spout is east of the main entrance to the precinct). The present Narayanhity palace was built in 1970 in the form of a contemporary pagoda, in front of the old palace of 1915, on the occasion of the marriage of King Birendra Bir Bikram Shah, then heir apparent to the throne. The southern gate of the palace is at the crossing of Prithvipath and Darbar Marg roads. The palace area covers () and is fully secured with gates on all sides. This palace was the scene of the Nepali royal massacre. After the fall of the monarchy, it was converted to a museum.
The Taragaon Museum presents the modern history of the Kathmandu Valley.The Taragaon Museum on Facebook
It seeks to document 50 years of research and cultural heritage conservation in the Kathmandu Valley, recording what artists, photographers, architects and anthropologists from abroad contributed in the second half of the 20th century.
The structure of the Museum itself showcases restoration and rehabilitation efforts to preserve the built heritage of Kathmandu. It was designed by Carl Pruscha (master-planner of the Kathmandu Valley) in 1970 and constructed in 1971. Restoration works began in 2010 to rehabilitate the Taragaon hostel into the Taragaon Museum. The design uses local brick along with modern architectural design elements, as well as the use of circles, triangles and squares.
The Museum is within a short walk from the Boudhnath stupa, which itself can be seen from the Museum tower.
Art galleries
thumb|upright|A Buddhist statue display in Kathmandu
Kathmandu is a center for art in Nepal, displaying the work of contemporary artists in the country as well as collections of historical artists. Patan in particular is an ancient city noted for its fine arts and crafts. Art in Kathmandu is vibrant, demonstrating a fusion of traditionalism and modern art, derived from a great number of national, Asian, and global influences. Nepali art is commonly divided into two areas: the idealistic traditional painting known as Paubhas in Nepal (and perhaps more commonly as Thangkas in Tibet), which is closely linked to the country's religious history; and contemporary western-style painting, including nature-based compositions and abstract artwork based on Tantric elements and social themes, for which Nepali painters are well noted. Internationally, the British-based charity the Kathmandu Contemporary Art Centre is involved with promoting arts in Kathmandu.
Kathmandu contains many notable art galleries. The NAFA Gallery, operated by the Arts and Crafts Department of the Nepal Academy, is housed in Sita Bhavan, a neo-classical old Rana palace.
The Srijana Contemporary Art Gallery, inside the Bhrikutimandap Exhibition grounds, hosts the work of contemporary painters and sculptors, and regularly organizes exhibitions. It also runs morning and evening classes in the schools of art. Also of note is the Moti Azima Gallery, in a three-storied building in Bhimsenthan, which contains an impressive collection of traditional utensils and handmade dolls and items typical of a medieval Newar house, giving an important insight into Nepali history. The J Art Gallery, near the Royal Palace in Durbarmarg, displays the artwork of eminent, established Nepali painters. The Nepal Art Council Gallery, in the Babar Mahal on the way to Tribhuvan International Airport, contains artwork of both national and international artists and extensive halls regularly used for art exhibitions.
Literature
thumb|right|Asa Archives
The National Library of Nepal is in Patan. It is the largest library in the country with more than 70,000 books. English, Nepali, Sanskrit, Hindi, and Nepal Bhasa books are found here. The library is in possession of rare scholarly books in Sanskrit and English dating from the 17th century AD. Kathmandu also contains the Kaiser Library, in the Kaiser Mahal on the ground floor of the Ministry of Education building. This collection of around 45,000 books is derived from a personal collection of Kaiser Shamsher Jang Bahadur Rana. It covers a wide range of subjects including history, law, art, religion, and philosophy, as well as a Sanskrit manual of Tantra, which is believed to be over 1,000 years old. The 2015 earthquake caused severe damage to the Ministry of Education building, and the contents of the Kaiser Library have been temporarily relocated.
The Asa Archives are also noteworthy. They specialize in medieval history and religious traditions of the Kathmandu Valley. The archives, in Kulambhulu, have a collection of some 6,000 loose-leaf handwritten books and 1,000 palm-leaf manuscripts (mostly in Sanskrit or Nepal Bhasa) and a manuscript dated to 1464.
Cinema and theatre
Kathmandu is home to Nepali cinema and theaters. The city contains several theaters, including the National Dance Theatre in Kanti Path, the Ganga Theatre, the Himalayan Theatre and the Aarohan Theater Group founded in 1982. The M. Art Theater is based in the city. The Gurukul School of Theatre organizes the Kathmandu International Theater Festival, attracting artists from all over the world. A mini theater is also at the Hanumandhoka Durbar Square, established by the Durbar Conservation and Promotion Committee.
Kathmandu has a number of movie theatres (old single screen establishments and some new multiplexes) showing Nepali, Bollywood, and Hollywood films. Some old establishments include Vishwajyoti Cinema Hall, Jai Nepal Hall, Kumari Cinema Hall, Gopi Krishna Cinema Hall and Guna Cinema Hall. Kathmandu also houses some international standard cinema theatres and multiplexes, such as QFX Cinemas, Cine De Chef, Fcube Cinemas, Q's Cinemas and Big Movies.
Music
thumb|right|Traditional Buddhist musical performance during Gunla
Kathmandu is the centre of music and dance in Nepal, and these art forms are integral to understanding the city. Musical performances are organized in cultural venues. Music is a part of the traditional aspect of Kathmandu. Gunla is the traditional music festival according to Nepal Sambat. Newar music originated in Kathmandu. Furthermore, music from all over Nepal can be found in Kathmandu.
A number of hippies visited Kathmandu during the 1970s and introduced rock and roll, rock, and jazz to the city. Kathmandu is noted internationally for its jazz festival, popularly known as Jazzmandu. It is the only jazz festival in the Himalayan region and was established in March 2002. The festival attracts musicians from countries worldwide, such as Australia, Denmark, United States, Benin, and India.
The city has been referenced in numerous songs, including works by Cat Stevens ('Katmandu', Mona Bone Jakon (1970)), Bob Seger ('Katmandu', Beautiful Loser (1975)), Rush ('A Passage to Bangkok', with the lyric "pulling into Kathmandu", 2112 (1976)), Krematorij ('Kathmandu', Three Springs (2000)) and Fito Páez ('Tráfico por Katmandú' – "Traffic through Kathmandu").
Cuisine
right|thumb|A typical Nepali meal Dal bhat in Kathmandu
The staple food of most people in Kathmandu is dal bhat. This consists of rice and lentil soup, generally served with vegetable curries, achar and sometimes chutney. Momo, a Nepali version of the Tibetan dumpling, has become prominent in Nepal, with many street vendors selling it. It is one of the most popular fast foods in Kathmandu. Various Nepali variants of momo, including buff (i.e. water buffalo) momo, chicken momo, and vegetarian momo, are famous in Kathmandu.
Most of the cuisine found in Kathmandu is non-vegetarian. However, the practice of vegetarianism is not uncommon, and vegetarian cuisine can be found throughout the city. Consumption of beef is very uncommon and considered taboo in many places. Buff (meat of water buffalo) is very common, and there is a strong tradition of buff consumption in Kathmandu, especially among Newars, which is not found in other parts of Nepal. Consumption of pork was considered taboo until a few decades ago; due to intermixing with Kirat cuisine from eastern Nepal, pork has found a place in Kathmandu dishes, although a fringe population of devout Hindus and Muslims still considers it taboo. Muslims avoid eating buff, regarding it as forbidden by the Quran, while Hindus eat all varieties except cow's meat, as they consider the cow to be a goddess and a symbol of purity. The chief breakfast for locals and visitors alike is mostly momo or chow mein.
Kathmandu had only one western-style restaurant in 1955.Lonely Planet (2003), pp. 91–92 A large number of restaurants in Kathmandu have since opened, serving Nepali cuisine, Tibetan cuisine, Chinese cuisine and Indian cuisine in particular. Many other restaurants have opened to accommodate locals, expatriates, and tourists. The growth of tourism in Kathmandu has led to culinary creativity and the development of hybrid foods to accommodate tourists, such as American chop suey, a sweet-and-sour sauce with crispy noodles commonly topped with a fried egg, and other westernized adaptations of traditional cuisine. Continental cuisine can be found in selected places. International chain restaurants are rare, but some outlets of Pizza Hut and KFC have recently opened there. It also has several outlets of the international ice-cream chain Baskin-Robbins.
Kathmandu has a larger proportion of tea drinkers than coffee drinkers. Tea is widely served but is extremely weak by western standards. It is richer and contains tea leaves boiled with milk, sugar and spices. Alcohol is widely drunk, and there are numerous local variants of alcoholic beverages. Drinking and driving is illegal, and authorities have a zero tolerance policy. Ailaa and thwon (alcohol made from rice) are the alcoholic beverages of Kathmandu, found in all the local bhattis (alcohol serving eateries). Chhyaang, tongba (fermented millet or barley) and rakshi are alcoholic beverages from other parts of Nepal which are found in Kathmandu. However, shops and bars in Kathmandu widely sell western and Nepali beers.
left|thumb|Samyak, a Buddhist festival during which statues of Buddhas from the ancient monasteries are displayed together. Note the statue of Hanuman next to the Buddhas in the picture, a common example of religious harmony in Kathmandu.
Festivals
thumb|right|President of Nepal Dr. Ram Baran Yadav observing the street festival of Yenya, which literally means "festival of Kathmandu"
thumb|right|Nepali Lakhe dancer
thumb|right|View of Kathmandu valley from Halchowk hill in Dipawali 2013
Most of the fairs and festivals in Kathmandu originated in the Malla period or earlier. Traditionally, these festivals were celebrated by Newars. In recent years, these festivals have found wider participation from other Kathmanduites as well. As the capital of the Republic of Nepal, Kathmandu hosts various national festivals. With mass migration to the city, the cultures of the Khas from the west, the Kirats from the east, the Bon/Tibetan peoples from the north, and the Mithila culture from the south meet in the capital and mingle harmoniously. Festivities such as the Ghode (horse) Jatra, Indra Jatra, the Dashain Durga Puja festivals, Shivratri and many more are observed by all the Hindu and Buddhist communities of Kathmandu with devotional fervor and enthusiasm. Social regulations in the codes enacted incorporate Hindu traditions and ethics. These were followed by the Shah kings and previous kings, as devout Hindus and protectors of the Buddhist religion.
Cultural continuity has been maintained for centuries in the exclusive worship of goddesses and deities in Kathmandu and the rest of the country. These deities include the Ajima, Taleju (or Tulja Bhavani), Digu taleju, and Kumari (the living goddess). The artistic edifices have now become places of worship in the everyday life of the people; therefore, a roster is maintained to observe the annual festivals. There are 133 festivals held each year.
Some of the traditional festivals observed in Kathmandu, apart from those previously mentioned, are Bada Dashain, Tihar, Chhath, Maghe Sankranti, Naga Panchami, Janai Poornima, Pancha Dan, Teej/Rishi Panchami, Pahan Charhe, Jana Baha Dyah Jatra (White Machchhendranath Jatra), and Matatirtha Aunsi.
Hinduism
It is assumed that, together with the kingdom of the Licchavis (c. 400 to 750), Hinduism and the endogamous social stratification of the caste system were established in the Kathmandu Valley. The Pashupatinath Temple, Changu Narayan temple (the oldest), and the Kasthamandap are of particular importance to Hindus. Other notable Hindu temples in Kathmandu and the surrounding valley include the Bajrayogini Temple, Dakshinkali Temple, Guhyeshwari Temple, and the Sobha Bhagwati shrine.
The Bagmati River, which flows through Kathmandu, is considered a holy river by both Hindus and Buddhists, and many Hindu temples are on the banks of this river. The importance of the Bagmati also lies in the fact that Hindus are cremated on its banks, and Kirants are buried in the hills by its side. According to the Nepali Hindu tradition, the dead body must be dipped three times into the Bagmati before cremation. The chief mourner (usually the first son), who lights the funeral pyre, must take a holy river-water bath immediately after the cremation. Many relatives who join the funeral procession also take a bath in the Bagmati River or sprinkle the holy water on their bodies at the end of the cremation, as the Bagmati is believed to purify people spiritually.
Buddhism
Buddhism started in Kathmandu with the arrival of Buddhist monks during the time of Buddha (c. 563–483 BC).L. S. Cousins (1996), "The dating of the historical Buddha: a review article", Journal of the Royal Asiatic Society (3)6(1): 57–63. They started a forest monastery in Sankhu. This monastery was renovated by Shakyas after they fled the genocide perpetrated by Virudhaka (r. 491–461 BC).
During the Hindu Licchavi era (c. 400 to 750), various monasteries and orders were created; these successively led to the formation of Newar Buddhism, which is still practiced in Sanskrit, the primary liturgical language of Hinduism.
Legendary Princess Bhrikuti (7th-century) and artist Araniko (1245–1306 AD) from that tradition of Kathmandu valley played a significant role in spreading Buddhism in Tibet and China. There are over 108 traditional monasteries (Bahals and Bahis) in Kathmandu based on Newar Buddhism. Since the 1960s, the permanent Tibetan Buddhist population of Kathmandu has risen significantly so that there are now over fifty Tibetan Buddhist monasteries in the area. Also, with the modernization of Newar Buddhism, various Theravada Bihars have been established.
Kirant Mundhum
Kirant Mundhum is one of the indigenous animistic practices of Nepal. It is practiced by Kirat people. Some animistic aspects of Kirant beliefs, such as ancestor worship (worship of Ajima) are also found in Newars of Kirant origin. Ancient religious sites believed to be worshipped by ancient Kirats, such as Pashupatinath, Wanga Akash Bhairabh (Yalambar) and Ajima are now worshipped by people of all Dharmic religions in Kathmandu. Kirats who have migrated from other parts of Nepal to Kathmandu practice Mundhum in the city.
Others
Sikhism is practiced primarily at the Gurudwara at Kupundole; an earlier Sikh temple in Kathmandu is now defunct. Jainism is practiced by a small community, and a Jain temple is present in Gyaneshwar, where Jains practice their faith. According to the records of the Spiritual Assembly of the Baha'is of Nepal, there are approximately 300 Baha'is in the Kathmandu valley. They have a National Office in Shantinagar, Baneshwor, and the Baha'is also hold classes for children at the National Centre and in other localities in Kathmandu. Islam is practised in Kathmandu, but Muslims are a minority, accounting for about 4.2% of the population of Nepal. It is said that in Kathmandu alone there are 170 Christian churches. Christian missionary hospitals, welfare organizations, and schools are also operating. Nepali citizens who served as soldiers in the Indian and British armies and converted to Christianity while in service have continued to practice their religion after returning to Nepal; they have contributed to the spread of Christianity and the building of churches in Nepal, and in Kathmandu in particular.
Education
The oldest modern school in Nepal, Durbar School, and the oldest college, Tri Chandra College, are both in Kathmandu city. Tribhuvan University, the largest (by number of students and colleges), oldest and most distinguished university in Nepal, is in Kirtipur. The second largest university, Kathmandu University (KU), is in Dhulikhel, Kavre, on the outskirts of Kathmandu. It is the second oldest university in Nepal, established in November 1991.
Medical colleges
The Institute of Medicine, the central college of Tribhuvan University, is the first medical college of Nepal and is in Maharajgunj, Kathmandu. It was established in 1972 and began imparting medical education in 1978. A number of medical colleges, including Kathmandu Medical College, Nepal Medical College, KIST Medical College, Nepal Army Institute of Health Sciences, National Academy of Medical Sciences (NAMS) and Kathmandu University School of Medical Sciences (KUSMS), are also in or around Kathmandu.
Sports
thumb|left|A football stadium in Kathmandu
Football and cricket are the most popular sports among the younger generation in Nepal, and there are several stadiums in the city. Football is governed by the All Nepal Football Association (ANFA) from its headquarters in Kathmandu. The only international football stadium in the city is the Dasarath Rangasala Stadium, a multi-purpose stadium used mostly for football matches and cultural events, in the neighborhood of Tripureshwor. Built in 1956, it is the largest stadium in Nepal, with a capacity of 25,000 spectators. The Martyr's Memorial League is also held at this ground every year. The stadium was renovated with Chinese help before the 8th South Asian Games were held in Kathmandu, and had floodlights installed. Kathmandu is home to the oldest football clubs of Nepal, such as RCT, Sankata and NRT. Other prominent clubs include MMC, Machhindra FC, Tribhuwan Army Club (TAC) and MPC.
Kathmandu is also home to some of the oldest cricket clubs in Nepal, such as Yengal Sports Club. Kathmandu has the only recognised international cricket ground in the country, at a university site in Kirtipur.
An international stadium for swimming events is in Satdobato, Lalitpur, near Kathmandu. The ANFA Technical Football Center is just adjacent to this stadium.
Transport
right|thumb|Aerial view of a road in Kathmandu
The total length of roads in Nepal is recorded to be (), as of 2003–04. This fairly large network has helped the economic development of the country, particularly in the fields of agriculture, horticulture, vegetable farming, industry and also tourism.Shrestha pp.91–96 In view of the hilly terrain, transportation in Kathmandu is mainly by road and air. Kathmandu is connected by the Tribhuvan Highway to the south, the Prithvi Highway to the west and the Araniko Highway to the north. The BP Highway, connecting Kathmandu to the eastern part of Nepal, is under construction.
The main international airport serving Kathmandu and thus Nepal is Tribhuvan International Airport, about from the city centre. Operated by the Civil Aviation Authority of Nepal, it has two terminals, one domestic and one international. At present, about 22 international airlines connect Nepal to destinations in Europe, Asia and the Middle East, including Istanbul, Delhi, Mumbai, Bangalore, Kolkata, Singapore, Bangkok, Kuala Lumpur, Dhaka, Islamabad, Paro, Lhasa, Chengdu, and Guangzhou. A recent extension to the international terminal has made the distance to the airplanes shorter, and in October 2009 it became possible to fly directly to Kathmandu from Amsterdam with Arkefly. Since 2013, Turkish Airlines has connected Istanbul to Kathmandu. Regionally, several Nepali airlines operate from the city, including Agni Air, Buddha Air, Cosmic Air, Nepal Airlines and Yeti Airlines, to other major towns across Nepal.
Ropeways
Ropeways are another important means of transport in hilly terrain. A ropeway once operated between Kathmandu and Hetauda over a length of , carrying 25 tonnes of goods per hour; it has since been discontinued due to poor carrying capacity and maintenance issues. During the Rana period, a ropeway was constructed between Kathmandu (then Mathathirtha) and Dhorsing (Makawanpur), over in length, which carried cargo at 8 tonnes per hour. A cable car now operates at Chandragiri in Kathmandu.
Healthcare
Healthcare in Kathmandu is the most developed in Nepal, and the city and surrounding valley are home to some of the best hospitals and clinics in the country. Bir Hospital is the oldest, established in July 1889 by Bir Shamsher Jang Bahadur Rana. Notable hospitals include Bir Hospital, Tribhuwan University Institute of Medicine (Teaching Hospital), Patan Hospital, Kathmandu Model Hospital, Scheer Memorial Hospital, Om Hospital, Norvic Hospital, Grande International Hospital and Nobel Hospital.
The city is supported by specialist hospitals/clinics such as Shahid Shukra Tropical Hospital, Shahid Gangalal Foundation, Kathmandu Veterinary Hospital, Nepal Eye Hospital, Kanti Children's Hospital, Nepal International Clinic (Travel and Mountain medicine center), Neuro Center, Spinal Rehabilitation center and Bhaktapur Cancer Hospital. Most of the general hospitals are in the city centre, although several clinics are elsewhere in Kathmandu district.
Tilganga Institute of Ophthalmology is an Ophthalmological hospital in Kathmandu. It pioneered the production of low cost intraocular lenses (IOLs), which are used in cataract surgery. The team of Dr. Sanduk Ruit in the same hospital pioneered sutureless small-incision cataract surgery (SICS), a technique which has been used to treat 4 million of the world's 20 million people with cataract blindness.
Media
thumb|A Nepali language magazine cover in 1951
Kathmandu is the television hub of Nepal. Nepal Television, established in 1985, is the oldest and most watched television channel in Nepal; other channels based in the city include the government-owned NTV 2 Metro, Channel Nepal, Image Channel, Kantipur Television, Sagarmatha TV and Himalayan Television.
The headquarters of many of the country's news outlets are also in the city, including the government-owned Gorkhapatra, the oldest national daily newspaper in Nepal; The Kathmandu Post; Nepali Times; Kantipur Publications and its paper Kantipur, the largest-selling Nepali-language paper; The Himalayan Times, the largest-selling English broadsheet in Nepal; the economic dailies Karobar Economic Daily and Aarthik Abhiyan National Daily; and Jana Aastha National Weekly.
Nepal Republic Media, the publisher of MyRepublica, joined a publishing alliance with the International Herald Tribune (IHT), to publish the Asia Pacific Edition of IHT from Kathmandu from 20 July 2011. There is a state-run National News Agency (RSS).
Radio Nepal is a state-run organization which operates national and regional radio stations. Other stations broadcasting in the city include Hits FM (Nepal), HBC 94 FM, Radio Sagarmatha, Kantipur FM and Image FM. The BBC also has an FM broadcasting station in Kathmandu. A small number of the city's FM stations are community radio stations, such as Radio Pratibodh F.M. (102.4 MHz) and Radio Upatyaka (87.6 MHz).
International organisations
Kathmandu is home to several international and regional organizations, including the South Asian Association for Regional Cooperation (SAARC).
International relations
Kathmandu Metropolitan City (KMC), in order to promote international relations, has established an International Relations Secretariat (IRC). KMC's first international relationship was established in 1975 with the city of Eugene, Oregon, United States. This activity has been further enhanced by establishing formal relationships with 8 other cities: Matsumoto City of Japan, Rochester of the USA, Yangon (formerly Rangoon) of Myanmar, Xi'an of the People's Republic of China, Minsk of Belarus, and Pyongyang of the Democratic People's Republic of Korea. KMC's constant endeavor is to enhance its interaction with SAARC countries, other international agencies and many other major cities of the world to achieve better urban management and development programs for Kathmandu.
Twin towns – Sister cities
Kathmandu is twinned with:
Edinburgh, United Kingdom
Eugene, Oregon, United States
Isfahan, Iran
Johannesburg, South Africa
Kyoto, Japan
Matsumoto, Nagano, Japan
Nicosia, Cyprus
Miami, Florida, United States
Minsk, Belarus
Pau, France
Mumbai, Maharashtra, India
Québec City, Canada
Xi'an, China
Istanbul, Turkey
See also
Hippie trail
Footnotes
References
Beal, Samuel (1884). Si-Yu-Ki: Buddhist Records of the Western World, by Hiuen Tsiang. 2 vols. Translated by Samuel Beal. London. 1884. Reprint: Delhi. Oriental Books Reprint Corporation. 1969.
Nanjio, Bunyiu (1883). A Catalogue of the Chinese Translation of the Buddhist Pantheon. Oxford at the Clarendon Press.
Shaha, Rishikesh (1992). Ancient and Medieval Nepal. Manohar Publications, New Delhi. ISBN 978-81-85425-69-6.
Snellgrove, David (1987). Indo-Tibetan Buddhism: Indian Buddhists & Their Tibetan Successors. Two Volumes. Shambhala Publications, Boston. ISBN 978-0-87773-311-9 (v. 1); ISBN 978-0-87773-379-9 (v. 2).
Tamot, Kashinath, and Ian Alsop. (2001). "A Kushan-period Sculpture from the reign of Jaya Varma-, AD 184/185, Kathmandu, Nepal." (2001). Asianart.com
Tamot, Kashinath, and Ian Alsop. (date unknown. Update of previous article). "A Kushan-period Sculpture from the reign of Jaya Varman, AD 185, Kathmandu, Nepal." Asianart.com
Thapa, Rajesh Bahadur, Murayama, Yuji, and Ale, Shailja (2008). "City Profile: Kathmandu". Cities, Vol.25 (1), 45–57.
Thapa, Rajesh Bahadur and Murayama, Yuji (2009). Spatiotemporal Urbanization Patterns in Kathmandu Valley, Nepal: Remote Sensing and Spatial Metrics Approaches. Vol.1 (3), 534–556.
Thapa, Rajesh Bahadur and Murayama, Yuji (2010). Drivers of urban growth in the Kathmandu valley, Nepal: Examining the efficacy of the analytic hierarchy process. Applied Geography, Vol. 30 (1), 70–83.
Thapa, Rajesh Bahadur and Murayama, Yuji (2011). Urban growth modeling of Kathmandu metropolitan region, Nepal. Computers, Environment and Urban Systems, Vol 35 (1) 25–34.
Watters, Thomas. (1904–05). On Yuan Chwang's Travels in India. (AD 629–645). Royal Asiatic Society. Second Indian Edition. Munshhiram Manoharlal Publishers, New Delhi. (1973).
External links
Category:Capitals in Asia
Category:Districts of Nepal
Category:Hill stations in Nepal
Category:Newar
Category:Populated places in Nepal | 17,168 | 2017-01 |
Yale University | Yale University is an American private Ivy League research university in New Haven, Connecticut. Founded in 1701 in Saybrook Colony to train Congregationalist ministers, it is the third-oldest institution of higher education in the United States.
The "Collegiate School" moved to New Haven in 1716, and shortly after was renamed Yale College in recognition of a gift from British East India Company governor Elihu Yale. Originally restricted to theology and sacred languages, the curriculum began to incorporate humanities and sciences by the time of the American Revolution. In the 19th century the school introduced graduate and professional instruction, awarding the first Ph.D. in the United States in 1861 and organizing as a university in 1887.
Yale is organized into fourteen constituent schools: the original undergraduate college, the Yale Graduate School of Arts and Sciences, and twelve professional schools. While the university is governed by the Yale Corporation, each school's faculty oversees its curriculum and degree programs. In addition to a central campus in downtown New Haven, the University owns athletic facilities in western New Haven, including the Yale Bowl, a campus in West Haven, Connecticut, and forest and nature preserves throughout New England. The university's assets include an endowment valued at $25.6 billion as of September 2015, the second largest of any educational institution. The Yale University Library, serving all constituent schools, holds more than 15 million volumes and is the third-largest academic library in the United States.
Yale College undergraduates follow a liberal arts curriculum with departmental majors and are organized into a social system of residential colleges. Almost all faculty teach undergraduate courses, more than 2,000 of which are offered annually. Students compete intercollegiately as the Yale Bulldogs in the NCAA Division I Ivy League.
Yale has graduated many notable alumni, including five U.S. Presidents, 19 U.S. Supreme Court Justices, 13 living billionaires, and many heads of state. In addition, Yale has graduated hundreds of members of Congress and many high-level U.S. diplomats. 52 Nobel laureates, 5 Fields Medalists, 247 Rhodes Scholars, and 119 Marshall Scholars have been affiliated with the University.
History
Early history of Yale College
Origins
160px|thumb|Official seal used by the College and the University
Yale traces its beginnings to "An Act for Liberty to Erect a Collegiate School," passed by the General Court of the Colony of Connecticut on October 9, 1701, while meeting in New Haven. The Act was an effort to create an institution to train ministers and lay leadership for Connecticut. Soon thereafter, a group of ten Congregationalist ministers: Samuel Andrew, Thomas Buckingham, Israel Chauncy, Samuel Mather, Rev. James Noyes II (son of James Noyes), James Pierpont, Abraham Pierson, Noadiah Russell, Joseph Webb and Timothy Woodbridge, all alumni of Harvard, met in the study of Reverend Samuel Russell in Branford, Connecticut, to pool their books to form the school's library.The Harvard Crimson: "I'm Gonna Git Yoy Sukka: Classic Stories of Revenge at Harvard.". Retrieved April 10, 2007. The group, led by James Pierpont, is now known as "The Founders".
Originally known as the "Collegiate School," the institution opened in the home of its first rector, Abraham Pierson, in Killingworth (now Clinton). (Although Pierson was "rector" in his own time, he is today considered the first president of Yale.) The school moved to Saybrook, and then Wethersfield. In 1716 the college moved to New Haven, Connecticut.
Meanwhile, there was a rift forming at Harvard between its sixth president Increase Mather and the rest of the Harvard clergy, whom Mather viewed as increasingly liberal, ecclesiastically lax, and overly broad in Church polity. The feud caused the Mathers to champion the success of the Collegiate School in the hope that it would maintain the Puritan religious orthodoxy in a way that Harvard had not., Encyclopædia Britannica Eleventh Edition, Encyclopædia Britannica
Naming and development
In 1718, at the behest of either Rector Samuel Andrew or the colony's Governor Gurdon Saltonstall, Cotton Mather contacted the successful Boston-born businessman Elihu Yale to ask him for financial help in constructing a new building for the college. Through the persuasion of Jeremiah Dummer, Yale, who had made a fortune through trade while living in Madras as a representative of the East India Company, donated nine bales of goods, which were sold for more than £560, a substantial sum at the time. Cotton Mather suggested that the school change its name to "Yale College". (The name Yale is the Anglicised spelling of the Welsh toponym Iâl, from the family estate at Plas yn Iâl near the village of Llandegla, Denbighshire, Wales.)Henry Davidson Love Indian Records Series Vestiges of Old Chennai 1640-1800 Mittal Publications
Meanwhile, a Harvard graduate working in England convinced some 180 prominent intellectuals that they should donate books to Yale. The 1714 shipment of 500 books represented the best of modern English literature, science, philosophy and theology. It had a profound effect on intellectuals at Yale. Undergraduate Jonathan Edwards discovered John Locke's works and developed his original theology known as the "new divinity." In 1722 the Rector and six of his friends, who had a study group to discuss the new ideas, announced that they had given up Calvinism, become Arminians, and joined the Church of England. They were ordained in England and returned to the colonies as missionaries for the Anglican faith. Thomas Clapp became president in 1745, and struggled to return the college to Calvinist orthodoxy; but he did not close the library. Other students found Deist books in the library.Edmund S. Morgan, American Heroes: Profiles of Men and Women Who Shaped Early America (2010) pp 26–32
Curriculum
Yale was swept up by the great intellectual movements of the period—the Great Awakening and the Enlightenment—due to the religious and scientific interests of presidents Thomas Clap and Ezra Stiles. They were both instrumental in developing the scientific curriculum at Yale, while dealing with wars, student tumults, graffiti, "irrelevance" of curricula, desperate need for endowment, and fights with the Connecticut legislature.Louis Leonard Tucker, Puritan Protagonist: President Thomas Clap of Yale College (1970); Edmund S. Morgan, The Gentle Puritan: A Life of Ezra Stiles, 1727–1795 (1970).
Serious American students of theology and divinity, particularly in New England, regarded Hebrew as a classical language, along with Greek and Latin, and essential for study of the Old Testament in the original words. The Reverend Ezra Stiles, president of the College from 1778 to 1795, brought with him his interest in the Hebrew language as a vehicle for studying ancient Biblical texts in their original language (as was common in other schools). He required all freshmen to study Hebrew (in contrast to Harvard, where only upperclassmen were required to study the language) and is responsible for the Hebrew phrase אורים ותמים (Urim and Thummim) on the Yale seal. A 1746 graduate of Yale, Stiles came to the college with experience in education, having played an integral role in the founding of Brown University in addition to having been a minister.Edmund S Morgan, The Gentle Puritan: A Life of Ezra Stiles, 1727–1795 (New York: W. W. Norton & Company, 1962), 205. Stiles' greatest challenge occurred in July 1779, when hostile British forces occupied New Haven and threatened to raze the College. However, Yale graduate Edmund Fanning, Secretary to the British General in command of the occupation, interceded and the College was saved. Fanning was later granted an honorary LL.D. degree in 1803 for his efforts.
thumb|First diploma awarded by Yale College, granted to Nathaniel Chauncey, 1702.
Students
As the only college in Connecticut, Yale educated the sons of the elite.Historian Bruce Daniels has used biographical dictionaries of the college graduates of Yale University to present statistics on Yale graduates from the classes of 1702 to 1780, focusing on the graduates' career choices, their success in life, religious affiliation, vital statistics, the percentage of those who supported the American Revolution, and geographic mobility. See Bruce C. Daniels, "College Students and Puritan Society: a Quantitative Profile of Yale Graduates in Colonial America," Connecticut History 1982 (23): 1–23 Offenses for which students were punished included cardplaying, tavern-going, destruction of college property, and acts of disobedience to college authorities. During the period, Harvard was distinctive for the stability and maturity of its tutor corps, while Yale had youth and zeal on its side.Kathryn McDaniel Moore, "The War with the Tutors: Student-faculty Conflict at Harvard and Yale, 1745–1771," History of Education Quarterly 1978 18(2): 115–127
The emphasis on classics gave rise to a number of private student societies, open only by invitation, which arose primarily as forums for discussions of modern scholarship, literature and politics. The first such organizations were debating societies: Crotonia in 1738, Linonia in 1753, and Brothers in Unity in 1768.None of these continue to exist today. They are commemorated in names given to campus structures, such as Brothers in Unity Courtyard in Branford College.
19th century
thumb|Woolsey Hall in c. 1905
The Yale Report of 1828 was a dogmatic defense of the Latin and Greek curriculum against critics who wanted more courses in modern languages, mathematics, and science. Unlike higher education in Europe, there was no national curriculum for colleges and universities in the United States. In the competition for students and financial support, college leaders strove to keep current with demands for innovation. At the same time, they realized that a significant portion of their students and prospective students demanded a classical background. The Yale report meant the classics would not be abandoned. All institutions experimented with changes in the curriculum, often resulting in a dual track. In the decentralized environment of higher education in the United States, balancing change with tradition was a common challenge because no one could afford to be completely modern or completely classical.Michael S. Pak, "The Yale Report of 1828: A New Reading and New Implications," History of Education Quarterly 2008 48(1): 30–57; Melvin I. Urofsky, "Reforms and Response: The Yale Report of 1828," History of Education Quarterly, Vol. 5, No. 1 (Mar. 1965), pp. 53–67 in JSTOR
A group of professors at Yale and New Haven Congregationalist ministers articulated a conservative response to the changes brought about by the Victorian culture. They concentrated on developing a whole man possessed of religious values sufficiently strong to resist temptations from within, yet flexible enough to adjust to the 'isms' (professionalism, materialism, individualism, and consumerism) tempting him from without.Louise L. Stevenson, Scholarly Means to Evangelical Ends: The New Haven Scholars and the Transformation of Higher Learning in America, 1830–1890 (1986)
William Graham Sumner, professor from 1872 to 1909, taught in the emerging disciplines of economics and sociology to overflowing classrooms. He bested President Noah Porter, who disliked social science and wanted Yale to lock into its traditions of classical education. Porter objected to Sumner's use of a textbook by Herbert Spencer that espoused agnostic materialism because it might harm students.Alfred McClung Lee, "The Forgotten Sumner," Journal of the History of Sociology 1980–1981 3(1): 87–106
Until 1887, the legal name of the university was "The President and Fellows of Yale College, in New Haven." In 1887, under an act passed by the Connecticut General Assembly, Yale gained its current, and shorter, name of "Yale University."
Sports and debate
The Revolutionary War soldier Nathan Hale (Yale 1773) was the prototype of the Yale ideal in the early 19th century: a manly yet aristocratic scholar, equally well-versed in knowledge and sports, and a patriot who "regretted" that he "had but one life to lose" for his country. Western painter Frederic Remington (Yale 1900) was an artist whose heroes gloried in combat and tests of strength in the Wild West. The fictional, turn-of-the-20th-century Yale man Frank Merriwell embodied the heroic ideal without racial prejudice, and his fictional successor Frank Stover in the novel Stover at Yale (1911) questioned the business mentality that had become prevalent at the school. Increasingly the students turned to athletic stars as their heroes, especially since winning the big game became the goal of the student body, and the alumni, as well as the team itself.Robert Higgs, "'Götterdämmerung' and Palingenesis: Yale and the Heroic Ideal, 1865–1914," Proteus 1986 3(1): 18–24
thumb|left|upright|Yale's four-oared crew team, posing with 1876 Centennial Regatta trophy, won at Philadelphia.
Along with Harvard and Princeton, Yale students rejected elite British concepts about 'amateurism' in sports and constructed athletic programs that were uniquely American, such as football.Ronald A. Smith, Sports and Freedom: The Rise of Big Time College Athletics (1988) The Harvard–Yale football rivalry began in 1875.
Between 1892, when Harvard and Yale met in one of the first intercollegiate debates, and 1909, the year of the first Triangular Debate of Harvard, Yale, and Princeton, the rhetoric, symbolism, and metaphors used in athletics were used to frame these early debates. Debates were covered on front pages of college newspapers and emphasized in yearbooks, and team members even received the equivalent of athletic letters for their jackets. There even were rallies sending off the debating teams to matches. Yet, the debates never attained the broad appeal that athletics enjoyed. One reason may be that debates do not have a clear winner, as is the case in sports, and that scoring is subjective. In addition, with late 19th-century concerns about the impact of modern life on the human body, athletics offered hope that neither the individual nor the society was coming apart.Roberta J. Park, "Muscle, Mind, and 'Agon:' Intercollegiate Debating and Athletics at Harvard and Yale, 1892–1909," Journal of Sport History 1987 14(3): 263–285
In 1909–10, football faced a crisis resulting from the failure of the previous reforms of 1905–06 to solve the problem of serious injuries. There was a mood of alarm and mistrust, and, while the crisis was developing, the presidents of Harvard, Yale, and Princeton developed a project to reform the sport and forestall possible radical changes forced by government upon the sport. President Arthur Hadley of Yale, A. Lawrence Lowell of Harvard, and Woodrow Wilson of Princeton worked to develop moderate changes to reduce injuries. Their attempts, however, were undermined by rebellion against the rules committee and the formation of the Intercollegiate Athletic Association. The big three had tried to operate independently of the majority, but the changes did reduce injuries.John S. Watterson III, "The Football Crisis of 1909–1910: the Response of the Eastern 'Big Three'," Journal of Sport History 1981 8(1): 33–49
Expansion
thumb|Connecticut Hall, oldest building on the Yale campus, built between 1750 and 1753.
Yale expanded gradually, establishing the Yale School of Medicine (1810), Yale Divinity School (1822), Yale Law School (1843), Yale Graduate School of Arts and Sciences (1847), the Sheffield Scientific School (1847),Sheffield was originally named Yale Scientific School; it was renamed in 1861 after a major donation from Joseph E. Sheffield. and the Yale School of Fine Arts (1869). In 1887, as the college continued to grow under the presidency of Timothy Dwight V, Yale College was renamed Yale University, with the name Yale College subsequently applied to the undergraduate college. The university would later add the Yale School of Music (1894), the Yale School of Forestry & Environmental Studies (founded by Gifford Pinchot in 1900), the Yale School of Public Health (1915), the Yale School of Nursing (1923), the Yale School of Drama (1955), the Yale Physician Associate Program (1973), and the Yale School of Management (1976). It would also reorganize its relationship with the Sheffield Scientific School.
Expansion caused controversy about Yale's new roles. Noah Porter, moral philosopher, was president from 1871 to 1886. During an age of tremendous expansion in higher education, Porter resisted the rise of the new research university, claiming that an eager embrace of its ideals would corrupt undergraduate education. Many of Porter's contemporaries criticized his administration, and historians since have disparaged his leadership. Levesque argues Porter was not a simple-minded reactionary, uncritically committed to tradition, but a principled and selective conservative.George Levesque, "Noah Porter Revisited," Perspectives on the History of Higher Education 2007 26: 29–66, He did not endorse everything old or reject everything new; rather, he sought to apply long-established ethical and pedagogical principles to a rapidly changing culture. He may have misunderstood some of the challenges of his time, but he correctly anticipated the enduring tensions that have accompanied the emergence and growth of the modern university.
thumb|right|Richard Rummell's 1906 watercolor of the Yale campus, facing north.|291x291px
20th century
Behavioral sciences
Between 1925 and 1940, philanthropic foundations, especially ones connected with the Rockefellers, contributed about $7 million to support the Yale Institute of Human Relations and the affiliated Yerkes Laboratories of Primate Biology. The money went toward behavioral science research, which was supported by foundation officers who aimed to "improve mankind" under an informal, loosely defined human engineering effort. The behavioral scientists at Yale, led by President James R. Angell and psychobiologist Robert M. Yerkes, tapped into foundation largesse by crafting research programs aimed at investigating, and then suggesting ways to control, sexual and social behavior. For example, Yerkes analyzed chimpanzee sexual behavior in hopes of illuminating the evolutionary underpinnings of human development and providing information that could ameliorate dysfunction. Ultimately, the behavioral-science results disappointed foundation officers, who shifted their human-engineering funds toward biological sciences.Kersten Jacobson Biehn, "Psychobiology, Sex Research and Chimpanzees: Philanthropic Foundation Support for the Behavioral Sciences at Yale University, 1923–41," History of the Human Sciences 2008 21(2): 21–43
thumb|Old Brick Row in 1807.
Biology
Slack (2003) compares three groups that conducted biological research at Yale during overlapping periods between 1910 and 1970. Yale proved important as a site for this research. The leaders of these groups were Ross Granville Harrison, Grace E. Pickford, and G. Evelyn Hutchinson, and their members included both graduate students and more experienced scientists. All produced innovative research, including the opening of new subfields in embryology, endocrinology, and ecology, respectively, over a long period of time. Harrison's group is shown to have been a classic research school; Pickford's and Hutchinson's were not. Pickford's group was successful in spite of her lack of departmental or institutional position or power. Hutchinson and his graduate and postgraduate students were extremely productive, but in diverse areas of ecology rather than one focused area of research or the use of one set of research tools. Hutchinson's example shows that new models for research groups are needed, especially for those that include extensive field research.Nancy G. Slack, "Are Research Schools Necessary? Contrasting Models of 20th Century Research at Yale Led by Ross Granville Harrison, Grace E. Pickford and G. Evelyn Hutchinson," Journal of the History of Biology 2003 36(3): 501–529,
Medicine
Milton Winternitz led the Yale School of Medicine as its dean from 1920 to 1935. Dedicated to the new scientific medicine established in Germany, he was equally fervent about "social medicine" and the study of humans in their culture and environment. He established the "Yale System" of teaching, with few lectures and fewer exams, and strengthened the full-time faculty system; he also created the graduate-level Yale School of Nursing and the Psychiatry Department, and built numerous new buildings. His plans for an Institute of Human Relations, envisioned as a refuge where social scientists would collaborate with biological scientists in a holistic study of humankind, progressed for only a few years before the opposition of resentful anti-Semitic colleagues drove him to resign.Howard Spiro and Priscilla Waters Norton, "Dean Milton C. Winternitz at Yale," Perspectives in Biology & Medicine 2003 46(3): 403–412
Faculty
Before World War II, most elite university faculties counted among their numbers few, if any, Jews, blacks, women, or other minorities; Yale was no exception. By 1980, this condition had been altered dramatically, as numerous members of those groups held faculty positions.William Palmer, "On or about 1950 or 1955 History Departments Changed: A Step in the Creation of the Modern History Department," Journal of the Historical Society (1529921x); 2007 7(3): 385–405
History and American studies
The American studies program reflected the worldwide anti-Communist ideological struggle. Norman Holmes Pearson, who worked for the Office of Strategic Services in London during World War II, returned to Yale and headed the new American studies program, in which scholarship quickly became an instrument for promoting liberty. Popular among undergraduates, the program sought to instruct them in the fundamentals of American civilization and thereby instill a sense of nationalism and national purpose.Michael Holzman, "The Ideological Origins of American Studies at Yale," American Studies 40:2 (Summer 1999): 71–99 Also during the 1940s and 1950s, Wyoming millionaire William Robertson Coe made large contributions to the American studies programs at Yale University and at the University of Wyoming. Coe sought to celebrate the 'values' of the Western United States in order to meet the "threat of communism."Liza Nicholas, "Wyoming as America: Celebrations, a Museum, and Yale," American Quarterly, Vol. 54, No. 3 (Sep. 2002), pp. 437–465 in JSTOR
Women
Women studied at Yale University as early as 1892, in graduate-level programs at the Yale Graduate School of Arts and Sciences.A Brief History of Yale :: Resources on Yale History. Library.yale.edu (February 24, 2005). Retrieved on 2013-07-15.
In 1966, Yale began discussions with its sister school Vassar College about merging to foster coeducation at the undergraduate level. Vassar, then all-female and part of the Seven Sisters—elite higher education schools that historically served as sister institutions to the Ivy League when the Ivy League still only admitted men—tentatively accepted, but then declined the invitation. Both schools introduced coeducation independently in 1969. Amy Solomon was the first woman to register as a Yale undergraduate;Yale Bulletin and Calendar: "Transformations brought about by Yale women.". Retrieved April 10, 2007. she was also the first woman at Yale to join an undergraduate society, St. Anthony Hall. The undergraduate class of 1973 was the first class to have women starting from freshman year; at the time, all undergraduate women were housed in Vanderbilt Hall at the south end of Old Campus.
A decade into co-education, sexual harassment of students by faculty became the impetus for the trailblazing lawsuit Alexander v. Yale. While unsuccessful in the courts, the legal reasoning behind the case changed the landscape of sex discrimination law and resulted in the establishment of Yale's Grievance Board and the Yale Women's Center. In March 2011 a Title IX complaint was filed against Yale by students and recent graduates, including editors of Yale's feminist magazine Broad Recognition, alleging that the university had a hostile sexual climate.Huffington Post: "Yale Students File Title IX Suit Against the University". Retrieved April 29, 2011. In response, the university formed a Title IX steering committee to address complaints of sexual misconduct.Associated Press, "Yale Forms Committee To Address Sexual Misconduct," Huffington Post. Retrieved February 7, 2014.
Class
Yale, like other Ivy League schools, instituted policies in the early 20th century designed to maintain the proportion of white Protestants of notable families in the student body (see numerus clausus), and was one of the last of the Ivies to eliminate such preferences, beginning with the class of 1970.Yale Alumni Magazine: "The Birth of a New Institution.". Retrieved April 10, 2007.
Town–gown relations
Yale has a complicated relationship with its home city; for example, thousands of students volunteer every year in a myriad of community organizations, but city officials, who decry Yale's exemption from local property taxes, have long pressed the university to do more to help. Under President Levin, Yale has financially supported many of New Haven's efforts to reinvigorate the city. Evidence suggests that the town and gown relationships are mutually beneficial. Still, the economic power of the university increased dramatically with its financial success amid a decline in the local economy.Gordon Lafer, "Land and Labor in the Post-Industrial University Town: Remaking Social Geography," Political Geography 2003 22(1): 89–117, focuses on Yale.
21st century
In 2006, Yale and Peking University (PKU) established a Joint Undergraduate Program in Beijing, an exchange program allowing Yale students to spend a semester living and studying with PKU honor students. In July 2012, the Peking University-Yale University Program ended due to weak participation.
In 2007 Yale President Richard Levin characterized Yale's institutional priorities: "First, among the nation's finest research universities, Yale is distinctively committed to excellence in undergraduate education. Second, in our graduate and professional schools, as well as in Yale College, we are committed to the education of leaders."
President George W. Bush, a Yale alumnus, criticized the university for the snobbery and intellectual arrogance he encountered as a student there.
The Boston Globe wrote that "if there's one school that can lay claim to educating the nation's top national leaders over the past three decades, it's Yale."Boston Globe November 17, 2002, Magazine, p. 6 Yale alumni were represented on the Democratic or Republican ticket in every U.S. Presidential election between 1972 and 2004. Yale-educated Presidents since the end of the Vietnam War include Gerald Ford, George H.W. Bush, Bill Clinton, and George W. Bush, and major-party nominees during this period include Hillary Clinton (2016), John Kerry (2004), Joseph Lieberman (Vice President, 2000), and Sargent Shriver (Vice President, 1972). Other Yale alumni who made serious bids for the Presidency during this period include Howard Dean (2004), Gary Hart (1984 and 1988), Paul Tsongas (1992), Pat Robertson (1988) and Jerry Brown (1976, 1980, 1992).
Several explanations have been offered for Yale's representation in national elections since the end of the Vietnam War. Various sources note the spirit of campus activism that has existed at Yale since the 1960s, and the intellectual influence of Reverend William Sloane Coffin on many of the future candidates.Los Angeles Times October 4, 2000, p. E1 Yale President Richard Levin attributes the run to Yale's focus on creating "a laboratory for future leaders," an institutional priority that began during the tenure of Yale Presidents Alfred Whitney Griswold and Kingman Brewster. Richard H. Brodhead, former dean of Yale College and now president of Duke University, stated: "We do give very significant attention to orientation to the community in our admissions, and there is a very strong tradition of volunteerism at Yale." Yale historian Gaddis Smith notes "an ethos of organized activity" at Yale during the 20th century that led John Kerry to lead the Yale Political Union's Liberal Party, George Pataki the Conservative Party, and Joseph Lieberman to manage the Yale Daily News. Camille Paglia points to a history of networking and elitism: "It has to do with a web of friendships and affiliations built up in school." CNN suggests that George W. Bush benefited from preferential admissions policies for the "son and grandson of alumni", and for a "member of a politically influential family." New York Times correspondent Elisabeth Bumiller and The Atlantic Monthly correspondent James Fallows credit the culture of community and cooperation that exists between students, faculty, and administration, which downplays self-interest and reinforces commitment to others.
During the 1988 presidential election, George H. W. Bush (Yale '48) derided Michael Dukakis for having "foreign-policy views born in Harvard Yard's boutique". When challenged on the distinction between Dukakis's Harvard connection and his own Yale background, he said that, unlike Harvard, Yale's reputation was "so diffuse, there isn't a symbol, I don't think, in the Yale situation, any symbolism in it" and said Yale did not share Harvard's reputation for "liberalism and elitism". In 2004 Howard Dean stated, "In some ways, I consider myself separate from the other three (Yale) candidates of 2004. Yale changed so much between the class of '68 and the class of '71. My class was the first class to have women in it; it was the first class to have a significant effort to recruit African Americans. It was an extraordinary time, and in that span of time is the change of an entire generation".
In 2009, former British Prime Minister Tony Blair picked Yale as one location – the others are Britain's Durham University and Universiti Teknologi Mara – for the Tony Blair Faith Foundation's United States Faith and Globalization Initiative. As of 2009, former Mexican President Ernesto Zedillo is the director of the Yale Center for the Study of Globalization and teaches an undergraduate seminar, "Debating Globalization". As of 2009, former presidential candidate and DNC chair Howard Dean teaches a residential college seminar, "Understanding Politics and Politicians." Also in 2009, an alliance was formed among Yale, University College London, and both schools' affiliated hospital complexes to conduct research focused on the direct improvement of patient care—a growing field known as translational medicine. President Richard Levin noted that Yale has hundreds of other partnerships across the world, but "no existing collaboration matches the scale of the new partnership with UCL".
New international Yale initiatives launched included (among many others):
Jackson Institute for Global Affairs, promoting international education University-wide;
Global Health Initiative, uniting and expanding global health efforts across campus;
Yale India Initiative, expanding the study of and engagement with India;
Yale Center for the Study of Globalization, bridging the gap between academia and the world of public policy; and
Yale China Law Center, promoting the rule of law in China.
New global research and educational partnerships included (among many others):
Yale-Universidad de Chile International Program in Astronomy Education and Research;
Peking-Yale Joint Center for Plant Molecular Genetics and Agrobiology;
Todai–Yale Initiative for the Study of Japan;
Fudan-Yale Biomedical Research Center in Shanghai;
Yale-University College London Collaboration; and
UNSAAC-Yale Center for the Study of Machu Picchu and Inca Culture in Peru.
The most ambitious international partnership to date is Yale-NUS College in Singapore, a joint effort with the National University of Singapore to create a new liberal arts college in Asia featuring an innovative curriculum that weaves Western and Asian traditions, set to open in August 2013.Karin Fischer, "With Opening Near, Yale Defends Singapore Venture" The New York Times August 27, 2012
Administration and organization
Leadership
School founding (school and year founded):
Yale College, 1701
Yale School of Medicine, 1810
Yale Divinity School, 1822
Yale Law School, 1843
Yale Graduate School of Arts and Sciences, 1847
Sheffield Scientific School, 1847
Yale School of Fine Arts, 1869
Yale School of Music, 1894
Yale School of Forestry & Environmental Studies, 1900
Yale School of Public Health, 1915
Yale School of Architecture, 1916
Yale School of Nursing, 1923
Yale School of Drama, 1955
Yale School of Management, 1976
The President and Fellows of Yale College, also known as the Yale Corporation, is the governing board of the University.
Yale's former president Richard C. Levin was, during his tenure, one of the highest paid university presidents in the United States, with a 2008 salary of $1.5 million.
The Yale Provost's Office has launched several women into prominent university presidencies. In 1977 Hanna Holborn Gray was appointed acting President of Yale from this position, and went on to become President of the University of Chicago, the first woman to be full president of a major university. In 1994 Yale Provost Judith Rodin became the first female president of an Ivy League institution at the University of Pennsylvania. In 2002 Provost Alison Richard became the Vice Chancellor of the University of Cambridge. In 2004, Provost Susan Hockfield became the President of the Massachusetts Institute of Technology. In 2007 Deputy Provost Kim Bottomly was named President of Wellesley College. In 2003, the Dean of the Divinity School, Rebecca Chopp, was appointed president of Colgate University and now heads Swarthmore College.
In 2008 Provost Andrew Hamilton was confirmed as Vice-Chancellor of the University of Oxford.Yale Daily News: "Bottomly to Leave for Wellesley Presidency." Former Dean of Yale College Richard H. Brodhead serves as President of Duke University.
The university has three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the professional schools.
thumb|Yale Art Gallery Sculpture. The gallery is free and open to the public.
Staff and labor unions
Much of Yale University's staff, including most maintenance staff, dining hall employees, and administrative staff, are unionized. Clerical and technical employees are represented by Local 34 of UNITE HERE and service and maintenance workers by Local 35 of the same international. Together with the Graduate Employees and Students Organization (GESO), an unrecognized union of graduate employees, Locals 34 and 35 make up the Federation of Hospital and University Employees. Also included in FHUE are the dietary workers at Yale-New Haven Hospital, who are members of 1199 SEIU. In addition to these unions, officers of the Yale University Police Department are members of the Yale Police Benevolent Association, which affiliated in 2005 with the Connecticut Organization for Public Safety Employees. Finally, Yale security officers voted to join the International Union of Security, Police and Fire Professionals of America in fall 2010 after the National Labor Relations Board ruled they could not join AFSCME; the Yale administration contested the election.
Yale has a history of difficult and prolonged labor negotiations, often culminating in strikes.See Toni Gilpin, Gary Isaac, Dan Letwin, and Jack McKivigan, On Strike for Respect: The Clerical and Technical Workers' Strike at Yale University, 1984–85 (Urbana: University of Illinois Press, 1995). There have been at least eight strikes since 1968, and The New York Times wrote that Yale has a reputation as having the worst record of labor tension of any university in the U.S. Yale's unusually large endowment exacerbates the tension over wages. Moreover, Yale has been accused of failing to treat workers with respect. In a 2003 strike, however, the university claimed that more union employees were working than striking. Professor David Graeber was 'retired' after he came to the defense of a student who was involved in campus labor issues.Charlie Rose Show, Interview with David Graeber, 2006, PBS
Campus
left|thumb|Yale Law School
Yale's central campus in downtown New Haven comprises its main, historic campus and a medical campus adjacent to the Yale-New Haven Hospital. In western New Haven, the university holds athletic facilities, including the Yale Golf Course. In 2008, Yale purchased the former Bayer Pharmaceutical campus in West Haven, Connecticut, the buildings of which are now used as laboratory and research space. Yale also owns seven forests in Connecticut, Vermont, and New Hampshire—the largest of which is the Yale-Myers Forest in Connecticut's Quiet Corner—and nature preserves including Horse Island.
Yale is noted for its largely Collegiate Gothic campusAssorted pictures of Yale's campus.. Retrieved April 10, 2007. as well as for several iconic modern buildings commonly discussed in architectural history survey courses: Louis Kahn's Yale Art GalleryAbout the Yale Art Gallery., Retrieved April 10, 2007. and Center for British Art, Eero Saarinen's Ingalls Rink and Ezra Stiles and Morse Colleges, and Paul Rudolph's Art & Architecture Building. Yale also owns and has restored many noteworthy 19th-century mansions along Hillhouse Avenue, which was considered the most beautiful street in America by Charles Dickens when he visited the United States in the 1840s. In 2011, Travel+Leisure listed the Yale campus as one of the most beautiful in the United States."America's most beautiful college campuses", Travel+Leisure (September, 2011)
Many of Yale's buildings were constructed in the Collegiate Gothic architecture style from 1917 to 1931, financed largely by Edward S. Harkness, including the Yale Drama School.Synnott, Marcia Graham. The Half-Opened Door: Discrimination and admissions at Harvard, Yale, and Princeton, 1900–1970, Greenwood Press, 1979. Westport, Connecticut, London, England Stone sculptures built into the walls of the buildings portray contemporary college personalities such as a writer, an athlete, a tea-drinking socialite, and a student who has fallen asleep while reading. Similarly, the decorative friezes on the buildings depict contemporary scenes such as policemen chasing a robber and arresting a prostitute (on the wall of the Law School), or a student relaxing with a mug of beer and a cigarette. The architect, James Gamble Rogers, faux-aged these buildings by splashing the walls with acid,Yale Herald: "Donor steps up to fund CCL renovations.". Retrieved April 10, 2007. deliberately breaking their leaded glass windows and repairing them in the style of the Middle Ages, and creating niches for decorative statuary but leaving them empty to simulate loss or theft over the ages. In fact, the buildings merely simulate Middle Ages architecture, for though they appear to be constructed of solid stone blocks in the authentic manner, most actually have steel framing as was commonly used in 1930. One exception is Harkness Tower, which was originally a free-standing stone structure. It was reinforced in 1964 to allow the installation of the Yale Memorial Carillon.
thumb|Vanderbilt Hall
Other examples of the Gothic (also called neo-Gothic and collegiate Gothic) style stand on the Old Campus, built by such architects as Henry Austin, Charles C. Haight and Russell Sturgis. Several are associated with members of the Vanderbilt family, including Vanderbilt Hall, Phelps Hall, St. Anthony Hall (a commission for member Frederick William Vanderbilt), the Mason, Sloane and Osborn laboratories, dormitories for the Sheffield Scientific School (the engineering and sciences school at Yale until 1956) and elements of Silliman College, the largest residential college.
upright|thumb|left|Statue of Nathan Hale in front of Connecticut Hall
The oldest building on campus, Connecticut Hall (built in 1750), is in the Georgian style. Georgian-style buildings erected from 1929 to 1933 include Timothy Dwight College, Pierson College, and Davenport College, except the latter's east, York Street façade, which was constructed in the Gothic style so as to co-ordinate with adjacent structures.
The Beinecke Rare Book and Manuscript Library, designed by Gordon Bunshaft of Skidmore, Owings & Merrill, is one of the largest buildings in the world reserved exclusively for the preservation of rare books and manuscripts.Beinecke Rare Book Library: "About the Library Building.". Retrieved April 10, 2007. It is located near the center of the University in Hewitt Quadrangle, which is now more commonly referred to as "Beinecke Plaza".
The library's six-story above-ground tower of book stacks is surrounded by a windowless rectangular building with walls made of translucent Vermont marble, which transmit subdued lighting to the interior and provide protection from direct light, while glowing from within after dark.
upright|thumb|Interior of Beinecke Library
The sculptures in the sunken courtyard by Isamu Noguchi are said to represent time (the pyramid), the sun (the circle), and chance (the cube).
Alumnus Eero Saarinen, Finnish-American architect of such notable structures as the Gateway Arch in St. Louis, Washington Dulles International Airport main terminal, Bell Labs Holmdel Complex and the CBS Building in Manhattan, designed Ingalls Rink at Yale and the newest residential colleges of Ezra Stiles and Morse. These latter were modelled after the medieval Italian hilltown of San Gimignano – a prototype chosen for the town's pedestrian-friendly milieu and fortress-like stone towers. These tower forms at Yale act in counterpoint to the college's many Gothic spires and Georgian cupolas.Assorted pictures of Ezra Stiles College, Retrieved April 10, 2007.
Yale's Office of Sustainability develops and implements sustainability practices at Yale. Yale is committed to reducing its greenhouse gas emissions to 10% below 1990 levels by the year 2020. As part of this commitment, the university allocates renewable energy credits to offset some of the energy used by residential colleges. Eleven campus buildings are candidates for LEED design and certification. The Yale Sustainable Food Project initiated the introduction of local, organic vegetables, fruits, and beef to all residential college dining halls. Yale was listed as a Campus Sustainability Leader on the Sustainable Endowments Institute's College Sustainability Report Card 2008, and received a "B+" grade overall.
Grove Street Cemetery, New Haven
Marsh Botanical Garden
Yale Sustainable Food Program Farm
Notable nonresidential campus buildings
Notable nonresidential campus buildings and landmarks include Battell Chapel, Beinecke Rare Book Library, Harkness Tower, Ingalls Rink, Kline Biology Tower, Osborne Memorial Laboratories, Payne Whitney Gymnasium, Peabody Museum of Natural History, Sterling Hall of Medicine, Sterling Law Buildings, Sterling Memorial Library, Woolsey Hall, Yale Center for British Art, Yale University Art Gallery, Yale Art & Architecture Building, and the Paul Mellon Centre for Studies in British Art in London.
Yale's secret society buildings (some of which are called "tombs") were built to be both private and unmistakable. A diversity of architectural styles is represented: Berzelius, Donn Barber in an austere cube with classical detailing (erected in 1908 or 1910); Book and Snake, Louis R. Metcalfe in a Greek Ionic style (erected in 1901); Elihu, architect unknown but built in a Colonial style (constructed on an early 17th-century foundation although the building is from the 18th century); Mace and Chain, in a late colonial, early Victorian style (built in 1823). (Interior moulding is said to have belonged to Benedict Arnold); Manuscript Society, King-lui Wu with Dan Kniley responsible for landscaping and Josef Albers for the brickwork intaglio mural. Building constructed in a mid-century modern style; Scroll and Key, Richard Morris Hunt in a Moorish- or Islamic-inspired Beaux-Arts style (erected 1869–70); Skull and Bones, possibly Alexander Jackson Davis or Henry Austin in an Egypto-Doric style utilizing brownstone (in 1856 the first wing was completed, in 1903 the second wing, 1911 the Neo-Gothic towers in rear garden were completed); St. Elmo, (former tomb) Kenneth M. Murchison, 1912, designs inspired by Elizabethan manor. Current location, brick colonial; Shabtai, 1882, the Anderson Mansion built in the Second Empire architectural style; and Wolf's Head, Bertram Grosvenor Goodhue, erected 1923–1924, Collegiate Gothic.
Campus safety
Several campus safety strategies have been pioneered at Yale. The first campus police force was founded at Yale in 1894, when the university contracted city police officers to exclusively cover the campus. Later hired by the university, the officers were originally brought in to quell unrest between students and city residents and curb destructive student behavior. In addition to the Yale Police Department, a variety of safety services are available including blue phones, a safety escort, and 24-hour shuttle service.
In the 1970s and 1980s, poverty and violent crime rose in New Haven, dampening Yale's student and faculty recruiting efforts.AJ Giannini. Life, love, death and prestige in New Haven. Neon. 27:113–116, 1984. Between 1990 and 2006, New Haven's crime rate fell by half, helped by a community policing strategy adopted by the New Haven Police, and Yale's campus became the safest among the Ivy League and other peer schools.Office of Post-Secondary Education: "Security search.", Retrieved April 9, 2007. Nonetheless, across the board, the city of New Haven has retained the highest levels of crime of any Ivy League city for more than a decade.City-Data.com:, Retrieved December 4, 2010.
In 2004, the national non-profit watchdog group Security on Campus filed a complaint with the U.S. Department of Education, accusing Yale of under-reporting rape and sexual assaults.
Academics
Admissions
Sterling Memorial Library|upright|thumb|Yale University's Sterling Memorial Library, as seen from Maya Lin's sculpture, Women's Table. The sculpture records the number of women enrolled at Yale over its history; female undergraduates were not admitted until 1969.
Fall Freshman Statistics:
Applicants: 31,455 (2016); 30,236 (2015); 30,932 (2014); 29,610 (2013)
Admits: 1,972 (2016); 2,034 (2015); 1,950 (2014); 2,031 (2013)
Admit rate: 6.3% (2016); 6.7% (2015); 6.3% (2014); 6.9% (2013)
Enrolled: 1,373 (2016); 1,364 (2015); 1,360 (2014); 1,359 (2013)
SAT range: N/A (2016); 2140-2390 (2015); 2120-2390 (2014); 2140-2390 (2013)
ACT range: 32-36 (2016); 31-35 (2015); 31-35 (2014); 31-35 (2013)
Undergraduate admission to Yale College is considered "most selective" by U.S. News. In 2016, Yale accepted 1,972 students to the Class of 2020 out of 31,455 applicants, for an acceptance rate of 6.27%. 98% of students graduate within six years.
Through its program of need-based financial aid, Yale commits to meet the full demonstrated financial need of all applicants. Most financial aid is in the form of grants and scholarships that do not need to be paid back to the university, and the average need-based aid grant for the Class of 2017 was $46,395. 15% of Yale College students are expected to have no parental contribution, and about 50% receive some form of financial aid. About 16% of the Class of 2013 had some form of student loan debt at graduation, with an average debt of $13,000 among borrowers.
Half of all Yale undergraduates are women, more than 39% are ethnic minority U.S. citizens (19% are underrepresented minorities), and 10.5% are international students. Fifty-five percent attended public schools and 45% attended private, religious, or international schools, and 97% of students were in the top 10% of their high school class. Every year, Yale College also admits a small group of non-traditional students through the Eli Whitney Students Program.
Collections
thumb|left|The Night Café, Vincent van Gogh, 1888, Yale Art Gallery.
Yale University Library, which holds over 15 million volumes, is the third-largest university collection in the United States. The main library, Sterling Memorial Library, contains about 4 million volumes, and other holdings are dispersed at subject libraries.
Rare books are found in several Yale collections. The Beinecke Rare Book Library has a large collection of rare books and manuscripts. The Harvey Cushing/John Hay Whitney Medical Library includes important historical medical texts, including an impressive collection of rare books, as well as historical medical instruments. The Lewis Walpole Library contains the largest collection of 18th‑century British literary works. The Elizabethan Club, technically a private organization, makes its Elizabethan folios and first editions available to qualified researchers through Yale.
Yale's museum collections are also of international stature. The Yale University Art Gallery, the country's first university-affiliated art museum, contains more than 180,000 works, including Old Masters and important collections of modern art, in the Swartout and Kahn buildings. The latter, Louis Kahn's first large-scale American work (1953), was renovated and reopened in December 2006. The Yale Center for British Art, the largest collection of British art outside of the UK, grew from a gift of Paul Mellon and is housed in another Kahn-designed building.
The Peabody Museum of Natural History in New Haven is used by school children and contains research collections in anthropology, archaeology, and the natural environment. The Yale University Collection of Musical Instruments, affiliated with the Yale School of Music, is perhaps the least-known of Yale's collections, because its hours of opening are restricted.
The museums also house the artifacts brought to the United States from Peru by Yale history professor Hiram Bingham in his expedition to Machu Picchu in 1912 – when the removal of such artifacts was legal. Peru later asked for the items to be returned, which Yale long declined; in November 2010, a Yale University representative agreed to return the artifacts to a Peruvian university.
Rankings
The U.S. News & World Report ranked Yale third among U.S. national universities for 2016, as it has for each of the past sixteen years, in every list trailing only Princeton and Harvard. It was ranked 15th in the 2016/17 QS World University Rankings and 12th in the 2016 Times Higher Education World University Rankings. The Academic Ranking of World Universities placed Yale at 11th in the world in 2016.
Faculty, research, and intellectual traditions
The college is, after normalization for institution size, the tenth-largest baccalaureate source of doctoral degree recipients in the United States, and the largest such source within the Ivy League.Centre.edu "Baccalaureate Origins Peer Analysis 2000, Center College."
Yale's English and Comparative Literature departments were part of the New Criticism movement. Of the New Critics, Robert Penn Warren, W.K. Wimsatt, and Cleanth Brooks were all Yale faculty. Later, the Yale Comparative literature department became a center of American deconstruction. Jacques Derrida, the father of deconstruction, taught at the Department of Comparative Literature from the late seventies to mid-1980s. Several other Yale faculty members were also associated with deconstruction, forming the so-called "Yale School". These included Paul de Man who taught in the Departments of Comparative Literature and French, J. Hillis Miller, Geoffrey Hartman (both taught in the Departments of English and Comparative Literature), and Harold Bloom (English), whose theoretical position was always somewhat specific, and who ultimately took a very different path from the rest of this group. Yale's history department has also originated important intellectual trends. Historians C. Vann Woodward and David Brion Davis are credited with beginning in the 1960s and 1970s an important stream of southern historians; likewise, David Montgomery, a labor historian, advised many of the current generation of labor historians in the country. Yale's Music School and Department fostered the growth of Music Theory in the latter half of the 20th century. The Journal of Music Theory was founded there in 1957; Allen Forte and David Lewin were influential teachers and scholars.
Since summer 2010, Yale has also been host to Yale Publishing Course.
Campus life
Yale is a medium-sized research university, most of whose students are in the graduate and professional schools. Undergraduates, or Yale College students, come from a variety of ethnic, national, and socioeconomic backgrounds. Of the 2010–2011 freshman class, 10% are non‑U.S. citizens, while 54% went to public high schools.
Residential colleges
Yale's residential college system was established in 1933 by Edward S. Harkness, who admired the social intimacy of Oxford and Cambridge and donated significant funds to found similar colleges at Yale and Harvard. Though Yale's colleges resemble their English precursors organizationally and architecturally, they are dependent entities of Yale College and have limited autonomy. The colleges are led by a head and an academic dean, who reside in the college, and university faculty and affiliates comprise each college's fellowship. Colleges offer their own seminars, social events, and speaking engagements known as "Master's Teas," but do not contain programs of study or academic departments. Instead, all undergraduate courses are taught by the Faculty of Arts and Sciences and are open to members of any college.
All undergraduates are members of a college, to which they are assigned before their freshman year, and 85 percent live in the college quadrangle or a college-affiliated dormitory. While the majority of upperclassmen live in the colleges, most on-campus freshmen live on the Old Campus, the university's oldest precinct.
While Harkness' original colleges were Georgian Revival or Collegiate Gothic in style, two colleges constructed in the 1960s, Morse and Ezra Stiles Colleges, have modernist designs. All twelve colleges are organized around a courtyard, and each has a dining hall, library, common room, and a range of student facilities. The twelve colleges are named for important alumni or significant places in university history. In 2017, the university expects to open two new colleges near Science Hill.Yale University Office of Public Affairs: "Yale to Establish Two New Residential Colleges.". Retrieved June 7, 2008.
Calhoun College
In the wake of the racially-motivated church shooting in Charleston, South Carolina, Yale came under criticism again in the summer of 2015 for Calhoun College, one of 12 residential colleges, which was named after John C. Calhoun, a slave-owner and strong supporter of slavery in the nineteenth century. In July 2015 students signed a petition calling for the name change. They argued in the petition that—while Calhoun was respected in the 19th century as an "extraordinary American statesman"—he was "one of the most prolific defenders of slavery and white supremacy" in the history of the United States. In August 2015 Yale President Peter Salovey gave an address to the Freshman Class of 2019 in which he responded to the racial tensions but explained why the college would not be renamed. He described Calhoun as "a notable political theorist, a vice president to two different U.S. presidents, a secretary of war and of state, and a congressman and senator representing South Carolina." He acknowledged that Calhoun also "believed that the highest forms of civilization depend on involuntary servitude. Not only that, but he also believed that the races he thought to be inferior, black people in particular, ought to be subjected to it for the sake of their own best interests." Student activism about this issue increased in the fall of 2015 and expanded to include protests centered on whether the Yale College Dean's Office should provide advice regarding Halloween costumes, which led to the labelling of some students as members of Generation Snowflake.Fox, Claire (2016) "I find that offensive". Biteback. In April 2016 Salovey announced that, "despite decades of vigorous alumni and student protests," Calhoun's name would remain on the Yale residential college, explaining that it is preferable for Yale students to live in Calhoun's "shadow" so they will be "better prepared to rise to the challenges of the present and the future." He claimed that removing Calhoun's name would "obscure" his "legacy of slavery rather than addressing it," adding that "Yale is part of that history" and "We cannot erase American history, but we can confront it, teach it and learn from it." One change that was announced is that the title of "master" for faculty members who serve as residential college leaders would be renamed "head of college" because of the term's connotations of slavery.
Student organizations
In 2014, Yale had 385 registered student organizations, plus an additional one hundred groups in the process of registration.Wesley Yiin, Up Close: How many is too many?, Yale Daily News (April 9, 2014).
The university hosts a variety of student journals, magazines, and newspapers. Established in 1872, The Yale Record is the world's oldest humor magazine. Newspapers include the Yale Daily News, which was first published in 1878, and the weekly Yale Herald, which was first published in 1986. Dwight Hall, an independent, non-profit community service organization, oversees more than 2,000 Yale undergraduates working on more than 70 community service initiatives in New Haven. The Yale College Council runs several agencies that oversee campus wide activities and student services. The Yale Dramatic Association and Bulldog Productions cater to the theater and film communities, respectively. In addition, the Yale Drama Coalition serves to coordinate between and provide resources for the various Sudler Fund sponsored theater productions which run each weekend. WYBC Yale Radio is the campus's radio station, owned and operated by students. While students used to broadcast on AM & FM frequencies, they now have an Internet-only stream.
The Yale College Council (YCC) serves as the campus's undergraduate student government. All registered student organizations are regulated and funded by a subsidiary organization of the YCC, known as the Undergraduate Organizations Committee (UOC). The Graduate and Professional Student Senate (GPSS) serves as Yale's graduate and professional student government.
The Yale Political Union is advised by alumni political leaders such as John Kerry and George Pataki. The Yale International Relations Association functions as the umbrella organization for the top-ranked Model UN team.
The campus includes several fraternities and sororities. The campus features at least 18 a cappella groups, the most famous of which is The Whiffenpoofs, who are unusual among college singing groups in being made up solely of senior men.
Yale's secret societies include Skull and Bones, Scroll and Key, Wolf's Head, Book and Snake, Elihu, Berzelius, St. Elmo, Manuscript, Shabtai, Mace and Chain and Sage and Chalice. The two oldest existing honor societies are the Aurelian (1910) and the Torch Honor Society (1916).
The Elizabethan Club, a social club, has a membership of undergraduates, graduates, faculty and staff with literary or artistic interests. Membership is by invitation. Members and their guests may enter the "Lizzie's" premises for conversation and tea. The club owns a Shakespeare First Folio, several Shakespeare quartos, and a first edition of Milton's Paradise Lost, among other important literary texts.
Traditions
thumb|Yale's motto, translated from Latin, means "light and truth".
thumb|Yale, exterior engraving. Photo taken in winter 2016.
Yale seniors at graduation smash clay pipes underfoot to symbolize passage from their "bright college years," though in recent history the pipes have been replaced with "bubble pipes". ("Bright College Years," the University's alma mater, was penned in 1881 by Henry Durand, Class of 1881, to the tune of Die Wacht am Rhein.) Yale's student tour guides tell visitors that students consider it good luck to rub the toe of the statue of Theodore Dwight Woolsey on Old Campus. Actual students rarely do so. In the second half of the 20th century Bladderball, a campus-wide game played with a large inflatable ball, became a popular tradition but was banned by administration due to safety concerns. In spite of administration opposition, students revived the game in 2009, 2011, and 2014, but its future remains uncertain.
Athletics
thumb|right|The Walter Camp Gate at the Yale Athletic Complex.
Yale supports 35 varsity athletic teams that compete in the Ivy League Conference, the Eastern College Athletic Conference, and the New England Intercollegiate Sailing Association. Yale athletic teams compete intercollegiately at the NCAA Division I level. Like other members of the Ivy League, Yale does not offer athletic scholarships.
Yale has numerous athletic facilities, including the Yale Bowl (the nation's first natural "bowl" stadium, and prototype for such stadiums as the Los Angeles Memorial Coliseum and the Rose Bowl), located at The Walter Camp Field athletic complex, and the Payne Whitney Gymnasium, the second-largest indoor athletic complex in the world.Yale Herald: "House of Payne gets ready for the new millennium." Retrieved April 9, 2007.
In 2016, the men's basketball team won the Ivy League Championship title for the first time in 54 years, earning a spot in the NCAA Men's Division I Basketball Tournament. In the first round of the tournament, the Bulldogs beat the Baylor Bears 79-75 in the school's first-ever tournament win.
October 21, 2000, marked the dedication of Yale's fourth new boathouse in 157 years of collegiate rowing. The Gilder Boathouse is named to honor former Olympic rower Virginia Gilder '79 and her father Richard Gilder '54, who gave $4 million towards the $7.5 million project. Yale also maintains the Gales Ferry site where the heavyweight men's team trains for the Yale-Harvard Boat Race.
Yale crew is the oldest collegiate athletic team in America, and it won the Olympic gold medal in the men's eight in 1924 and 1956. The Yale Corinthian Yacht Club, founded in 1881, is the oldest collegiate sailing club in the world.
In 1896, Yale and Johns Hopkins played the first known ice hockey game in the United States. Since 2006, the school's ice hockey clubs have played a commemorative game.
Between 1954 and 1982, residential college teams and student organizations played bladderball.
Yale students claim to have invented Frisbee, by tossing empty Frisbie Pie Company tins.
Yale athletics are supported by the Yale Precision Marching Band. "Precision" is used here ironically; the band is a scatter-style band that runs wildly between formations rather than actually marching. The band attends every home football game and many away, as well as most hockey and basketball games throughout the winter.
Yale intramural sports are also a significant aspect of student life. Students compete for their respective residential colleges, fostering a friendly rivalry. The year is divided into fall, winter, and spring seasons, each of which includes about ten different sports. About half the sports are coeducational. At the end of the year, the residential college with the most points (not all sports count equally) wins the Tyng Cup.
Song
Notable among the songs commonly played and sung at events such as commencement, convocation, alumni gatherings, and athletic games are the alma mater, "Bright College Years", and the Yale fight song, "Down the Field."
Two other fight songs, "Bulldog, Bulldog" and "Bingo Eli Yale", written by Cole Porter during his undergraduate days, are still sung at football games. Another fight song sung at games is "Boola Boola". According to "College Fight Songs: An Annotated Anthology" published in 1998, "Down the Field" ranks as the fourth-greatest fight song of all time.
Mascot
The school mascot is "Handsome Dan," the Yale bulldog, and the Yale fight song (written by Cole Porter while he was a student at Yale) contains the refrain, "Bulldog, bulldog, bow wow wow." The school color, since 1894, is Yale Blue; prior to 1894, Yale's color was green. Yale's Handsome Dan is believed to be the first college mascot in America, having been established in 1889.
Notable people
Benefactors
Yale has had many financial supporters, but some stand out for the magnitude or timeliness of their contributions. Among those who have made large donations commemorated at the university are: Elihu Yale; Jeremiah Dummer; the Harkness family (Edward, Anna, and William); the Beinecke family (Edwin, Frederick, and Walter); John William Sterling; Payne Whitney; Joseph Earl Sheffield; Paul Mellon; Charles B. G. Murphy; and William K. Lanman. The Yale Class of 1954, led by Richard Gilder, donated $70 million in commemoration of their 50th reunion. Charles B. Johnson, a 1954 graduate of Yale College, pledged a $250 million gift in 2013 to support the construction of two new residential colleges. The colleges have been named respectively in honor of Pauli Murray and Benjamin Franklin. Gifts from Stephen Adams have made the Yale School of Music tuition-free and funded construction of the Adams Center for Musical Arts.
Notable alumni and faculty
left|thumb|upright|Academy Award Winning Actress Meryl Streep, Yale School of Drama class of 1975
upright|thumb|President and Chief Justice William Howard Taft graduated from Yale in 1878.
Yale has produced alumni distinguished in their respective fields. This includes U.S. Presidents William Howard Taft, Gerald Ford, George H.W. Bush, Bill Clinton and George W. Bush; heads of state, including Italian prime minister Mario Monti, Turkish prime minister Tansu Çiller, Mexican president Ernesto Zedillo, German president Karl Carstens, and Philippines president José Paciano Laurel; U.S. Supreme Court Justices Taft, Sonia Sotomayor, Samuel Alito and Clarence Thomas; U.S. Secretaries of State John Kerry, Hillary Clinton, Cyrus Vance, and Dean Acheson; U.S. Secretaries of the Treasury Oliver Wolcott, Robert Rubin and Nicholas F. Brady; and United States Attorneys General Nicholas Katzenbach, John Ashcroft, and Edward H. Levi.
Many royals have attended, among them Crown Princess Victoria of Sweden, Prince Rostislav Romanov, and Prince Akiiki Hosea Nyabongo.
In the arts, Yale alumni include authors Sinclair Lewis, Stephen Vincent Benét, John Hersey, Thornton Wilder, Doug Wright, William Matthews, and Tom Wolfe; actors, directors and producers Paul Newman, Henry Winkler, Vincent Price, Meryl Streep, Sigourney Weaver, Jodie Foster, Angela Bassett, Elia Kazan, George Roy Hill, Douglas Wick, Edward Norton, Lupita Nyong'o, James Whitmore, Oliver Stone, Brian Dennehy, and Sam Waterston; composers Charles Ives, Douglas Moore and Cole Porter; fine art photography popularizer Sam Wagstaff; sculptor Richard Serra; and entertainer Rudy Vallee.
In business, Time Magazine co-founder Henry Luce, Morgan Stanley founder Harold Stanley, Blackstone Group founder Stephen A. Schwarzman, Boeing CEO James McNerney, FedEx founder Frederick W. Smith, Sears Holdings chairman and CEO Edward Lampert, Time Warner president Jeffrey Bewkes, Electronic Arts co-founder Bing Gordon, PepsiCo CEO Indra Nooyi, Pinterest co-founder and CEO Ben Silbermann, sports agent Donald Dell, and investor/philanthropist Sir John Templeton all hail from Yale.
In academia, distinguished Yale graduates and faculty have included literary critic and historian Henry Louis Gates; economists Irving Fisher, Mahbub ul Haq, and Paul Krugman; Nobel laureates in physics Ernest Lawrence and Murray Gell-Mann; Fields Medalist John G. Thompson; Human Genome Project director Francis S. Collins; "father of biochemistry" Russell Henry Chittenden; neurosurgeon Harvey Cushing; pioneering computer scientist Grace Hopper; chairman of Caltech's Jet Propulsion Laboratory Committee Clark Blanchard Millikan; education philosopher Robert Maynard Hutchins; pioneer in fractal geometry Benoit Mandelbrot; and mathematician/chemist Josiah Willard Gibbs.
Former Yale students in the sporting arena include "The perfect oarsman" Rusty Wailes; runner Frank Shorter; baseball executives Theo Epstein and George Weiss, and baseball players Ron Darling, Bill Hutchinson, and Craig Breslow; basketball player Chris Dudley; football players Dick Jauron, Kenny Hill, Calvin Hill, Gary Fencik, Chuck Mercein, Amos Alonzo Stagg, and "Father of American Football" Walter Camp; nine-time U.S. Squash men's champion Julian Illingworth; ice hockey player Chris Higgins; figure skater Sarah Hughes; and swimmer Don Schollander.
Yale also counts among its former students Secretary of State, Secretary of War and U.S. Senator John C. Calhoun; Peace Corps founder Sargent Shriver; child psychologist Benjamin Spock; architects Maya Lin, Eero Saarinen and Norman Foster; television personalities Stone Phillips, Dick Cavett and Anderson Cooper; pundits Garry Trudeau, William F. Buckley, Jr. and Fareed Zakaria; pioneer in electrical applications Austin Cornelius Dunham; inventors Samuel F.B. Morse and Eli Whitney; patriot and "first spy" Nathan Hale; lexicographer Noah Webster; and theologians Jonathan Edwards and Reinhold Niebuhr.
Yale in fiction and popular culture
Yale University, one of the oldest universities in the United States, is a cultural referent as an institution that produces some of the most elite members of society, and its grounds, alumni, and students have been prominently portrayed in fiction and U.S. popular culture. For example, Owen Johnson's novel Stover at Yale follows the college career of Dink Stover, and Frank Merriwell, the model for all later juvenile sports fiction, plays football, baseball, crew, and track at Yale while solving mysteries and righting wrongs.University of Georgia: "The Rise of Intercollegiate Football and Its Portrayal in American Popular Literature.". Retrieved April 9, 2007.The text of Frank Merriwell at Yale is published online by Project Gutenberg, Gutenberg.org Yale University is also featured in F. Scott Fitzgerald's novel "The Great Gatsby". The narrator, Nick Carraway, wrote a series of editorials for the Yale News, and Tom Buchanan was "one of the most powerful ends that ever played football" for Yale.
Notes and references
Further reading
Bagg, Lyman H. Four Years at Yale, New Haven, 1891.
Blum, John Morton. A life with history (2004) 283pp, memoir of history professor and advisor to the president
Brown, Chandos Michael. Benjamin Silliman: A Life in the Young Republic. (1989). 377 pp.
Buckley, William F., Jr. God and Man at Yale, 1951.
Dana, Arnold G. Yale Old and New, 78 vols. personal scrapbook, 1942.
Deming, Clarence. Yale Yesterdays, New Haven, Yale University Press, 1915.
Dexter, Franklin Bowditch. Biographical Sketches of Graduates of Yale: Yale College with Annals of the College History, 6 vols. New York, 1885–1912.
Dexter, Franklin Bowditch. Documentary History of Yale University: Under the Original Charter of the Collegiate School of Connecticut, 1701–1745. New Haven: Yale University Press, 1901.
Fitzmier, John R. New England's Moral Legislator: Timothy Dwight, 1752–1817 (1998). 261 pp.
French, Robert Dudley. The Memorial Quadrangle, New Haven, Yale University Press, 1929.
Furniss, Edgar S. The Graduate School of Yale, New Haven, 1965.
Gilpin, Toni, et al. On Strike for Respect (updated edition: University of Illinois Press, 1995).
Holden, Reuben A. Yale: A Pictorial History, New Haven, Yale University Press, 1967.
Kabaservice, Geoffrey. The Guardians: Kingman Brewster, His Circle, and the Rise of the Liberal Establishment, (2004). 573 pp.
Kalman, Laura. Legal Realism at Yale, 1927–1960 (1986). 314pp.
Kelley, Brooks Mather. Yale: A History. New Haven: Yale University Press, 1999. ISBN 978-0-300-07843-5; OCLC 810552
Kingsley, William L. Yale College. A Sketch of its History, 2 vols. New York, 1879.
Mendenhall, Thomas C. The Harvard-Yale Boat Race, 1852–1924, and the Coming of Sport to the American College. (1993). 371 pp.
Nelson, Cary. Will Teach for Food: Academic Labor in Crisis, Minneapolis, University of Minnesota Press, 1997.
Nissenbaum, Stephen, ed. The Great Awakening at Yale College (1972). 263 pp.
Oren, Dan A. Joining the Club: A History of Jews and Yale, New Haven, Yale University Press, 1985.
Oviatt, Edwin. The Beginnings of Yale (1701–1726), New Haven, Yale University Press, 1916.
Pierson, George Wilson. Yale College, An Educational History (1871–1921), (Yale University Press, 1952); Yale, The University College (1921–1937), (Yale University Press, 1955)
Pierson, George Wilson. The Founding of Yale: The Legend of the Forty Folios, New Haven, Yale University Press, 1988.
Pinnell, Patrick L. The Campus Guide: Yale University, Princeton Architectural Press, New York, 1999.
Stevenson, Louise L. Scholarly Means to Evangelical Ends: The New Haven Scholars and the Transformation of Higher Learning in America, 1830–1890 (1986). 221 pp.
Scully, Vincent et al., eds. Yale in New Haven: Architecture and Urbanism. New Haven: Yale University, 2004.
Stokes, Anson Phelps. Memorials of Eminent Yale Men, 2 vols. New Haven, Yale University Press, 1914.
Synnott, Marcia Graham. The Half-Opened Door: Discrimination and Admissions at Harvard, Yale, and Princeton, 1900–1970 (1979). 310 pp.
Tucker, Louis Leonard. Connecticut's Seminary of Sedition: Yale College. Chester, Conn.: Pequot, 1973. 78 pp.
Warch, Richard. School of the Prophets: Yale College, 1701–1740. (1973). 339 pp.
Welch, Lewis Sheldon, and Walter Camp. Yale, her campus, class-rooms, and athletics (1900). online
Whitehead, John S. The Separation of College and State: Columbia, Dartmouth, Harvard, and Yale, 1776–1876 (1973). 262 pp.
Wilson, Leonard G., ed. Benjamin Silliman and His Circle: Studies on the Influence of Benjamin Silliman on Science in America (1979). 228 pp.
Secret societies
Robbins, Alexandra, Secrets of the Tomb: Skull and Bones, the Ivy League, and the Hidden Paths of Power, Little Brown & Co., 2002; ISBN 0-316-73561-2 (paper edition).
Millegan, Kris (ed.), Fleshing Out Skull & Bones, TrineDay, 2003. ISBN 0-9752906-0-6 (paper edition).
Guinea-Bissau
Guinea-Bissau, officially the Republic of Guinea-Bissau, is a country in West Africa. It covers an area of about 36,125 square kilometres and has an estimated population of 1,704,000.
Guinea-Bissau was once part of the kingdom of Gabu, itself part of the Mali Empire; parts of this kingdom persisted until the 18th century, while other parts of the territory came under Portuguese rule from the 16th century. In the 19th century, it was colonised as Portuguese Guinea. Upon independence, declared in 1973 and recognised in 1974, the name of its capital, Bissau, was added to the country's name to prevent confusion with Guinea (formerly French Guinea). Guinea-Bissau has a history of political instability since independence, and no elected president has successfully served a full five-year term.
Only 14% of the population speaks Portuguese, established as the official language in the colonial period. Almost half the population (44%) speaks Crioulo, a Portuguese-based creole language, and the remainder speak a variety of native African languages. The main religions are African traditional religions and Islam; there is a Christian (mostly Roman Catholic) minority. The country's per-capita gross domestic product is one of the lowest in the world.
Guinea-Bissau is a member of the United Nations, African Union, Economic Community of West African States, Organisation of Islamic Cooperation, the Latin Union, Community of Portuguese Language Countries, La Francophonie and the South Atlantic Peace and Cooperation Zone.
History
Guinea-Bissau was once part of the kingdom of Gabu, part of the Mali Empire; parts of this kingdom persisted until the 18th century. Other parts of the territory in the current country were considered by the Portuguese as part of their empire.Empire of Kaabu, West Africa. Accessgambia.com. Retrieved 22 June 2013. Portuguese Guinea was known as the Slave Coast, as it was a major area for the exportation of African slaves by Europeans to the western hemisphere.
Early reports of Europeans reaching this area include the Venetian Alvise Cadamosto's voyage of 1455 (Alvise Cadamosto. Nndb.com. Retrieved 22 June 2013), the 1479–1480 voyage by the Flemish-French trader Eustache de la Fosse, and that of Diogo Cão, the Portuguese explorer who in the 1480s reached the Congo River and the lands of Bakongo, setting up the foundations of modern Angola some 4,200 km down the African coast from Guinea-Bissau (win.tue.nl).
Although the rivers and coast of this area were among the first places colonized by the Portuguese, who set up trading posts in the 16th century, they did not explore the interior until the 19th century. The local African rulers in Guinea, some of whom prospered greatly from the slave trade, controlled the inland trade and did not allow the Europeans into the interior. They kept them in the fortified coastal settlements where the trading took place."A Brief History of Guinea-Bissau – Part 1". Africanhistory, US Department of State, at About.com. Retrieved 22 June 2013. African communities that fought back against slave traders also distrusted European adventurers and would-be settlers. The Portuguese in Guinea were largely restricted to the ports of Bissau and Cacheu. A small number of European settlers established isolated farms along Bissau's inland rivers.
For a brief period in the 1790s, the British tried to establish a rival foothold on an offshore island, at Bolama.British Library – Endangered Archive Programme (EAP). Inep-bissau.org (18 March 1921). Retrieved 22 June 2013. But by the 19th century the Portuguese were sufficiently secure in Bissau to regard the neighbouring coastline as their own special territory, extending northwards into part of present-day southern Senegal.
An armed rebellion beginning in 1956 by the African Party for the Independence of Guinea and Cape Verde (PAIGC) under the leadership of Amílcar Cabral gradually consolidated its hold on then Portuguese Guinea.Amilcar Cabral 1966 "The Weapon of Theory". Address delivered to the first Tricontinental Conference of the Peoples of Asia, Africa and Latin America held in Havana in January 1966. Marxists.org. Retrieved 22 June 2013. Unlike guerrilla movements in other Portuguese colonies, the PAIGC rapidly extended its military control over large portions of the territory, aided by the jungle-like terrain, its easily reached borderlines with neighbouring allies, and large quantities of arms from Cuba, China, the Soviet Union, and left-leaning African countries.The PAIC Programme Appendix. Marxists.org. Retrieved 22 June 2013. Cuba also agreed to supply artillery experts, doctors, and technicians. The PAIGC even managed to acquire a significant anti-aircraft capability in order to defend itself against aerial attack. By 1973, the PAIGC was in control of many parts of Guinea, although the movement suffered a setback in January 1973 when Cabral was assassinated.
Independence (1973)
Image: PAIGC forces raise the flag of Guinea-Bissau in 1974.
Independence was unilaterally declared on 24 September 1973. Recognition became universal following the 25 April 1974 socialist-inspired military coup in Portugal, which overthrew Lisbon's Estado Novo regime.Embassy of The Republic of Guinea-Bissau – Country Profile. Diplomaticandconsular.com (12 April 2012). Retrieved 22 June 2013.
Luís Cabral, brother of Amílcar and co-founder of PAIGC, was appointed the first President of Guinea-Bissau. Following independence, the PAIGC killed thousands of local Guinean soldiers who had fought alongside the Portuguese Army against the guerrillas. Some escaped to settle in Portugal or other African nations.Guiné-Bissau: Morreu Luís Cabral, primeiro presidente do país. Expresso.sapo.pt (30 May 2009). Retrieved 22 June 2013. One of the massacres occurred in the town of Bissorã. In 1980 the PAIGC acknowledged in its newspaper Nó Pintcha (dated 29 November 1980) that many Guinean soldiers had been executed and buried in unmarked collective graves in the woods of Cumerá, Portogole, and Mansabá.
The country was controlled by a revolutionary council until 1984. The first multi-party elections were held in 1994. An army uprising in May 1998 led to the Guinea-Bissau Civil War and the president's ousting in June 1999.Uppsala Conflict Data Program Conflict Encyclopedia, Guinea Bissau: government, in depth, Negotiations, Veira's surrender and the end of the conflict, viewed 12 July 2013, Elections were held again in 2000, and Kumba Ialá was elected president.Guinea-Bissau's Kumba Yala: from crisis to crisis. Afrol.com. Retrieved 22 June 2013.
In September 2003, a military coup was conducted. The military arrested Ialá on the charge of being "unable to solve the problems".Smith, Brian (27 September 2003) "US and UN give tacit backing to Guinea Bissau coup", Wsws.org, September 2003. Retrieved 22 June 2013 After being delayed several times, legislative elections were held in March 2004. A mutiny of military factions in October 2004 resulted in the death of the head of the armed forces and caused widespread unrest.
Vieira years
Image: An abandoned tank from the 1998–1999 civil war in the capital Bissau, 2003.
In June 2005, presidential elections were held for the first time since the coup that deposed Ialá. Ialá returned as the candidate for the PRS, claiming to be the legitimate president of the country, but the election was won by former president João Bernardo Vieira, deposed in the 1999 coup. Vieira beat Malam Bacai Sanhá in a runoff election. Sanhá initially refused to concede, claiming that tampering and electoral fraud occurred in two constituencies including the capital, Bissau.GUINEA-BISSAU: Vieira officially declared president. irinnews.org (10 August 2005).
Despite reports of arms entering the country prior to the election and some "disturbances during campaigning," including attacks on government offices by unidentified gunmen, foreign election monitors described the 2005 election overall as "calm and organized".
Three years later, PAIGC won a strong parliamentary majority, with 67 of 100 seats, in the parliamentary election held in November 2008.Guinea Bissau vote goes smooth amid hopes for stability. AFP via Google.com (16 November 2008). Retrieved 22 June 2013. In November 2008, President Vieira's official residence was attacked by members of the armed forces, killing a guard but leaving the president unharmed.
On 2 March 2009, however, Vieira was assassinated by what preliminary reports indicated to be a group of soldiers avenging the death of the head of joint chiefs of staff, General Batista Tagme Na Wai, who had been killed in an explosion the day before.. news.com.au (2 March 2009). Vieira's death did not trigger widespread violence, but there were signs of turmoil in the country, according to the advocacy group Swisspeace. Military leaders in the country pledged to respect the constitutional order of succession. National Assembly Speaker Raimundo Pereira was appointed as an interim president until a nationwide election on 28 June 2009. It was won by Malam Bacai Sanhá of the PAIGC, against Kumba Ialá as the presidential candidate of the PRS.
On 9 January 2012, President Sanhá died of complications from diabetes, and Pereira was again appointed as an interim president. On the evening of 12 April 2012, members of the country's military staged a coup d'état and arrested the interim president and a leading presidential candidate. Former vice chief of staff, General Mamadu Ture Kuruma, assumed control of the country in the transitional period and started negotiations with opposition parties.
Politics
Image: The National People's Assembly of Guinea-Bissau.
Guinea-Bissau is a republic. In the past, the government had been highly centralized. Multi-party governance was not established until mid-1991. The president is the head of state and the prime minister is the head of government. Since 1974, no president has successfully served a full five-year term.
At the legislative level, a unicameral Assembleia Nacional Popular (National People's Assembly) is made up of 100 members. They are popularly elected from multi-member constituencies to serve a four-year term. The judicial system is headed by a Tribunal Supremo da Justiça (Supreme Court), made up of nine justices appointed by the president; they serve at the pleasure of the president.Guinea-Bissau Supreme Court. Stj.pt. Retrieved 22 June 2013.
The two main political parties are the PAIGC (African Party for the Independence of Guinea and Cape Verde) and the PRS (Party for Social Renewal). There are more than 20 minor parties.Guinea-Bissau Political Parties. Nationsencyclopedia.com. Retrieved 22 June 2013.
Foreign relations
Guinea-Bissau follows a nonaligned foreign policy and seeks friendly and cooperative relations with a wide variety of states and organizations.
Military
A 2008 estimate put the size of the Guinea-Bissau Armed Forces at around 4,000 personnel.
Administrative divisions
Guinea-Bissau is divided into eight regions (regiões) and one autonomous sector (sector autónomo). These, in turn, are subdivided into 37 sectors. The regions are Bafatá, Biombo, Bolama, Cacheu, Gabú, Oio, Quinara and Tombali; the autonomous sector is the capital, Bissau.
Geography
Image: A map of Guinea-Bissau.
Image: Bissau-Guinean landscape.
Image: Typical scenery in Guinea-Bissau.
Guinea-Bissau is bordered by Senegal to the north and Guinea to the south and east, with the Atlantic Ocean to its west. It lies mostly between latitudes 11° and 13°N (a small area is south of 11°), and longitudes 13° and 17°W.
The country is larger in size than Taiwan or Belgium. It lies at a low altitude; its highest point is only around 300 metres above sea level. The terrain is mostly low coastal plain with swamps of Guinean mangroves rising to Guinean forest-savanna mosaic in the east. Its monsoon-like rainy season alternates with periods of hot, dry harmattan winds blowing from the Sahara. The Bijagos Archipelago lies off of the mainland.Nossiter, Adam (4 November 2009) "Bijagós, a Tranquil Haven in a Troubled Land", The New York Times, 8 November 2009
Climate
Guinea-Bissau is warm all year round and there is little temperature fluctuation; the average temperature is about 26 °C. The average annual rainfall for Bissau is around 2,000 mm, although this is almost entirely accounted for by the rainy season, which falls between June and September/October. From December through April, the country experiences drought.Guinea-Bissau Climate. Nationsencyclopedia.com. Retrieved 22 June 2013.
Environmental issues
Severe environmental issues include deforestation, soil erosion, overgrazing and overfishing.
Economy
Image: A neighborhood in Bissau.
Guinea-Bissau's GDP per capita is one of the lowest in the world, and its Human Development Index is one of the lowest on earth. More than two-thirds of the population lives below the poverty line.World Bank profile. World Bank.org (31 May 2013). Retrieved 22 June 2013. The economy depends mainly on agriculture; fish, cashew nuts and ground nuts are its major exports.
A long period of political instability has resulted in depressed economic activity, deteriorating social conditions, and increased macroeconomic imbalances. It takes longer on average to register a new business in Guinea-Bissau (233 days or about 33 weeks) than in any other country in the world except Suriname.
Guinea-Bissau has started to show some economic advances after a pact of stability was signed by the main political parties of the country, leading to an IMF-backed structural reform program.Guinea-Bissau and the IMF. Imf.org (13 May 2013). Retrieved 22 June 2013. The key challenges for the country in the period ahead are to achieve fiscal discipline, rebuild public administration, improve the economic climate for private investment, and promote economic diversification. After the country became independent from Portugal in 1974 due to the Portuguese Colonial War and the Carnation Revolution, the rapid exodus of the Portuguese civilian, military, and political authorities resulted in considerable damage to the country's economic infrastructure, social order, and standard of living.
After several years of economic downturn and political instability, in 1997, Guinea-Bissau entered the CFA franc monetary system, bringing about some internal monetary stability.CFA Franc and Guinea-Bissau. Uemoa.int. Retrieved 22 June 2013. The civil war that took place in 1998 and 1999, and a military coup in September 2003 again disrupted economic activity, leaving a substantial part of the economic and social infrastructure in ruins and intensifying the already widespread poverty. Following the parliamentary elections in March 2004 and presidential elections in July 2005, the country is trying to recover from the long period of instability, despite a still-fragile political situation.
Beginning around 2005, drug traffickers based in Latin America began to use Guinea-Bissau, along with several neighboring West African nations, as a transshipment point to Europe for cocaine.Guinea-Bissau:A narco-state?. Time. (29 October 2009). Retrieved 22 June 2013. The nation was described by a United Nations official as being at risk for becoming a "narco-state". The government and the military have done little to stop drug trafficking, which increased after the 2012 coup d'état.
Guinea-Bissau is a member of the Organization for the Harmonization of Business Law in Africa (OHADA).
Society
Demographics
According to the 2010 revision of the UN World Population Prospects, Guinea-Bissau's population was 1,515,000 in 2010, compared to 518,000 in 1950. The proportion of the population below the age of 15 in 2010 was 41.3%, 55.4% were aged between 15 and 65 years of age, while 3.3% were aged 65 years or older.
Ethnic groups
Image: Present-day settlement pattern of Guinea-Bissau's ethnic groups.
The population of Guinea-Bissau is ethnically diverse and has many distinct languages, customs, and social structures.
Bissau-Guineans can be divided into the following ethnic groups:
Fula and the Mandinka-speaking people, who comprise the largest portion of the population and are concentrated in the north and northeast;
Balanta and Papel people, who live in the southern coastal regions; and
Manjaco and Mancanha, who occupy the central and northern coastal areas.
Most of the remainder are mestiços of mixed Portuguese and African descent, including a Cape Verdean minority.Guinea-Bissau ethnic classifications, Joshuaproject.net. Retrieved 22 June 2013.
Portuguese natives comprise a very small percentage of Bissau-Guineans. After Guinea-Bissau gained independence, most of the Portuguese nationals left the country. The country has a tiny Chinese population.China-Guinea-Bissau. China.org.cn. Retrieved 22 June 2013. These include traders and merchants of mixed Portuguese and Chinese ancestry from Macau, a former Asian Portuguese colony.
Major cities
Image: Guinea-Bissau's third largest city, Gabú.
Main cities in Guinea-Bissau include:
Rank, city (region): population at the 1979 census / 2005 estimate
1. Bissau (Bissau): 109,214 / 388,028
2. Bafatá (Bafatá): 13,429 / 22,521
3. Gabú (Gabú): 7,803 / 14,430
4. Bissorã (Oio): N/A / 12,688
5. Bolama (Bolama): 9,100 / 10,769
6. Cacheu (Cacheu): 7,600 / 10,490
7. Bubaque (Bolama): 8,400 / 9,941
8. Catió (Tombali): 5,170 / 9,898
9. Mansôa (Oio): 5,390 / 7,821
10. Buba (Quinara): N/A / 7,779
11. Quebo (Quinara): N/A / 7,072
12. Canchungo (Cacheu): 4,965 / 6,853
13. Farim (Oio): 4,468 / 6,792
14. Quinhámel (Biombo): N/A / 3,128
15. Fulacunda (Quinara): N/A / 1,327
Languages
14% of the population speaks the official language Portuguese, the language of government and national communication during centuries of colonial rule. 44% speak Kriol, a Portuguese-based creole language, which is effectively a national language of communication among groups. The remainder speak a variety of native African languages unique to ethnicities.Crioulo, Upper Guinea. Ethnologue.org. Retrieved 22 June 2013.
Most Portuguese and Mestiço speakers also speak one of the African languages and Kriol as additional languages. French is also taught in schools because Guinea-Bissau is surrounded by French-speaking nations. Guinea-Bissau is a full member of the Francophonie.WELCOME TO THE INTERNATIONAL ORGANISATION OF LA FRANCOPHONIE'S OFFICIAL WEBSITE. Francophonie.org. Retrieved 22 June 2013.
Religion
Image: Men in Islamic garb, Bafatá, Guinea-Bissau.
Throughout the 20th century, most Bissau-Guineans practiced some form of Animism. In the early 21st century, many have adopted Islam, which is now practiced by 50% of the country's population. Most of Guinea-Bissau's Muslims are of the Sunni denomination with approximately 2% belonging to the Ahmadiyya sect.
Approximately 10% of the country's population belong to the Christian community, and 40% continue to hold Indigenous beliefs. These statistics can be misleading, however, as many residents practice syncretic forms of Islamic and Christian faiths, combining their practices with traditional African beliefs."Guinea-Bissau", CIA the World Factbook, Cia.gov. Retrieved 5 February 2012."Guinea-Bissau", Encyclopædia Britannica
The Roman Catholic Church accounts for most of the Christian community.
Health
The WHO estimates there are fewer than 5 physicians per 100,000 persons in the country: its data identified only 78 physicians in the entire Guinea-Bissau health workforce in 2009, against a World Bank population estimate of about 1.61 million, or roughly 0.005% of the population, compared with a WHO-estimated average of about 20 physicians per 100,000 across Africa. This was down from around 188 physicians (about 12 per 100,000) in 2007. Guinea-Bissau has an unusually high ratio of nursing staff to doctors: including nurses and midwives, there are about 64 medical professionals per 100,000 Bissau-Guineans.
The prevalence of HIV infection among the adult population is 1.8%, based on WHO estimates from 2007 data for 15- to 49-year-olds. Only 20% of infected pregnant women receive antiretroviral coverage to prevent transmission to newborns; coverage in the general population is lower still.
Malaria kills more residents than AIDS: 9% of the population have reported infection (148,542 reported cases in 2008), and according to the 2010 WHO report the malaria mortality rate per 100,000 Bissau-Guineans (180) is nearly three times that of AIDS (65). Among children younger than five, malaria is nine times more deadly. In 2008, fewer than half of children younger than five slept under antimalaria nets or had access to antimalarial drugs.
The WHO's estimate of life expectancy for a child born in 2008 was 49 years for a girl and 47 years for a boy. Healthy life expectancy at birth was 42 years. The probability of dying between a live birth and age 5 was 19.5%, down from 24% in 1990.
Despite falling rates in surrounding countries, cholera rates were reported in November 2012 to be on the rise, with 1,500 cases and nine deaths reported. A 2008 cholera epidemic in Guinea-Bissau affected 14,222 people and killed 225.
The 2010 maternal mortality rate per 100,000 births for Guinea-Bissau was 1,000. This compares with 804.3 in 2008 and 966 in 1990. The under-5 mortality rate, per 1,000 births, was 195, and neonatal mortality as a percentage of under-5 mortality was 24%. The number of midwives per 1,000 live births was 3; one out of eighteen pregnant women dies as a result of pregnancy. According to a 2013 UNICEF report, 50% of women in Guinea-Bissau had undergone female genital mutilation.UNICEF 2013, p. 27. In 2010, Guinea-Bissau had the 7th highest maternal mortality rate in the world.
Education
Education is compulsory from the age of 7 to 13. The enrollment of boys is higher than that of girls. In 1998, the gross primary enrollment rate was 53.5%, with a higher enrollment ratio for males (67.7%) than for females (40%). ("Guinea-Bissau". 2001 Findings on the Worst Forms of Child Labor. Bureau of International Labor Affairs, U.S. Department of Labor, 2002.)
Child labor is very common. In 2011 the literacy rate was estimated at 55.3% (68.9% male, and 42.1% female).
Guinea-Bissau has several secondary schools (general as well as technical) and a number of universities, to which an institutionally autonomous Faculty of Law as well as a Faculty of Medicine (the latter maintained by Cuba and functioning in different cities) have been added.
Culture
Image: Carnival in Bissau.
Music
The music of Guinea-Bissau is usually associated with the polyrhythmic gumbe genre, the country's primary musical export. However, civil unrest and other factors have combined over the years to keep gumbe, and other genres, out of mainstream audiences, even in generally syncretist African countries.Lobeck, Katharina (21 May 2003) Manecas Costa Paraiso di Gumbe Review. BBC. Retrieved 22 June 2013.
The calabash is the primary musical instrument of Guinea-Bissau,The Kora. Freewebs.com. Retrieved 22 June 2013. and is used in extremely swift and rhythmically complex dance music. Lyrics are almost always in Guinea-Bissau Creole, a Portuguese-based creole language, and are often humorous and topical, revolving around current events and controversies.Radio Africa: Guinea Bissau vinyl discography. Radioafrica.com.au. Retrieved 22 June 2013.
The word gumbe is sometimes used generically to refer to any music of the country, although it most specifically refers to a unique style that fuses about ten of the country's folk music traditions (gumbe.com). Tina and tinga are other popular genres, while extant folk traditions include ceremonial music used in funerals, initiations and other rituals, as well as Balanta brosca and kussundé, Mandinga djambadon, and the kundere sound of the Bissagos Islands.Music of Guinea-Bissau. Ccas11bijagos.pbworks.com. Retrieved 22 June 2013.
Cuisine
Rice is a staple in the diet of residents near the coast and millet a staple in the interior. Fruits and vegetables are commonly eaten along with cereal grains. The Portuguese encouraged peanut production. Vigna subterranea (Bambara groundnut) and Macrotyloma geocarpum (Hausa groundnut) are also grown. Black-eyed peas are also part of the diet. Palm oil is harvested.
Common dishes include soups and stews. Common ingredients include yams, sweet potato, cassava, onion, tomato and plantain. Spices, peppers and chilis are used in cooking, including Aframomum melegueta seeds (Guinea pepper).
Film
Flora Gomes is an internationally renowned film director; his most famous film is Nha Fala (My Voice).Nha Fala/My Voice. spot.pcc.edu (2002) Gomes's Mortu Nega (Death Denied) (1988)Mortu Nega. California Newsreel. Newsreel.org. Retrieved 22 June 2013. was the first fiction film and the second feature film ever made in Guinea-Bissau. (The first feature film was N’tturudu, by director Umban u’Kest in 1987.) At FESPACO 1989, Mortu Nega won the prestigious Oumarou Ganda Prize. Mortu Nega is in Creole with English subtitles. In 1992, Gomes directed Udju Azul di Yonta,Udju Azul di Yonta. California Newsreel. Newsreel.org. Retrieved 22 June 2013. which was screened in the Un Certain Regard section at the 1992 Cannes Film Festival. Gomes has also served on the boards of many Africa-centric film festivals.Flora Gomes The Two Faces of War: National Liberation in Guinea-Bissau. Watsoninstitute.org (25 October 2007). Retrieved 22 June 2013.
See also
Outline of Guinea-Bissau
Index of Guinea-Bissau-related articles
Transport in Guinea-Bissau
2010 Guinea-Bissau military unrest
References
Further reading
Abdel Malek, K., "Le processus d'accès à l'indépendance de la Guinée-Bissau", in Bulletin de l'Association des Anciens Elèves de l'Institut National de Langues et de Cultures Orientales, No. 1, April 1998, pp. 53–60.
Forrest, Joshua B., Lineages of State Fragility. Rural Civil Society in Guinea-Bissau (Ohio University Press/James Currey Ltd., 2003)
Galli, Rosemary E, Guinea Bissau: Politics, Economics and Society, (Pinter Pub Ltd, 1987)
Lobban, Jr., Richard Andrew and Mendy, Peter Karibe, Historical Dictionary of the Republic of Guinea-Bissau, third edition (Scarecrow Press, 1997)
Vigh, Henrik, Navigating Terrains of War: Youth And Soldiering in Guinea-Bissau, (Berghahn Books, 2006)
External links
Link collection related to Guinea-Bissau on bolama.net
Country Profile from BBC News
Guinea-Bissau from UCB Libraries GovPubs
Guinea-Bissau at Encyclopædia Britannica
Key Development Forecasts for Guinea-Bissau from International Futures
Government
Constitution of the Republic of Guinea-Bissau
Guinea-Bissau: Prime Minister’s fate unknown after apparent military coup – West Africa – Portuguese American Journal
Guinea-Bissau Holds First Post-Coup Election
Trade
Guinea-Bissau 2005 Summary Trade Statistics
News media
news headline links from AllAfrica.com
Health
The State of the World's Midwifery – Guinea-Bissau Country Profile
GIS information
Master Thesis about the developing Geographical Information for Guinea-Bissau
Category:Economic Community of West African States
Category:Former Portuguese colonies
Category:Least developed countries
Category:Member states of the Organisation internationale de la Francophonie
Category:Member states of the African Union
Category:Member states of the Community of Portuguese Language Countries
Category:Member states of the Organisation of Islamic Cooperation
Category:Member states of the United Nations
Category:Portuguese-speaking countries and territories
Category:Republics
Category:Muslim-majority countries
Category:States and territories established in 1974
Category:West African countries
Category:Small Island Developing States
Category:1974 establishments in Guinea-Bissau
Anti-aircraft warfare
Image: American troops mount a Swedish Bofors 40 mm anti-aircraft gun near the Algerian coastline in 1943.
Anti-aircraft warfare or counter-air defence is defined by NATO as "all measures designed to nullify or reduce the effectiveness of hostile air action."AAP-6 They include ground- and air-based weapon systems, associated sensor systems, command and control arrangements and passive measures (e.g. barrage balloons). It may be used to protect naval, ground, and air forces in any location. However, for most countries the main effort has tended to be 'homeland defence'. NATO refers to airborne air defence as counter-air and naval air defence as anti-aircraft warfare. Missile defence is an extension of air defence, as are initiatives to adapt air defence to the task of intercepting any projectile in flight.
In some countries, such as Britain and Germany during the Second World War, the Soviet Union and NATO's Allied Command Europe, ground based air defence and air defence aircraft have been under integrated command and control. However, while overall air defence may be for homeland defence including military facilities, forces in the field, wherever they are, invariably deploy their own air defence capability if there is an air threat. A surface-based air defence capability can also be deployed offensively to deny the use of airspace to an opponent.
Until the 1950s, guns firing ballistic munitions ranging from 20 mm to 150 mm were the standard weapon; guided missiles then became dominant, except at the very shortest ranges (as with close-in weapon systems, which typically use rotary autocannons or, in very modern systems, surface to air adaptations of short range air to air missiles).
Terminology
The term air defence was probably first used by Britain when Air Defence of Great Britain (ADGB) was created as a Royal Air Force command in 1925. However, arrangements in the UK were also called 'anti-aircraft', abbreviated as AA, a term that remained in general use into the 1950s. After the First World War it was sometimes prefixed by 'Light' or 'Heavy' (LAA or HAA) to classify a type of gun or unit. Nicknames for anti-aircraft guns include AA, AAA or triple-A, an abbreviation of anti-aircraft artillery; "ack-ack" (from the spelling alphabet used by the British for voice transmission of "AA");"ack-ack, adj. and n.". OED Online. September 2013. Oxford University Press. (accessed September 14, 2013). and archie (a World War I British term probably coined by Amyas Borton and believed to derive via the Royal Flying Corps from the music-hall comedian George Robey's line "Archibald, certainly not!").
NATO defines anti-aircraft warfare (AAW) as "measures taken to defend a maritime force against attacks by airborne weapons launched from aircraft, ships, submarines and land-based sites.".AAP-6 In some armies the term All-Arms Air Defence (AAAD) is used for air defence by non-specialist troops. Other terms from the late 20th century include GBAD (Ground Based AD) with related terms SHORAD (Short Range AD) and MANPADS ("Man Portable AD Systems": typically shoulder-launched missiles). Anti-aircraft missiles are variously called surface-to-air missile, abbreviated and pronounced "SAM" and Surface to Air Guided Weapon (SAGW).
Non-English terms for air defence include the German FlaK (FliegerabwehrKanone, "aircraft defence cannon", also cited as Flugabwehrkanone), whence English flak, and the Russian term Protivovozdushnaya oborona (Cyrillic: Противовозду́шная оборо́на), a literal translation of "anti-air defence", abbreviated as PVO.Bellamy pg 219 In Russian the AA systems are called zenitnye (i.e. "pointing to zenith") systems (guns, missiles etc.). In French, air defence is called DCA (Défense contre les aéronefs, "aéronef" being the generic term for all kind of airborne device (airplane, airship, balloon, missile, rocket, etc.)).le petit Larousse 2013 p20-p306
The maximum distance at which a gun or missile can engage an aircraft is an important figure, but many different definitions are used, and unless the same definition is applied, the performance of different guns or missiles cannot be compared. For AA guns only the ascending part of the trajectory can be usefully used. One term is 'ceiling': maximum ceiling is the height a projectile would reach if fired vertically. This is not practically useful in itself, as few AA guns are able to fire vertically and maximum fuse duration may be too short, but it is potentially useful as a standard for comparing different weapons.
The British adopted "effective ceiling", meaning the altitude at which a gun could deliver a series of shells against a moving target; this could be constrained by maximum fuse running time as well as the gun's capability. By the late 1930s the British definition was "that height at which a directly approaching target at 400 mph (=643.6 km/h) can be engaged for 20 seconds before the gun reaches 70 degrees elevation".Hogg WW2 pg 99–100 However, effective ceiling for heavy AA guns was affected by non-ballistic factors:
The maximum running time of the fuse, this set the maximum usable time of flight.
The capability of fire control instruments to determine target height at long range.
The precision of the cyclic rate of fire: the fuse length had to be calculated and set for where the target would be at the end of the shell's time of flight, and doing this meant knowing exactly when the round would fire.
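The geometry behind the "effective ceiling" definition can be illustrated with a rough calculation. The sketch below is a minimal model that assumes a constant average shell speed in place of real ballistics; the speed, fuse time and other figures are illustrative, not the data of any particular gun.

```python
import math

def effective_ceiling(avg_shell_speed, max_fuse_time, target_speed=178.8,
                      engage_seconds=20.0, max_elevation_deg=70.0):
    """Largest height (m) at which a directly approaching target can be engaged
    for engage_seconds before the gun reaches its maximum elevation, assuming
    the shell flies in a straight line at a constant average speed."""
    max_reach = avg_shell_speed * max_fuse_time      # slant-range limit set by the fuse
    best = 0.0
    for h in range(500, 15000, 50):                  # candidate heights in metres
        # Horizontal distance at which the sight line reaches the elevation limit.
        x_limit = h / math.tan(math.radians(max_elevation_deg))
        # Engagement must begin engage_seconds of target travel before that point.
        x_start = x_limit + target_speed * engage_seconds
        if math.hypot(h, x_start) <= max_reach:      # shell can reach the opening shot
            best = float(h)
    return best

# Illustrative figures only: 400 m/s average shell speed, 25 s fuse,
# target approaching at 400 mph (about 179 m/s).
print(effective_ceiling(avg_shell_speed=400.0, max_fuse_time=25.0))
```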
General description
The essence of air defence is to detect hostile aircraft and destroy them. The critical issue is to hit a target moving in three-dimensional space; an attack must not only match these three coordinates, but must do so at the time the target is at that position. This means that projectiles either have to be guided to hit the target, or aimed at the predicted position of the target at the time the projectile reaches it, taking into account speed and direction of both the target and the projectile.
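In its simplest form the aiming problem reduces to finding the time at which an unguided projectile, moving at a constant average speed, can meet a target flying a straight, constant-speed course. The sketch below solves that intersection as a quadratic; it is a bare-bones illustration that ignores gravity, drag and guidance, and all the numbers in the example are made up.

```python
import math

def intercept(target_pos, target_vel, gun_pos, shell_speed):
    """Return (time_of_flight, aim_point) such that a shell fired now at a
    constant speed arrives where the target will be, or None if no solution.
    Solves |target_pos + target_vel*t - gun_pos| = shell_speed * t."""
    rx, ry, rz = (t - g for t, g in zip(target_pos, gun_pos))
    vx, vy, vz = target_vel
    a = vx*vx + vy*vy + vz*vz - shell_speed**2
    b = 2.0 * (rx*vx + ry*vy + rz*vz)
    c = rx*rx + ry*ry + rz*rz
    if abs(a) < 1e-9:                                 # shell no faster than target
        times = [-c / b] if b else []
    else:
        disc = b*b - 4*a*c
        if disc < 0:
            return None                               # target cannot be caught
        times = [(-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a)]
    valid = [t for t in times if t > 0]
    if not valid:
        return None
    t = min(valid)
    aim_point = tuple(p + v*t for p, v in zip(target_pos, target_vel))
    return t, aim_point

# A bomber 4 km away at 3 km altitude flying 100 m/s, shell averaging 600 m/s.
print(intercept((4000.0, 0.0, 3000.0), (-100.0, 0.0, 0.0), (0.0, 0.0, 0.0), 600.0))
```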
Throughout the 20th century air defence was one of the fastest-evolving areas of military technology, responding to the evolution of aircraft and exploiting various enabling technologies, particularly radar, guided missiles and computing (initially electromechanical analog computing from the 1930s on, as with equipment described below). Air defence evolution covered the areas of sensors and technical fire control, weapons, and command and control. At the start of the 20th century these were either very primitive or non-existent.
Initially sensors were optical and acoustic devices developed during the First World War, and these remained in use into the 1930s,"Huge Ear Locates Planes and Tells Their Speed" Popular Mechanics, December 1930 article on French aircraft sound detector with photo but they were quickly superseded by radar, which in turn was supplemented by optronics in the 1980s.
Command and control remained primitive until the late 1930s, when Britain created an integrated systemCheckland and Holwell pg. 127 for ADGB that linked the ground-based air defence of the army's AA Command, although field-deployed air defence relied on less sophisticated arrangements. NATO later called these arrangements an "air defence ground environment", defined as "the network of ground radar sites and command and control centres within a specific theatre of operations which are used for the tactical control of air defence operations".
Rules of Engagement are critical to prevent air defences engaging friendly or neutral aircraft. Their use is assisted but not governed by IFF (identification friend or foe) electronic devices originally introduced during the Second World War. While these rules originate at the highest authority, different rules can apply to different types of air defence covering the same area at the same time. AAAD usually operates under the tightest rules.
NATO calls these rules Weapon Control Orders (WCO); they are:
weapons free: a weapon control order imposing a status whereby weapons systems may be fired at any target not positively recognised as friendly.
weapons tight: a weapon control order imposing a status whereby weapons systems may be fired only at targets recognised as hostile.
weapons hold: a weapon control order imposing a status whereby weapons systems may only be fired in self-defence or in response to a formal order.
Until the 1950s guns firing ballistic munitions were the standard weapon; guided missiles then became dominant, except at the very shortest ranges. However, the type of shell or warhead and its fuzing (and, with missiles, the guidance arrangement) were and are varied. Targets are not always easy to destroy; nonetheless, damaged aircraft may be forced to abort their mission and, even if they manage to return and land in friendly territory, may be out of action for days or permanently. Ignoring small arms and smaller machine-guns, ground-based air defence guns have varied in calibre from 20 mm to at least 150 mm.Routledge pg. 456
Ground-based air defence is deployed in several ways:
Self-defence by ground forces using their organic weapons, AAAD.
Accompanying defence, specialist aid defence elements accompanying armoured or infantry units.
Point defence around a key target, such as a bridge, critical government building or ship.
Area air defence, typically 'belts' of air defence to provide a barrier, but sometimes an umbrella covering an area. Areas can vary widely in size. They may extend along a nation's border, e.g. the Cold War MIM-23 Hawk and Nike belts that ran north–south across Germany, across a military formation's manoeuvre area, or above a city or port. In ground operations air defence areas may be used offensively by rapid redeployment across current aircraft transit routes.
Air defence has included other elements, although after the Second World War most fell into disuse:
Tethered barrage balloons to deter and threaten aircraft flying below the height of the balloons, where they are susceptible to damaging collisions with steel tethers.
Searchlights to illuminate aircraft at night for both gun-layers and optical instrument operators. During World War II searchlights became radar controlled.
Large smoke screens created by large smoke canisters on the ground to screen targets and prevent accurate weapon aiming by aircraft.
Passive air defence is defined by NATO as "Passive measures taken for the physical defence and protection of personnel, essential installations and equipment in order to minimize the effectiveness of air and/or missile attack". It remains a vital activity by ground forces and includes camouflage and concealment to avoid detection by reconnaissance and attacking aircraft. Measures such as camouflaging important buildings were common in the Second World War. During the Cold War the runways and taxiways of some airfields were painted green.
Organization
While navies are usually responsible for their own air defence, at least for ships at sea, organizational arrangements for land-based air defence vary between nations and over time.
The most extreme case was the Soviet Union, and this model may still be followed in some countries: it was a separate service, on a par with the navy or ground force. In the Soviet Union this was called Voyska PVO, and had both fighter aircraft and ground-based systems. This was divided into two arms, PVO Strany, the Strategic Air defence Service responsible for Air Defence of the Homeland, created in 1941 and becoming an independent service in 1954, and PVO SV, Air Defence of the Ground Forces. Subsequently, these became part of the air force and ground forces respectivelyBellamy pg 82, 213
At the other extreme the United States Army has an Air Defense Artillery branch that has provided ground-based air defence for both the homeland and the army in the field. Many other nations also deploy an air-defence branch in the army.
In Britain and some other armies, the single artillery branch has been responsible for both home and overseas ground-based air defence, although there was divided responsibility with the Royal Navy for air defence of the British Isles in World War I. However, during the Second World War the RAF Regiment was formed to protect airfields everywhere, and this included light air defences. In the later decades of the Cold War this included the United States Air Force's operating bases in the UK. However, all ground-based air defence was removed from Royal Air Force (RAF) jurisdiction in 2004. The British Army's Anti-Aircraft Command was disbanded in March 1955,Beckett 2008, 178. but during the 1960s and 1970s the RAF's Fighter Command operated long-range air-defence missiles to protect key areas in the UK. During World War II the Royal Marines also provided air defence units; formally part of the mobile naval base defence organisation, they were handled as an integral part of the army-commanded ground-based air defences.
The basic air defence unit is typically a battery with 2 to 12 guns or missile launchers and fire control elements. These batteries, particularly with guns, usually deploy in a small area, although batteries may be split; this is usual for some missile systems. SHORAD missile batteries often deploy across an area with individual launchers several kilometres apart. When MANPADS is operated by specialists, batteries may have several dozen teams deploying separately in small sections; self-propelled air defence guns may deploy in pairs.
Batteries are usually grouped into battalions or equivalent. In the field army a light gun or SHORAD battalion is often assigned to a manoeuvre division. Heavier guns and long-range missiles may be in air-defence brigades and come under corps or higher command. Homeland air defence may have a full military structure. For example, the UK's Anti-Aircraft Command, commanded by a full British Army general was part of ADGB. At its peak in 1941–42 it comprised three AA corps with 12 AA divisions between them.Routledge pg. 396–397
History
Earliest use
The use of balloons by the Union Army during the American Civil War compelled the Confederates to develop methods of combating them. These included the use of artillery, small arms, and saboteurs. They were unsuccessful, but internal politics led the Union's Balloon Corps to be disbanded mid-war. The Confederates experimented with balloons as well.Spring 2007 issue of the American Association of Aviation Historians Journal
The earliest known use of weapons specifically made for the anti-aircraft role occurred during the Franco-Prussian War of 1870. After the disaster at Sedan, Paris was besieged and French troops outside the city started an attempt at communication via balloon. Gustav Krupp mounted a modified 1-pounder (37mm) gun — the Ballonabwehrkanone (Balloon defence cannon) or BaK — on top of a horse-drawn carriage for the purpose of shooting down these balloons.Essential Militaria: Facts, Legends, and Curiosities About Warfare Through the Ages, Nicholas Hobbs, Atlantic Monthly Press 2004, ISBN 0-8021-1772-4
By the early 20th century balloon, or airship, guns, for land and naval use were attracting attention. Various types of ammunition were proposed, high explosive, incendiary, bullet-chains, rod bullets and shrapnel. The need for some form of tracer or smoke trail was articulated. Fuzing options were also examined, both impact and time types. Mountings were generally pedestal type, but could be on field platforms. Trials were underway in most countries in Europe but only Krupp, Erhardt, Vickers Maxim, and Schneider had published any information by 1910. Krupp's designs included adaptations of their 65 mm 9-pounder, a 75 mm 12-pounder, and even a 105 mm gun. Erhardt also had a 12-pounder, while Vickers Maxim offered a 3-pounder and Schneider a 47 mm. The French balloon gun appeared in 1910, it was an 11-pounder but mounted on a vehicle, with a total uncrewed weight of 2 tons. However, since balloons were slow moving, sights were simple. But the challenges of faster moving airplanes were recognised.Bethel pg 56–80
By 1913 only France and Germany had developed field guns suitable for engaging balloons and aircraft and addressed issues of military organization. Britain's Royal Navy would soon introduce the QF 3-inch and QF 4-inch AA guns and also had Vickers 1-pounder quick-firing "pom-poms" that could be used in various mountings.Routledge pg 3–4
The first US anti-aircraft cannon was a 1-pounder concept design by Admiral Twining in 1911 to meet the perceived threat of airships, that eventually was used as the basis for the US Navy's first operational anti-aircraft cannon: the 3"/23 caliber gun."New American Aerial Weapons" Popular Mechanics, December 1911, p. 776.
First World War
Image: 1909 vintage Krupp 9-pounder anti-aircraft gun.
Image: A Canadian anti-aircraft unit of 1918 "taking post".
Image: A French anti-aircraft motor battery (motorized AAA battery) that brought down a Zeppelin near Paris. From the journal Horseless Age, 1916.
On 30 September 1915, troops of the Serbian Army observed three enemy aircraft approaching Kragujevac. Soldiers shot at them with shotguns and machine-guns but failed to prevent them from dropping 45 bombs over the city, hitting military installations, the railway station and many other, mostly civilian, targets. During the bombing raid, Private Radoje Ljutovac fired his cannon at the enemy aircraft and successfully shot one down. It crashed in the city and both pilots died from their injuries. The cannon Ljutovac used was not designed as an anti-aircraft gun; it was a slightly modified Turkish cannon captured during the First Balkan War in 1912. This was the first occasion in military history that a military aircraft was shot down with ground-to-air fire.
The British recognised the need for anti-aircraft capability a few weeks before World War I broke out; on 8 July 1914, the New York Times reported that the British government had decided to 'dot the coasts of the British Isles with a series of towers, each armed with two quick-firing guns of special design,' while 'a complete circle of towers' was to be built around 'naval installations' and 'at other especially vulnerable points.' By December 1914 the Royal Naval Volunteer Reserve (RNVR) was manning AA guns and searchlights assembled from various sources at some nine ports. The Royal Garrison Artillery (RGA) was given responsibility for AA defence in the field, using motorised two-gun sections. The first were formally formed in November 1914. Initially they used QF 1-pounder "pom-pom" (a 37 mm version of the Maxim Gun).Routledge pg 4–5
Image: A Maxim anti-aircraft machine gun.
All armies soon deployed AA guns, often based on their smaller field pieces, notably the French 75 mm and Russian 76.2 mm, typically simply propped up on some sort of embankment to get the muzzle pointed skyward. The British Army adopted the 13-pounder, quickly producing new mountings suitable for AA use; the 13-pdr QF 6 cwt Mk III was issued in 1915. It remained in service throughout the war, but 18-pdr guns were also lined down to take the 13-pdr shell with a larger cartridge, producing the 13-pdr QF 9 cwt, and these proved much more satisfactory.Routledge pg 6 In general, however, these ad hoc solutions proved largely useless. With little experience in the role, no means of measuring target range, height or speed, and the difficulty of observing shell bursts relative to the target, gunners proved unable to get their fuse settings correct, and most rounds burst well below their targets. The exception to this rule was the guns protecting spotting balloons, in which case the altitude could be accurately measured from the length of the cable holding the balloon.
The first issue was ammunition. Before the war it was recognised that ammunition needed to explode in the air. Both high explosive (HE) and shrapnel were used, mostly the former. Airburst fuses were either igniferous (based on a burning fuse) or mechanical (clockwork). Igniferous fuses were not well suited for anti-aircraft use: the fuse length was determined by time of flight, but the burning rate of the gunpowder was affected by altitude. The British pom-poms had only contact-fused ammunition. Zeppelins, being hydrogen-filled balloons, were targets for incendiary shells, and the British introduced these with airburst fuses, both a shrapnel type (forward projection of an incendiary 'pot') and base ejection of an incendiary stream. The British also fitted tracers to their shells for use at night. Smoke shells were also available for some AA guns; their bursts were used as targets during training.The Ministry of Munitions pg 40–41
German air attacks on the British Isles increased in 1915 and the AA efforts were deemed somewhat ineffective, so a Royal Navy gunnery expert, Admiral Sir Percy Scott, was appointed to make improvements, particularly an integrated AA defence for London. The air defences were expanded with more RNVR AA guns, 75 mm and 3-inch, the pom-poms being ineffective. The naval 3-inch was also adopted by the army as the QF 3-inch 20 cwt (76 mm), for which a new field mounting was introduced in 1916. Since most attacks were at night, searchlights were soon used, and acoustic methods of detection and locating were developed. By December 1916 there were 183 AA Sections defending Britain (most with the 3-inch), 74 with the BEF in France and 10 in the Middle East.Routledge pg 8–17
AA gunnery was a difficult business. The problem was one of successfully aiming a shell to burst close to its target's future position, with various factors affecting the shell's predicted trajectory. This was called deflection gun-laying: 'off-set' angles for range and elevation were set on the gunsight and updated as the target moved. In this method, when the sights were on the target, the barrel was pointed at the target's future position. Range and height of the target determined fuse length. The difficulties increased as aircraft performance improved.
The British dealt with range measurement first, when it was realised that range was the key to producing a better fuse setting. This led to the Height/Range Finder (HRF), the first model being the Barr & Stroud UB2, a 2-metre optical coincident rangefinder mounted on a tripod. It measured the distance to the target and the elevation angle, which together gave the height of the aircraft. These were complex instruments and various other methods were also used. The HRF was soon joined by the Height/Fuse Indicator (HFI); this was marked with elevation angles and height lines overlaid with fuse length curves, so that, using the height reported by the HRF operator, the necessary fuse length could be read off.Routledge pg 14, 15
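The arithmetic behind the HRF and HFI is simple trigonometry followed by a table (or curve) look-up. The sketch below uses a hypothetical fuse table; the numbers are invented for illustration, since a real HFI encoded the ballistics of the specific gun and shell.

```python
import math

def target_height(slant_range_m, elevation_deg):
    """HRF principle: slant range and elevation angle together give height."""
    return slant_range_m * math.sin(math.radians(elevation_deg))

# Hypothetical fuse curve: (slant range in metres, fuse setting in seconds).
FUSE_TABLE = [(2000, 5.0), (4000, 11.0), (6000, 18.0), (8000, 27.0)]

def fuse_setting(slant_range_m):
    """Read a fuse length off the curve by linear interpolation, much as an
    operator would read the HFI's overlaid fuse-length curves."""
    if slant_range_m <= FUSE_TABLE[0][0]:
        return FUSE_TABLE[0][1]
    for (r0, f0), (r1, f1) in zip(FUSE_TABLE, FUSE_TABLE[1:]):
        if slant_range_m <= r1:
            return f0 + (f1 - f0) * (slant_range_m - r0) / (r1 - r0)
    return FUSE_TABLE[-1][1]

# A target at 5,000 m slant range and 37 degrees elevation.
print(target_height(5000, 37), fuse_setting(5000))
```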
However, the problem of deflection settings ('aim-off') required knowing the rate of change in the target's position. Both France and the UK introduced tachymetric devices to track targets and produce vertical and horizontal deflection angles. The French Brocq system was electrical; the operator entered the target range and had displays at the guns; it was used with their 75 mm. The British Wilson-Dalby gun director used a pair of trackers and mechanical tachymetry; the operator entered the fuse length, and deflection angles were read from the instruments.Routledge pg 14, 20The Ministry of Munitions pg 11
By the start of World War I, the 77 mm had become the standard German weapon, and came mounted on a large traverse that could be easily picked up on a wagon for movement. Krupp 75 mm guns were supplied with an optical sighting system that improved their capabilities. The German Army also adapted a revolving cannon that came to be known to Allied fliers as the "flaming onion" from the shells in flight. This gun had five barrels that quickly launched a series of 37 mm artillery shells.
As aircraft started to be used against ground targets on the battlefield, the AA guns could not be traversed quickly enough at close targets and, being relatively few, were not always in the right place (and were often unpopular with other troops), so changed positions frequently. Soon the forces were adding various machine-gun based weapons mounted on poles. These short-range weapons proved more deadly, and the "Red Baron" is believed to have been shot down by an anti-aircraft Vickers machine gun. When the war ended, it was clear that the increasing capabilities of aircraft would require better means of acquiring targets and aiming at them. Nevertheless, a pattern had been set: anti-aircraft weapons would be based around heavy weapons attacking high-altitude targets and lighter weapons for use when they came to lower altitudes.
Image: A No. 1 Mark III Predictor that was used with the QF 3.7-inch AA gun.
Image: Shooting with an anti-aircraft gun in Sweden, 1934.
Interwar years
World War I demonstrated that aircraft could be an important part of the battlefield, but in some nations it was the prospect of strategic air attack that was the main issue, presenting both a threat and an opportunity. The experience of four years of air attacks on London by Zeppelins and Gotha G.V bombers had particularly influenced the British and was one of the main drivers, if not the main one, for forming an independent air force. As the capabilities of aircraft and their engines improved it was clear that their role in future war would be even more critical as their range and weapon load grew. However, in the years immediately after World War I the prospect of another major war seemed remote, particularly in Europe where the most militarily capable nations were, and little financing was available.
Four years of war had seen the creation of a new and technically demanding branch of military activity. Air defence had made huge advances, albeit from a very low starting point. However, it was new and often lacked influential 'friends' in the competition for a share of limited defence budgets. Demobilisation meant that most AA guns were taken out of service, leaving only the most modern.
However, there were lessons to be learned, in particular by the British, who had had AA guns in action in most theatres in daylight and had used them against night attacks at home. Furthermore, they had also formed an AA Experimental Section during the war and accumulated a lot of data that was subjected to extensive analysis. As a result, they published, in 1924–5, the two-volume Textbook of Anti-Aircraft Gunnery. It included five key recommendations for HAA equipment:
Shells of improved ballistic shape with HE fillings and mechanical time fuses.
Higher rates of fire assisted by automation.
Height finding by long-base optical instruments.
Centralised control of fire on each gun position, directed by tachymetric instruments incorporating the facility to apply corrections of the moment for meteorological and wear factors.
More accurate sound-location for the direction of searchlights and to provide plots for barrage fire.
Two assumptions underpinned the British approach to HAA fire: first, that aimed fire was the primary method, enabled by predicting gun data from visually tracking the target and knowing its height; and second, that the target would maintain a steady course, speed and height. HAA was to engage targets at up to 24,000 feet. Mechanical, as opposed to igniferous, time fuses were required because the speed of powder burning varied with height, so fuse length was not a simple function of time of flight. Automated fire ensured a constant rate of fire that made it easier to predict where each shell should be individually aimed.Routledge pg 48–49
In 1925 the British adopted a new instrument developed by Vickers. It was a mechanical analogue computer, the Predictor AA No 1. Given the target height, its operators tracked the target and the predictor produced bearing, quadrant elevation and fuse setting. These were passed electrically to the guns, where they were displayed on repeater dials to the layers, who 'matched pointers' (target data and the gun's actual data) to lay the guns. This system of repeater electrical dials built on the arrangements introduced by British coast artillery in the 1880s, and coast artillery was the background of many AA officers. Similar systems were adopted in other countries; for example the later Sperry device, designated M3A3 in the US, was also used by Britain as the Predictor AA No 2. Height finders were also increasing in size: in Britain, the World War I Barr & Stroud UB 2 (7 feet optical base) was replaced by the UB 7 (9 feet optical base) and the UB 10 (18 feet optical base, only used on static AA sites). Goertz in Germany and Levallois in France produced 5 metre instruments. However, in most countries the main effort in HAA guns until the mid-1930s was improving existing ones, although various new designs were on drawing boards.Routledge pg 49–50
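The predictor workflow described above can be sketched numerically, assuming (as the instrument did) a target at a known height flying a steady course and speed. This is a much simplified model with illustrative inputs: it extrapolates the target's ground track over the shell's time of flight and returns the bearing and sight-line elevation to that future position, whereas the real predictor also converted these into a ballistic quadrant elevation and a fuse setting via internal cams.

```python
import math

def predictor_solution(height_m, bearings_deg, elevations_deg, dt, time_of_flight):
    """Two optical fixes on the target, taken dt seconds apart, give its ground
    track; extrapolating by the shell's time of flight gives the aiming data."""
    def ground_position(bearing, elevation):
        d = height_m / math.tan(math.radians(elevation))      # horizontal distance
        return (d * math.sin(math.radians(bearing)),           # east component
                d * math.cos(math.radians(bearing)))           # north component
    (x0, y0), (x1, y1) = (ground_position(b, e)
                          for b, e in zip(bearings_deg, elevations_deg))
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt                     # ground velocity
    xf, yf = x1 + vx * time_of_flight, y1 + vy * time_of_flight
    future_bearing = math.degrees(math.atan2(xf, yf)) % 360
    future_elevation = math.degrees(math.atan2(height_m, math.hypot(xf, yf)))
    return future_bearing, future_elevation

# Target at 6,000 m height; two fixes 5 s apart; assumed 20 s shell flight time.
print(predictor_solution(6000, (40.0, 41.5), (30.0, 31.0), 5.0, 20.0))
```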
From the early 1930s eight countries developed radar; these developments were sufficiently advanced by the late 1930s for development work on sound-locating acoustic devices to be generally halted, although the equipment was retained. Furthermore, in Britain the volunteer Observer Corps, formed in 1925, provided a network of observation posts to report hostile aircraft flying over Britain. Initially radar was used for airspace surveillance to detect approaching hostile aircraft. However, the German Würzburg radar was capable of providing data suitable for controlling AA guns, and the British AA No 1 Mk 1 GL radar was designed to be used on AA gun positions.Routledge pg 95–97
The Treaty of Versailles prevented Germany from having AA weapons; Krupp's designers, for example, joined Bofors in Sweden. Some World War I guns were retained and some covert AA training started in the late 1920s. Germany introduced the 8.8 cm FlaK 18 in 1933; the 36 and 37 models followed with various improvements, but ballistic performance was unchanged. In the late 1930s the 10.5 cm FlaK 38 appeared, soon followed by the 39; this was designed primarily for static sites but had a mobile mounting, and the unit had 220 V 24 kW generators. In 1938 design started on the 12.8 cm FlaK.Hogg German WW2 pg 14, 162–177
The USSR introduced a new 76 mm M1931 in the early 1930s and an 85 mm M1938 towards the end of the decade.Hogg Allied WW2 pg 127–130
Britain had successfully tested a new HAA gun, 3.6-inch, in 1918. In 1928 a 3.7-inch design became the preferred solution, but it took six years to gain funding. Production of the QF 3.7-inch (94 mm) began in 1937; this gun was used both on mobile carriages with the field army and as transportable guns on fixed mountings for static positions. At the same time the Royal Navy adopted a new 4.5-inch (114 mm) gun in a twin turret, which the army adopted in simplified single-gun mountings for static positions, mostly around ports where naval ammunition was available. However, the performance of both the 3.7-inch and 4.5-inch guns was limited by their standard fuse No 199, with a 30-second running time, although a new mechanical time fuse giving 43 seconds was nearing readiness. In 1939 a Machine Fuse Setter was introduced to eliminate manual fuse setting.Hogg Allied WW2 pg 97–107
The US ended World War I with two 3-inch AA guns, and improvements to them were developed throughout the inter-war period. In 1924 work started on a new 105 mm static-mounting AA gun, but only a few were produced by the mid-1930s because by then work had started on the 90 mm AA gun, with mobile carriages and static mountings able to engage air, sea and ground targets. The M1 version was approved in 1940. During the 1920s there was some work on a 4.7-inch gun which lapsed, but was revived in 1937, leading to a new gun in 1944.Hogg Allied WW2 pg 114–119
While HAA and its associated target acquisition and fire control was the primary focus of AA efforts, low-level close-range targets remained and by the mid-1930s were becoming an issue.
Until this time the British, at RAF insistence, continued their World War I use of machine guns and introduced twin MG mountings for AAAD. The army was forbidden from considering anything larger than .50-inch. However, in 1935 trials showed that the minimum effective round was an impact-fused 2 lb HE shell. The following year they decided to adopt the Bofors 40 mm and a twin-barrel Vickers 2-pdr (40 mm) on a modified naval mount. The air-cooled Bofors was vastly superior for land use, being much lighter than the water-cooled pom-pom, and UK production of the Bofors 40 mm was licensed. The Predictor AA No 3, as the Kerrison Predictor was officially known, was introduced with it.Hogg Allied WW2 pg 108–110
The 40 mm Bofors had become available in 1931. In the late 1920s the Swedish Navy had ordered the development of a 40 mm naval anti-aircraft gun from the Bofors company. It was light, rapid-firing and reliable, and a mobile version on a four-wheel carriage was soon developed. Known simply as the 40 mm, it was adopted by some 17 different nations just before World War II and is still in use today in some applications such as on coastguard frigates.
Rheinmetall in Germany developed an automatic 20 mm in the 1920s, and Oerlikon in Switzerland had acquired the patent to an automatic 20 mm gun designed in Germany during World War I. Germany introduced the rapid-fire 2 cm FlaK 30, and later in the decade it was redesigned by Mauser-Werke and became the 2 cm FlaK 38.Hogg German WW2 pg 144–147 Nevertheless, while the 20 mm was better than a machine gun, and its mounting on a very small trailer made it easy to move, its effectiveness was limited. Germany therefore added a 3.7 cm gun. The first, the 3.7 cm FlaK 18 developed by Rheinmetall in the early 1930s, was basically an enlarged 2 cm FlaK 30. It was introduced in 1935 and production stopped the following year. A redesigned gun, the 3.7 cm FlaK 36, entered service in 1938; it too had a two-wheel carriage.Hogg German WW2 pg 150–152 However, by the mid-1930s the Luftwaffe realised that there was still a coverage gap between the 3.7 cm and 8.8 cm guns. They started development of a 5 cm gun on a four-wheel carriage.Hogg German WW2 pg 155–156
After World War I the US Army started developing a dual-role (AA/ground) automatic 37 mm cannon, designed by John M. Browning. It was standardised in 1927 as the T9 AA cannon, but trials quickly revealed that it was worthless in the ground role. However, while the shell was somewhat light (well under 2 lbs), it had a good effective ceiling and fired 125 rounds per minute; an AA carriage was developed and it entered service in 1939. The Browning 37 mm proved prone to jamming and was eventually replaced in AA units by the Bofors 40 mm. The Bofors had attracted attention from the US Navy, but none were acquired before 1939.Hogg Allied WW2 pg 115–117 Also, in 1931 the US Army worked on a mobile anti-aircraft machine-gun mount on the back of a heavy truck, carrying four .30 caliber water-cooled machine guns and an optical director. It proved unsuccessful and was abandoned."Uncle Sam's Latest Weapons For War In the Air", December 1931, Popular Mechanics
The Soviet Union also used a 37 mm, the 37 mm M1939, which appears to have been copied from the Bofors 40 mm. A Bofors 25 mm, essentially a scaled down 40 mm, was also copied as the 25 mm M1939.Hogg Allied WW2 pg 131
During the 1930s solid-fuel rockets were under development in the Soviet Union and Britain. In Britain the interest was in anti-aircraft fire, and it quickly became clear that guidance would be required for precision. However, rockets, or 'unrotated projectiles' as they were called, could be used for anti-aircraft barrages. A 2-inch rocket using HE or wire-obstacle warheads was introduced first to deal with low-level or dive-bombing attacks on smaller targets such as airfields. The 3-inch was in development at the end of the inter-war period.Routledge pg 56
Second World War
thumb|left|366px|Rendering of a flak burst and damage in slow motion; not all fragments are visible, but hits to the aircraft and pieces of it register as red squares
Poland's AA defences were no match for the German attack, and the situation was similar in other European countries. Significant AA warfare started with the Battle of Britain in the summer of 1940. The 3.7-inch HAA guns provided the backbone of the ground-based AA defences, although initially significant numbers of 3-inch 20-cwt guns were also used. The Army's Anti-Aircraft Command, which was under command of the Air Defence UK organisation, grew to 12 AA divisions in 3 AA corps. The 40 mm Bofors entered service in increasing numbers. In addition, the RAF Regiment was formed in 1941 with responsibility for airfield air defence, eventually with the Bofors 40 mm as its main armament. Fixed AA defences, using HAA and LAA, were established by the Army in key overseas places, notably Malta, the Suez Canal and Singapore.
While the 3.7-inch was the main HAA gun in fixed defences and the only mobile HAA gun with the field army, the 4.5-inch, manned by artillery, was used in the vicinity of naval ports, making use of the naval ammunition supply. The 4.5-inch guns at Singapore had the first success in shooting down Japanese bombers. Mid-war, the 5.25-inch HAA gun started being emplaced in some permanent sites around London. This gun was also deployed in dual-role coast defence/AA positions.
thumb|upright=1.3|German 88 mm flak gun in action against Allied bombers.
Germany's high-altitude needs were originally going to be filled by a 75 mm gun from Krupp, designed in collaboration with their Swedish counterpart Bofors, but the specifications were later amended to require much higher performance. In response Krupp's engineers presented a new 88 mm design, the FlaK 36. First used in Spain during the Spanish Civil War, the gun proved to be one of the best anti-aircraft guns in the world, as well as particularly deadly against light, medium, and even early heavy tanks.
After the Dambusters raid in 1943 an entirely new system was developed that was required to knock down any low-flying aircraft with a single hit. The first attempt to produce such a system used a 50 mm gun, but this proved inaccurate and a new 55 mm gun replaced it. The system used a centralised control system including both search and targeting radar, which calculated the aim point for the guns after considering windage and ballistics, and then sent electrical commands to the guns, which used hydraulics to point themselves at high speeds. Operators simply fed the guns and selected the targets. This system, modern even by today's standards, was in late development when the war ended.
thumb|German soldier with an MG 34 anti-aircraft gun in World War II
The British had already arranged licence production of the Bofors 40 mm and introduced it into service. These guns had the power to knock down aircraft of any size, yet were light enough to be mobile and easily swung. The gun became so important to the British war effort that they even produced a movie, The Gun, to encourage workers on the assembly line to work harder. The Imperial-measurement production drawings the British had developed were supplied to the Americans, who produced their own (unlicensed) copy of the 40 mm at the start of the war, moving to licensed production in mid-1941.
thumb|left|B-24 hit by flak over Italy, 10 April 1945
Service trials demonstrated another problem, however: ranging and tracking the new high-speed targets was almost impossible. At short range, the apparent target area is relatively large, the trajectory is flat and the time of flight is short, allowing the lead to be corrected by watching the tracers. At long range, the aircraft remains in firing range for a long time, so the necessary calculations can in theory be done by slide rules; however, because small errors in distance cause large errors in shell fall height and detonation time, exact ranging is crucial.
For the ranges and speeds that the Bofors worked at, neither answer was good enough.
thumb|right|British QF 3.7 inch gun in London in 1939
The solution was automation, in the form of a mechanical computer, the Kerrison Predictor. Operators kept it pointed at the target, and the Predictor then calculated the proper aim point automatically and displayed it as a pointer mounted on the gun. The gun operators simply followed the pointer and loaded the shells. The Kerrison was fairly simple, but it pointed the way to future generations that incorporated radar, first for ranging and later for tracking. Similar predictor systems were introduced by Germany during the war, also adding radar ranging as the war progressed.
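A minimal sketch of the lead calculation such a predictor automates, for a target crossing the line of sight at constant speed, is shown below; the speeds and range are illustrative assumptions, not Kerrison design figures.
 import math
 # Hypothetical crossing target at light-AA ranges; all values are assumptions.
 target_speed = 120.0   # m/s, flying perpendicular to the line of sight
 slant_range = 1500.0   # m, range to the target at the moment of firing
 shell_speed = 880.0    # m/s, treated as constant for simplicity
 time_of_flight = slant_range / shell_speed
 lead_distance = target_speed * time_of_flight   # how far the target moves
 lead_angle = math.degrees(math.atan2(lead_distance, slant_range))
 print(f"time of flight ≈ {time_of_flight:.2f} s")
 print(f"aim point ≈ {lead_angle:.1f}° ahead of the target")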
thumb|left|US Coast Guard sailors in the South Pacific man a 20 mm anti-aircraft cannon
A plethora of anti-aircraft gun systems of smaller calibre were available to the German Wehrmacht, and among them the 1940-origin Flakvierling quadruple 20 mm antiaircraft weapon system was one of the most often-seen, serving on both land and sea. The comparable Allied smaller-calibre air-defence weapons of the American forces were also quite capable, although they receive little attention. Their needs could be met with smaller-calibre ordnance beyond the usual singly mounted M2 .50 caliber machine gun atop a tank's turret: four of the ground-used "heavy barrel" (M2HB) guns were mounted together on the American Maxson firm's M45 Quadmount weapons system (a direct answer to the Flakvierling), which was often fitted to the back of a half-track to form the Half Track, M16 GMC, Anti-Aircraft. Although of less power than Germany's 20 mm systems, the typical four or five combat batteries of an Army AAA battalion were often spread many kilometres apart, rapidly attaching to and detaching from larger ground combat units to provide welcome defence against enemy aircraft.
thumb|Indian troops manning a Bren light machine gun in an anti-aircraft mount in 1941.
AAA battalions were also used to help suppress ground targets. Their larger 90 mm M3 gun, like the German eighty-eight, proved to make an excellent anti-tank gun as well, and was widely used late in the war in this role. Also available to the Americans at the start of the war was the 120 mm M1 'stratosphere gun', the most powerful AA gun, with an impressive altitude capability, although no 120 mm M1 was ever fired at an enemy aircraft. The 90 mm and 120 mm guns would continue to be used into the 1950s.
The United States Navy had also put some thought into the problem, and came up with the 1.1"/75 (28 mm) gun to replace the inadequate .50 caliber. This weapon had the teething troubles that most new weapons have, but its issues were never sorted out, and it was replaced by the Bofors 40 mm wherever possible. The 5"/38 caliber gun turned out to be an excellent anti-aircraft weapon once the proximity fuse had been perfected.
thumb|left|One of six flak towers built during World War II in Vienna
thumb|left|A British North Sea World War II Maunsell Fort.
The Germans developed massive reinforced-concrete blockhouses, some more than six stories high, known as Hochbunker ("high bunkers") or Flaktürme (flak towers), on which they placed anti-aircraft artillery. Those in cities attacked by the Allied land forces became fortresses. Several in Berlin were among the last buildings to fall to the Soviets during the Battle of Berlin in 1945. The British built structures such as the Maunsell Forts in the North Sea, the Thames Estuary and other tidal areas, upon which they based guns. After the war most were left to rot. Some were outside territorial waters and had a second life in the 1960s as platforms for pirate radio stations.
thumb|right|200px|A B-24 bomber emerges from a cloud of flak with its no. 2 engine smoking
Some nations started rocket research before World War II, including for anti-aircraft use, and further research started during the war. The first step was unguided missile systems like the British 2-inch RP and 3-inch rockets, which were fired in large numbers from Z batteries and were also fitted to warships. The firing of one of these devices during an air raid is suspected to have caused the Bethnal Green disaster in 1943. Facing the threat of Japanese kamikaze attacks, the British and US developed surface-to-air rockets like the British Stooge and the American Lark as countermeasures, but none of them were ready by the end of the war. German missile research was the most advanced of the war, as Germany put considerable effort into the research and development of rocket systems for all purposes. Among them were several guided and unguided systems. Unguided systems included the Fliegerfaust (literally "aircraft fist"), the first MANPADS. Guided systems included several sophisticated radio-, wire-, or radar-guided missiles like the Wasserfall ("waterfall") rocket. Due to the severe war situation for Germany, all of these systems were produced only in small numbers and most of them were used only by training or trial units.
thumb|right|300px|Flak in the Balkans, 1942 (drawing by Helmuth Ellgaard).
Another aspect of anti-aircraft defence was the use of barrage balloons to act as a physical obstacle, initially to bomber aircraft over cities and later to ground-attack aircraft over the Normandy invasion fleets. The balloon, a simple blimp tethered to the ground, worked in two ways. Firstly, it and the steel cable were a danger to any aircraft that tried to fly among them. Secondly, to avoid the balloons, bombers had to fly at a higher altitude, which was more favorable for the guns. Barrage balloons were limited in application and had minimal success at bringing down aircraft, being largely immobile and passive defences.
The Allies' most advanced technologies were showcased by the anti-aircraft defence against the German V-1 cruise missiles (V stands for Vergeltungswaffe, "retaliation weapon"). The 419th and 601st Antiaircraft Gun Battalions of the US Army were first allocated to the Folkestone-Dover coast to defend London, and then moved to Belgium to become part of the "Antwerp X" project coordinated from Keerbergen. With the liberation of Antwerp, the port city immediately became the highest-priority target, and received the largest number of V-1 and V-2 missiles of any city. The smallest tactical unit of the operation was a gun battery consisting of four 90 mm guns firing shells equipped with a radio proximity fuse. Incoming targets were acquired and automatically tracked by SCR-584 radar, developed at the MIT Rad Lab. Output from the gun-laying radar was fed to the M-9 director, an electronic analog computer developed at Bell Laboratories, to calculate the lead and elevation corrections for the guns. With the help of these three technologies, close to 90% of the V-1 missiles on track to the defence zone around the port were destroyed.Cruise Missile Defence: Defending Antwerp against the V-1, Lt. Col. John A. HamiltonThe Defence of Antwerp Against the V-1 Missile, R.J. Backus, LTC, Fort Leavenworth, KS, 1971
Post-war
right|thumb|A 1970s-era Talos anti-aircraft missile, fired from a cruiser
Post-war analysis demonstrated that even with the newest anti-aircraft systems employed by both sides, the vast majority of bombers reached their targets successfully, on the order of 90%. While these figures were undesirable during the war, the advent of the nuclear bomb considerably altered the acceptability of even a single bomber reaching its target.
The developments during World War II continued for a short time into the post-war period as well. In particular the U.S. Army set up a huge air defence network around its larger cities based on radar-guided 90 mm and 120 mm guns. US efforts continued into the 1950s with the 75 mm Skysweeper system, an almost fully automated system including the radar, computers, power, and auto-loading gun on a single powered platform. The Skysweeper replaced all smaller guns then in use in the Army, notably the 40 mm Bofors. In Europe NATO's Allied Command Europe developed an integrated air defence system, NATO Air Defence Ground Environment (NADGE), that later became the NATO Integrated Air Defence System.
The introduction of the guided missile resulted in a significant shift in anti-aircraft strategy. Although Germany had been desperate to introduce anti-aircraft missile systems, none became operational during World War II. Following several years of post-war development, however, these systems began to mature into viable weapons systems. The US started an upgrade of their defences using the Nike Ajax missile, and soon the larger anti-aircraft guns disappeared. The same thing occurred in the USSR after the introduction of their SA-2 Guideline systems.
thumb|left|200px|A three-person JASDF fireteam fires a missile from a Type 91 Kai MANPAD during an exercise at Eielson Air Force Base, Alaska as part of Red Flag - Alaska.
As this process continued, the missile found itself being used for more and more of the roles formerly filled by guns. First to go were the large weapons, replaced by equally large missile systems of much higher performance. Smaller missiles soon followed, eventually becoming small enough to be mounted on armored cars and tank chassis. These started replacing, or at least supplementing, similar gun-based SPAAG systems in the 1960s, and by the 1990s had replaced almost all such systems in modern armies. Man-portable missiles, MANPADS as they are known today, were introduced in the 1960s and have supplanted or replaced even the smallest guns in most advanced armies.
In the 1982 Falklands War, the Argentine armed forces deployed the newest West European weapons, including the Oerlikon GDF-002 35 mm twin cannon and the Roland SAM. The Rapier missile system was the primary British GBAD system, used by both British artillery and the RAF Regiment, and a few brand-new FIM-92 Stingers were used by British special forces. Both sides also used the Blowpipe missile. British naval missiles used included the longer-range Sea Dart and the older Sea Slug systems, and the short-range Sea Cat and new Sea Wolf systems. Machine guns in AA mountings were used both ashore and afloat.
During the 2008 South Ossetia war air power faced off against powerful SAM systems, like the 1980s Buk-M1.
AA warfare systems
Although the firearms used by the infantry, particularly machine guns, can be used to engage low altitude air targets, on occasion with notable success, their effectiveness is generally limited and the muzzle flashes reveal infantry positions. Speed and altitude of modern jet aircraft limit target opportunities, and critical systems may be armored in aircraft designed for the ground attack role. Adaptations of the standard autocannon, originally intended for air-to-ground use, and heavier artillery systems were commonly used for most anti-aircraft gunnery, starting with standard pieces on new mountings, and evolving to specially designed guns with much higher performance prior to World War II.
The ammunition and shells fired by these weapons are usually fitted with different types of fuses (barometric, time-delay, or proximity) to explode close to the airborne target, releasing a shower of fast metal fragments. For shorter-range work, a lighter weapon with a higher rate of fire is required, to increase a hit probability on a fast airborne target. Weapons between 20 mm and 40 mm caliber have been widely used in this role. Smaller weapons, typically .50 caliber or even 8 mm rifle caliber guns have been used in the smallest mounts.
thumb|right|200px|A Soviet WW II-era armoured train with anti-aircraft gunners
Unlike the heavier guns, these smaller weapons are in widespread use due to their low cost and ability to quickly follow the target. Classic examples of autocannons and large-calibre guns are the 40 mm autocannon designed by Bofors of Sweden and the German 8.8 cm FlaK 18 and 36. Artillery weapons of this sort have for the most part been superseded by the effective surface-to-air missile systems that were introduced in the 1950s, although they were still retained by many nations. The development of surface-to-air missiles began in Nazi Germany late in World War II with missiles such as the Wasserfall, though no working system was deployed before the war's end; these represented new attempts to increase the effectiveness of anti-aircraft systems faced with the growing threat from bombers. Land-based SAMs can be deployed from fixed installations or mobile launchers, either wheeled or tracked. The tracked vehicles are usually armoured vehicles specifically designed to carry SAMs.
Larger SAMs may be deployed in fixed launchers, but can be towed and re-deployed at will. SAMs launched by individuals are known in the United States as Man-Portable Air Defence Systems (MANPADS). MANPADS of the former Soviet Union have been exported around the world and can be found in use by many armed forces. Targets for non-MANPADS SAMs will usually be acquired by air-search radar, then tracked before/while a SAM is "locked on" and then fired. Potential targets, if they are military aircraft, will be identified as friend or foe before being engaged. Developments in relatively cheap short-range missiles have begun to replace autocannons in this role.
right|thumb|Fire of anti-aircraft guns deployed in the neighbourhood of St Isaac's Cathedral during the defence of Leningrad (formerly Petrograd, now St. Petersburg) in 1941.
The interceptor aircraft (or simply interceptor) is a type of fighter aircraft designed specifically to intercept and destroy enemy aircraft, particularly bombers, usually relying on high speed and altitude capabilities. A number of jet interceptors such as the F-102 Delta Dagger, the F-106 Delta Dart, and the MiG-25 were built in the period starting after the end of World War II and ending in the late 1960s, when they became less important due to the shifting of the strategic bombing role to ICBMs. Invariably the type is differentiated from other fighter aircraft designs by higher speeds and shorter operating ranges, as well as much reduced ordnance payloads.
The radar systems use electromagnetic waves to identify the range, altitude, direction, or speed of aircraft and weather formations to provide tactical and operational warning and direction, primarily during defensive operations. In their functional roles they provide target search, threat, guidance, reconnaissance, navigation, instrumentation, and weather-reporting support to combat operations.
thumb|left|A Royal Navy Type 45 destroyer is a highly advanced anti-air ship
Future developments
Guns are being increasingly pushed into specialist roles, such as the Dutch Goalkeeper CIWS, which uses the GAU-8 Avenger 30 mm seven-barrel Gatling gun for last ditch anti-missile and anti-aircraft defence. Even this formerly front-line weapon is currently being replaced by new missile systems, such as the RIM-116 Rolling Airframe Missile, which is smaller, faster, and allows for mid-flight course correction (guidance) to ensure a hit. To bridge the gap between guns and missiles, Russia in particular produces the Kashtan CIWS, which uses both guns and missiles for final defence. Two six-barrelled 30 mm Gsh-6-30 Gatling guns and 9M311 surface-to-air missiles provide for its defensive capabilities.
Upsetting this move to all-missile systems is the advent of stealth aircraft. Long-range missiles depend on long-range detection to provide significant lead. Stealth designs cut detection ranges so much that the aircraft is often never even seen, and when it is, it is often too late for an intercept. Systems for the detection and tracking of stealthy aircraft are a major problem for anti-aircraft development.
However, as stealth technology grows, so does anti-stealth technology. Multiple-transmitter radars, such as bistatic radars, and low-frequency radars are said to have the capability to detect stealth aircraft. Advanced forms of thermographic cameras, such as those that incorporate QWIPs, would be able to optically see a stealth aircraft regardless of the aircraft's RCS. In addition, side-looking radars, high-powered optical satellites, and sky-scanning, high-aperture, high-sensitivity radars such as radio telescopes would all be able to narrow down the location of a stealth aircraft under certain parameters. The newest SAMs have a claimed ability to detect and engage stealth targets, the most notable being the S-400, which is claimed to be able to detect a target with a 0.05-square-metre RCS from 90 km away.
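The sensitivity of detection range to RCS follows from the radar range equation, in which the received echo power falls with the fourth power of range, so detection range scales with the fourth root of RCS. A rough sketch, with the baseline target and range chosen purely as assumptions rather than published radar specifications:
 # Radar range equation scaling: detection range ∝ RCS ** 0.25.
 # Baseline values are illustrative assumptions, not any specific radar's figures.
 baseline_rcs_m2 = 5.0        # assumed conventional fighter-sized target
 baseline_range_km = 250.0    # assumed detection range against that target
 def detection_range_km(rcs_m2: float) -> float:
     """Scale the assumed baseline range by the fourth root of the RCS ratio."""
     return baseline_range_km * (rcs_m2 / baseline_rcs_m2) ** 0.25
 for rcs in (5.0, 1.0, 0.1, 0.05):
     print(f"RCS {rcs:>5.2f} m² -> detection range ≈ {detection_range_km(rcs):.0f} km")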
Another potential weapon system for anti-aircraft use is the laser. Although air planners have imagined lasers in combat since the late 1960s, only the most modern laser systems are currently reaching what could be considered "experimental usefulness". In particular the Tactical High Energy Laser can be used in the anti-aircraft and anti-missile role. If current developments continue, some believe it is reasonable to suggest that lasers will play a major role in air defence starting in the next ten years.
The future of projectile-based weapons may be found in the railgun. Tests are currently underway on systems that could create as much damage as a Tomahawk missile, but at a fraction of the cost. In February 2008 the US Navy tested a railgun; it fired a shell at per hour using 10 megajoules of energy. Its expected performance is over per hour muzzle velocity, accurate enough to hit a 5-metre target from away while shooting at 10 shots per minute. It is expected to be ready in 2020 to 2025. These systems, while currently designed for static targets, would only need the ability to be retargeted to become the next generation of AA system.
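The quoted muzzle energy relates to velocity through the kinetic-energy formula E = ½mv². The sketch below assumes a projectile mass purely for illustration; it is not the mass used in the Navy test.
 import math
 muzzle_energy_j = 10e6        # 10 megajoules, as quoted for the 2008 test
 projectile_mass_kg = 3.2      # assumed mass for illustration only
 muzzle_velocity = math.sqrt(2 * muzzle_energy_j / projectile_mass_kg)
 print(f"muzzle velocity ≈ {muzzle_velocity:.0f} m/s "
       f"(≈ {muzzle_velocity * 3.6:.0f} km/h)")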
Force structures
Most Western and Commonwealth militaries integrate air defence purely with the traditional services of the military (i.e. army, navy and air force), either as a separate arm or as part of artillery. In the British Army, for instance, air defence is part of the artillery arm, while in the Pakistan Army it was split off from artillery to form a separate arm of its own in 1990. This is in contrast to some (largely communist or ex-communist) countries where not only are there provisions for air defence in the army, navy and air force but there are specific branches that deal only with the air defence of territory, for example, the Soviet PVO Strany. The USSR also had a separate strategic rocket force in charge of nuclear intercontinental ballistic missiles.
Navy
thumb|Soviet AK-630 CIWS (close-in weapon system)
thumb|right|Model of the multirole IDAS missile of the German Navy, which can be fired from submerged submarines
Smaller boats and ships typically have machine guns or fast cannons, which can often be deadly to low-flying aircraft when linked to a radar-directed fire-control system for point defence. Some vessels, such as Aegis cruisers, are as much a threat to aircraft as any land-based air defence system. In general, naval vessels should be treated with respect by aircraft, though the reverse is equally true. Carrier battle groups are especially well defended, as not only do they typically consist of many vessels with heavy air-defence armament but they are also able to launch fighter jets for combat air patrol overhead to intercept incoming airborne threats.
Nations such as Japan use their SAM-equipped vessels to create an outer air defence perimeter and radar picket in the defence of its Home islands, and the United States also uses its Aegis-equipped ships as part of its Aegis Ballistic Missile Defense System in the defence of the Continental United States.
Some modern submarines, such as the Type 212 submarines of the German Navy, are equipped with surface-to-air missile systems, since helicopters and anti-submarine warfare aircraft are significant threats. The subsurface-launched anti-air missile was first proposed by US Navy Rear Admiral Charles B. Momsen in a 1953 article."Will the New Submarines Rule the Seas?" Popular Mechanics, August 1953, pp. 74-78, see page 78.
Layered air defence
thumb|RIM-67 intercepts Firebee drone at White Sands 1980
Air defence in naval tactics, especially within a carrier group, is often built around a system of concentric layers with the aircraft carrier at the centre. The outer layer will usually be provided by the carrier's aircraft, specifically its AEW&C aircraft combined with the CAP. If an attacker is able to penetrate this layer, then the next layers would come from the surface-to-air missiles carried by the carrier's escorts; the area-defence missiles, such as the RIM-67 Standard, with a range of up to 100 nmi, and the point-defence missiles, like the RIM-162 ESSM, with a range of up to 30 nmi. Finally, virtually every modern warship will be fitted with small-calibre guns, including a CIWS, which is usually a radar-controlled Gatling gun of between 20mm and 30mm calibre capable of firing several thousand rounds per minute."What it takes to successfully attack an American Aircraft carrier" - Lexington Institute
Army
Armies typically have air defence in depth, from integral MANPADS such as the RBS 70, Stinger and Igla at smaller force levels up to army-level missile defence systems such as Angara and Patriot. Often, the high-altitude long-range missile systems force aircraft to fly at low level, where anti-aircraft guns can bring them down. As well as the small and large systems, for effective air defence there must be intermediate systems. These may be deployed at regiment-level and consist of platoons of self-propelled anti-aircraft platforms, whether they are self-propelled anti-aircraft guns (SPAAGs), integrated air-defence systems like Tunguska or all-in-one surface-to-air missile platforms like Roland or SA-8 Gecko.
On a national level the United States Army was atypical in that it was primarily responsible for the missile air defences of the Continental United States with systems such as Project Nike.
Air force
thumb|F-22A Raptor
Air defence by air forces is typically provided by fighter jets carrying air-to-air missiles. However, most air forces choose to augment airbase defence with surface-to-air missile systems as they are such valuable targets and subject to attack by enemy aircraft. In addition, countries without dedicated air defence forces often relegate these duties to the air force.
Area air defence
Area air defence, the air defence of a specific area or location (as opposed to point defence), has historically been operated by both armies (Anti-Aircraft Command in the British Army, for instance) and air forces (the United States Air Force's CIM-10 Bomarc). Area defence systems have medium to long range and can be made up of various other systems networked into an area defence system (in which case it may be made up of several short-range systems combined to effectively cover an area). An example of area defence is the defence of Saudi Arabia and Israel by MIM-104 Patriot missile batteries during the first Gulf War, where the objective was to cover populated areas.
Tactics
Mobility
thumb|The Russian Pantsir-S1 can engage targets while moving, thus achieving high survivability.
Most modern air defence systems are fairly mobile. Even the larger systems tend to be mounted on trailers and are designed to be fairly quickly broken down or set up. In the past, this was not always the case. Early missile systems were cumbersome and required much infrastructure; many could not be moved at all. With the diversification of air defence there has been much more emphasis on mobility. Most modern systems are usually either self-propelled (i.e. guns or missiles are mounted on a truck or tracked chassis) or easily towed. Even systems that consist of many components (transporter/erector/launchers, radars, command posts etc.) benefit from being mounted on a fleet of vehicles. In general, a fixed system can be identified, attacked and destroyed, whereas a mobile system can show up in places where it is not expected. Soviet systems especially concentrate on mobility, after the lessons learnt in the Vietnam War. For more information on this part of the conflict, see SA-2 Guideline.
Air defence versus air defence suppression
thumb|AGM-88 and AIM-9 on Tornado
Israel and the US Air Force, in conjunction with the members of NATO, have developed significant tactics for air defence suppression. Dedicated weapons such as anti-radiation missiles and advanced electronics intelligence and electronic countermeasures platforms seek to suppress or negate the effectiveness of an opposing air-defence system. It is an arms race; as better jamming, countermeasures and anti-radiation weapons are developed, so are better SAM systems with ECCM capabilities and the ability to shoot down anti-radiation missiles and other munitions aimed at them or the targets they are defending.
Insurgent tactics
Rocket-propelled grenades can be—and often are—used against hovering helicopters (e.g., by Somali militiamen during the Battle of Mogadishu (1993)). Firing an RPG at steep angles poses a danger to the user, because the backblast from firing reflects off the ground. In Somalia, militia members sometimes welded a steel plate in the exhaust end of an RPG's tube to deflect pressure away from the shooter when shooting up at US helicopters. RPGs are used in this role only when more effective weapons are not available.
For insurgents the most effective method of countering aircraft is to attempt to destroy them on the ground, either by trying to penetrate an airbase perimeter and destroy aircraft individually, e.g. the September 2012 Camp Bastion raid, or finding a position where aircraft can be engaged with indirect fire, such as mortars.
See also
Air supremacy
Artillery
Gun laying
List of anti-aircraft weapons
Self-propelled anti-aircraft weapon
The bomber will always get through
Notes
References
AAP-6 NATO Glossary of Terms. 2009.
Bellamy, Chris. 1986. "The Red God of War – Soviet Artillery and Rocket Forces". London: Brassey's
Bethel, Colonel HA. 1911. "Modern Artillery in the Field". London: Macmillan and Co Ltd
Checkland, Peter and Holwell, Sue. 1998. "Information, Systems and Information Systems – making sense of the field". Chichester: Wiley
Gander, T 2014. "The Bofors gun", 3rd edn. Barnsley, South Yorkshire: Pen & Sword Military.
Hogg, Ian V. 1998. "Allied Artillery of World War Two". Marlborough: The Crowood Press ISBN 1-86126-165-9
Hogg, Ian V. 1998. "Allied Artillery of World War One". Marlborough: The Crowood Press ISBN 1-86126-104-7
Hogg, Ian V. 1997. "German Artillery of World War Two". London: Greenhill Books ISBN 1-85367-261-0
Routledge, Brigadier NW. 1994. "History of the Royal Regiment of Artillery – Anti-Aircraft Artillery 1914–55". London: Brassey's ISBN 1-85753-099-3
Handbook for the Ordnance, Q.F. 3.7-inch Mark II on Mounting, 3.7-inch A.A. Mark II – Land Service. 1940. London: War Office
History of the Ministry of Munitions. 1922. Volume X The Supply of Munitions, Part VI Anti-Aircraft Supplies. Reprinted by Naval & Military Press Ltd and Imperial War Museum.
Flavia Foradini: "I bunker di Vienna", Abitare 2/2006, Milan
Flavia Foradini, Edoardo Conte: "I templi incompiuti di Hitler", catalogue of the exhibition of the same name, Milan, Spazio Guicciardini, 17.2–13.3.2009
External links
1914 1918 war in Alsace - The Battle of Linge 1915 - The 63rd Anti Aircraft Regiment in 14 18 - The 96th poste semi-fixed in the Vosges
Archie to SAM: A Short Operational History of Ground-Based Air Defense by Kenneth P. Werrell (book available for download)
Japanese Anti-aircraft land/vessel doctrines in 1943–44
2nd/3rd Australian Light Anti-Aircraft Regiment
Category:Military aviation
Category:Warfare by type
Solar energy
thumb|right|300px|The source of our solar power: The Sun
Solar energy is radiant light and heat from the Sun that is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, solar thermal energy, solar architecture, molten salt power plants and artificial photosynthesis.
It is an important source of renewable energy and its technologies are broadly characterized as either passive solar or active solar depending on how they capture and distribute solar energy or convert it into solar power. Active solar techniques include the use of photovoltaic systems, concentrated solar power and solar water heating to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light-dispersing properties, and designing spaces that naturally circulate air.
The large magnitude of solar energy available makes it a highly appealing source of electricity. The United Nations Development Programme in its 2000 World Energy Assessment found that the annual potential of solar energy was 1,575–49,837 exajoules (EJ). This is several times larger than the total world energy consumption, which was 559.8 EJ in 2012.
In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries’ energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".
Potential
The Earth receives 174,000 terawatts (TW) of incoming solar radiation (insolation) at the upper atmosphere.Smil (1991), p. 240 Approximately 30% is reflected back to space while the rest is absorbed by clouds, oceans and land masses. The spectrum of solar light at the Earth's surface is mostly spread across the visible and near-infrared ranges with a small part in the near-ultraviolet. Most of the world's population live in areas with insolation levels of 150-300 watts/m², or 3.5-7.0 kWh/m² per day.
Solar radiation is absorbed by the Earth's land surface, oceans – which cover about 71% of the globe – and atmosphere. Warm air containing evaporated water from the oceans rises, causing atmospheric circulation or convection. When the air reaches a high altitude, where the temperature is low, water vapor condenses into clouds, which rain onto the Earth's surface, completing the water cycle. The latent heat of water condensation amplifies convection, producing atmospheric phenomena such as wind, cyclones and anti-cyclones. Sunlight absorbed by the oceans and land masses keeps the surface at an average temperature of 14 °C. By photosynthesis, green plants convert solar energy into chemically stored energy, which produces food, wood and the biomass from which fossil fuels are derived.
The total solar energy absorbed by Earth's atmosphere, oceans and land masses is approximately 3,850,000 exajoules (EJ) per year. In 2002, the solar energy reaching the Earth in one hour was more than the world used in that entire year.http://www.nature.com/nature/journal/v443/n7107/full/443019a.html Photosynthesis captures approximately 3,000 EJ per year in biomass. The amount of solar energy reaching the surface of the planet is so vast that in one year it is about twice as much as will ever be obtained from all of the Earth's non-renewable resources of coal, oil, natural gas, and mined uranium combined.
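These figures can be cross-checked with simple arithmetic. The sketch below assumes the 174,000 TW top-of-atmosphere figure and the roughly 30% reflection quoted above, together with a 2012 world primary energy consumption of 559.8 EJ.
 SECONDS_PER_HOUR = 3600
 SECONDS_PER_YEAR = 365.25 * 24 * 3600
 incoming_tw = 174_000             # top-of-atmosphere insolation (TW)
 absorbed_tw = incoming_tw * 0.70  # roughly 30% is reflected back to space
 world_use_ej_2012 = 559.8         # annual world primary energy use (EJ)
 ej = lambda watts, seconds: watts * seconds / 1e18   # joules -> exajoules
 print(f"absorbed per year ≈ {ej(absorbed_tw * 1e12, SECONDS_PER_YEAR):,.0f} EJ")
 print(f"incoming in one hour ≈ {ej(incoming_tw * 1e12, SECONDS_PER_HOUR):.0f} EJ "
       f"(vs. ≈ {world_use_ej_2012} EJ used worldwide in all of 2012)")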
Yearly solar fluxes & human consumption (1)
Solar: 3,850,000Smil (2006), p. 12
Wind: 2,250
Biomass potential: ~200
Primary energy use (2): 539
Electricity (2): ~67
(1) Energy given in exajoules (EJ) = 10^18 J = 278 TWh
(2) Consumption as of year 2010
The potential solar energy that could be used by humans differs from the amount of solar energy present near the surface of the planet because factors such as geography, time variation, cloud cover, and the land available to humans limit the amount of solar energy that we can acquire.
Geography affects solar energy potential because areas that are closer to the equator receive a greater amount of solar radiation. However, the use of photovoltaics that can follow the position of the Sun can significantly increase the solar energy potential in areas that are farther from the equator. Time variation affects the potential of solar energy because during the night there is little solar radiation on the surface of the Earth for solar panels to absorb. This limits the amount of energy that solar panels can absorb in one day. Cloud cover can affect the potential of solar panels because clouds block incoming light from the Sun and reduce the light available for solar cells.
In addition, land availability has a large effect on the available solar energy because solar panels can only be set up on land that is otherwise unused and suitable for solar panels. Roofs have been found to be a suitable place for solar cells, as many people have discovered that they can collect energy directly from their homes this way. Other areas that are suitable for solar cells are lands that are not being used for businesses where solar plants can be established.
Solar technologies are characterized as either passive or active depending on the way they capture, convert and distribute sunlight and enable solar energy to be harnessed at different levels around the world, mostly depending on distance from the equator. Although solar energy refers primarily to the use of solar radiation for practical ends, all renewable energies, other than Geothermal power and Tidal power, derive their energy either directly or indirectly from the Sun.
Active solar techniques use photovoltaics, concentrated solar power, solar thermal collectors, pumps, and fans to convert sunlight into useful outputs. Passive solar techniques include selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and referencing the position of a building to the Sun. Active solar technologies increase the supply of energy and are considered supply side technologies, while passive solar technologies reduce the need for alternate resources and are generally considered demand side technologies.
In 2000, the United Nations Development Programme, UN Department of Economic and Social Affairs, and World Energy Council published an estimate of the potential solar energy that could be used by humans each year that took into account factors such as insolation, cloud cover, and the land that is usable by humans. The estimate found that solar energy has a global potential of 1,575–49,837 EJ per year (see table below).
Annual solar energy potential by region (exajoules), minimum–maximum:
North America: 181.1 – 7,410
Latin America and Caribbean: 112.6 – 3,385
Western Europe: 25.1 – 914
Central and Eastern Europe: 4.5 – 154
Former Soviet Union: 199.3 – 8,655
Middle East and North Africa: 412.4 – 11,060
Sub-Saharan Africa: 371.9 – 9,528
Pacific Asia: 41.0 – 994
South Asia: 38.8 – 1,339
Centrally planned Asia: 115.5 – 4,135
Pacific OECD: 72.6 – 2,263
Note:
Total global annual solar energy potential amounts to 1,575 EJ (minimum) to 49,837 EJ (maximum)
Data reflects assumptions of annual clear sky irradiance, annual average sky clearance, and available land area. All figures given in Exajoules.
Quantitative relation of global solar potential vs. the world's primary energy consumption:
Ratio of potential vs. current consumption (402 EJ) as of year: 3.9 (minimum) to 124 (maximum)
Ratio of potential vs. projected consumption by 2050 (590–1,050 EJ): 1.5–2.7 (minimum) to 47–84 (maximum)
Ratio of potential vs. projected consumption by 2100 (880–1,900 EJ): 0.8–1.8 (minimum) to 26–57 (maximum)
Source: United Nations Development Programme – World Energy Assessment (2000)
Thermal energy
Solar thermal technologies can be used for water heating, space heating, space cooling and process heat generation.
Early commercial adaptation
thumb|upright=0.80|1917 Patent drawing of Shuman's solar collector
In 1897, Frank Shuman, a U.S. inventor, engineer and solar energy pioneer, built a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which has a lower boiling point than water and which were fitted internally with black pipes that in turn powered a steam engine. In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys, developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912.
Shuman built the world’s first solar thermal power station in Maadi, Egypt, between 1912 and 1913. His plant used parabolic troughs to power a engine that pumped more than of water per minute from the Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman’s vision and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy. In 1916 Shuman was quoted in the media advocating solar energy's utilization, saying:
Water heating
thumb|upright|Solar water heaters facing the Sun to maximize gain
Solar hot water systems use sunlight to heat water. In low geographical latitudes (below 40 degrees), solar heating systems can provide 60 to 70% of domestic hot water use at temperatures up to 60 °C. The most common types of solar water heaters are evacuated tube collectors (44%) and glazed flat plate collectors (34%), generally used for domestic hot water, and unglazed plastic collectors (21%), used mainly to heat swimming pools.
As of 2007, the total installed capacity of solar hot water systems was approximately 154 gigawatts-thermal (GWth). China is the world leader in their deployment with 70 GWth installed as of 2006 and a long-term goal of 210 GWth by 2020. Israel and Cyprus are the per capita leaders in the use of solar hot water systems with over 90% of homes using them. In the United States, Canada, and Australia, heating swimming pools is the dominant application of solar hot water with an installed capacity of 18 GWth as of 2005.
Heating, cooling and ventilation
In the United States, heating, ventilation and air conditioning (HVAC) systems account for 30% (4.65 EJ/yr) of the energy used in commercial buildings and nearly 50% (10.1 EJ/yr) of the energy used in residential buildings. Solar heating, cooling and ventilation technologies can be used to offset a portion of this energy.
thumb|left|MIT's Solar House #1, built in 1939 in the U.S., used seasonal thermal energy storage for year-round heating.
Thermal mass is any material that can be used to store heat—heat from the Sun in the case of solar energy. Common thermal mass materials include stone, cement and water. Historically they have been used in arid climates or warm temperate regions to keep buildings cool by absorbing solar energy during the day and radiating stored heat to the cooler atmosphere at night. However, they can be used in cold temperate areas to maintain warmth as well. The size and placement of thermal mass depend on several factors such as climate, daylighting and shading conditions. When properly incorporated, thermal mass maintains space temperatures in a comfortable range and reduces the need for auxiliary heating and cooling equipment.Mazria (1979), pp. 29–35
A solar chimney (or thermal chimney, in this context) is a passive solar ventilation system composed of a vertical shaft connecting the interior and exterior of a building. As the chimney warms, the air inside is heated causing an updraft that pulls air through the building. Performance can be improved by using glazing and thermal mass materials in a way that mimics greenhouses.
Deciduous trees and plants have been promoted as a means of controlling solar heating and cooling. When planted on the southern side of a building in the northern hemisphere or the northern side in the southern hemisphere, their leaves provide shade during the summer, while the bare limbs allow light to pass during the winter.Mazria (1979), p. 255 Since bare, leafless trees shade 1/3 to 1/2 of incident solar radiation, there is a balance between the benefits of summer shading and the corresponding loss of winter heating.Balcomb (1992), p. 56 In climates with significant heating loads, deciduous trees should not be planted on the Equator-facing side of a building because they will interfere with winter solar availability. They can, however, be used on the east and west sides to provide a degree of summer shading without appreciably affecting winter solar gain.Balcomb (1992), p. 57
Cooking
thumb|Parabolic dish produces steam for cooking, in Auroville, India
Solar cookers use sunlight for cooking, drying and pasteurization. They can be grouped into three broad categories: box cookers, panel cookers and reflector cookers.Anderson and Palkovic (1994), p. xi The simplest solar cooker is the box cooker first built by Horace de Saussure in 1767.Butti and Perlin (1981), pp. 54–59 A basic box cooker consists of an insulated container with a transparent lid. It can be used effectively with partially overcast skies and will typically reach temperatures of .Anderson and Palkovic (1994), p. xii Panel cookers use a reflective panel to direct sunlight onto an insulated container and reach temperatures comparable to box cookers. Reflector cookers use various concentrating geometries (dish, trough, Fresnel mirrors) to focus light on a cooking container. These cookers reach temperatures of and above but require direct light to function properly and must be repositioned to track the Sun.Anderson and Palkovic (1994), p. xiii
Process heat
Solar concentrating technologies such as parabolic dish, trough and Scheffler reflectors can provide process heat for commercial and industrial applications. The first commercial system was the Solar Total Energy Project (STEP) in Shenandoah, Georgia, USA, where a field of 114 parabolic dishes provided 50% of the process heating, air conditioning and electrical requirements for a clothing factory. This grid-connected cogeneration system provided 400 kW of electricity plus thermal energy in the form of 401 kW steam and 468 kW chilled water, and had a one-hour peak load thermal storage.
Evaporation ponds are shallow pools that concentrate dissolved solids through evaporation. The use of evaporation ponds to obtain salt from seawater is one of the oldest applications of solar energy. Modern uses include concentrating brine solutions used in leach mining and removing dissolved solids from waste streams.Bartlett (1998), pp. 393–4
Clothes lines, clotheshorses, and clothes racks dry clothes through evaporation by wind and sunlight without consuming electricity or gas. In some states of the United States legislation protects the "right to dry" clothes.
Unglazed transpired collectors (UTC) are perforated sun-facing walls used for preheating ventilation air. UTCs can raise the incoming air temperature up to and deliver outlet temperatures of . The short payback period of transpired collectors (3 to 12 years) makes them a more cost-effective alternative than glazed collection systems. As of 2003, over 80 systems with a combined collector area of had been installed worldwide, including an collector in Costa Rica used for drying coffee beans and a collector in Coimbatore, India, used for drying marigolds.
Water treatment
thumb|Solar water disinfection in Indonesia
Solar distillation can be used to make saline or brackish water potable. The first recorded instance of this was by 16th-century Arab alchemists.Tiwari (2003), pp. 368–371 A large-scale solar distillation project was first constructed in 1872 in the Chilean mining town of Las Salinas.Daniels (1964), p. 6 The plant, which had a solar collection area of , could produce up to per day and operated for 40 years. Individual still designs include single-slope, double-slope (or greenhouse type), vertical, conical, inverted absorber, multi-wick, and multiple-effect. These stills can operate in passive, active, or hybrid modes. Double-slope stills are the most economical for decentralized domestic purposes, while active multiple-effect units are more suitable for large-scale applications.
Solar water disinfection (SODIS) involves exposing water-filled plastic polyethylene terephthalate (PET) bottles to sunlight for several hours. Exposure times vary depending on weather and climate from a minimum of six hours to two days during fully overcast conditions. It is recommended by the World Health Organization as a viable method for household water treatment and safe storage. Over two million people in developing countries use this method for their daily drinking water.
Solar energy may be used in a water stabilization pond to treat waste water without chemicals or electricity. A further environmental advantage is that algae grow in such ponds and consume carbon dioxide in photosynthesis, although algae may produce toxic chemicals that make the water unusable.
Molten salt technology
Molten salt can be employed as a thermal energy storage method to retain thermal energy collected by a solar tower or solar trough of a concentrated solar power plant, so that it can be used to generate electricity in bad weather or at night. The method was demonstrated in the Solar Two project from 1995–1999. The system is predicted to have an annual storage efficiency of 99%, a reference to the share of stored heat that is retained until it is turned into electricity, as compared with converting the heat into electricity directly.Molten salt energy storage system - A feasibility study. Jones, B. G.; Roy, R. P.; Bohl, R. W. (1977) - Smithsonian/NASA ADS Physics Abstract Service. Abstract accessed December 2007 The molten salt mixtures vary. The most widely used mixture contains sodium nitrate, potassium nitrate and calcium nitrate. It is non-flammable and non-toxic, and has already been used in the chemical and metals industries as a heat-transport fluid, so experience with such systems exists in non-solar applications.
The salt is kept liquid, above its melting point, in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector, where the focused sunlight heats it to a much higher temperature, and it is then sent to a hot storage tank. This tank is so well insulated that the thermal energy can be usefully stored for up to a week.Ehrlich, Robert, 2013, Renewable Energy: A First Course, CRC Press, Chap. 13.1.22 Thermal storage p. 375 ISBN 978-1439861158
When electricity is needed, the hot salt is pumped to a conventional steam generator to produce superheated steam for a turbine/generator, as used in any conventional coal, oil or nuclear power plant. In this design, a single large, well-insulated tank of salt can store enough heat to drive a 100-megawatt turbine for about four hours.
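The storage described above can be sized with a simple sensible-heat calculation. The sketch below estimates the salt mass and tank dimensions needed to run a 100 MW turbine for four hours; the salt properties, temperature swing and turbine efficiency are assumed values for the illustration, not design data for any particular plant.

```python
# Illustrative sizing of a molten-salt storage tank for a 100 MW turbine
# running four hours on stored heat.  All material and plant parameters are
# assumed values for the sketch.

import math

def salt_tank_size(power_mw=100.0, hours=4.0, turbine_eff=0.40,
                   cp_salt=1500.0, delta_t_k=250.0, rho_salt=1800.0,
                   tank_height_m=12.0):
    electrical_j = power_mw * 1e6 * hours * 3600.0
    thermal_j = electrical_j / turbine_eff          # heat needed from storage
    salt_mass_kg = thermal_j / (cp_salt * delta_t_k)
    volume_m3 = salt_mass_kg / rho_salt
    diameter_m = 2.0 * math.sqrt(volume_m3 / (math.pi * tank_height_m))
    return salt_mass_kg, volume_m3, diameter_m

if __name__ == "__main__":
    mass, vol, dia = salt_tank_size()
    print(f"Salt mass ~{mass/1e6:.1f} kt, volume ~{vol:,.0f} m^3, "
          f"tank diameter ~{dia:.0f} m at 12 m height")
```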
Several parabolic trough power plants in Spain and the solar power tower developer SolarReserve use this thermal energy storage concept.Parabolic Trough Thermal Energy Storage Technology. Parabolic Trough Solar Power Network. April 4, 2007. Accessed December 2007 The Solana Generating Station in the U.S. has six hours of storage by molten salt.
Electricity production
Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP). CSP systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. PV converts light into electric current using the photoelectric effect.
Solar power is anticipated to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16 and 11 percent to the global overall consumption, respectively.
Commercial CSP plants were first developed in the 1980s. The SEGS CSP installation in the Mojave Desert of California, built in stages from 1985 to an eventual capacity of 354 MW, is the largest solar power plant in the world. Other large CSP plants include the 150 MW Solnova Solar Power Station and the 100 MW Andasol solar power station, both in Spain. The 250 MW Agua Caliente Solar Project in the United States and the 221 MW Charanka Solar Park in India are the world's largest photovoltaic plants. Solar projects exceeding 1 GW are being developed, but most of the deployed photovoltaics are in small rooftop arrays of less than 5 kW, which are connected to the grid using net metering and/or a feed-in tariff. In 2013 solar generated less than 1% of the world's total grid electricity.Historical Data Workbook (2013 calendar year)
Photovoltaics
In the last two decades, photovoltaics (PV), also known as solar PV, has evolved from a pure niche market of small-scale applications into a mainstream electricity source. A solar cell is a device that converts light directly into electricity using the photoelectric effect. The first solar cell was constructed by Charles Fritts in the 1880s.Perlin (1999), p. 147 In 1931 a German engineer, Dr Bruno Lange, developed a photo cell using silver selenide in place of copper oxide. Although Fritts's prototype selenium cells converted less than 1% of incident light into electricity, both Ernst Werner von Siemens and James Clerk Maxwell recognized the importance of this discovery.Perlin (1999), pp. 18–20 Following the work of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the crystalline silicon solar cell in 1954.Perlin (1999), p. 29 These early solar cells cost 286 USD/watt and reached efficiencies of 4.5–6%.Perlin (1999), pp. 29–30, 38 By 2012 commercially available efficiencies exceeded 20%, and the maximum efficiency of research photovoltaics was in excess of 40%.
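Panel output scales with area, irradiance and cell efficiency, so a rough yield estimate is straightforward. The sketch below uses assumed values for a single panel and a site-dependent number of equivalent full-sun hours; none of the figures come from the sources cited above.

```python
# Simple estimate of the power and energy delivered by a PV panel
# (illustrative values only).

def pv_power_w(area_m2=1.6, irradiance_w_m2=1000.0, cell_efficiency=0.20):
    """DC output of a panel under the given irradiance."""
    return area_m2 * irradiance_w_m2 * cell_efficiency

def annual_energy_kwh(panel_power_w, full_sun_hours_per_year=1500.0):
    """Annual yield assuming a site-dependent number of full-sun hours."""
    return panel_power_w * full_sun_hours_per_year / 1000.0

if __name__ == "__main__":
    p = pv_power_w()
    print(f"Panel output at full sun: {p:.0f} W")
    print(f"Approximate annual yield: {annual_energy_kwh(p):,.0f} kWh")
```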
Concentrated solar power
Concentrating Solar Power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. The concentrated heat is then used as a heat source for a conventional power plant. A wide range of concentrating technologies exists; the most developed are the parabolic trough, the concentrating linear Fresnel reflector, the Stirling dish and the solar power tower. Various techniques are used to track the Sun and focus light. In all of these systems a working fluid is heated by the concentrated sunlight, and is then used for power generation or energy storage.Martin and Goswami (2005), p. 45
Architecture and urban planning
thumb|Darmstadt University of Technology, Germany, won the 2007 Solar Decathlon in Washington, D.C., with this passive house designed for a hot and humid subtropical climate.
Sunlight has influenced building design since the beginning of architectural history.Schittich (2003), p. 14 Advanced solar architecture and urban planning methods were first employed by the Greeks and Chinese, who oriented their buildings toward the south to provide light and warmth.Butti and Perlin (1981), pp. 4, 159
The common features of passive solar architecture are orientation relative to the Sun, compact proportion (a low surface area to volume ratio), selective shading (overhangs) and thermal mass. When these features are tailored to the local climate and environment they can produce well-lit spaces that stay in a comfortable temperature range. Socrates' Megaron House is a classic example of passive solar design. The most recent approaches to solar design use computer modeling tying together solar lighting, heating and ventilation systems in an integrated solar design package.Balcomb (1992) Active solar equipment such as pumps, fans and switchable windows can complement passive design and improve system performance.
Urban heat islands (UHI) are metropolitan areas with higher temperatures than those of the surrounding environment. The higher temperatures result from increased absorption of solar energy by urban materials such as asphalt and concrete, which have lower albedos and higher heat capacities than those in the natural environment. A straightforward method of counteracting the UHI effect is to paint buildings and roads white and to plant trees in the area. Using these methods, a hypothetical "cool communities" program in Los Angeles has projected that urban temperatures could be reduced by approximately 3 °C at an estimated cost of US$1 billion, giving estimated total annual benefits of US$530 million from reduced air-conditioning costs and healthcare savings.
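The effect of surface reflectivity on heat gain can be illustrated with a simple albedo calculation. The sketch below compares the solar energy absorbed annually by a dark roof and a white "cool" roof; the albedo values and annual insolation are assumptions chosen for the example.

```python
# Illustrative comparison of solar heat absorbed by a dark roof versus a
# white ("cool") roof.  Albedo values and insolation are assumed.

def absorbed_kwh_per_m2(albedo, annual_insolation_kwh_m2=1800.0):
    """Annual solar energy absorbed per square metre of roof surface."""
    return (1.0 - albedo) * annual_insolation_kwh_m2

if __name__ == "__main__":
    dark = absorbed_kwh_per_m2(albedo=0.10)   # dark asphalt or bitumen
    cool = absorbed_kwh_per_m2(albedo=0.70)   # white reflective coating
    print(f"Dark roof: ~{dark:.0f} kWh/m^2/yr, cool roof: ~{cool:.0f} kWh/m^2/yr "
          f"({dark - cool:.0f} kWh/m^2/yr less heat gain)")
```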
Agriculture and horticulture
thumb|Greenhouses like these in the Westland municipality of the Netherlands grow vegetables, fruits and flowers.
Agriculture and horticulture seek to optimize the capture of solar energy in order to maximize the productivity of plants. Techniques such as timed planting cycles, tailored row orientation, staggered heights between rows and the mixing of plant varieties can improve crop yields.Kaul (2005), pp. 169–174 While sunlight is generally considered a plentiful resource, the exceptions highlight the importance of solar energy to agriculture. During the short growing seasons of the Little Ice Age, French and English farmers employed fruit walls to maximize the collection of solar energy. These walls acted as thermal masses and accelerated ripening by keeping plants warm. Early fruit walls were built perpendicular to the ground and facing south, but over time sloping walls were developed to make better use of sunlight. In 1699, Nicolas Fatio de Duillier even suggested using a tracking mechanism which could pivot to follow the Sun.Butti and Perlin (1981), pp. 42–46 Applications of solar energy in agriculture aside from growing crops include pumping water, drying crops, brooding chicks and drying chicken manure.Leon (2006), p. 62Bénard (1981), p. 347 More recently the technology has been embraced by vintners, who use the energy generated by solar panels to power grape presses.
Greenhouses convert solar light to heat, enabling year-round production and the growth (in enclosed environments) of specialty crops and other plants not naturally suited to the local climate. Primitive greenhouses were first used during Roman times to produce cucumbers year-round for the Roman emperor Tiberius.Butti and Perlin (1981), p. 19 The first modern greenhouses were built in Europe in the 16th century to keep exotic plants brought back from explorations abroad.Butti and Perlin (1981), p. 41 Greenhouses remain an important part of horticulture today, and plastic transparent materials have also been used to similar effect in polytunnels and row covers.
Transport
Development of a solar-powered car has been an engineering goal since the 1980s. The World Solar Challenge is a biennial solar-powered car race in which teams from universities and enterprises compete across central Australia from Darwin to Adelaide. The winner's average speed improved markedly between the first race in 1987 and the 2007 event.
The North American Solar Challenge and the planned South African Solar Challenge are comparable competitions that reflect an international interest in the engineering and development of solar powered vehicles.
Some vehicles use solar panels for auxiliary power, such as for air conditioning, to keep the interior cool, thus reducing fuel consumption.http://www.systaic.com/press/press-release/systaic-ag-demand-for-car-solar-roofs-skyrockets.html
In 1975, the first practical solar boat was constructed in England.Electrical Review Vol. 201, No. 7, 12 August 1977 By 1995, passenger boats incorporating PV panels began appearing and are now used extensively. In 1996, Kenichi Horie made the first solar-powered crossing of the Pacific Ocean, and the Sun21 catamaran made the first solar-powered crossing of the Atlantic Ocean in the winter of 2006–2007. There were plans to circumnavigate the globe in 2010.
In 1974, the unmanned AstroFlight Sunrise airplane made the first solar flight. On 29 April 1979, the Solar Riser made the first flight in a solar-powered, fully controlled, man-carrying flying machine. In 1980, the Gossamer Penguin made the first piloted flights powered solely by photovoltaics. This was quickly followed by the Solar Challenger, which crossed the English Channel in July 1981. In 1990 Eric Scott Raymond flew from California to North Carolina in 21 hops using solar power.http://www.evworld.com/article.cfm?storyid=709 Developments then turned back to unmanned aerial vehicles (UAV) with the Pathfinder (1997) and subsequent designs, culminating in the Helios, which set the altitude record for a non-rocket-propelled aircraft in 2001. The Zephyr, developed by BAE Systems, is the latest in a line of record-breaking solar aircraft, making a 54-hour flight in 2007; month-long flights were envisioned by 2010. As of 2016, Solar Impulse, an electric aircraft, was circumnavigating the globe. It is a single-seat plane powered by solar cells and capable of taking off under its own power. The design allows the aircraft to remain airborne for several days.
A solar balloon is a black balloon that is filled with ordinary air. As sunlight shines on the balloon, the air inside is heated and expands causing an upward buoyancy force, much like an artificially heated hot air balloon. Some solar balloons are large enough for human flight, but usage is generally limited to the toy market as the surface-area to payload-weight ratio is relatively high.
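The lift available to a solar balloon follows directly from the density difference between the sun-warmed air inside and the ambient air. The sketch below applies the ideal-gas approximation with assumed temperatures; it also illustrates why very large volumes are needed to lift a person.

```python
# Rough buoyancy estimate for a solar balloon (illustrative assumptions).

def solar_balloon_lift_kg(volume_m3, ambient_temp_k=288.0,
                          internal_temp_k=308.0, ambient_density=1.225):
    """Net lift in kilograms from sun-warmed air, ideal-gas approximation."""
    hot_density = ambient_density * ambient_temp_k / internal_temp_k
    return volume_m3 * (ambient_density - hot_density)

if __name__ == "__main__":
    for volume in (10.0, 100.0, 1500.0):
        print(f"{volume:>7.0f} m^3 balloon lifts "
              f"~{solar_balloon_lift_kg(volume):.1f} kg")
```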
Fuel production
thumb|Concentrated solar panels are getting a power boost. Pacific Northwest National Laboratory (PNNL) will be testing a new concentrated solar power system, one that can help natural gas power plants reduce their fuel usage by up to 20 percent.
Solar chemical processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from a fossil fuel source and can also convert solar energy into storable and transportable fuels. Solar-induced chemical reactions can be divided into thermochemical and photochemical processes.Bolton (1977), p. 1 A variety of fuels can be produced by artificial photosynthesis.Wasielewski M. R. Photoinduced electron transfer in supramolecular systems for artificial photosynthesis. Chem. Rev. 1992; 92: 435-461. The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular oxygen.Hammarstrom L. and Hammes-Schiffer S. Artificial Photosynthesis and Solar Fuels. Accounts of Chemical Research 2009; 42 (12): 1859-1860. Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050: the splitting of seawater would provide hydrogen to be run through adjacent fuel-cell electric power plants, with the pure water by-product going directly into the municipal water system.Gray H. B. Powering the planet with solar fuel. Nature Chemistry 2009; 1: 7. Another vision involves all human structures covering the earth's surface (i.e., roads, vehicles and buildings) doing photosynthesis more efficiently than plants.
Hydrogen production technologies have been a significant area of solar chemical research since the 1970s. Aside from electrolysis driven by photovoltaic or photochemical cells, several thermochemical processes have also been explored. One such route uses concentrators to split water into oxygen and hydrogen at high temperatures.Agrafiotis (2005), p. 409 Another approach uses the heat from solar concentrators to drive the steam reforming of natural gas, thereby increasing the overall hydrogen yield compared with conventional reforming methods.Zedtwitz (2006), p. 1333 Thermochemical cycles characterized by the decomposition and regeneration of reactants present another avenue for hydrogen production. The Solzinc process under development at the Weizmann Institute of Science uses a 1 MW solar furnace to decompose zinc oxide (ZnO) at very high temperatures. This initial reaction produces pure zinc, which can subsequently be reacted with water to produce hydrogen.
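As a rough indication of what solar-driven electrolysis can deliver, the sketch below converts the daily output of a PV array into a hydrogen yield using an assumed electrolyser specific energy. Both the array size and the electrolyser figure are illustrative assumptions, not values from the sources cited above.

```python
# Illustrative estimate of hydrogen output from solar-driven electrolysis.
# The electrolyser specific energy is an assumed typical value.

def hydrogen_kg_per_day(pv_array_kw=100.0, full_sun_hours=5.0,
                        electrolyser_kwh_per_kg=55.0):
    daily_kwh = pv_array_kw * full_sun_hours     # electricity generated per day
    return daily_kwh / electrolyser_kwh_per_kg   # hydrogen produced, kg

if __name__ == "__main__":
    print(f"~{hydrogen_kg_per_day():.1f} kg of H2 per day from a 100 kW array")
```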
Energy storage methods
thumb|Thermal energy storage. The Andasol CSP plant uses tanks of molten salt to store solar energy.
Thermal mass systems can store solar energy in the form of heat at domestically useful temperatures for daily or interseasonal durations. Thermal storage systems generally use readily available materials with high specific heat capacities such as water, earth and stone. Well-designed systems can lower peak demand, shift time-of-use to off-peak hours and reduce overall heating and cooling requirements.Balcomb(1992), p. 6
Phase change materials such as paraffin wax and Glauber's salt are another thermal storage medium. These materials are inexpensive, readily available, and can deliver domestically useful temperatures. The "Dover House" (in Dover, Massachusetts) was the first to use a Glauber's salt heating system, in 1948.Butti and Perlin (1981), pp. 212–214 Solar energy can also be stored at high temperatures using molten salts. Salts are an effective storage medium because they are low-cost, have a high specific heat capacity, and can deliver heat at temperatures compatible with conventional power systems. The Solar Two project used this method of energy storage, allowing it to store thermal energy in its 68 m³ storage tank with an annual storage efficiency of about 99%.
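The advantage of phase change materials over sensible storage at a fixed delivery temperature can be shown with a simple comparison. The sketch below contrasts water heated over a modest temperature swing with an equal mass of Glauber's salt melting at its transition point; the material properties are approximate textbook values used only for the illustration.

```python
# Illustrative comparison of sensible storage in water with latent storage in
# a phase-change material such as Glauber's salt (approximate properties).

def water_sensible_kj(mass_kg, delta_t_k):
    return mass_kg * 4.186 * delta_t_k    # cp of water ~4.186 kJ/(kg·K)

def glauber_latent_kj(mass_kg):
    return mass_kg * 251.0                # latent heat ~251 kJ/kg near 32 °C

if __name__ == "__main__":
    mass = 100.0  # kg of storage medium
    print(f"Water, 100 kg over a 10 K swing: {water_sensible_kj(mass, 10):.0f} kJ")
    print(f"Glauber's salt, 100 kg melting:  {glauber_latent_kj(mass):.0f} kJ")
```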
Off-grid PV systems have traditionally used rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid, while standard grid electricity can be used to meet shortfalls. Net metering programs give household systems a credit for any electricity they deliver to the grid. This is handled by 'rolling back' the meter whenever the home produces more electricity than it consumes. If the net electricity use is below zero, the utility then rolls over the kilowatt hour credit to the next month. Other approaches involve the use of two meters, to measure electricity consumed vs. electricity produced. This is less common due to the increased installation cost of the second meter. Most standard meters accurately measure in both directions, making a second meter unnecessary.
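A minimal sketch of month-to-month net-metering accounting is shown below. Actual billing rules vary widely by utility, so the rollover logic and the sample monthly figures are purely illustrative.

```python
# Minimal sketch of how a net-metering credit might be rolled over month to
# month; billing rules vary widely by utility, so this is purely illustrative.

def settle_month(consumed_kwh, produced_kwh, carried_credit_kwh=0.0):
    """Return (billable_kwh, new_credit_kwh) after netting production."""
    net = consumed_kwh - produced_kwh - carried_credit_kwh
    if net >= 0:
        return net, 0.0            # customer pays for any shortfall
    return 0.0, -net               # surplus rolls over as a credit

if __name__ == "__main__":
    credit = 0.0
    for month, (used, made) in enumerate([(600, 750), (650, 500), (700, 900)], 1):
        bill, credit = settle_month(used, made, credit)
        print(f"Month {month}: billed {bill:.0f} kWh, credit carried {credit:.0f} kWh")
```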
Pumped-storage hydroelectricity stores energy by pumping water from a lower elevation reservoir to a higher elevation one when surplus energy is available. The energy is recovered when demand is high by releasing the water back down through the machinery, with the pump becoming a hydroelectric power generator.
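The energy held in a pumped-storage reservoir is simply the gravitational potential energy of the raised water, reduced by the generating efficiency. The sketch below evaluates this for an assumed reservoir volume, head and efficiency.

```python
# Illustrative energy content of a pumped-storage reservoir (assumed values).

def pumped_storage_mwh(volume_m3=1.0e6, head_m=300.0, generation_eff=0.9):
    rho, g = 1000.0, 9.81                         # water density, gravity
    potential_j = rho * g * head_m * volume_m3    # gravitational potential energy
    return potential_j * generation_eff / 3.6e9   # joules -> MWh

if __name__ == "__main__":
    print(f"Recoverable electricity: ~{pumped_storage_mwh():.0f} MWh")
```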
Development, deployment and economics
thumb|Participants in a workshop on sustainable development inspect solar panels on top of a building at the Monterrey Institute of Technology and Higher Education campus in Mexico City.
Beginning with the surge in coal use which accompanied the Industrial Revolution, energy consumption has steadily transitioned from wood and biomass to fossil fuels. The early development of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum.Butti and Perlin (1981), pp. 63, 77, 101
The 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world and brought renewed attention to developing solar technologies.Butti and Perlin (1981), p. 249Yergin (1991), pp. 634, 653-673 Deployment strategies focused on incentive programs such as the Federal Photovoltaic Utilization Program in the U.S. and the Sunshine Program in Japan. Other efforts included the formation of research facilities in the U.S. (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer Institute for Solar Energy Systems ISE).
Commercial solar water heaters began appearing in the United States in the 1890s.Butti and Perlin (1981), p. 117 These systems saw increasing use until the 1920s but were gradually replaced by cheaper and more reliable heating fuels.Butti and Perlin (1981), p. 139 As with photovoltaics, solar water heating attracted renewed attention as a result of the oil crises in the 1970s but interest subsided in the 1980s due to falling petroleum prices. Development in the solar water heating sector progressed steadily throughout the 1990s and annual growth rates have averaged 20% since 1999. Although generally underestimated, solar water heating and cooling is by far the most widely deployed solar technology with an estimated capacity of 154 GW as of 2007.
The International Energy Agency has said that solar energy can make considerable contributions to solving some of the most urgent problems the world now faces:
The development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries’ energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared.
In 2011, a report by the International Energy Agency found that solar energy technologies such as photovoltaics, solar hot water and concentrated solar power could provide a third of the world’s energy by 2060 if politicians commit to limiting climate change. The energy from the sun could play a key role in de-carbonizing the global economy alongside improvements in energy efficiency and imposing costs on greenhouse gas emitters. "The strength of solar is the incredible variety and flexibility of applications, from small scale to big scale".
ISO standards
The International Organization for Standardization has established several standards relating to solar energy equipment. For example, ISO 9050 relates to glass in building while ISO 10217 relates to the materials used in solar water heaters.
See also
Airmass
Artificial photosynthesis
Community solar farm
Copper in renewable energy
Desertec
Global dimming
Greasestock
Green electricity
Heliostat
List of conservation topics
List of renewable energy organizations
List of solar energy topics
Photovoltaic system
Renewable heat
Soil solarization
Solar Decathlon
Solar easement
Solar energy use in rural Africa
Solar updraft tower
Solar power satellite
Solar tracker
SolarEdge
Timeline of solar cells
Notes
References
External links
Solar Energy Back in the Day - slideshow by Life magazine
U.S. Solar Farm Map (1 MW or Higher)
Online Resources Database on Solar in Developing Countries
Online resources and news from the nonprofit American Solar Energy Society
Category:Energy conversion
Category:Alternative energy