Bioenergetic systems
Bioenergetic systems are metabolic processes that relate to the flow of energy in living organisms. Those processes convert energy into adenosine triphosphate (ATP), which is the form suitable for muscular activity. There are two main forms of synthesis of ATP: aerobic, which uses oxygen from the bloodstream, and anaerobic, which does not. Bioenergetics is the field of biology that studies bioenergetic systems.
Overview
The process that converts the chemical energy of food into ATP (which can release energy) is not dependent on oxygen availability. During exercise, the supply and demand of oxygen available to muscle cells is affected by duration and intensity and by the individual's cardiorespiratory fitness level. It is also affected by the type of activity; for instance, during isometric activity the contracted muscles restrict blood flow, leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation. Three systems can be selectively recruited, depending on the amount of oxygen available, as part of the cellular respiration process to generate ATP for the muscles. They are the ATP–CP (phosphagen) system, the anaerobic (glycolytic) system, and the aerobic system.
Adenosine triphosphate
ATP is the only usable form of chemical energy for musculoskeletal activity. It is stored in most cells, particularly in muscle cells. Other forms of chemical energy, such as those available from oxygen and food, must be transformed into ATP before they can be utilized by the muscle cells.
Coupled reactions
Since energy is released when ATP is broken down, energy is required to rebuild or resynthesize it. The building blocks of ATP synthesis are the by-products of its breakdown: adenosine diphosphate (ADP) and inorganic phosphate (Pi). The energy for ATP resynthesis comes from three different series of chemical reactions that take place within the body. Two of the three depend upon the food eaten, whereas the other depends upon a chemical compound called phosphocreatine. The energy released from any of these three series of reactions is utilized in reactions that resynthesize ATP. The separate reactions are functionally linked in such a way that the energy released by one is used by the other.
Three processes can synthesize ATP:
ATP–CP system (phosphagen system) – At maximum intensity, this system is used for up to 10–15 seconds. The ATP–CP system neither uses oxygen nor produces lactic acid if oxygen is unavailable and is thus called alactic anaerobic. This is the primary system behind very short, powerful movements like a golf swing, a 100 m sprint or powerlifting.
Anaerobic system – This system predominates in supplying energy for intense exercise lasting less than two minutes. It is also known as the glycolytic system. An example of an activity of the intensity and duration that this system works under would be a 400 m sprint.
Aerobic system – This is the long-duration energy system. After five minutes of exercise, the O2 system is dominant. In a 1 km run, this system is already providing approximately half the energy; in a marathon run it provides 98% or more. Around mile 20 of a marathon, runners typically "hit the wall", having depleted their glycogen reserves; they may then attain a "second wind", which is supported by entirely aerobic metabolism, primarily of free fatty acids.
Aerobic and anaerobic systems usually work concurrently. When describing activity, it is not a question of which energy system is working, but which predominates.
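The duration figures quoted above can be summarized in a short sketch. This is only an illustration: the cut-offs (roughly 15 seconds and two minutes) are taken from the figures in this section, the function name is arbitrary, and in a real effort all three systems contribute simultaneously, with one merely predominating.

```python
# Minimal sketch (illustrative thresholds only): which energy system predominates
# for a continuous, all-out effort of a given duration, using the rough durations
# given above (ATP-CP ~0-15 s, glycolytic ~15 s to 2 min, aerobic beyond that).

def predominant_system(duration_s: float) -> str:
    """Return the energy system that predominates for an all-out effort."""
    if duration_s <= 15:
        return "ATP-CP (phosphagen) system"
    if duration_s <= 120:
        return "anaerobic (glycolytic) system"
    return "aerobic (oxidative) system"

for seconds in (10, 50, 300, 7200):   # sprint, 400 m, several-minute run, marathon-scale
    print(f"{seconds:>5} s -> {predominant_system(seconds)}")
```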
Anaerobic and aerobic metabolism
The term metabolism refers to the various series of chemical reactions that take place within the body. Aerobic refers to the presence of oxygen, whereas anaerobic refers to a series of chemical reactions that does not require oxygen. The ATP-CP series and the lactic acid series are anaerobic, whereas the oxygen series is aerobic.
Anaerobic metabolism
ATP–CP: the phosphagen system
Creatine phosphate (CP), like ATP, is stored in muscle cells. When it is broken down, a considerable amount of energy is released. The energy released is coupled to the energy requirement necessary for the resynthesis of ATP.
The total muscular stores of both ATP and CP are small. Thus, the amount of energy obtainable through this system is limited. The phosphagen stored in the working muscles is typically exhausted in seconds of vigorous activity. However, the usefulness of the ATP-CP system lies in the rapid availability of energy rather than quantity. This is important with respect to the kinds of physical activities that humans are capable of performing.
The phosphagen system (ATP-PCr) occurs in the cytosol (the gel-like fluid of the sarcoplasm) of skeletal muscle, and in the cytosolic compartment of the cytoplasm of cardiac and smooth muscle cells.
During muscle contraction:
H2O + ATP → H+ + ADP + Pi (Mg2+ assisted, utilization of ATP for muscle contraction by ATPase)
H+ + ADP + CP → ATP + Creatine (Mg2+ assisted, catalyzed by creatine kinase, ATP is used again in the above reaction for continued muscle contraction)
2 ADP → ATP + AMP (catalyzed by adenylate kinase/myokinase when CP is depleted, ATP is again used for muscle contraction)
Muscle at rest:
ATP + Creatine → H+ + ADP + CP (Mg2+ assisted, catalyzed by creatine kinase)
ADP + Pi → ATP (during anaerobic glycolysis and oxidative phosphorylation)
When the phosphagen system has been depleted of phosphocreatine (creatine phosphate), the resulting AMP produced from the adenylate kinase (myokinase) reaction is primarily regulated by the purine nucleotide cycle.
Anaerobic glycolysis
This system is known as anaerobic glycolysis. "Glycolysis" refers to the breakdown of sugar. In this system, the breakdown of sugar supplies the necessary energy from which ATP is manufactured. When sugar is metabolized anaerobically, it is only partially broken down and one of the byproducts is lactic acid. This process releases enough energy to drive the resynthesis of ATP.
When H+ ions accumulate in the muscles causing the blood pH level to reach low levels, temporary muscle fatigue results. Another limitation of the lactic acid system that relates to its anaerobic quality is that only a few moles of ATP can be resynthesized from the breakdown of sugar. This system cannot be relied on for extended periods of time.
The lactic acid system, like the ATP-CP system, is important primarily because it provides a rapid supply of ATP energy. For example, exercises that are performed at maximum rates for between 1 and 3 minutes depend heavily upon the lactic acid system. In activities such as running 1500 meters or a mile, the lactic acid system is used predominantly for the "kick" at the end of the race.
Aerobic metabolism
Aerobic glycolysis
Glycolysis – The first stage is known as glycolysis, which produces 2 ATP molecules, 2 reduced molecules of nicotinamide adenine dinucleotide (NADH) and 2 pyruvate molecules that move on to the next stage – the Krebs cycle. Glycolysis takes place in the cytoplasm of normal body cells, or the sarcoplasm of muscle cells.
The Krebs cycle – This is the second stage, and the products of this stage of the aerobic system are a net production of one ATP, one carbon dioxide molecule, three reduced NAD+ molecules, and one reduced flavin adenine dinucleotide (FAD) molecule. (The molecules of NAD+ and FAD mentioned here are electron carriers, and if they are reduced, they have had one or two H+ ions and two electrons added to them.) These quantities are per turn of the Krebs cycle. The Krebs cycle turns twice for each six-carbon molecule of glucose that passes through the aerobic system – as two three-carbon pyruvate molecules enter the Krebs cycle. Before pyruvate enters the Krebs cycle it must be converted to acetyl coenzyme A. During this link reaction, for each molecule of pyruvate converted to acetyl coenzyme A, a NAD+ is also reduced. This stage of the aerobic system takes place in the matrix of the cells' mitochondria.
Oxidative phosphorylation – The last stage of the aerobic system produces the largest yield of ATP – a total of 34 ATP molecules. It is called oxidative phosphorylation because oxygen is the final acceptor of electrons and hydrogen ions (hence oxidative) and an extra phosphate is added to ADP to form ATP (hence phosphorylation).
This stage of the aerobic system occurs on the cristae (infoldings of the membrane of the mitochondria). The reaction of each NADH in this electron transport chain provides enough energy for 3 molecules of ATP, while reaction of FADH2 yields 2 molecules of ATP. This means that 10 total NADH molecules allow the regeneration of 30 ATP, and 2 FADH2 molecules allow for 4 ATP molecules to be regenerated (in total 34 ATP from oxidative phosphorylation, plus 4 from the previous two stages, producing a total of 38 ATP in the aerobic system). NADH and FADH2 are oxidized to allow the NAD+ and FAD to be reused in the aerobic system, while electrons and hydrogen ions are accepted by oxygen to produce water, a harmless byproduct.
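The arithmetic behind the classical 38-ATP figure can be checked with a short tally. This is a minimal sketch assuming the textbook yields used above (3 ATP per NADH, 2 ATP per FADH2); modern accounting with roughly 2.5 and 1.5 ATP instead gives the 30–32 ATP figure quoted later for aerobic glycolysis.

```python
# A worked tally of the classical "38 ATP per glucose" figure used above.
# Assumes the textbook yields of 3 ATP per NADH and 2 ATP per FADH2.

NADH_PER_GLUCOSE  = 10   # 2 from glycolysis + 2 from the link reaction + 6 from the Krebs cycle
FADH2_PER_GLUCOSE = 2    # from the Krebs cycle
SUBSTRATE_LEVEL   = 4    # 2 ATP from glycolysis + 2 ATP from the Krebs cycle

atp_oxphos = NADH_PER_GLUCOSE * 3 + FADH2_PER_GLUCOSE * 2   # 30 + 4 = 34
total_atp  = atp_oxphos + SUBSTRATE_LEVEL                   # 34 + 4 = 38
print(atp_oxphos, total_atp)                                # 34 38
```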
Fatty acid oxidation
Triglycerides stored in adipose tissue and in other tissues, such as muscle and liver, release fatty acids and glycerol in a process known as lipolysis. Fatty acids are slower than glucose to convert into acetyl-CoA, as they must first go through beta oxidation. It takes about 10 minutes for fatty acids to produce ATP at a sufficient rate. Fatty acids are the primary fuel source at rest and in low to moderate intensity exercise. Though slower than glucose, their yield is much higher. One molecule of glucose produces a net of 30–32 ATP through aerobic glycolysis, whereas a fatty acid can produce a net of approximately 100 ATP through beta oxidation, depending on the type of fatty acid. For example, palmitic acid can produce a net of 106 ATP.
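For comparison, the roughly 106 ATP quoted for palmitic acid can be reproduced with a tally similar to the one above. This sketch assumes the commonly used modern yields (about 2.5 ATP per NADH, 1.5 per FADH2, and about 10 ATP per acetyl-CoA fully oxidized) and an activation cost of 2 ATP equivalents; the exact figure depends on which yields are assumed.

```python
# A rough tally of the ~106 ATP quoted for palmitic acid (C16).
# Figures are approximate and depend on the assumed P/O ratios.

carbons = 16
beta_ox_rounds = carbons // 2 - 1        # 7 rounds of beta oxidation
acetyl_coa     = carbons // 2            # 8 acetyl-CoA produced

atp = (acetyl_coa * 10                   # citric acid cycle + oxidative phosphorylation
       + beta_ox_rounds * 1.5            # FADH2 from each round of beta oxidation
       + beta_ox_rounds * 2.5            # NADH from each round of beta oxidation
       - 2)                              # activation of the fatty acid costs ~2 ATP equivalents
print(atp)                               # 106.0
```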
Amino acid degradation
Normally, amino acids do not provide the bulk of fuel substrates. However, in times of glycolytic or ATP crisis, amino acids can convert into pyruvate, acetyl-CoA, and citric acid cycle intermediates. This is useful during strenuous exercise or starvation as it provides faster ATP than fatty acids; however, it comes at the expense of risking protein catabolism (such as the breakdown of muscle tissue) to maintain the free amino acid pool.
Purine nucleotide cycle
The purine nucleotide cycle is used in times of glycolytic or ATP crisis, such as strenuous exercise or starvation. It produces fumarate, a citric acid cycle intermediate, which enters the mitochondrion through the malate-aspartate shuttle, and from there produces ATP by oxidative phosphorylation.
Ketolysis
During starvation or while consuming a low-carb/ketogenic diet, the liver produces ketones. Ketones are needed because fatty acids cannot pass the blood-brain barrier, and in these states blood glucose levels are low and glycogen reserves are depleted. Ketones also convert to acetyl-CoA faster than fatty acids do. After the ketones are converted to acetyl-CoA in a process known as ketolysis, the acetyl-CoA enters the citric acid cycle to produce ATP by oxidative phosphorylation.
The longer a person's glycogen reserves have been depleted, the higher the blood concentration of ketones, typically due to starvation or a low-carb diet (βHB 3–5 mM). Prolonged high-intensity aerobic exercise, such as running 20 miles, where individuals "hit the wall", can create post-exercise ketosis; however, the level of ketones produced is lower (βHB 0.3–2 mM).
Ethanol metabolism
Ethanol (alcohol) is converted first into acetaldehyde and then into acetate, consuming NAD+ in each step. The acetate is then converted into acetyl-CoA. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains balanced enough for the acetyl-CoA to be used by the Krebs cycle for oxidative phosphorylation. However, even moderate amounts of alcohol (1–2 drinks) result in more NADH than NAD+, which inhibits oxidative phosphorylation.
When the NADH/NAD+ ratio is disrupted (far more NADH than NAD+), this is called pseudohypoxia. The Krebs cycle needs NAD+ as well as oxygen, for oxidative phosphorylation. Without sufficient NAD+, the impaired aerobic metabolism mimics hypoxia (insufficient oxygen), resulting in excessive use of anaerobic glycolysis and a disrupted pyruvate/lactate ratio (low pyruvate, high lactate). The conversion of pyruvate into lactate produces NAD+, but only enough to maintain anaerobic glycolysis. In chronic excessive alcohol consumption (alcoholism), the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase.
See also
Hitting the wall (muscle fatigue due to glycogen depletion)
Second wind (increased ATP synthesis primarily from free fatty acids)
References
Further reading
Exercise Physiology for Health, Fitness and Performance. Sharon Plowman and Denise Smith. Lippincott Williams & Wilkins; Third edition (2010).
Ch. 38. Hormonal Regulation of Energy Metabolism. Berne and Levy Physiology, 6th ed (2008)
The effects of increasing exercise intensity on muscle fuel utilisation in humans. Van Loon et al. Journal of Physiology (2001)
(OTEP) Open Textbook of Exercise Physiology. Edited by Brian R. MacIntosh (2023)
ATP metabolism
Strychnine
Strychnine is a highly toxic, colorless, bitter, crystalline alkaloid used as a pesticide, particularly for killing small vertebrates such as birds and rodents. Strychnine, when inhaled, swallowed, or absorbed through the eyes or mouth, causes poisoning which results in muscular convulsions and eventually death through asphyxia. While it is no longer used medicinally, it was used historically in small doses to strengthen muscle contractions, for example as a heart and bowel stimulant and as a performance-enhancing drug. The most common source is the seeds of the Strychnos nux-vomica tree.
Biosynthesis
Strychnine is a terpene indole alkaloid belonging to the Strychnos family of Corynanthe alkaloids, and it is derived from tryptamine and secologanin. The biosynthesis of strychnine was solved in 2022. The enzyme, strictosidine synthase, catalyzes the condensation of tryptamine and secologanin, followed by a Pictet-Spengler reaction to form strictosidine. Many steps have been inferred by isolation of intermediates from Strychnos nux-vomica. The next step is hydrolysis of the acetal, which opens the ring by elimination of glucose (O-Glu) and provides a reactive aldehyde. The nascent aldehyde is then attacked by a secondary amine to afford geissoschizine, a common intermediate of many related compounds in the Strychnos family.
A reverse Pictet-Spengler reaction cleaves the C2–C3 bond while subsequently forming the C3–C7 bond via a 1,2-alkyl migration; oxidation by a cytochrome P450 enzyme to a spiro-oxindole, nucleophilic attack from the enol at C16, and elimination of oxygen then form the C2–C16 bond to provide dehydropreakuammicine. Hydrolysis of the methyl ester and decarboxylation lead to norfluorocurarine. Stereospecific reduction of the endocyclic double bond by NADPH and hydroxylation provide the Wieland-Gumlich aldehyde, which was first isolated by Heimberger and Scott in 1973, although previously synthesized by Wieland and Gumlich in 1932. To elongate the appendage by two carbons, acetyl-CoA is added to the aldehyde in an aldol reaction to afford prestrychnine. Strychnine is then formed by a facile addition of the amine to the carboxylic acid or its activated CoA thioester, followed by ring closure via displacement of an activated alcohol.
Chemical synthesis
As early researchers noted, the strychnine molecular structure, with its specific array of rings, stereocenters, and nitrogen functional groups, is a complex synthetic target, and has stimulated interest for that reason and for interest in the structure–activity relationships underlying its pharmacologic activities. An early synthetic chemist targeting strychnine, Robert Burns Woodward, quoted the chemist who determined its structure through chemical decomposition and related physical studies as saying that "for its molecular size it is the most complex organic substance known" (attributed to Sir Robert Robinson).
The first total synthesis of strychnine was reported by the research group of R. B. Woodward in 1954, and is considered a classic in this field. The Woodward account published in 1954 was very brief (3 pages), but was followed by a 42-page report in 1963. The molecule has since received continuing wide attention in the years since for the challenges to synthetic organic strategy and tactics presented by its complexity; its synthesis has been targeted and its stereocontrolled preparation independently achieved by more than a dozen research groups since the first success.
Mechanism of action
Strychnine is a neurotoxin which acts as an antagonist of glycine and acetylcholine receptors. It primarily affects the motor nerve fibers in the spinal cord which control muscle contraction. An impulse is triggered at one end of a nerve cell by the binding of neurotransmitters to the receptors. In the presence of an inhibitory neurotransmitter, such as glycine, a greater quantity of excitatory neurotransmitters must bind to receptors before an action potential is generated. Glycine acts primarily as an agonist of the glycine receptor, which is a ligand-gated chloride channel in neurons located in the spinal cord and in the brain. This chloride channel allows the negatively charged chloride ions into the neuron, causing a hyperpolarization which pushes the membrane potential further from threshold. Strychnine is an antagonist of glycine; it binds noncovalently to the same receptor, preventing the inhibitory effects of glycine on the postsynaptic neuron. Therefore, action potentials are triggered with lower levels of excitatory neurotransmitters. When the inhibitory signals are prevented, the motor neurons are more easily activated and the victim has spastic muscle contractions, resulting in death by asphyxiation. Strychnine binds the Aplysia californica acetylcholine binding protein (a homolog of nicotinic receptors) with high affinity but low specificity, and does so in multiple conformations.
Toxicity
In high doses, strychnine is very toxic to humans (minimum lethal oral dose in adults is 30–120 mg) and many other animals (oral LD50 = 16 mg/kg in rats, 2 mg/kg in mice), and poisoning by inhalation, swallowing, or absorption through eyes or mouth can be fatal. S. nux-vomica seeds are generally effective as a poison only when they are crushed or chewed before swallowing because the pericarp is quite hard and indigestible; poisoning symptoms may therefore not appear if the seeds are ingested whole.
Animal toxicity
Strychnine poisoning in animals usually occurs from ingestion of baits designed for use against gophers, rats, squirrels, moles, chipmunks and coyotes. Strychnine is also used as a rodenticide, but is not specific to such unwanted pests and may kill other small animals. In the United States, most baits containing strychnine have been replaced with zinc phosphide baits since 1990. In the European Union, rodenticides with strychnine have been forbidden since 2006. Some animals are immune to strychnine; usually these have evolved resistance to poisonous strychnos alkaloids in the fruit they eat, such as fruit bats. The drugstore beetle has a symbiotic gut yeast that allows it to digest pure strychnine.
Strychnine toxicity in rats is dependent on sex. It is more toxic to females than to males when administered via subcutaneous injection or intraperitoneal injection. Differences are due to higher rates of metabolism by male rat liver microsomes. Dogs and cats are more susceptible among domestic animals, pigs are believed to be as susceptible as dogs, and horses are able to tolerate relatively large amounts of strychnine. Birds affected by strychnine poisoning exhibit wing droop, salivation, tremors, muscle tenseness, and convulsions. Death occurs as a result of respiratory arrest. The clinical signs of strychnine poisoning relate to its effects on the central nervous system. The first clinical signs of poisoning include nervousness, restlessness, twitching of the muscles, and stiffness of the neck. As the poisoning progresses, the muscular twitching becomes more pronounced and convulsions suddenly appear in all the skeletal muscles. The limbs are extended and the neck is curved to opisthotonus. The pupils are widely dilated. As death approaches, the convulsions follow one another with increased rapidity, severity, and duration. Death results from asphyxia due to prolonged paralysis of the respiratory muscles. Following the ingestion of strychnine, symptoms of poisoning usually appear within 15 to 60 minutes.
Human toxicity
After injection, inhalation, or ingestion, the first symptoms to appear are generalized muscle spasms. They appear very quickly after inhalation or injection – within as few as five minutes – and take somewhat longer to manifest after ingestion, typically approximately 15 minutes. With a very high dose, the onset of respiratory failure and brain death can occur in 15 to 30 minutes. If a lower dose is ingested, other symptoms begin to develop, including seizures, cramping, stiffness, hypervigilance, and agitation. Seizures caused by strychnine poisoning can start as early as 15 minutes after exposure and last 12–24 hours. They are often triggered by sights, sounds, or touch and can cause other adverse symptoms, including hyperthermia, rhabdomyolysis, myoglobinuric kidney failure, metabolic acidosis, and respiratory acidosis. During seizures, mydriasis (abnormal dilation), exophthalmos (protrusion of the eyes), and nystagmus (involuntary eye movements) may occur.
As strychnine poisoning progresses, tachycardia (rapid heart beat), hypertension (high blood pressure), tachypnea (rapid breathing), cyanosis (blue discoloration), diaphoresis (sweating), water-electrolyte imbalance, leukocytosis (high number of white blood cells), trismus (lockjaw), risus sardonicus (spasm of the facial muscles), and opisthotonus (dramatic spasm of the back muscles, causing arching of the back and neck) can occur. In rare cases, the affected person may experience nausea or vomiting.
The proximate cause of death in strychnine poisoning can be cardiac arrest, respiratory failure, multiple organ failure, or brain damage.
For occupational exposures to strychnine, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set exposure limits at 0.15 mg/m3 over an 8-hour work day.
Because strychnine produces some of the most dramatic and painful symptoms of any known toxic reaction, strychnine poisoning is often portrayed in literature and film including authors Agatha Christie and Arthur Conan Doyle.
Treatment
There is no antidote for strychnine poisoning. Strychnine poisoning demands aggressive management with early control of muscle spasms, intubation for loss of airway control, toxin removal (decontamination), intravenous hydration and potentially active cooling efforts in the context of hyperthermia as well as hemodialysis in kidney failure (strychnine has not been shown to be removed by hemodialysis). Treatment involves oral administration of activated charcoal, which adsorbs strychnine within the digestive tract; unabsorbed strychnine is removed from the stomach by gastric lavage, along with tannic acid or potassium permanganate solutions to oxidize strychnine.
Activated charcoal
Activated charcoal is a substance that can bind to certain toxins in the digestive tract and prevent their absorption into the bloodstream. The effectiveness of this treatment, as well as how long it remains effective after ingestion, is subject to debate. According to one source, activated charcoal is only effective within one hour of the poison being ingested, although that source does not address strychnine specifically. Other sources specific to strychnine state that activated charcoal may be used after one hour of ingestion, depending on dose and type of strychnine-containing product. Therefore, other treatment options are generally favoured over activated charcoal.
The use of activated charcoal is considered dangerous in patients with tenuous airways or altered mental states.
Other treatments
Most other treatment options focus on controlling the convulsions that arise from strychnine poisoning. These treatments involve keeping the patient in a quiet and darkened room, anticonvulsants such as phenobarbital or diazepam, muscle relaxants such as dantrolene, barbiturates and propofol, and chloroform or heavy doses of chloral, bromide, urethane or amyl nitrite. If a poisoned person is able to survive for 6 to 12 hours subsequent to initial dose, they have a good prognosis.
The sine qua non of strychnine toxicity is the "awake" seizure, in which tonic-clonic activity occurs but the patient is alert and oriented throughout and afterwards. Accordingly, George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning.
Pharmacokinetics
Absorption
Strychnine may be introduced into the body orally, by inhalation, or by injection. It is a potently bitter substance, and in humans has been shown to activate bitter taste receptors TAS2R10 and TAS2R46. Strychnine is rapidly absorbed from the gastrointestinal tract.
Distribution
Strychnine is transported by plasma and red blood cells. Due to slight protein binding, strychnine leaves the bloodstream quickly and distributes to bodily tissues. Approximately 50% of the ingested dose can enter the tissues in 5 minutes. Also within a few minutes of ingestion, strychnine can be detected in the urine. Little difference was noted between oral and intramuscular administration of strychnine in a 4 mg dose. In persons killed by strychnine, the highest concentrations are found in the blood, liver, kidney and stomach wall. The usual fatal dose is 60–100 mg strychnine and is fatal after a period of 1–2 hours, though lethal doses vary depending on the individual.
Metabolism
Strychnine is rapidly metabolized by the liver microsomal enzyme system requiring NADPH and O2. Strychnine competes with the inhibitory neurotransmitter glycine resulting in an excitatory state. However, the toxicokinetics after overdose have not been well described. In most severe cases of strychnine poisoning, the patient dies before reaching the hospital. The biological half-life of strychnine is about 10 hours.
Excretion
A few minutes after ingestion, strychnine is excreted unchanged in the urine, and accounts for about 5 to 15% of a sublethal dose given over 6 hours. Approximately 10 to 20% of the dose will be excreted unchanged in the urine in the first 24 hours. The percentage excreted decreases with the increasing dose. Of the amount excreted by the kidneys, about 70% is excreted in the first 6 hours, and almost 90% in the first 24 hours. Excretion is virtually complete in 48 to 72 hours.
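Treating elimination as a simple first-order process with the roughly 10-hour half-life quoted under Metabolism gives a rough consistency check against the 48-to-72-hour figure above. This is only an illustrative approximation; actual strychnine disposition is dose-dependent and not purely first-order.

```python
# Minimal first-order elimination sketch using the ~10 h half-life quoted above.
# Only a rough consistency check, not a pharmacokinetic model.

HALF_LIFE_H = 10.0

def fraction_remaining(hours: float) -> float:
    """Fraction of the absorbed dose remaining after a given time."""
    return 0.5 ** (hours / HALF_LIFE_H)

for t in (6, 24, 48, 72):
    print(f"{t:>2} h: {100 * (1 - fraction_remaining(t)):.1f}% eliminated")
# ~34% at 6 h, ~81% at 24 h, ~96% at 48 h, ~99% at 72 h
```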
History
Strychnine was the first alkaloid to be identified in plants of the genus Strychnos, family Loganiaceae. Strychnos, named by Carl Linnaeus in 1753, is a genus of trees and climbing shrubs of the Gentianales order. The genus contains 196 various species and is distributed throughout the warm regions of Asia (58 species), America (64 species), and Africa (75 species). The seeds and bark of many plants in this genus contain strychnine.
The toxic and medicinal effects of Strychnos nux-vomica have been well known from the times of ancient India, although the chemical compound itself was not identified and characterized until the 19th century. The inhabitants of these countries had historical knowledge of the species Strychnos nux-vomica and Saint-Ignatius' bean (Strychnos ignatii). Strychnos nux-vomica is a tree native to the tropical forests on the Malabar Coast in Southern India, Sri Lanka and Indonesia, which attains a height of about . The tree has a crooked, short, thick trunk and the wood is close grained and very durable. The fruit has an orange color and is about the size of a large apple with a hard rind and contains five seeds, which are covered with a soft wool-like substance. The ripe seeds look like flattened disks, which are very hard. These seeds are the chief commercial source of strychnine and were first imported to and marketed in Europe as a poison to kill rodents and small predators. Strychnos ignatii is a woody climbing shrub of the Philippines. The fruit of the plant, known as Saint Ignatius' bean, contains as many as 25 seeds embedded in the pulp. The seeds contain more strychnine than other commercial alkaloids. The properties of S. nux-vomica and S. ignatii are substantially those of the alkaloid strychnine.
Strychnine was first discovered by French chemists Joseph Bienaimé Caventou and Pierre-Joseph Pelletier in 1818 in the Saint-Ignatius' bean. In some Strychnos plants a 9,10-dimethoxy derivative of strychnine, the alkaloid brucine, is also present. Brucine is not as poisonous as strychnine. Historic records indicate that preparations containing strychnine (presumably) had been used to kill dogs, cats, and birds in Europe as far back as 1640. It was allegedly used by convicted murderer William Palmer to kill his final victim, John Cook. It was also used during World War II by the Dirlewanger Brigade against civilian population.
The structure of strychnine was first determined in 1946 by Sir Robert Robinson and in 1954 this alkaloid was synthesized in a laboratory by Robert B. Woodward. This is one of the most famous syntheses in the history of organic chemistry. Both chemists won the Nobel prize (Robinson in 1947 and Woodward in 1965).
Strychnine has been used as a plot device in Agatha Christie's murder mysteries.
Other uses
Strychnine was popularly used as an athletic performance enhancer and recreational stimulant in the late 19th century and early 20th century, due to its convulsant effects. One notorious instance of its use was during the 1904 Olympics marathon, when track-and-field athlete Thomas Hicks was unwittingly administered a concoction of egg whites and brandy laced with a small amount of strychnine by his assistants in a vain attempt to boost his stamina. Hicks won the race, but was hallucinating by the time he reached the finish line, and soon after collapsed.
Maximilian Theodor Buch proposed it as a cure for alcoholism around the same time. It was thought to be similar to coffee, and also has been used and abused recreationally.
Its effects are well-described in H. G. Wells' novella The Invisible Man: the title character states "Strychnine is a grand tonic ... to take the flabbiness out of a man." Dr Kemp, an acquaintance, replies: "It's the devil. It's the palaeolithic in a bottle."
See also
Avicide
References
Avicides
Bitter compounds
Chloride channel blockers
Convulsants
Ethers
Glycine receptor antagonists
Indole alkaloids
Lactams
Neurotoxins
Nitrogen heterocycles
Oxygen heterocycles
Plant toxins
Strychnine poisoning
Van 't Hoff factor
The van 't Hoff factor i (named after Dutch chemist Jacobus Henricus van 't Hoff) is a measure of the effect of a solute on colligative properties such as osmotic pressure, relative lowering in vapor pressure, boiling-point elevation and freezing-point depression. The van 't Hoff factor is the ratio between the actual concentration of particles produced when the substance is dissolved and the concentration of a substance as calculated from its mass. For most non-electrolytes dissolved in water, the van 't Hoff factor is essentially 1.
For most ionic compounds dissolved in water, the van 't Hoff factor is equal to the number of discrete ions in a formula unit of the substance. This is true for ideal solutions only, as occasionally ion pairing occurs in solution. At a given instant a small percentage of the ions are paired and count as a single particle. Ion pairing occurs to some extent in all electrolyte solutions. This causes the measured van 't Hoff factor to be less than that predicted in an ideal solution. The deviation for the van 't Hoff factor tends to be greatest where the ions have multiple charges.
The factor binds osmolarity to molarity and osmolality to molality.
Dissociated solutes
The degree of dissociation is the fraction of the original solute molecules that have dissociated. It is usually indicated by the Greek symbol α. There is a simple relationship between this parameter and the van 't Hoff factor: if a fraction α of the solute dissociates into n ions, then i = 1 + α(n − 1).
For example, the dissociation KCl → K+ + Cl− yields n = 2 ions, so that i = 1 + α.
For dissociation in the absence of association, the van 't Hoff factor is i = 1 + α(n − 1).
Associated solutes
Similarly, if a fraction α of the moles of solute associate to form one mole of an n-mer (dimer, trimer, etc.), then i = 1 − α(1 − 1/n).
For the dimerisation of acetic acid in benzene:
2 CH3COOH ⇌ (CH3COOH)2
2 moles of acetic acid associate to form 1 mole of dimer (n = 2), so that i = 1 − α/2.
For association in the absence of dissociation, the van 't Hoff factor is i = 1 − α(1 − 1/n).
Physical significance of i
When solute particles associate in solution, i is less than 1. For example, carboxylic acids such as acetic acid (ethanoic acid) or benzoic acid form dimers in benzene, so that the number of solute particles is half the number of acid molecules.
When solute particles dissociate in solution, i is greater than 1 (e.g. sodium chloride in water, potassium chloride in water, magnesium chloride in water).
When solute particles neither dissociate nor associate in solution, i equals 1 (e.g. glucose in water).
The value of i is the actual number of particles in solution after dissociation divided by the number of formula units initially dissolved in solution; in a dilute solution it is the number of particles per formula unit of the solute.
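A small worked example may help connect the factor to a colligative property such as freezing-point depression, ΔTf = i·Kf·m. The formulas are the ones given above; the cryoscopic constant of water (about 1.86 K·kg/mol) and the chosen molality and degrees of dissociation are illustrative values, not figures from this article.

```python
# Worked example of the van 't Hoff factor applied to freezing-point depression.
# Kf for water and the example concentrations are illustrative assumptions.

def i_dissociation(alpha: float, n: int) -> float:
    """van 't Hoff factor for a solute dissociating into n ions."""
    return 1 + alpha * (n - 1)

def i_association(alpha: float, n: int) -> float:
    """van 't Hoff factor for a solute associating into n-mers."""
    return 1 - alpha * (1 - 1 / n)

KF_WATER = 1.86          # K·kg/mol, cryoscopic constant of water
m = 0.100                # mol/kg of solute

i_kcl     = i_dissociation(alpha=1.0, n=2)   # fully dissociated KCl -> i = 2
i_glucose = 1.0                              # neither dissociates nor associates
i_acetic  = i_association(alpha=1.0, n=2)    # fully dimerized acetic acid (in benzene) -> i = 0.5

for name, i in (("KCl", i_kcl), ("glucose", i_glucose), ("acetic acid dimer", i_acetic)):
    print(f"{name}: ΔTf ≈ {i * KF_WATER * m:.3f} K")   # 0.372, 0.186, 0.093 K
```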
Relation to osmotic coefficient
This quantity can be related to the osmotic coefficient g by the relation i = gn, where n is the number of particles into which a formula unit of the solute would dissociate.
See also
Colligative properties
Thermodynamic activity
Raoult's law
Law of dilution
Van 't Hoff equation
Dissociation (chemistry)
Osmosis
Osmotic coefficient
References
Physical chemistry
Dimensionless numbers
Jacobus Henricus van 't Hoff
Olefin metathesis
In organic chemistry, olefin metathesis is an organic reaction that entails the redistribution of fragments of alkenes (olefins) by the scission and regeneration of carbon-carbon double bonds. Because of the relative simplicity of olefin metathesis, it often creates fewer undesired by-products and hazardous wastes than alternative organic reactions. For their elucidation of the reaction mechanism and their discovery of a variety of highly active catalysts, Yves Chauvin, Robert H. Grubbs, and Richard R. Schrock were collectively awarded the 2005 Nobel Prize in Chemistry.
Catalysts
The reaction requires metal catalysts. Most commercially important processes employ heterogeneous catalysts. The heterogeneous catalysts are often prepared by in-situ activation of a metal halide (MClx) using organoaluminium or organotin compounds, e.g. combining MClx–EtAlCl2. A typical catalyst support is alumina. Commercial catalysts are often based on molybdenum and ruthenium. Well-defined organometallic compounds have mainly been investigated for small-scale reactions or in academic research. The homogeneous catalysts are often classified as Schrock catalysts and Grubbs catalysts. Schrock catalysts feature molybdenum(VI)- and tungsten(VI)-based centers supported by alkoxide and imido ligands.
Grubbs catalysts, on the other hand, are ruthenium(II) carbenoid complexes. Many variations of Grubbs catalysts are known. Some have been modified with a chelating isopropoxybenzylidene ligand to form the related Hoveyda–Grubbs catalyst.
Applications
Olefin metathesis has several industrial applications. Almost all commercial applications employ heterogeneous catalysts using catalysts developed well before the Nobel-Prize winning work on homogeneous complexes. Representative processes include:
The Phillips triolefin process and olefin conversion technology. This process interconverts propylene with ethylene and 2-butenes. Rhenium and molybdenum catalysts are used. Nowadays, however, only the reverse reaction, i.e. the conversion of ethylene and 2-butene to propylene, is industrially practiced.
Shell higher olefin process (SHOP) produces (alpha-olefins) for conversion to detergents. The process recycles certain olefin fractions using metathesis.
Neohexene production, which involves ethenolysis of isobutene dimers. The catalyst is derived from tungsten trioxide supported on silica and MgO.
1,5-Hexadiene and 1,9-decadiene, useful crosslinking agents and synthetic intermediates, are produced commercially by ethenolysis of 1,5-cyclooctadiene and cyclooctene. The catalyst is derived from Re2O7 on alumina.
Synthesis of pharmaceutical drugs.
Homogeneous catalyst potential
Molecular catalysts have been explored for a variety of potential applications, including the manufacture of high-strength materials, the preparation of cancer-targeting nanoparticles, and the conversion of renewable plant-based feedstocks into hair- and skin-care products.
Types
Some important classes of olefin metathesis include:
Cross metathesis (CM)
Ring-opening metathesis (ROM)
Ring-closing metathesis (RCM)
Ring-opening metathesis polymerization (ROMP)
Acyclic diene metathesis (ADMET)
Ethenolysis
Mechanism
Hérisson and Chauvin first proposed the widely accepted mechanism of transition metal alkene metathesis. The direct [2+2] cycloaddition of two alkenes is formally symmetry forbidden and thus has a high activation energy. The Chauvin mechanism involves the [2+2] cycloaddition of an alkene double bond to a transition metal alkylidene to form a metallacyclobutane intermediate. The metallacyclobutane produced can then cycloeliminate to give either the original species or a new alkene and alkylidene. Interaction with the d-orbitals on the metal catalyst lowers the activation energy enough that the reaction can proceed rapidly at modest temperatures.
Olefin metathesis involves little change in enthalpy for unstrained alkenes. Product distributions are determined instead by le Chatelier's Principle, i.e. entropy.
Cross metathesis and ring-closing metathesis are driven by the entropically favored evolution of ethylene or propylene, which can be removed from the system because they are gases. Because of this CM and RCM reactions often use alpha-olefins. The reverse reaction of CM of two alpha-olefins, ethenolysis, can be favored but requires high pressures of ethylene to increase ethylene concentration in solution. The reverse reaction of RCM, ring-opening metathesis, can likewise be favored by a large excess of an alpha-olefin, often styrene. Ring-opening metathesis usually involves a strained alkene (often a norbornene) and the release of ring strain drives the reaction. Ring-closing metathesis, conversely, usually involves the formation of a five- or six-membered ring, which is enthalpically favorable; although these reactions tend to also evolve ethylene, as previously discussed. RCM has been used to close larger macrocycles, in which case the reaction may be kinetically controlled by running the reaction at high dilutions. The same substrates that undergo RCM can undergo acyclic diene metathesis, with ADMET favored at high concentrations. The Thorpe–Ingold effect may also be exploited to improve both reaction rates and product selectivity.
Cross-metathesis is synthetically equivalent to (and has replaced) a procedure of ozonolysis of an alkene to two ketone fragments followed by the reaction of one of them with a Wittig reagent.
Historical overview
"Olefin metathesis is a child of industry and, as with many catalytic processes, it was discovered by accident."
As part of ongoing work in what would later become known as Ziegler–Natta catalysis, Karl Ziegler discovered the conversion of ethylene into 1-butene instead of a saturated long-chain hydrocarbon (see nickel effect).
In 1960 a Du Pont research group polymerized norbornene to polynorbornene using lithium aluminum tetraheptyl and titanium tetrachloride (a patent by this company on this topic dates back to 1955), a reaction then classified as a coordination polymerization. According to the then-proposed reaction mechanism, an RTiX titanium intermediate first coordinates to the double bond in a pi complex. The second step is a concerted SNi reaction breaking a C–C bond and forming a new alkylidene–titanium bond; the process then repeats itself with a second monomer:
Only much later was polynorbornene produced through ring-opening metathesis polymerisation. The DuPont work was led by Herbert S. Eleuterio. Giulio Natta in 1964 also observed the formation of an unsaturated polymer when polymerizing cyclopentene with tungsten and molybdenum halides.
In a third development leading up to olefin metathesis, researchers at Phillips Petroleum Company in 1964 described olefin disproportionation with catalysts molybdenum hexacarbonyl, tungsten hexacarbonyl, and molybdenum oxide supported on alumina for example converting propylene to an equal mixture of ethylene and 2-butene for which they proposed a reaction mechanism involving a cyclobutane (they called it a quasicyclobutane) – metal complex:
This particular mechanism is symmetry forbidden based on the Woodward–Hoffmann rules first formulated two years earlier. Cyclobutanes have also never been identified in metathesis reactions, which is another reason why this mechanism was quickly abandoned.
Then in 1967 researchers led by Nissim Calderon at the Goodyear Tire and Rubber Company described a novel catalyst system for the metathesis of 2-pentene based on tungsten hexachloride, ethanol, and the organoaluminum compound EtAlMe2. The researchers proposed a name for this reaction type: olefin metathesis. Formerly the reaction had been called "olefin disproportionation."
In this reaction 2-pentene forms a rapid (a matter of seconds) chemical equilibrium with 2-butene and 3-hexene. No double bond migrations are observed; the reaction can be started with the butene and hexene as well and the reaction can be stopped by addition of methanol.
The Goodyear group demonstrated that the reaction of regular 2-butene with its all-deuterated isotopologue yielded C4H4D4 with deuterium evenly distributed. In this way they were able to differentiate between a transalkylidenation mechanism and a transalkylation mechanism (ruled out):
In 1971 Chauvin proposed a four-membered metallacycle intermediate to explain the statistical distribution of products found in certain metathesis reactions. This mechanism is today considered the actual mechanism taking place in olefin metathesis.
Chauvin's experimental evidence was based on the reaction of cyclopentene and 2-pentene with the homogeneous catalyst tungsten(VI) oxytetrachloride and tetrabutyltin:
The three principal products C9, C10 and C11 are found in a 1:2:1 ratio regardless of conversion, as the counting sketch below illustrates. The same ratio is found with the higher oligomers. Chauvin also explained how the carbene forms in the first place: by alpha-hydride elimination from a carbon–metal single bond. For example, propylene (C3) forms in a reaction of 2-butene (C4) with tungsten hexachloride and tetramethyltin (C1).
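The 1:2:1 statistics follow from simple counting: each acyclic chain end derived from 2-pentene is equally likely to be a two-carbon or three-carbon fragment, and one five-carbon unit from cyclopentene sits between two such ends. The sketch below is a rough combinatorial illustration of that statistical argument, not a kinetic model.

```python
# Combinatorial check of the 1:2:1 product statistics for cyclopentene + 2-pentene.
# Each chain end contributes 2 or 3 carbons with equal probability; the ring
# contributes 5 carbons. Counting carbons gives C9 : C10 : C11 = 1 : 2 : 1.

from collections import Counter
from itertools import product

ends = [2, 3]   # carbons contributed by each chain end of 2-pentene
ring = 5        # carbons contributed by the opened cyclopentene unit

counts = Counter(a + ring + b for a, b in product(ends, repeat=2))
print(counts)   # Counter({10: 2, 9: 1, 11: 1})
```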
In the same year Pettit, who had synthesised cyclobutadiene a few years earlier, independently came up with a competing mechanism. It consisted of a tetramethylene intermediate with sp3-hybridized carbon atoms linked to a central metal atom by multiple three-center two-electron bonds.
Experimental support offered by Pettit for this mechanism was based on an observed reaction inhibition by carbon monoxide in certain metathesis reactions of 4-nonene with a tungsten metal carbonyl.
Robert H. Grubbs got involved in metathesis in 1972 and also proposed a metallacycle intermediate but one with four carbon atoms in the ring. The group he worked in reacted 1,4-dilithiobutane with tungsten hexachloride in an attempt to directly produce a cyclomethylenemetallacycle producing an intermediate, which yielded products identical with those produced by the intermediate in the olefin metathesis reaction. This mechanism is pairwise:
In 1973 Grubbs found further evidence for this mechanism by isolating one such metallacycle not with tungsten but with platinum by reaction of the dilithiobutane with cis-bis(triphenylphosphine)dichloroplatinum(II)
In 1975 Katz also arrived at a metallacyclobutane intermediate consistent with the one proposed by Chauvin. He reacted a mixture of cyclooctene, 2-butene and 4-octene with a molybdenum catalyst and observed that the unsymmetrical C14 hydrocarbon reaction product is present right from the start at low conversion.
In any of the pairwise mechanisms with olefin pairing as rate-determining step this compound, a secondary reaction product of C12 with C6, would form well after formation of the two primary reaction products C12 and C16.
In 1974 Casey was the first to implement carbenes into the metathesis reaction mechanism:
Grubbs in 1976 provided evidence against his own updated pairwise mechanism:
with a 5-membered cycle in another round of isotope labeling studies in favor of the 4-membered cycle Chauvin mechanism:
In this reaction the ethylene product distribution at low conversion was found to be consistent with the carbene mechanism. On the other hand, Grubbs did not rule out the possibility of a tetramethylene intermediate.
The first practical metathesis system was introduced in 1978 by Tebbe, based on what later became known as the Tebbe reagent. In a model reaction, isotopically labeled carbon atoms in isobutene and methylenecyclohexane switched places:
The Grubbs group then isolated the proposed metallacyclobutane intermediate in 1980 also with this reagent together with 3-methyl-1-butene:
They isolated a similar compound in the total synthesis of capnellene in 1986:
In that same year the Grubbs group proved that metathesis polymerization of norbornene by Tebbe's reagent is a living polymerization system, and a year later Grubbs and Schrock co-published an article describing living polymerization with a tungsten carbene complex. While Schrock focused his research on tungsten and molybdenum catalysts for olefin metathesis, Grubbs started the development of catalysts based on ruthenium, which proved to be less sensitive to oxygen and water and therefore more functional-group tolerant.
Grubbs catalysts
In the 1960s and 1970s various groups reported the ring-opening polymerization of norbornene catalyzed by hydrated trichlorides of ruthenium and other late transition metals in polar, protic solvents. This prompted Robert H. Grubbs and coworkers to search for well-defined, functional group tolerant catalysts based on ruthenium. The Grubbs group successfully polymerized the 7-oxo norbornene derivative using ruthenium trichloride, osmium trichloride as well as tungsten alkylidenes. They identified a Ru(II) carbene as an effective metal center and in 1992 published the first well-defined, ruthenium-based olefin metathesis catalyst, (PPh3)2Cl2Ru=CHCH=CPh2:
The corresponding tricyclohexylphosphine complex (PCy3)2Cl2Ru=CHCH=CPh2 was also shown to be active. This work culminated in the now commercially available 1st generation Grubbs catalyst.
Schrock catalysts
Schrock entered the olefin metathesis field in 1979 as an extension of work on tantalum alkylidenes. The initial result was disappointing, as the reaction of a tantalum alkylidene with ethylene yielded only a metallacyclopentane, not metathesis products:
But by tweaking this structure (replacing chloride by tert-butoxide and a cyclopentadienyl by an organophosphine), metathesis was established with cis-2-pentene. In another development, certain tungsten oxo complexes were also found to be effective.
Schrock alkylidenes for olefin metathesis were commercialized starting in 1990.
The first asymmetric catalyst followed in 1993: a Schrock catalyst modified with a BINOL ligand, used in a norbornadiene ROMP to give a highly stereoregular cis, isotactic polymer.
See also
Alkane metathesis
Alkyne metathesis
Enyne metathesis
Salt metathesis reaction
References
Further reading
Carbon-carbon bond forming reactions
Organometallic chemistry
Homogeneous catalysis
Industrial processes
Nutrition
Nutrition is the biochemical and physiological process by which an organism uses food to support its life. It provides organisms with nutrients, which can be metabolized to create energy and chemical structures. Failure to obtain the required amount of nutrients causes malnutrition. Nutritional science is the study of nutrition, though it typically emphasizes human nutrition.
The type of organism determines what nutrients it needs and how it obtains them. Organisms obtain nutrients by consuming organic matter, consuming inorganic matter, absorbing light, or some combination of these. Some can produce nutrients internally by consuming basic elements, while some must consume other organisms to obtain pre-existing nutrients. All forms of life require carbon, energy, and water as well as various other molecules. Animals require complex nutrients such as carbohydrates, lipids, and proteins, obtaining them by consuming other organisms. Humans have developed agriculture and cooking to replace foraging and advance human nutrition. Plants acquire nutrients through the soil and the atmosphere. Fungi absorb nutrients around them by breaking them down and absorbing them through the mycelium.
History
Scientific analysis of food and nutrients began during the chemical revolution in the late 18th century. Chemists in the 18th and 19th centuries experimented with different elements and food sources to develop theories of nutrition. Modern nutrition science began in the 1910s as individual micronutrients began to be identified. The first vitamin to be chemically identified was thiamine in 1926, and vitamin C was identified as a protection against scurvy in 1932. The role of vitamins in nutrition was studied in the following decades. The first recommended dietary allowances for humans were developed to address fears of disease caused by food deficiencies during the Great Depression and the Second World War. Due to its importance in human health, the study of nutrition has heavily emphasized human nutrition and agriculture, while ecology is a secondary concern.
Nutrients
Nutrients are substances that provide energy and physical components to the organism, allowing it to survive, grow, and reproduce. Nutrients can be basic elements or complex macromolecules. Approximately 30 elements are found in organic matter, with nitrogen, carbon, and phosphorus being the most important. Macronutrients are the primary substances required by an organism, and micronutrients are substances required by an organism in trace amounts. Organic micronutrients are classified as vitamins, and inorganic micronutrients are classified as minerals.
Nutrients are absorbed by the cells and used in metabolic biochemical reactions. These include fueling reactions that create precursor metabolites and energy, biosynthetic reactions that convert precursor metabolites into building block molecules, polymerizations that combine these molecules into macromolecule polymers, and assembly reactions that use these polymers to construct cellular structures.
Nutritional groups
Organisms can be classified by how they obtain carbon and energy. Heterotrophs are organisms that obtain nutrients by consuming the carbon of other organisms, while autotrophs are organisms that produce their own nutrients from the carbon of inorganic substances like carbon dioxide. Mixotrophs are organisms that can be heterotrophs and autotrophs, including some plankton and carnivorous plants. Phototrophs obtain energy from light, while chemotrophs obtain energy by consuming chemical energy from matter. Organotrophs consume other organisms to obtain electrons, while lithotrophs obtain electrons from inorganic substances, such as water, hydrogen sulfide, dihydrogen, iron(II), sulfur, or ammonium. Prototrophs can create essential nutrients from other compounds, while auxotrophs must consume preexisting nutrients.
Diet
In nutrition, the diet of an organism is the sum of the foods it eats. A healthy diet improves the physical and mental health of an organism. This requires ingestion and absorption of vitamins, minerals, essential amino acids from protein and essential fatty acids from fat-containing food. Carbohydrates, protein and fat play major roles in ensuring the quality of life, health and longevity of the organism. Some cultures and religions have restrictions on what is acceptable for their diet.
Nutrient cycle
A nutrient cycle is a biogeochemical cycle involving the movement of inorganic matter through a combination of soil, organisms, air or water, where they are exchanged in organic matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle, sulfur cycle, nitrogen cycle, water cycle, phosphorus cycle, and oxygen cycle, among others that continually recycle along with other mineral nutrients into productive ecological nutrition.
Biogeochemical cycles that are performed by living organisms and natural processes are water, carbon, nitrogen, phosphorus, and sulfur cycles. Nutrient cycles allow these essential elements to return to the environment after being absorbed or consumed. Without proper nutrient cycling, there would be risk of change in oxygen levels, climate, and ecosystem function.
Foraging
Foraging is the process of seeking out nutrients in the environment. It may also be defined to include the subsequent use of the resources. Some organisms, such as animals and bacteria, can navigate to find nutrients, while others, such as plants and fungi, extend outward to find nutrients. Foraging may be random, in which the organism seeks nutrients without method, or it may be systematic, in which the organism can go directly to a food source. Organisms are able to detect nutrients through taste or other forms of nutrient sensing, allowing them to regulate nutrient intake. Optimal foraging theory is a model that explains foraging behavior as a cost–benefit analysis in which an animal must maximize the gain of nutrients while minimizing the amount of time and energy spent foraging. It was created to analyze the foraging habits of animals, but it can also be extended to other organisms. Some organisms are specialists that are adapted to forage for a single food source, while others are generalists that can consume a variety of food sources.
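As a rough illustration of the cost–benefit idea, the classical prey-choice result can be sketched as follows: rank food types by profitability (energy gained per unit handling time) and add types to the diet only while doing so raises the overall rate of gain. The numbers, function name, and fixed search time below are illustrative assumptions; the full prey model also weights each food type by its encounter rate.

```python
# Minimal sketch of diet choice under optimal foraging theory (illustrative only).
# Each food type is (energy gained, handling time); the forager maximizes the
# long-term rate of energy gain: total energy / (search time + total handling).

def optimal_diet(prey, search_time=1.0):
    """Return the rate-maximizing subset of food types and its rate of gain."""
    prey = sorted(prey, key=lambda p: p[0] / p[1], reverse=True)  # rank by profitability E/h
    best_rate, diet = 0.0, []
    for energy, handling in prey:
        candidate = diet + [(energy, handling)]
        total_e = sum(p[0] for p in candidate)
        total_h = sum(p[1] for p in candidate)
        rate = total_e / (search_time + total_h)
        if rate > best_rate:
            best_rate, diet = rate, candidate
        else:
            break   # adding less profitable food types only lowers the rate
    return diet, best_rate

# Hypothetical food types: (energy, handling time)
print(optimal_diet([(10, 2), (6, 3), (2, 4)]))   # keeps only the most profitable type
```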
Nutrient deficiency
Nutrient deficiencies, known as malnutrition, occur when an organism does not have the nutrients that it needs. This may be caused by suddenly losing nutrients or the inability to absorb proper nutrients. Not only is malnutrition the result of a lack of necessary nutrients, but it can also be a result of other illnesses and health conditions. When this occurs, an organism will adapt by reducing energy consumption and expenditure to prolong the use of stored nutrients. It will use stored energy reserves until they are depleted, and it will then break down its own body mass for additional energy.
A balanced diet includes appropriate amounts of all essential and nonessential nutrients. These can vary by age, weight, sex, physical activity levels, and more. A lack of just one essential nutrient can cause bodily harm, just as an overabundance can cause toxicity. The Daily Reference Values keep the majority of people from nutrient deficiencies. DRVs are not recommendations but a combination of nutrient references to educate professionals and policymakers on what the maximum and minimum nutrient intakes are for the average person. Food labels also use DRVs as a reference to create safe nutritional guidelines for the average healthy person.
In organisms
Animal
Animals are heterotrophs that consume other organisms to obtain nutrients. Herbivores are animals that eat plants, carnivores are animals that eat other animals, and omnivores are animals that eat both plants and other animals. Many herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while obligate carnivores must eat animal meats to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize. Animals generally have a higher requirement of energy in comparison to plants. The macronutrients essential to animal life are carbohydrates, amino acids, and fatty acids.
All macronutrients except water are required by the body for energy; however, this is not their sole physiological function. The energy provided by macronutrients in food is measured in kilocalories, usually called Calories, where 1 Calorie is the amount of energy required to raise the temperature of 1 kilogram of water by 1 degree Celsius.
Carbohydrates are molecules that store significant amounts of energy. Animals digest and metabolize carbohydrates to obtain this energy. Carbohydrates are typically synthesized by plants during metabolism, and animals have to obtain most carbohydrates from nature, as they have only a limited ability to generate them. They include sugars, oligosaccharides, and polysaccharides. Glucose is the simplest form of carbohydrate. Carbohydrates are broken down to produce glucose and short-chain fatty acids, and they are the most abundant nutrients for herbivorous land animals. Carbohydrates contain 4 calories per gram.
Lipids provide animals with fats and oils. They are not soluble in water, and they can store energy for an extended period of time. They can be obtained from many different plant and animal sources. Most dietary lipids are triglycerides, composed of glycerol and fatty acids. Phospholipids and sterols are found in smaller amounts. An animal's body will reduce the amount of fatty acids it produces as dietary fat intake increases, while it increases the amount of fatty acids it produces as carbohydrate intake increases. Fats contain 9 calories per gram.
Protein consumed by animals is broken down to amino acids, which are later used to synthesize new proteins. Protein is used to form cellular structures, fluids, and enzymes (biological catalysts). Enzymes are essential to most metabolic processes, as well as DNA replication, repair, and transcription. Protein contains 4 calories per gram.
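As a worked illustration of the energy figures above, the following sketch (with hypothetical gram amounts; the function name and example meal are assumptions, not from the source) totals the energy of a meal from its macronutrient content.

```python
# Energy content of a meal from its macronutrient composition, using the
# figures quoted above: 4 Cal/g for carbohydrate and protein, 9 Cal/g for fat.
CALORIES_PER_GRAM = {"carbohydrate": 4, "fat": 9, "protein": 4}

def meal_calories(grams: dict) -> float:
    """Return total Calories (kilocalories) for a mapping of macronutrient grams."""
    return sum(CALORIES_PER_GRAM[name] * amount for name, amount in grams.items())

# Hypothetical meal: 60 g carbohydrate, 20 g fat, 30 g protein.
example = {"carbohydrate": 60, "fat": 20, "protein": 30}
print(meal_calories(example))  # 60*4 + 20*9 + 30*4 = 540 Calories
```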
Much of animal behavior is governed by nutrition. Migration patterns and seasonal breeding take place in conjunction with food availability, and courtship displays are used to display an animal's health. Animals develop positive and negative associations with foods that affect their health, and they can instinctively avoid foods that have caused toxic injury or nutritional imbalances through a conditioned food aversion. Some animals, such as rats, do not seek out new types of foods unless they have a nutrient deficiency.
Human
Early human nutrition consisted of foraging for nutrients, like other animals, but it diverged at the beginning of the Holocene with the Neolithic Revolution, in which humans developed agriculture to produce food. The Chemical Revolution in the 18th century allowed humans to study the nutrients in foods and develop more advanced methods of food preparation. Major advances in economics and technology during the 20th century allowed mass production and food fortification to better meet the nutritional needs of humans. Human behavior is closely related to human nutrition, making it a subject of social science in addition to biology. Nutrition in humans is balanced with eating for pleasure, and optimal diet may vary depending on the demographics and health concerns of each person.
Humans are omnivores that eat a variety of foods. Cultivation of cereals and production of bread has made up a key component of human nutrition since the beginning of agriculture. Early humans hunted animals for meat, and modern humans domesticate animals to consume their meat and eggs. The development of animal husbandry has also allowed humans in some cultures to consume the milk of other animals and process it into foods such as cheese. Other foods eaten by humans include nuts, seeds, fruits, and vegetables. Access to domesticated animals as well as vegetable oils has caused a significant increase in human intake of fats and oils. Humans have developed advanced methods of food processing that prevent contamination of pathogenic microorganisms and simplify the production of food. These include drying, freezing, heating, milling, pressing, packaging, refrigeration, and irradiation. Most cultures add herbs and spices to foods before eating to add flavor, though most do not significantly affect nutrition. Other additives are also used to improve the safety, quality, flavor, and nutritional content of food.
Humans obtain most carbohydrates as starch from cereals, though sugar has grown in importance. Lipids can be found in animal fat, butterfat, vegetable oil, and leaf vegetables, and they are also used to increase flavor in foods. Protein can be found in virtually all foods, as it makes up cellular material, though certain methods of food processing may reduce the amount of protein in a food. Humans can also obtain energy from ethanol, which is both a food and a drug, but it provides relatively few essential nutrients and is associated with nutritional deficiencies and other health risks.
In humans, poor nutrition can cause deficiency-related diseases such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism, or nutrient-excess conditions such as obesity and metabolic syndrome. Other conditions possibly affected by nutrition disorders include cardiovascular diseases, diabetes, and osteoporosis. Undernutrition can lead to wasting, as in marasmus, in acute cases, and to stunting in chronic cases of malnutrition.
Domesticated animal
In domesticated animals, such as pets, livestock, and working animals, as well as other animals in captivity, nutrition is managed by humans through animal feed. Fodder and forage are provided to livestock. Specialized pet food has been manufactured since 1860, and subsequent research and development have addressed the nutritional needs of pets. Dog food and cat food in particular are heavily studied and typically include all essential nutrients for these animals. Cats are sensitive to some common nutrients, such as taurine, and require additional nutrients derived from meat. Large-breed puppies are susceptible to overnutrition, as small-breed dog food is more energy dense than they can absorb.
Plant
Most plants obtain nutrients through inorganic substances absorbed from the soil or the atmosphere. Carbon, hydrogen, oxygen, nitrogen, and sulfur are essential nutrients that make up organic material in a plant and allow enzymic processes. These are absorbed as ions from the soil, such as bicarbonate, nitrate, ammonium, and sulfate, or as gases, such as carbon dioxide, water vapor, oxygen, and sulfur dioxide. Phosphorus, boron, and silicon are used for esterification. They are obtained through the soil as phosphates, boric acid, and silicic acid, respectively. Other nutrients used by plants are potassium, sodium, calcium, magnesium, manganese, chlorine, iron, copper, zinc, and molybdenum.
Plants take up essential elements from the soil through their roots and from the air (consisting mainly of nitrogen and oxygen) through their leaves. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. Although nitrogen is plentiful in the Earth's atmosphere, very few plants can use it directly. Most plants, therefore, require nitrogen compounds to be present in the soil in which they grow. This is made possible by the fact that largely inert atmospheric nitrogen is changed in a nitrogen fixation process to biologically usable forms in the soil by bacteria.
As these nutrients do not provide the plant with energy, the plant must obtain energy by other means. Green plants absorb energy from sunlight with chloroplasts and convert it to usable energy through photosynthesis.
Fungus
Fungi are chemoheterotrophs that consume external matter for energy. Most fungi absorb matter through the root-like mycelium, which grows through the organism's source of nutrients and can extend indefinitely. The fungus excretes extracellular enzymes to break down surrounding matter and then absorbs the nutrients through the cell wall. Fungi can be parasitic, saprophytic, or symbiotic. Parasitic fungi attach and feed on living hosts, such as animals, plants, or other fungi. Saprophytic fungi feed on dead and decomposing organisms. Symbiotic fungi grow around other organisms and exchange nutrients with them.
Protist
Protists include all eukaryotes that are not animals, plants, or fungi, resulting in great diversity among them. Algae are photosynthetic protists that can produce energy from light. Several types of protists use mycelia similar to those of fungi. Protozoa are heterotrophic protists, and different protozoa seek nutrients in different ways. Flagellate protozoa use a flagellum to assist in hunting for food, and some protozoa travel via infectious spores to act as parasites. Many protists are mixotrophic, having both phototrophic and heterotrophic characteristics. Mixotrophic protists will typically depend on one source of nutrients while using the other as a supplemental source or a temporary alternative when their primary source is unavailable.
Prokaryote
Prokaryotes, including bacteria and archaea, vary greatly in how they obtain nutrients across nutritional groups. Prokaryotes can only transport soluble compounds across their cell envelopes, but they can break down chemical components around them. Some lithotrophic prokaryotes are extremophiles that can survive in nutrient-deprived environments by breaking down inorganic matter. Phototrophic prokaryotes, such as cyanobacteria and Chloroflexia, can engage in photosynthesis to obtain energy from sunlight. This is common among bacteria that form in mats atop geothermal springs. Phototrophic prokaryotes typically obtain carbon from assimilating carbon dioxide through the Calvin cycle.
Some prokaryotes, such as Bdellovibrio and Ensifer, are predatory and feed on other single-celled organisms. Predatory prokaryotes seek out other organisms through chemotaxis or random collision, merge with the organism, degrade it, and absorb the released nutrients. Predatory strategies of prokaryotes include attaching to the outer surface of the organism and degrading it externally, entering the cytoplasm of the organism, or entering the periplasmic space of the organism. Groups of predatory prokaryotes may forgo attachment by collectively producing hydrolytic enzymes.
See also
Milan Charter 2015 Charter on Nutrition
References
Bibliography
External links
Isothermal titration calorimetry
In chemical thermodynamics, isothermal titration calorimetry (ITC) is a physical technique used to determine the thermodynamic parameters of interactions in solution. It is most often used to study the binding of small molecules (such as medicinal compounds) to larger macromolecules (proteins, DNA etc.) in a label-free environment. It consists of two cells which are enclosed in an adiabatic jacket. The compounds to be studied are placed in the sample cell, while the other cell, the reference cell, is used as a control and contains the buffer in which the sample is dissolved.
The technique was developed by H. D. Johnston in 1968 as a part of his Ph.D. dissertation at Brigham Young University, and was considered niche until it was introduced commercially by MicroCal Inc. in 1988. Compared to other calorimeters, ITC has the advantage of not requiring any corrections, since there is no heat exchange between the system and the environment.
Thermodynamic measurements
ITC is a quantitative technique that can determine the binding affinity, reaction enthalpy, and binding stoichiometry of the interaction between two or more molecules in solution. This is achieved by measuring the enthalpies of a series of binding reactions caused by injections of a solution of one molecule into a reaction cell containing a solution of another molecule. The enthalpy values are plotted against the molar ratios resulting from the injections. From the plot, the molar reaction enthalpy ΔH, the affinity constant K_a, and the stoichiometry n are determined by curve fitting. The reaction's Gibbs free energy change ΔG and entropy change ΔS can then be determined using the relationship
\Delta G = -RT \ln K_a = \Delta H - T \Delta S
(where R is the gas constant and T is the absolute temperature).
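As a minimal sketch of how the fitted parameters are converted into the remaining thermodynamic quantities, the Python snippet below applies the relationship above; the function name and the example K_a and ΔH values are hypothetical, not taken from any particular experiment.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def itc_thermodynamics(K_a: float, dH: float, T: float = 298.15):
    """Given an association constant K_a (M^-1) and molar enthalpy dH (J/mol)
    from curve fitting, return (dG, dS) via dG = -R*T*ln(K_a) = dH - T*dS."""
    dG = -R * T * math.log(K_a)
    dS = (dH - dG) / T
    return dG, dS

# Hypothetical fit results: K_a = 1e6 M^-1, dH = -40 kJ/mol at 25 degrees C.
dG, dS = itc_thermodynamics(K_a=1.0e6, dH=-40_000.0)
print(f"dG = {dG/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol*K)")  # ~-34.2 kJ/mol, ~-19.3 J/(mol*K)
```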
For accurate measurements of binding affinity, the curve of the thermogram must be sigmoidal. The profile of the curve is determined by the c-value, which is calculated using the equation:
c = n K_a [M]
where n is the stoichiometry of the binding, K_a is the association constant, and [M] is the concentration of the molecule in the cell. The c-value must fall between 1 and 1000, ideally between 10 and 100. This places a corresponding limit on the range of binding affinities that can be measured accurately for a given cell concentration.
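The c-value check can be expressed as a short calculation; the sketch below (function names and concentrations are hypothetical) computes c for a planned experiment and tests whether it falls in the measurable window.

```python
def c_value(n: float, K_a: float, cell_conc: float) -> float:
    """c = n * K_a * [M], with K_a in M^-1 and the cell concentration [M] in mol/L."""
    return n * K_a * cell_conc

def in_measurable_window(c: float, low: float = 1.0, high: float = 1000.0) -> bool:
    """The fitted affinity is considered reliable when 1 <= c <= 1000 (ideally 10-100)."""
    return low <= c <= high

# Hypothetical experiment: 1:1 binding, K_a = 1e6 M^-1, 20 uM protein in the cell.
c = c_value(n=1, K_a=1.0e6, cell_conc=20e-6)
print(c, in_measurable_window(c))  # 20.0 True -> within the ideal 10-100 range
```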
Instrumental measurements
An isothermal titration calorimeter is composed of two identical cells made of a highly efficient thermally conducting and chemically inert material such as Hastelloy alloy or gold, surrounded by an adiabatic jacket. Sensitive thermopile/thermocouple circuits are used to detect temperature differences between the reference cell (filled with buffer or water) and the sample cell containing the macromolecule. Prior to addition of ligand, a constant power (<1 mW) is applied to the reference cell. This directs a feedback circuit, activating a heater located on the sample cell. During the experiment, ligand is titrated into the sample cell in precisely known aliquots, causing heat to be either taken up or evolved (depending on the nature of the reaction). Measurements consist of the time-dependent input of power required to maintain equal temperatures between the sample and reference cells.
In an exothermic reaction, the temperature in the sample cell increases upon addition of ligand. This causes the feedback power to the sample cell to be decreased (remember: a reference power is applied to the reference cell) in order to maintain an equal temperature between the two cells. In an endothermic reaction, the opposite occurs; the feedback circuit increases the power in order to maintain a constant temperature (isothermal operation).
Observations are plotted as the power needed to maintain the reference and the sample cell at an identical temperature against time. As a result, the experimental raw data consists of a series of spikes of heat flow (power), with every spike corresponding to one ligand injection. These heat flow spikes/pulses are integrated with respect to time, giving the total heat exchanged per injection. The pattern of these heat effects as a function of the molar ratio [ligand]/[macromolecule] can then be analyzed to give the thermodynamic parameters of the interaction under study.
To obtain an optimal result, each injection should be given enough time for the reaction to reach equilibrium. Degassing the samples is often necessary for good measurements, as gas bubbles in the sample cell will produce abnormal data plots in the recorded results. The entire experiment takes place under computer control.
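A minimal sketch of the integration step described above, assuming the raw thermogram is available as arrays of time and differential power; the simulated exponential pulses and all names are illustrative, not instrument output.

```python
import numpy as np

def _trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x)))

def heats_per_injection(time_s, power_w, injection_times_s):
    """Integrate the differential power over the interval following each injection
    to obtain the heat (J) exchanged per injection."""
    heats = []
    bounds = list(injection_times_s) + [time_s[-1]]
    for start, stop in zip(bounds[:-1], bounds[1:]):
        mask = (time_s >= start) & (time_s <= stop)
        heats.append(_trapz(power_w[mask], time_s[mask]))
    return heats

# Simulated trace: three exponential heat pulses at t = 0, 200 and 400 s.
t = np.linspace(0.0, 600.0, 6001)
injections = [0.0, 200.0, 400.0]
power = sum(np.where(t >= t0, 50e-6 * np.exp(-(t - t0) / 20.0), 0.0) for t0 in injections)
print(heats_per_injection(t, power, injections))  # roughly 1e-3 J per injection
```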
Direct titration, in which the two reaction components are titrated directly against each other, is the most common way of obtaining thermodynamic data by ITC. However, many chemical reactions and binding interactions have binding affinities outside the range that the c-window allows. To work around the limits of the c-window and the conditions required for certain binding interactions, several alternative titration methods can be used. In some cases, simply performing a reverse titration, swapping the samples between the injection syringe and the sample cell, solves the problem, depending on the binding mechanism. Most very high- or low-affinity interactions instead require a chelation or competitive titration. In this method, a pre-bound complex solution is loaded into the sample cell, and one of its components is chelated out with a reagent whose observed binding affinity lies within the desirable c-window.
Analysis and interpretation
Post-hoc analysis and proton inventory
The collected experimental data reflect not only the binding thermodynamics of the interaction of interest, but also any contributing competing equilibria associated with it. A post-hoc analysis can be performed to determine the buffer- or solvent-independent enthalpy from the experimental thermodynamics by applying Hess's law. The example below shows a simple interaction between a metal ion (M) and a ligand (L); B represents the buffer used for this interaction and H+ represents protons.
M - B <=> M + B    (-\Delta H_{MB})
L - H <=> L + H+    (\Delta H_{LH})
H+ + B <=> H - B    (\Delta H_{HB})
M + L <=> M - L    (\Delta H_{ML})
Therefore, the experimentally observed enthalpy is the Hess's-law sum of these component enthalpies, which can be further processed to calculate the enthalpy of the metal–ligand interaction. Although this example involves a metal and a ligand, the approach is applicable to any ITC binding experiment.
As part of the analysis, the number of protons transferred is required to calculate the solvent-independent thermodynamics. This can be done by plotting the observed enthalpy against the ionization enthalpy of a series of buffers.
The linear equation of this plot is a rearranged version of the equation above from the post-hoc analysis, in the form y = mx + b:
\Delta H_{obs} = \Delta H_{indep} + n_{H^+} \Delta H_{ion}
where the slope n_{H^+} is the number of protons transferred and the intercept \Delta H_{indep} is the buffer-independent enthalpy.
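A minimal sketch of the proton-inventory fit, assuming observed enthalpies have been collected in several buffers of known ionization enthalpy; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical observed binding enthalpies (kJ/mol) measured in buffers with
# different ionization enthalpies (kJ/mol): the slope of a linear fit gives the
# number of protons transferred, the intercept the buffer-independent enthalpy.
dH_ionization = np.array([3.6, 11.3, 28.9, 47.5])
dH_observed   = np.array([-41.2, -37.5, -28.6, -19.4])

n_protons, dH_independent = np.polyfit(dH_ionization, dH_observed, 1)
print(f"protons transferred ~ {n_protons:.2f}")
print(f"buffer-independent enthalpy ~ {dH_independent:.1f} kJ/mol")
```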
Equilibrium constant
The equilibrium constant of the reaction is likewise not independent of the other competing equilibria. The competition may include buffer interactions and other pH-dependent reactions, depending on the experimental conditions. The competition from species other than the species of interest is included in the competition factor Q in the following equation:
K_{obs} = \frac{K}{Q}, \qquad Q = 1 + \sum_i K_i [X_i]
where X_i represents competing species such as buffer or protons and K_i represents their equilibrium constants.
Applications
Over the past 30 years, isothermal titration calorimetry has been used in a wide array of fields. Initially the technique was used to determine fundamental thermodynamic values for simple small-molecule interactions. In recent years, ITC has been applied in more industrially relevant areas, such as drug discovery and the testing of synthetic materials. Although it is still heavily used in fundamental chemistry, the emphasis has shifted toward biological applications, where label-free and buffer-independent values are relatively harder to obtain.
Enzyme kinetics
Using the thermodynamic data from ITC, it is possible to characterize enzyme kinetics, including proton or electron transfer, allostery and cooperativity, and enzyme inhibition. ITC collects data over time, which is useful for kinetic experiments in general and for proteins in particular, because the injections deliver constant, well-defined aliquots. In terms of calculation, the equilibrium constant and the slopes of the binding curves can be used directly to characterize allostery and charge transfer by comparing experimental data obtained under different conditions (pH, mutated peptide chains and binding sites, etc.).
Membrane and self-assembling peptide studies
Membrane proteins and the self-assembly properties of certain proteins can be studied with this technique because it is label-free. Membrane proteins are notoriously difficult to solubilize and purify with appropriate protocols. As ITC is a non-destructive calorimetric tool, it can be used as a detector to locate the fraction of protein with the desired binding sites, by binding a known ligand to the protein. This feature also applies to studies of self-assembling proteins, especially for measuring the thermodynamics of their structural transformations.
Drug development
Binding affinity is of central importance in medicinal chemistry, as drugs need to bind their target proteins effectively within a desired range. However, determining enthalpy changes and optimizing thermodynamic parameters are very difficult tasks when designing drugs. ITC addresses this problem by directly yielding the binding affinity, the enthalpic and entropic contributions, and the binding stoichiometry.
Chiral chemistry
Applying the ideas above, the chirality of organometallic compounds can also be investigated with this technique. Each chiral compound has unique properties and binding mechanisms that can be compared with one another, leading to measurable differences in thermodynamic properties. Titrating chiral solutions against a binding site can therefore reveal which enantiomer is present and, depending on the purpose, which chiral compound is more suitable for binding.
Metal binding interactions
Binding of metal ions to proteins and other components of biological material is one of the most popular uses of ITC, dating back to the ovotransferrin–ferric iron binding study published by Lin et al. of MicroCal Inc. This is partly because some of the metal ions used in biological systems have a d10 electron configuration, which cannot be studied with other common techniques such as UV-vis spectrophotometry or electron paramagnetic resonance. The topic is also closely related to biochemical and medicinal studies because of the large number of metal-binding enzymes in biological systems.
Carbon nanotubes and related materials
The technique has been used extensively to study carbon nanotubes, determining their thermodynamic binding interactions with biological molecules and with graphene composites. Another notable use of ITC with carbon nanotubes is the optimization of composite preparation from carbon nanotubes, graphene, and polyvinyl alcohol (PVA). The PVA assembly process can be followed thermodynamically, as mixing of the ingredients is an exothermic reaction, and its binding trend is readily observed by ITC.
See also
Differential scanning calorimetry
Dual polarisation interferometry
Sorption calorimetry
Pressure perturbation calorimetry
Surface plasmon resonance
References
Scientific techniques
Biochemistry methods
Biophysics
Chemical thermodynamics
Calorimetry
Dalton (unit)
The dalton or unified atomic mass unit (symbols: Da or u) is a unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. It is a non-SI unit accepted for use with SI. The atomic mass constant, denoted m_u, is defined identically, giving m_u = 1 Da.
This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of about 4.0026 Da. This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of about 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are about 180.04 Da, having the most common isotopes, and 181.05 Da, in which one carbon is carbon-13.
The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of roughly 650 Da, for a total of about 1.6×10^11 Da.
The mole is a unit of amount of substance used in chemistry and physics, defined so that the mass of one mole of a substance in grams is numerically equal to the average mass of one of its particles in daltons. That is, the molar mass of a chemical compound is meant to be numerically equal to its average molecular mass. For example, the average mass of one molecule of water is about 18.0153 daltons, and one mole of water is about 18.0153 grams. A protein whose molecule has an average mass of 64 kDa would have a molar mass of 64 kg/mol. However, while this equality can be assumed for practical purposes, it is only approximate, because of the 2019 redefinition of the mole.
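The near-equality between daltons and grams per mole can be used directly for unit conversion, as in the sketch below (which treats 1 Da as exactly 1 g/mol divided by the Avogadro constant, accurate to within the tiny deviation noted above; the function name is illustrative).

```python
AVOGADRO = 6.02214076e23   # mol^-1 (exact since the 2019 SI revision)

def dalton_to_grams(mass_da: float) -> float:
    """Mass of a single particle in grams, taking 1 Da ~ (1 g/mol) / N_A."""
    return mass_da / AVOGADRO

# One water molecule (~18.0153 Da) and one mole of water (~18.0153 g):
print(dalton_to_grams(18.0153))             # ~2.99e-23 g per molecule
print(dalton_to_grams(18.0153) * AVOGADRO)  # ~18.0153 g per mole
```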
In general, the mass in daltons of an atom is numerically close but not exactly equal to the number of nucleons in its nucleus. It follows that the molar mass of a compound (grams per mole) is numerically close to the average number of nucleons contained in each molecule. By definition, the mass of an atom of carbon-12 is 12 daltons, which corresponds with the number of nucleons that it has (6 protons and 6 neutrons). However, the mass of an atomic-scale object is affected by the binding energy of the nucleons in its atomic nuclei, as well as the mass and binding energy of its electrons. Therefore, this equality holds only for the carbon-12 atom in the stated conditions, and will vary for other substances. For example, the mass of an unbound atom of the common hydrogen isotope (hydrogen-1, protium) is about 1.0078 Da, the mass of a proton is about 1.0073 Da, the mass of a free neutron is about 1.0087 Da, and the mass of a hydrogen-2 (deuterium) atom is about 2.0141 Da. In general, the difference (absolute mass excess) is less than 0.1%; exceptions include hydrogen-1 (about 0.8%), helium-3 (0.5%), lithium-6 (0.25%) and beryllium (0.14%).
The dalton differs from the unit of mass in the system of atomic units, which is the electron rest mass (m_e).
Energy equivalents
The atomic mass constant can also be expressed in terms of its energy equivalent, m_u c^2. The CODATA recommended value is approximately 1.492418×10^-10 J, or 931.494 MeV.
The mass-equivalent is commonly used in place of a unit of mass in particle physics, and these values are also important for the practical determination of relative atomic masses.
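The energy equivalent quoted above can be reproduced from standard constants; the sketch below uses values shipped with SciPy and is purely illustrative rather than a metrological determination.

```python
from scipy import constants

# Energy equivalent of the atomic mass constant, E = m_u * c^2.
E_joule = constants.atomic_mass * constants.c ** 2
E_MeV = E_joule / constants.e / 1e6  # convert J -> eV -> MeV

print(f"{E_joule:.6e} J")   # ~1.492418e-10 J
print(f"{E_MeV:.3f} MeV")   # ~931.494 MeV
```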
History
Origin of the concept
The interpretation of the law of definite proportions in terms of the atomic theory of matter implied that the masses of atoms of various elements had definite ratios that depended on the elements. While the actual masses were unknown, the relative masses could be deduced from that law. In 1803 John Dalton proposed to use the (still unknown) atomic mass of the lightest atom, hydrogen, as the natural unit of atomic mass. This was the basis of the atomic weight scale.
For technical reasons, in 1898, chemist Wilhelm Ostwald and others proposed to redefine the unit of atomic mass as 1/16 of the mass of an oxygen atom. That proposal was formally adopted by the International Committee on Atomic Weights (ICAW) in 1903. That was approximately the mass of one hydrogen atom, but oxygen was more amenable to experimental determination. This suggestion was made before the discovery of isotopes in 1912. Physicist Jean Perrin had adopted the same definition in 1909 during his experiments to determine the atomic masses and the Avogadro constant. This definition remained unchanged until 1961. Perrin also defined the "mole" as an amount of a compound that contained as many molecules as 32 grams of oxygen. He called that number the Avogadro number in honor of physicist Amedeo Avogadro.
Isotopic variation
The discovery of isotopes of oxygen in 1929 required a more precise definition of the unit. Two distinct definitions came into use. Chemists chose to define the AMU as 1/16 of the average mass of an oxygen atom as found in nature; that is, the average of the masses of the known isotopes, weighted by their natural abundance. Physicists, on the other hand, defined it as 1/16 of the mass of an atom of the isotope oxygen-16 (16O).
Definition by IUPAC
The existence of two distinct units with the same name was confusing, and the difference (about 0.03% in relative terms) was large enough to affect high-precision measurements. Moreover, it was discovered that the isotopes of oxygen had different natural abundances in water and in air. For these and other reasons, in 1961 the International Union of Pure and Applied Chemistry (IUPAC), which had absorbed the ICAW, adopted a new definition of the atomic mass unit for use in both physics and chemistry; namely, 1/12 of the mass of a carbon-12 atom. This new value was intermediate between the two earlier definitions, but closer to the one used by chemists (who would be affected the most by the change).
The new unit was named the "unified atomic mass unit" and given a new symbol "u", to replace the old "amu" that had been used for the oxygen-based unit. However, the old symbol "amu" has sometimes been used, after 1961, to refer to the new unit, particularly in lay and preparatory contexts.
With this new definition, the standard atomic weight of carbon is about 12.011, and that of oxygen is about 15.999. These values, generally used in chemistry, are based on averages of many samples from Earth's crust, its atmosphere, and organic materials.
Adoption by BIPM
The IUPAC 1961 definition of the unified atomic mass unit, with that name and symbol "u", was adopted by the International Bureau for Weights and Measures (BIPM) in 1971 as a non-SI unit accepted for use with the SI.
Unit name
In 1993, the IUPAC proposed the shorter name "dalton" (with symbol "Da") for the unified atomic mass unit. As with other unit names such as watt and newton, "dalton" is not capitalized in English, but its symbol, "Da", is capitalized. The name was endorsed by the International Union of Pure and Applied Physics (IUPAP) in 2005.
In 2003 the name was recommended to the BIPM by the Consultative Committee for Units, part of the CIPM, as it "is shorter and works better with [SI] prefixes". In 2006, the BIPM included the dalton in its 8th edition of the SI brochure of formal definitions as a non-SI unit accepted for use with the SI. The name was also listed as an alternative to "unified atomic mass unit" by the International Organization for Standardization in 2009. It is now recommended by several scientific publishers, and some of them consider "atomic mass unit" and "amu" deprecated. In 2019, the BIPM retained the dalton in its 9th edition of the SI brochure, while dropping the unified atomic mass unit from its table of non-SI units accepted for use with the SI, but secondarily notes that the dalton (Da) and the unified atomic mass unit (u) are alternative names (and symbols) for the same unit.
2019 revision of the SI
The definition of the dalton was not affected by the 2019 revision of the SI, that is, 1 Da in the SI is still 1/12 of the mass of a carbon-12 atom, a quantity that must be determined experimentally in terms of SI units. However, the definition of a mole was changed to be the amount of substance consisting of exactly 6.02214076×10^23 entities, and the definition of the kilogram was changed as well. As a consequence, the molar mass constant remains close to but no longer exactly 1 g/mol, meaning that the mass in grams of one mole of any substance remains nearly but no longer exactly numerically equal to its average molecular mass in daltons; the relative deviation (a few parts in 10^10 at the time of the redefinition) is insignificant for all practical purposes.
Measurement
Though relative atomic masses are defined for neutral atoms, they are measured (by mass spectrometry) for ions: hence, the measured values must be corrected for the mass of the electrons that were removed to form the ions, and also for the mass equivalent of the electron binding energy, E_b/c^2. The total binding energy of the six electrons in a carbon-12 atom is about 1.03 keV, so E_b/(m_u c^2) ≈ 1.1×10^-6, or about one part in 10 million of the mass of the atom.
Before the 2019 revision of the SI, experiments were aimed to determine the value of the Avogadro constant for finding the value of the unified atomic mass unit.
Josef Loschmidt
A reasonably accurate value of the atomic mass unit was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas.
Jean Perrin
Perrin estimated the Avogadro number by a variety of methods, at the turn of the 20th century. He was awarded the 1926 Nobel Prize in Physics, largely for this work.
Coulometry
The electric charge per mole of elementary charges is a constant called the Faraday constant, F, whose value had been essentially known since 1834 when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan obtained the first measurement of the charge on an electron, −e. The quotient F/e provided an estimate of the Avogadro constant.
The classic experiment is that of Bower and Davis at NIST, and relies on dissolving silver metal away from the anode of an electrolysis cell, while passing a constant electric current I for a known time t. If m is the mass of silver lost from the anode and A_r the atomic weight of silver, then the Faraday constant is given by:
F = \frac{A_r M_u I t}{m}
where M_u is the molar mass constant.
The NIST scientists devised a method to compensate for silver lost from the anode by mechanical causes, and conducted an isotope analysis of the silver used to determine its atomic weight. Their value for the conventional Faraday constant corresponds to a value for the Avogadro constant; both values have the same relative standard uncertainty.
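A minimal sketch of the coulometric calculation, using the formula above with hypothetical current, time, and mass-loss values (not the NIST data).

```python
# Faraday constant from a silver coulometry run: F = A_r * M_u * I * t / m,
# where A_r is the relative atomic mass of silver, M_u the molar mass constant,
# I the current, t the electrolysis time and m the mass of silver dissolved.
A_r_silver = 107.8682      # relative atomic mass of silver (dimensionless)
M_u = 1.0e-3               # molar mass constant, kg/mol (approximately)

def faraday_constant(current_a: float, time_s: float, mass_lost_kg: float) -> float:
    return A_r_silver * M_u * current_a * time_s / mass_lost_kg

# Hypothetical run: 0.5 A for 3600 s dissolves about 2.0118 g of silver.
print(faraday_constant(0.5, 3600.0, 2.0118e-3))  # ~96,500 C/mol
```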
Electron mass measurement
In practice, the atomic mass constant is determined from the electron rest mass m and the electron relative atomic mass A(e) (that is, the mass of electron divided by the atomic mass constant). The relative atomic mass of the electron can be measured in cyclotron experiments, while the rest mass of the electron can be derived from other physical constants.
m_u = \frac{m_e}{A_r(e)} = \frac{2 R_\infty h}{A_r(e)\, c\, \alpha^2}
where c is the speed of light, h is the Planck constant, α is the fine-structure constant, and R_∞ is the Rydberg constant.
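The chain of constants described above can be evaluated numerically; the sketch below uses SciPy's bundled CODATA values, with the electron's relative atomic mass entered by hand, and is purely illustrative.

```python
from scipy import constants

# Electron rest mass from the Rydberg constant, Planck constant, speed of light
# and fine-structure constant: m_e = 2 * R_inf * h / (c * alpha^2).
m_e = 2 * constants.Rydberg * constants.h / (constants.c * constants.fine_structure ** 2)

# Atomic mass constant from the electron's relative atomic mass A_r(e).
A_r_electron = 5.48579909065e-4   # CODATA relative atomic mass of the electron
m_u = m_e / A_r_electron

print(f"m_e ~ {m_e:.6e} kg")   # ~9.109384e-31 kg
print(f"m_u ~ {m_u:.6e} kg")   # ~1.660539e-27 kg, i.e. 1 Da
```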
As may be observed from the older values (2014 CODATA), the main limiting factor in the precision of the Avogadro constant was the uncertainty in the value of the Planck constant, as all the other constants that contribute to the calculation were known more precisely.
The power of having defined values of universal constants, as is presently the case (2018 CODATA), is that these uncertainties no longer propagate into the calculation.
X-ray crystal density methods
Silicon single crystals may be produced today in commercial facilities with extremely high purity and with few lattice defects. This method defines the Avogadro constant as the ratio of the molar volume, V_m, to the atomic volume V_atom:
N_A = \frac{V_m}{V_{atom}}
where V_{atom} = V_{cell}/n and n is the number of atoms per unit cell of volume V_cell.
The unit cell of silicon has a cubic packing arrangement of 8 atoms, and the unit cell volume may be measured by determining a single unit cell parameter, the length a of one of the sides of the cube. The CODATA value of a for silicon is approximately 543.102 pm.
In practice, measurements are carried out on a distance known as d220(Si), which is the distance between the planes denoted by the Miller indices {220}, and is equal to a/√8, approximately 192.01 pm.
The isotope proportional composition of the sample used must be measured and taken into account. Silicon occurs in three stable isotopes (28Si, 29Si, 30Si), and the natural variation in their proportions is greater than other uncertainties in the measurements. The atomic weight A_r for the sample crystal can be calculated, as the standard atomic weights of the three nuclides are known with great accuracy. This, together with the measured density ρ of the sample, allows the molar volume V_m to be determined:
V_m = \frac{A_r M_u}{\rho}
where M_u is the molar mass constant. The CODATA value for the molar volume of silicon is approximately 12.06 cm^3/mol, with a very small relative standard uncertainty.
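Putting the pieces of the XRCD route together, the sketch below recovers an approximate Avogadro constant from round-number values for natural silicon; the figures are illustrative, not the CODATA inputs.

```python
# Avogadro constant via the X-ray crystal density (XRCD) route:
# N_A = V_m / V_atom = (M / rho) / (a^3 / n), with n = 8 atoms per cubic unit cell.
# Approximate values for natural silicon, for illustration only.
a = 543.102e-12      # lattice parameter, m
n = 8                # atoms per unit cell
rho = 2329.0         # density, kg/m^3
M = 28.0855e-3       # molar mass, kg/mol

V_atom = a ** 3 / n  # volume per atom, m^3
V_m = M / rho        # molar volume, m^3/mol
print(f"N_A ~ {V_m / V_atom:.4e} mol^-1")   # ~6.022e23 mol^-1
```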
See also
Mass (mass spectrometry)
Kendrick mass
Monoisotopic mass
Mass-to-charge ratio
Notes
References
External links
Metrology
Nuclear chemistry
Units of chemical measurement
Units of mass
Mathematical psychology
Mathematical psychology is an approach to psychological research that is based on mathematical modeling of perceptual, thought, cognitive and motor processes, and on the establishment of law-like rules that relate quantifiable stimulus characteristics with quantifiable behavior (in practice often constituted by task performance). The mathematical approach is used with the goal of deriving hypotheses that are more exact and thus yield stricter empirical validations. There are five major research areas in mathematical psychology: learning and memory, perception and psychophysics, choice and decision-making, language and thinking, and measurement and scaling.
Although psychology, as an independent subject of science, is a more recent discipline than physics, the application of mathematics to psychology has been done in the hope of emulating the success of this approach in the physical sciences, which dates back to at least the seventeenth century. Mathematics in psychology is used extensively roughly in two areas: one is the mathematical modeling of psychological theories and experimental phenomena, which leads to mathematical psychology; the other is the statistical approach of quantitative measurement practices in psychology, which leads to psychometrics.
As quantification of behavior is fundamental in this endeavor, the theory of measurement is a central topic in mathematical psychology. Mathematical psychology is therefore closely related to psychometrics. However, where psychometrics is concerned with individual differences (or population structure) in mostly static variables, mathematical psychology focuses on process models of perceptual, cognitive and motor processes as inferred from the 'average individual'. Furthermore, where psychometrics investigates the stochastic dependence structure between variables as observed in the population, mathematical psychology almost exclusively focuses on the modeling of data obtained from experimental paradigms and is therefore even more closely related to experimental psychology, cognitive psychology, and psychonomics. Like computational neuroscience and econometrics, mathematical psychology theory often uses statistical optimality as a guiding principle, assuming that the human brain has evolved to solve problems in an optimized way. Central themes from cognitive psychology (e.g., limited vs. unlimited processing capacity, serial vs. parallel processing) and their implications are central in rigorous analysis in mathematical psychology.
Mathematical psychologists are active in many fields of psychology, especially in psychophysics, sensation and perception, problem solving, decision-making, learning, memory, language, and the quantitative analysis of behavior, and contribute to the work of other subareas of psychology such as clinical psychology, social psychology, educational psychology, and psychology of music.
History
Mathematics and psychology before the 19th century
Choice and decision-making theory is rooted in the development of probability and statistics. In the mid-1600s, Blaise Pascal considered problems in gambling and later extended this reasoning to Pascal's wager. In the 18th century, Nicolas Bernoulli proposed the St. Petersburg paradox in decision making, Daniel Bernoulli offered a solution, and Laplace later proposed a modification of that solution. In 1763, Bayes published the paper "An Essay Towards Solving a Problem in the Doctrine of Chances", a milestone of Bayesian statistics.
Robert Hooke worked on modeling human memory, a precursor of later quantitative studies of memory.
Mathematics and psychology in the 19th century
Research developments in Germany and England in the 19th century made psychology a new academic subject. Because the German approach emphasized experiments investigating the psychological processes that all humans share, while the English approach emphasized the measurement of individual differences, the applications of mathematics also differed.
In Germany, Wilhelm Wundt established the first experimental psychology laboratory. Mathematics in German psychology was applied mainly to the senses and psychophysics. Ernst Weber (1795–1878) created the first mathematical law of the mind, Weber's law, based on a variety of experiments. Gustav Fechner (1801–1887) contributed theories of sensation and perception, among them Fechner's law, which modifies Weber's law.
Mathematical modeling has a long history in psychology starting in the 19th century with Ernst Weber (1795–1878) and Gustav Fechner (1801–1887) being among the first to apply functional equations to psychological processes. They thereby established the fields of experimental psychology in general, and that of psychophysics in particular.
Researchers in astronomy in the 19th century were mapping distances between stars by denoting the exact time of a star's passing of a cross-hair on a telescope. For lack of the automatic registration instruments of the modern era, these time measurements relied entirely on human response speed. It had been noted that there were small systematic differences in the times measured by different astronomers, and these were first systematically studied by German astronomer Friedrich Bessel (1782–1846). Bessel constructed personal equations from measurements of basic response speed that would cancel out individual differences from the astronomical calculations. Independently, physicist Hermann von Helmholtz measured reaction times to determine nerve conduction speed, developed resonance theory of hearing and the Young-Helmholtz theory of color vision.
These two lines of work came together in the research of Dutch physiologist F. C. Donders and his student J. J. de Jaager, who recognized the potential of reaction times for more or less objectively quantifying the amount of time elementary mental operations required. Donders envisioned the employment of his mental chronometry to scientifically infer the elements of complex cognitive activity by measurement of simple reaction time.
Alongside these developments in sensation and perception, Johann Herbart developed a system of mathematical theories of cognition to describe the mental processes underlying consciousness.
The origin of English psychology can be traced to Darwin's theory of evolution, but its emergence owes most to Francis Galton, who was interested in individual differences between humans on psychological variables. The mathematics used in English psychology was mainly statistics, and Galton's work and methods form the foundation of psychometrics.
Galton introduced the bivariate normal distribution to model traits of the same individual, investigated measurement error and built his own model of it, and developed a stochastic branching process to examine the extinction of family names. English psychology's tradition of studying intelligence also began with Galton. James McKeen Cattell and Alfred Binet developed tests of intelligence.
The first psychological laboratory was established in Germany by Wilhelm Wundt, who amply used Donders' ideas. However, findings that came from the laboratory were hard to replicate and this was soon attributed to the method of introspection that Wundt introduced. Some of the problems resulted from individual differences in response speed found by astronomers. Although Wundt did not seem to take interest in these individual variations and kept his focus on the study of the general human mind, Wundt's U.S. student James McKeen Cattell was fascinated by these differences and started to work on them during his stay in England.
The failure of Wundt's method of introspection led to the rise of different schools of thought. Wundt's laboratory was directed towards conscious human experience, in line with the work of Fechner and Weber on the intensity of stimuli. In the United Kingdom, under the influence of the anthropometric developments led by Francis Galton, interest focussed on individual differences between humans on psychological variables, in line with the work of Bessel. Cattell soon adopted the methods of Galton and helped lay the foundation of psychometrics.
20th century
Many statistical methods were developed even before the 20th century: Charles Spearman invented factor analysis, which studies individual differences through variance and covariance. German and English psychology were combined and carried forward in the United States, and statistical methods dominated the field at the beginning of the century. Two important statistical developments were structural equation modeling (SEM) and the analysis of variance (ANOVA). Because factor analysis is unable to support causal inferences, Sewall Wright developed structural equation modeling for correlational data to infer causality, which is still a major research area today. These statistical methods together formed psychometrics. The Psychometric Society was established in 1935, and the journal Psychometrika has been published since 1936.
In the United States, behaviorism arose in opposition to introspectionism and associated reaction-time research, and turned the focus of psychological research entirely to learning theory. In Europe introspection survived in Gestalt psychology. Behaviorism dominated American psychology until the end of the Second World War, and largely refrained from inference on mental processes. Formal theories were mostly absent (except for vision and hearing).
During the war, developments in engineering, mathematical logic and computability theory, computer science and mathematics, and the military need to understand human performance and limitations, brought together experimental psychologists, mathematicians, engineers, physicists, and economists. Out of this mix of different disciplines mathematical psychology arose. Especially the developments in signal processing, information theory, linear systems and filter theory, game theory, stochastic processes and mathematical logic gained a large influence on psychological thinking.
Two seminal papers on learning theory in Psychological Review helped to establish the field in a world that was still dominated by behaviorists: a paper by Bush and Mosteller instigated the linear operator approach to learning, and a paper by Estes started the stimulus sampling tradition in psychological theorizing. These two papers presented the first detailed formal accounts of data from learning experiments.
Mathematical modeling of learning processes developed greatly in the 1950s, as behavioral learning theory was flourishing. One development was the stimulus sampling theory of William K. Estes; another was the linear operator models of Robert R. Bush and Frederick Mosteller.
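One common textbook form of the linear operator model moves the response probability a fixed fraction toward 1 after reinforcement and toward 0 otherwise; the sketch below (with hypothetical parameter values) illustrates this update rule rather than Bush and Mosteller's original notation.

```python
def linear_operator_trial(p: float, reinforced: bool, theta: float = 0.1) -> float:
    """One trial of a simple linear operator learning model: the response
    probability moves a fraction theta toward 1 on reinforced trials and
    toward 0 on non-reinforced trials."""
    target = 1.0 if reinforced else 0.0
    return p + theta * (target - p)

# Reinforcement on every trial drives the response probability toward 1.
p = 0.05
for _ in range(50):
    p = linear_operator_trial(p, reinforced=True)
print(round(p, 3))  # approximately 0.995 after 50 reinforced trials
```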
Signal processing and detection theory are broadly used in perception, psychophysics, and nonsensory areas of cognition. Von Neumann's book The Theory of Games and Economic Behavior established the importance of game theory and decision making. R. Duncan Luce and Howard Raiffa contributed to the choice and decision-making area.
The area of language and thinking came into the spotlight with the development of computer science and linguistics, especially information theory and computation theory. Chomsky proposed models of linguistics and a computational hierarchy theory. Allen Newell and Herbert Simon proposed a model of human problem solving. Developments in artificial intelligence and human–computer interaction remain active areas in both computer science and psychology.
Before the 1950s, psychometricians emphasized the structure of measurement error and the development of high-powered statistical methods for the measurement of psychological quantities, but little of the psychometric work concerned the structure of the psychological quantities being measured or the cognitive factors behind the response data. Scott and Suppes studied the relationship between the structure of data and the structure of numerical systems that represent the data. Coombs constructed formal cognitive models of the respondent in a measurement situation rather than statistical data-processing algorithms, for example the unfolding model. Another breakthrough was the development of a new form of the psychophysical scaling function along with new methods of collecting psychophysical data, such as Stevens' power law.
The 1950s saw a surge in mathematical theories of psychological processes, including Luce's theory of choice, Tanner and Swets' introduction of signal detection theory for human stimulus detection, and Miller's approach to information processing. By the end of the 1950s, the number of mathematical psychologists had increased more than tenfold from a handful, not counting psychometricians. Most of these were concentrated at Indiana University, Michigan, Pennsylvania, and Stanford. Some of them were regularly invited by the U.S. Social Science Research Council to teach in summer workshops in mathematics for social scientists at Stanford University, promoting collaboration.
To better define the field of mathematical psychology, the mathematical models of the 1950s were brought together in a sequence of volumes edited by Luce, Bush, and Galanter: two readings and three handbooks (Luce, R. D., Bush, R. R., & Galanter, E. (Eds.) (1963). Handbook of Mathematical Psychology, Volumes I–III. New York: Wiley). This series of volumes turned out to be helpful in the development of the field. In the summer of 1963 the need was felt for a journal for theoretical and mathematical studies in all areas in psychology, excluding work that was mainly factor analytical. An initiative led by R. C. Atkinson, R. R. Bush, W. K. Estes, R. D. Luce, and P. Suppes resulted in the appearance of the first issue of the Journal of Mathematical Psychology in January 1964.
Under the influence of developments in computer science, logic, and language theory, in the 1960s modeling gravitated towards computational mechanisms and devices. Examples of the latter constitute so called cognitive architectures (e.g., production rule systems, ACT-R) as well as connectionist systems or neural networks.
Important mathematical expressions for relations between physical characteristics of stimuli and subjective perception are Weber–Fechner law, Ekman's law, Stevens's power law, Thurstone's law of comparative judgment, the theory of signal detection (borrowed from radar engineering), the matching law, and Rescorla–Wagner rule for classical conditioning. While the first three laws are all deterministic in nature, later established relations are more fundamentally stochastic. This has been a general theme in the evolution in mathematical modeling of psychological processes: from deterministic relations as found in classical physics to inherently stochastic models.
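As an example of how such laws are applied to data, the sketch below computes the sensitivity index d′ and the criterion of equal-variance Gaussian signal detection theory from hypothetical hit and false-alarm rates.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index of equal-variance Gaussian signal detection theory:
    d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def criterion(hit_rate: float, false_alarm_rate: float) -> float:
    """Response bias c = -(z(H) + z(F)) / 2."""
    return -(norm.ppf(hit_rate) + norm.ppf(false_alarm_rate)) / 2

# Hypothetical detection experiment: 84% hits, 16% false alarms.
print(round(d_prime(0.84, 0.16), 2))   # ~1.99
print(round(criterion(0.84, 0.16), 2)) # ~0 (unbiased observer)
```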
Influential mathematical psychologists
John Anderson
Richard C. Atkinson
William H. Batchelder
Michael H. Birnbaum
Jerome Busemeyer
Hans Colonius
C. H. Coombs
Robyn Dawes
Adele Diederich
Ehtibar Dzhafarov
William Kaye Estes
Jean-Claude Falmagne
B. F. Green
Daniel Kahneman
Eric Maris
Roger E. Kirk
D. H. Krantz
D. R. J. Laming
Michael D. Lee
Philip Marcus Levy
R. Duncan Luce
David Marr
James L. McClelland
Jeff Miller
Jay Myung
Louis Narens
Allen Newell
Robert M. Nosofsky
Roger Ratcliff
David E. Rumelhart
Herbert A. Simon
Roger Shepard
Richard Shiffrin
Philip L. Smith
Stanley S. Stevens
George Sperling
Saul Sternberg
Patrick Suppes
John A. Swets
Joshua Tenenbaum
James T. Townsend
Louis L. Thurstone
Amos Tversky
Rolf Ulrich
Dirk Vorberg
Eric-Jan Wagenmakers
Elke U. Weber
Thomas D. Wickens
Important theories and models
Sensation, perception, and psychophysics
Stevens' power law
Weber–Fechner law
Stimulus detection and discrimination
Signal detection theory
Stimulus identification
Accumulator models
Diffusion models
Neural network/connectionist models
Race models
Random walk models
Renewal models
Simple decision
Cascade model
Level and change race model
Recruitment model
SPRT
Decision field theory
Memory scanning, visual search
Push-down stack
Serial exhaustive search (SES) model
Error response times
Fast guess model
Sequential effects
Linear operator model
Learning
Linear operator model
Stochastic learning theory
Measurement theory
Theory of conjoint measurement
Developmental psychology
Journals and organizations
Central journals are the Journal of Mathematical Psychology and the British Journal of Mathematical and Statistical Psychology. There are three annual conferences in the field: the annual meeting of the Society for Mathematical Psychology in the U.S., the annual European Mathematical Psychology Group meeting in Europe, and the Australasian Mathematical Psychology conference.
See also
Computational cognition
Mathematical models of social learning
Outline of psychology
Psychological statistics
Quantitative psychology
References
External links
British Journal of Mathematical and Statistical Psychology
European Mathematical Psychology Group
Journal of Mathematical Psychology
Online tutorials on Mathematical Psychology from the Open Distance Learning initiative of the University of Bonn.
Society for Mathematical Psychology
Applied mathematics
Base pair
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes.
Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code.
The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3.2 billion base pairs long and to contain 20,000–25,000 distinct protein-coding genes. A kilobase (kb) is a unit of measurement in molecular biology equal to 1000 base pairs of DNA or RNA. The total number of DNA base pairs on Earth is estimated at about 5.0×10^37, with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
Hydrogen bonding and stability
(Figure: a G·C base pair joined by three hydrogen bonds and an A·T base pair joined by two hydrogen bonds; the non-covalent hydrogen bonds are drawn as dashed lines, and the bonds to the pentose sugars point toward the minor groove.)
Hydrogen bonding is the chemical interaction that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content. Crucially, however, stacking interactions are primarily responsible for stabilising the double-helical structure; Watson-Crick base pairing's contribution to global structural stability is minimal, but its role in the specificity underlying complementarity is, by contrast, of maximal importance as this underlies the template-dependent processes of the central dogma (e.g. DNA replication).
The bigger nucleobases, adenine and guanine, are members of a class of double-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of single-ringed chemical structures called pyrimidines. Purines are complementary only with pyrimidines: pyrimidine–pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine–purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. Purine–pyrimidine base-pairing of AT or GC or UA (in RNA) results in proper duplex structure. The only other purine–pyrimidine pairings would be AC and GT and UG (in RNA); these pairings are mismatches because the patterns of hydrogen donors and acceptors do not correspond. The GU pairing, with two hydrogen bonds, does occur fairly often in RNA (see wobble base pair).
Paired DNA and RNA molecules are comparatively stable at room temperature, but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is, therefore, unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. On the converse, regions of a genome that need to separate frequently — for example, the promoter regions for often-transcribed genes — are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions.
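To make the link between GC content, melting temperature and primer design concrete, here is a minimal Python sketch. It uses two common rules of thumb, the Wallace rule for short oligonucleotides and a simple GC-adjusted approximation for longer ones; the coefficients are textbook approximations assumed for illustration, not values taken from this article, and real primer design tools use more sophisticated nearest-neighbor models.

    def gc_content(seq):
        """Fraction of G and C bases in a DNA sequence."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    def melting_temp(seq):
        """Rule-of-thumb melting temperature (deg C) of a DNA oligo.

        Wallace rule (2 degC per A/T, 4 degC per G/C) for oligos up to ~13 nt;
        a simple GC-adjusted approximation for longer sequences.
        """
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        if len(seq) <= 13:
            return 2 * at + 4 * gc
        return 64.9 + 41 * (gc - 16.4) / len(seq)

    # Two arbitrary example primers: the GC-richer one melts at a higher temperature.
    for primer in ("ATGCATTAATGC", "ATGCGCGCGGCCGCATTAGC"):
        print(primer, round(gc_content(primer), 2), round(melting_temp(primer), 1))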
Examples
The following DNA sequences illustrate paired double-stranded patterns. By convention, the top strand is written from the 5′-end to the 3′-end; thus, the bottom strand is written 3′ to 5′.
A base-paired DNA sequence:
The corresponding RNA sequence, in which uracil is substituted for thymine in the RNA strand:
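Since the example sequences themselves are not reproduced here, a small Python sketch of the underlying rules may be useful. The sequence used is an arbitrary illustration, not one taken from the original examples; it shows the base-paired partner strand and the corresponding RNA sequence with uracil substituted for thymine.

    DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand_5to3):
        """Base-paired partner of a 5'->3' strand, written 3'->5' beneath it."""
        return "".join(DNA_COMPLEMENT[b] for b in strand_5to3.upper())

    def transcribe(coding_strand):
        """RNA transcript matching the coding (top) strand, with U substituted for T."""
        return coding_strand.upper().replace("T", "U")

    top = "ATGCTTAGGCTA"                      # arbitrary 5'->3' example
    print("5'-" + top + "-3'")
    print("3'-" + complement(top) + "-5'")    # the base-paired strand
    print("5'-" + transcribe(top) + "-3'")    # the corresponding RNA sequence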
Base analogs and intercalators
Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. This is due to their isosteric chemistry. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form.
Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine.
Mismatch repair
Mismatched base pairs can be generated by errors of DNA replication and as intermediates during homologous recombination. The process of mismatch repair ordinarily must recognize and correctly repair a small number of base mispairs within a long sequence of normal DNA base pairs. To repair mismatches formed during DNA replication, several distinctive repair processes have evolved to distinguish between the template strand and the newly formed strand so that only the newly inserted incorrect nucleotide is removed (in order to avoid generating a mutation). The proteins employed in mismatch repair during DNA replication, and the clinical significance of defects in this process are described in the article DNA mismatch repair. The process of mispair correction during recombination is described in the article gene conversion.
Length measurements
The following abbreviations are commonly used to describe the length of a D/RNA molecule:
bp = base pair—one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, and to roughly 618 or 643 daltons for DNA and RNA respectively.
kb (= kbp) = kilo–base-pair = 1,000 bp
Mb (= Mbp) = mega–base-pair = 1,000,000 bp
Gb (= Gbp) = giga–base-pair = 1,000,000,000 bp
For single-stranded DNA/RNA, units of nucleotides are used—abbreviated nt (or knt, Mnt, Gnt)—as they are not paired.
To distinguish between units of computer storage and bases, kbp, Mbp, Gbp, etc. may be used for base pairs.
The centimorgan is also often used to imply distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, the centimorgan is about 1 million base pairs.
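To make the unit relationships in this section concrete, the following Python sketch converts a base-pair count into megabases, an approximate physical length (using the ~3.4 Å rise per bp quoted above), and an approximate mass (using the ~618 Da per DNA bp figure quoted above). The 3.2 Gbp input is the haploid human genome estimate mentioned earlier; the resulting ~1 m contour length is a rough back-of-the-envelope figure, not a value from this article.

    BP_LENGTH_NM = 0.34          # ~3.4 Angstrom rise per base pair in B-form DNA
    DA_PER_DNA_BP = 618          # approximate mass of one DNA base pair, in daltons

    def describe(bp):
        """Return (Mb, length in metres, mass in daltons) for a given bp count."""
        mb = bp / 1e6
        length_m = bp * BP_LENGTH_NM * 1e-9
        mass_da = bp * DA_PER_DNA_BP
        return mb, length_m, mass_da

    human_haploid_bp = 3.2e9     # ~3.2 billion base pairs (haploid human genome)
    mb, length_m, mass_da = describe(human_haploid_bp)
    print(f"{mb:.0f} Mb, about {length_m:.2f} m of B-form DNA, ~{mass_da:.2e} Da")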
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. DNA sequences have been described which use newly created nucleobases to form a third base pair, in addition to the two base pairs found in nature, A-T (adenine – thymine) and G-C (guanine – cytosine). A few research groups have been searching for a third base pair for DNA, including teams led by Steven A. Benner, Philippe Marliere, Floyd E. Romesberg and Ichiro Hirao. Some new base pairs based on alternative hydrogen bonding, hydrophobic interactions and metal coordination have been reported.
In 1989 Steven Benner (then working at the Swiss Federal Institute of Technology in Zurich) and his team introduced modified forms of cytosine and guanine into DNA molecules in vitro. The nucleotides, which encoded RNA and proteins, were successfully replicated in vitro. Since then, Benner's team has been trying to engineer cells that can make foreign bases from scratch, obviating the need for a feedstock.
In 2002, Ichiro Hirao's group in Japan developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were found to form a high-fidelity pair in PCR amplification. In 2013, they applied the Ds–Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.
In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides were named d5SICS and dNaM. More technically, these artificial nucleotides, which bear hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. His team designed a variety of in vitro or "test tube" templates containing the unnatural base pair and confirmed that it was efficiently replicated with high fidelity in virtually all sequence contexts using standard in vitro techniques, namely PCR amplification of DNA and PCR-based applications. Their results show that for PCR and PCR-based applications, the d5SICS–dNaM unnatural base pair is functionally equivalent to a natural base pair, and when combined with the other two natural base pairs used by all organisms, A–T and G–C, it provides a fully functional and expanded six-letter "genetic alphabet".
In 2014 the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA, known as a plasmid, containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. The transfection did not hamper the growth of the E. coli cells, and the cells showed no sign of losing the unnatural base pairs to their natural DNA repair mechanisms. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Romesberg said he and his colleagues created 300 variants to refine the design of nucleotides that would be stable enough and would be replicated as easily as the natural ones when the cells divide. This was achieved in part by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. The natural bacterial replication pathways then use them to accurately replicate a plasmid containing d5SICS–dNaM. Other researchers were surprised that the bacteria replicated these human-made DNA subunits.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Experts said the synthetic DNA incorporating the unnatural base pair raises the possibility of life forms based on a different DNA code.
Non-canonical base pairing
In addition to the canonical pairing, some conditions can also favour base-pairing with alternative base orientation, and number and geometry of hydrogen bonds. These pairings are accompanied by alterations to the local backbone shape.
The most common of these is the wobble base pairing that occurs between tRNAs and mRNAs at the third base position of many codons during translation and during the charging of tRNAs by some tRNA synthetases. They have also been observed in the secondary structures of some RNA sequences.
Additionally, Hoogsteen base pairing (typically written as A•U/T and G•C) can exist in some DNA sequences (e.g. CA and TA dinucleotides) in dynamic equilibrium with standard Watson–Crick pairing. They have also been observed in some protein–DNA complexes.
In addition to these alternative base pairings, a wide range of base-base hydrogen bonding is observed in RNA secondary and tertiary structure. These bonds are often necessary for the precise, complex shape of an RNA, as well as its binding to interaction partners.
See also
List of Y-DNA single-nucleotide polymorphisms
Non-canonical base pairing
Chargaff's rules
References
Further reading
(See esp. ch. 6 and 9)
External links
DAN—webserver version of the EMBOSS tool for calculating melting temperatures
Nucleobases
Molecular genetics
Nucleic acids
Adenosine diphosphate
Adenosine diphosphate (ADP), also known as adenosine pyrophosphate (APP), is an important organic compound in metabolism and is essential to the flow of energy in living cells. ADP consists of three important structural components: a sugar backbone attached to adenine and two phosphate groups bonded to the 5’ carbon atom of ribose. The diphosphate group of ADP is attached to the 5’ carbon of the sugar backbone, while the adenine attaches to the 1’ carbon.
ADP can be interconverted to adenosine triphosphate (ATP) and adenosine monophosphate (AMP). ATP contains one more phosphate group than does ADP; AMP contains one fewer. Energy transfer used by all living things is a result of dephosphorylation of ATP by enzymes known as ATPases. The cleavage of a phosphate group from ATP results in the coupling of energy to metabolic reactions and a by-product of ADP. ATP is continually reformed from the lower-energy species ADP and AMP. The biosynthesis of ATP is achieved through processes such as substrate-level phosphorylation, oxidative phosphorylation, and photophosphorylation, all of which facilitate the addition of a phosphate group to ADP.
Bioenergetics
ADP cycling supplies the energy needed to do work in a biological system, the thermodynamic process of transferring energy from one source to another. There are two types of energy: potential energy and kinetic energy. Potential energy can be thought of as stored energy, or usable energy that is available to do work. Kinetic energy is the energy of an object as a result of its motion. The significance of ATP is in its ability to store potential energy within the phosphate bonds. The energy stored between these bonds can then be transferred to do work. For example, the transfer of energy from ATP to the protein myosin causes a conformational change when connecting to actin during muscle contraction.
It takes multiple reactions between myosin and actin to effectively produce one muscle contraction, and, therefore, the availability of large amounts of ATP is required to produce each muscle contraction. For this reason, biological processes have evolved to produce efficient ways to replenish the potential energy of ATP from ADP.
Breaking one of ATP's phosphate bonds generates approximately 30.5 kilojoules per mole of ATP (7.3 kcal). ADP can be converted back to ATP through the process of releasing the chemical energy available in food; in humans, this is constantly performed via aerobic respiration in the mitochondria. Plants use photosynthetic pathways to convert and store energy from sunlight, also via the conversion of ADP to ATP. Animals use the energy released in the breakdown of glucose and other molecules to convert ADP to ATP, which can then be used to fuel necessary growth and cell maintenance.
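As a rough worked example of the 30.5 kJ/mol figure, the Python sketch below estimates how many moles (and kilograms) of ATP must be hydrolysed to ADP to supply a given amount of free energy. The 100 kJ demand and the ~507 g/mol molar mass of ATP are illustrative assumptions, not values from this article.

    ENERGY_PER_MOL_ATP_KJ = 30.5   # free energy released per mole of ATP -> ADP + Pi
    ATP_MOLAR_MASS_G = 507.2       # approximate molar mass of ATP (assumption)

    def atp_needed(energy_kj):
        """Moles and grams of ATP hydrolysed to release energy_kj of free energy."""
        moles = energy_kj / ENERGY_PER_MOL_ATP_KJ
        return moles, moles * ATP_MOLAR_MASS_G

    moles, grams = atp_needed(100.0)   # e.g. ~100 kJ of muscular work (illustrative)
    print(f"~{moles:.1f} mol ATP (~{grams / 1000:.1f} kg) turned over")

The large mass turned over for a modest energy demand illustrates why ATP must be continually resynthesized from ADP rather than stockpiled.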
Cellular respiration
Catabolism
The ten-step catabolic pathway of glycolysis is the initial phase of free-energy release in the breakdown of glucose and can be split into two phases, the preparatory phase and payoff phase. ADP and phosphate are needed as precursors to synthesize ATP in the payoff reactions of the TCA cycle and oxidative phosphorylation mechanism. During the payoff phase of glycolysis, the enzymes phosphoglycerate kinase and pyruvate kinase facilitate the addition of a phosphate group to ADP by way of substrate-level phosphorylation.
Glycolysis
Glycolysis is performed by all living organisms and consists of 10 steps. The net reaction for the overall process of glycolysis is:
Glucose + 2 NAD+ + 2 Pi + 2 ADP → 2 pyruvate + 2 ATP + 2 NADH + 2 H2O
Steps 1 and 3 require the input of energy derived from the hydrolysis of ATP to ADP and Pi (inorganic phosphate), whereas steps 7 and 10 require the input of ADP, each yielding ATP. The enzymes necessary to break down glucose are found in the cytoplasm, the viscous fluid that fills living cells, where the glycolytic reactions take place.
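A minimal bookkeeping sketch of the ATP/ADP balance just described: ATP is invested at steps 1 and 3 of the preparatory phase, and ATP is produced from ADP at steps 7 and 10 of the payoff phase. The doubling of the payoff steps reflects the standard accounting that each glucose yields two three-carbon intermediates; the net of 2 ATP per glucose matches the net reaction above.

    # ATP change per glucose at each glycolysis step that involves ATP/ADP
    ATP_CHANGE = {
        1: -1,   # hexokinase: ATP -> ADP (preparatory phase)
        3: -1,   # phosphofructokinase: ATP -> ADP (preparatory phase)
        7: +2,   # phosphoglycerate kinase: ADP -> ATP, twice per glucose (payoff phase)
        10: +2,  # pyruvate kinase: ADP -> ATP, twice per glucose (payoff phase)
    }

    net_atp = sum(ATP_CHANGE.values())
    print(f"Net ATP per glucose from glycolysis: {net_atp}")   # -> 2, as in the net reaction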
Citric acid cycle
The citric acid cycle, also known as the Krebs cycle or the TCA (tricarboxylic acid) cycle is an 8-step process that takes the pyruvate generated by glycolysis and generates 4 NADH, FADH2, and GTP, which is further converted to ATP. It is only in step 5, where GTP is generated, by succinyl-CoA synthetase, and then converted to ATP, that ADP is used (GTP + ADP → GDP + ATP).
Oxidative phosphorylation
Oxidative phosphorylation produces 26 of the 30 equivalents of ATP generated in cellular respiration by transferring electrons from NADH or FADH2 to O2 through electron carriers. The energy released when electrons are passed from higher-energy NADH or FADH2 to the lower-energy O2 is required to phosphorylate ADP and once again generate ATP. It is this energy coupling and phosphorylation of ADP to ATP that gives the electron transport chain the name oxidative phosphorylation.
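Putting the figures from this section together, a small tally of the ~30 ATP equivalents per glucose: 2 from glycolysis, 2 from GTP formed in the citric acid cycle (one per turn, two turns per glucose, a standard accounting assumption rather than a figure stated explicitly above), and 26 from oxidative phosphorylation as stated in this section.

    ATP_SOURCES = {
        "glycolysis (substrate-level)": 2,
        "citric acid cycle (GTP -> ATP)": 2,
        "oxidative phosphorylation": 26,
    }

    total = sum(ATP_SOURCES.values())
    for source, atp in ATP_SOURCES.items():
        print(f"{source:34s} {atp:3d}")
    print(f"{'total ATP equivalents per glucose':34s} {total:3d}")   # -> 30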
Mitochondrial ATP synthase complex
During the initial phases of glycolysis and the TCA cycle, cofactors such as NAD+ donate and accept electrons that aid in the electron transport chain's ability to produce a proton gradient across the inner mitochondrial membrane. The ATP synthase complex exists within the mitochondrial membrane (FO portion) and protrudes into the matrix (F1 portion). The energy derived as a result of the chemical gradient is then used to synthesize ATP by coupling the reaction of inorganic phosphate to ADP in the active site of the ATP synthase enzyme; the equation for this can be written as ADP + Pi → ATP.
Blood platelet activation
Under normal conditions, small disk-shaped platelets circulate in the blood freely and without interaction with one another. ADP is stored in dense bodies inside blood platelets and is released upon platelet activation. ADP interacts with a family of ADP receptors found on platelets (P2Y1, P2Y12, and P2X1), which leads to platelet activation.
P2Y1 receptors initiate platelet aggregation and shape change as a result of interactions with ADP.
P2Y12 receptors further amplify the response to ADP and draw forth the completion of aggregation.
ADP in the blood is converted to adenosine by the action of ecto-ADPases, inhibiting further platelet activation via adenosine receptors.
See also
Nucleoside
Nucleotide
DNA
RNA
Oligonucleotide
Apyrase
Phosphate
Adenosine diphosphate ribose
References
Adenosine receptor agonists
Neurotransmitters
Nucleotides
Cellular respiration
Purines
Purinergic signalling
Pyrophosphate esters
Photosynthesis
Photosynthesis is a system of biological processes by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, typically from sunlight, into the chemical energy necessary to fuel their metabolism.
Photosynthesis usually refers to oxygenic photosynthesis, a process that produces oxygen. Photosynthetic organisms store the chemical energy so produced within intracellular organic compounds (compounds containing carbon) like sugars, glycogen, cellulose and starches. To use this stored chemical energy, an organism's cells metabolize the organic compounds through cellular respiration. Photosynthesis plays a critical role in producing and maintaining the oxygen content of the Earth's atmosphere, and it supplies most of the biological energy necessary for complex life on Earth.
Some bacteria also perform anoxygenic photosynthesis, which uses bacteriochlorophyll to split hydrogen sulfide as a reductant instead of water, producing sulfur instead of oxygen. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, where the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and power proton pumps to directly synthesize adenosine triphosphate (ATP), the "energy currency" of cells. Such archaeal photosynthesis might have been the earliest form of photosynthesis that evolved on Earth, as far back as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis).
While the details may differ between species, the process always begins when light energy is absorbed by the reaction centers, proteins that contain photosynthetic pigments or chromophores. In plants, these proteins are chlorophylls (a porphyrin derivative that absorbs the red and blue spectrums of light, thus reflecting green) held inside chloroplasts, abundant in leaf cells. In bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two important molecules that participate in energetic processes: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ATP.
In plants, algae, and cyanobacteria, sugars are synthesized by a subsequent sequence of reactions called the Calvin cycle. In this process, atmospheric carbon dioxide is incorporated into already existing organic compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms like the reverse Krebs cycle are used to achieve the same end.
The first photosynthetic organisms probably evolved early in the evolutionary history of life using reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. The average rate of energy captured by global photosynthesis is approximately 130 terawatts, which is about eight times the total power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91–104 petagrams (Pg), i.e. billions of metric tons) of carbon into biomass per year. Photosynthesis was discovered in 1779 by Jan Ingenhousz. He showed that plants need light, not just air, soil, and water.
Photosynthesis is vital for climate processes, as it captures carbon dioxide from the air and binds it into plants, harvested produce and soil. Cereals alone are estimated to bind 3,825 Tg or 3.825 Pg of carbon dioxide every year, i.e. 3.825 billion metric tons.
Overview
Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon.
In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere.
Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen.
Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism.
Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments (cellular respiration in mitochondria).
The general equation for photosynthesis as first proposed by Cornelis van Niel is:
CO2 + 2 H2A + photons → [CH2O] + 2 A + H2O (where H2A is a generic electron donor such as water or hydrogen sulfide)
Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is:
CO2 + 2 H2O + photons → [CH2O] + O2 + H2O
This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling n water molecules from each side gives the net equation:
CO2 + H2O + photons → [CH2O] + O2
Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example some microbes use sunlight to oxidize arsenite to arsenate: The equation for this reaction is:
CO2 + AsO3^3− + photons → AsO4^3− + CO (used to build other compounds in subsequent reactions)
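As a small sanity check of the net oxygenic equation reconstructed above (CO2 + H2O + photons → [CH2O] + O2), the Python sketch below counts atoms on each side. The formula parser handles only simple formulas without parentheses and is an illustrative sketch, not a general chemistry tool.

    import re
    from collections import Counter

    def atom_count(formula):
        """Count atoms in a simple formula such as 'CO2' or 'CH2O' (no parentheses)."""
        counts = Counter()
        for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            counts[element] += int(num) if num else 1
        return counts

    def side_count(formulas):
        total = Counter()
        for f in formulas:
            total += atom_count(f)
        return total

    reactants = ["CO2", "H2O"]     # photons carry energy, not atoms
    products = ["CH2O", "O2"]      # [CH2O] represents one carbohydrate unit

    print(side_count(reactants) == side_count(products))   # True: the equation balances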
Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation.
Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis.
Photosynthetic membranes and organelles
In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb.
In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system.
Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors.
These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex.
Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place.
Light-dependent reactions
In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen.
The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is:
2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2
Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms.
Z scheme
In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic.
In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram at right). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with an H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends.
The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction.
Water photolysis
Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms.
Light-independent reactions
Calvin cycle
In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is:
3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O
Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars photosynthesis produces are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain.
The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (five out of six molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch, and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids.
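As a worked illustration of the stoichiometry in this section: the light-independent equation above implies a cost of 3 ATP and 2 NADPH per CO2 fixed, and six fixations yield six triose phosphates, of which five regenerate ribulose 1,5-bisphosphate and one is exported. The per-hexose totals in the sketch below follow from that ratio; treating "one glucose" as two exported trioses is a simplifying assumption for illustration.

    ATP_PER_CO2 = 3      # Calvin cycle cost per CO2 fixed (from the equation above)
    NADPH_PER_CO2 = 2

    def calvin_cost(n_co2):
        """ATP and NADPH consumed to fix n_co2 molecules of CO2."""
        return n_co2 * ATP_PER_CO2, n_co2 * NADPH_PER_CO2

    print(calvin_cost(3))   # one exported triose phosphate: (9, 6)
    print(calvin_cost(6))   # one hexose (glucose) equivalent: (18, 12)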
Carbon concentrating mechanisms
On land
In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and a decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions.
Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over sixty plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon-concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to C4 and a useful carbon-concentrating mechanism in its own right.
Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which spatially separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants.
Calcium-oxalate-accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g., water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of oxalate oxidase reaction, can be neutralized by catalase. Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways. However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere.
In water
Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved hydrocarbonate ions (HCO3−). Before the CO2 can diffuse out, RuBisCO concentrated within the carboxysome quickly sponges it up. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell, from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around RuBisCO.
Order and kinetics
The overall process of photosynthesis takes place in four stages:
Energy transfer in antenna chlorophyll (thylakoid membranes) – femtosecond to picosecond timescale
Transfer of electrons in photochemical reactions (thylakoid membranes) – picosecond to nanosecond timescale
Electron transport chain and ATP synthesis (thylakoid membranes) – microsecond to millisecond timescale
Carbon fixation and export of stable products – millisecond to second timescale
Efficiency
Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%.
Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) reemitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers.
Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature, and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices.
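To show where numbers in this range come from, here is a back-of-the-envelope Python sketch of an upper bound on efficiency. It assumes roughly 48 mol of 680 nm photons per mol of glucose (about 8 photons per CO2 fixed) and a glucose combustion enthalpy of about 2,870 kJ/mol; both are textbook approximations assumed for illustration, not figures from this article, and the estimate ignores the many downstream losses that push field efficiencies toward the low end of the quoted range.

    H = 6.626e-34        # Planck constant, J*s
    C = 2.998e8          # speed of light, m/s
    N_A = 6.022e23       # Avogadro's number

    def photon_energy_kj_per_mol(wavelength_nm):
        """Energy of one mole of photons at the given wavelength, in kJ/mol."""
        return H * C / (wavelength_nm * 1e-9) * N_A / 1000

    GLUCOSE_KJ_PER_MOL = 2870     # approximate heat of combustion of glucose (assumption)
    PHOTONS_PER_GLUCOSE = 48      # ~8 photons per CO2 fixed x 6 CO2 (assumption)

    light_in = PHOTONS_PER_GLUCOSE * photon_energy_kj_per_mol(680)
    print(f"Upper-bound efficiency at 680 nm: {GLUCOSE_KJ_PER_MOL / light_in:.0%}")

The result, roughly a third of the absorbed red-light energy, is far above the realized 0.1–8% because it ignores absorption outside the usable spectrum, photorespiration, respiration and other losses.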
Scientists are studying photosynthesis in hopes of developing plants with increased yield.
The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the light reaction creates ATP and NADPH energy molecules, which C3 plants can use for carbon fixation or photorespiration. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions.
Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. An integrated chlorophyll fluorometer and gas exchange system can investigate both light and dark reactions when researchers use the two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2 and of ΔH2O using reliable methods. CO2 is commonly measured in μmol/(m2·s), parts per million, or volume per million; and H2O is commonly measured in mmol/(m2·s) or in mbar. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and "Ci" or intracellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations.
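A hedged sketch of the kind of calculation such gas exchange systems perform internally: the simplest textbook relation for substomatal CO2, Ci ≈ Ca − A/gs, which ignores boundary-layer conductance and the transpiration correction used in the full gas-exchange equations. The numbers in the example are illustrative assumptions, not measurements from this article.

    def intercellular_co2(ca_ppm, assimilation, stomatal_conductance):
        """Simplified Ci estimate in ppm (umol/mol).

        ca_ppm: ambient CO2 in umol/mol; assimilation A in umol CO2 m-2 s-1;
        stomatal_conductance gs in mol m-2 s-1. Boundary-layer conductance and
        the transpiration correction are deliberately ignored.
        """
        return ca_ppm - assimilation / stomatal_conductance

    # Illustrative values: ambient 400 ppm, A = 20 umol m-2 s-1, gs = 0.2 mol m-2 s-1
    print(intercellular_co2(400.0, 20.0, 0.2))   # -> 300.0 ppm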
Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response.
Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC, the estimation of CO2 concentration at the site of carboxylation in the chloroplast, to replace Ci. CO2 concentration in the chloroplast becomes possible to estimate with the measurement of mesophyll conductance or gm using an integrated system.
Photosynthesis measurement systems are not designed to directly measure the amount of light the leaf absorbs, but analysis of chlorophyll fluorescence, P700- and P515-absorbance, and gas exchange measurements reveal detailed information about, e.g., the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even wavelength dependency of the photosynthetic efficiency can be analyzed.
A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure called a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time.
Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks.
Evolution
Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago, though the first direct evidence of photosynthesis comes from thylakoid membranes preserved in 1.75-billion-year-old cherts.
Oxygenic photosynthesis is the main source of oxygen in the Earth's atmosphere, and its earliest appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around two billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center.
Symbiosis and the origin of chloroplasts
Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges, and sea anemones. Scientists presume that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins they need to survive.
An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles.
Photosynthetic eukaryotic lineages
Symbiotic and kleptoplastic organisms excluded:
The glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular)
The cryptophytes—clade Cryptista (unicellular)
The haptophytes—clade Haptista (unicellular)
The dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular)
The ochrophytes—clade Stramenopila (uni- and multicellular)
The chlorarachniophytes and three species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular)
The euglenids—clade Excavata (unicellular)
Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lines by four. A nucleomorph, remnants of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga).
Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae.
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees.
Photosynthetic prokaryotic lineages
Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time.
With a possible exception of Heimdallarchaeota, photosynthesis is not found in archaea. Haloarchaea are phototrophic and can absorb energy from the sun, but do not harvest carbon from the atmosphere and are therefore not photosynthetic. Instead of chlorophyll they use rhodopsins, which convert light-energy to ion gradients but cannot mediate electron transfer reactions.
In bacteria eight photosynthetic lineages are currently known:
Cyanobacteria, the only prokaryotes performing oxygenic photosynthesis and the only prokaryotes that contain two types of photosystems (type I (RCI), also known as Fe-S type, and type II (RCII), also known as quinone type). The seven remaining prokaryotes have anoxygenic photosynthesis and use versions of either type I or type II.
Chlorobi (green sulfur bacteria) Type I
Heliobacteria Type I
Chloracidobacterium Type I
Proteobacteria (purple sulfur bacteria and purple non-sulfur bacteria) Type II (see: Purple bacteria)
Chloroflexota (green non-sulfur bacteria) Type II
Gemmatimonadota Type II
Eremiobacterota Type II
Cyanobacteria and the evolution of photosynthesis
The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae). The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae.
Experimental history
Discovery
Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century.
Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil a plant was using and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, this was a signaling point to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, well before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that a plant could restore the air the candle and the mouse had "injured."
In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours.
In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which organisms use photosynthesis to produce food (such as glucose) was outlined.
Refinements
Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide.
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With the red alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing up to 600 nm wavelengths, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a", PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry.
Robert Hill thought that a complex of reactions consisted of an intermediate to cytochrome b6 (now a plastoquinone), and that another was from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. In the Hill reaction:
2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2
A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water.
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) Cycle.
Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain.
Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by the respiration.
In 1950, first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of P32.
Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" will absorb one light, oxidize cytochrome f, while chlorophyll "a" (and other pigments) will absorb another light but will reduce this same oxidized cytochrome, stating the two light reactions are in series.
Development of the concept
In 1893, the American botanist Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while another word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
C3 : C4 photosynthesis research
In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the alga Chlorella in a fraction of a second in light resulted in a three-carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber, where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, and concluded that all terrestrial plants have the same photosynthetic capacities and are light-saturated at less than 50% of full sunlight.
Later in 1958–1963 at Cornell University, field-grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and not to be saturated at near full sunlight. This higher rate in maize was almost double those observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass and in the dicot amaranthus, leaf photosynthetic rates were around 38−40 μmol CO2·m−2·s−1, and the leaves have two types of green cells, i.e. an outer layer of mesophyll cells surrounding tightly packed, chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, very low CO2 compensation point, high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986. These species were later termed C4 plants as the first stable compound of CO2 fixation in light has four carbons as malate and aspartate. Other species that lack Kranz anatomy, such as cotton and sunflower, were termed C3 type, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants.
Factors
There are four main factors influencing photosynthesis and several corollary factors. The four main factors are:
Light irradiance and wavelength
Water absorption
Carbon dioxide concentration
Temperature.
Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Light intensity (irradiance), wavelength and temperature
The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life.
The radiation climate within plant communities is extremely variable, in both time and space.
In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation.
At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance.
At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased.
These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are the light-dependent 'photochemical' temperature-independent stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, Cyanobacteria have a light-harvesting complex called Phycobilisome. This complex is made up of a series of proteins with different pigments which surround the reaction center.
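As a minimal numerical sketch of the light-response behaviour described above, the non-rectangular hyperbola commonly used by plant physiologists reproduces the near-linear rise at low irradiance and the plateau at high irradiance. The model choice and all parameter values below are illustrative assumptions, not data from Blackman's experiments.

    import numpy as np

    def photosynthesis_rate(irradiance, phi=0.05, p_max=20.0, theta=0.9):
        """Non-rectangular hyperbola light-response model (illustrative parameters).
        phi: initial quantum yield, p_max: light-saturated rate (umol CO2 m^-2 s^-1),
        theta: curvature factor between 0 and 1."""
        i = np.asarray(irradiance, dtype=float)
        a = phi * i + p_max
        # Smaller root of: theta*P**2 - (phi*I + p_max)*P + phi*I*p_max = 0
        return (a - np.sqrt(a**2 - 4.0 * theta * phi * i * p_max)) / (2.0 * theta)

    for i in (0, 50, 200, 800, 2000):   # irradiance in umol photons m^-2 s^-1
        print(i, round(float(photosynthesis_rate(i)), 2))   # rate plateaus near p_max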
Carbon dioxide levels and photorespiration
As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars.
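The competition between carboxylation and oxygenation is often summarized by the ratio of the two rates; in the common Farquhar-type notation (used here only as a sketch, with symbols that do not appear elsewhere in this article),

    \frac{v_c}{v_o} = S_{c/o}\,\frac{[\mathrm{CO_2}]}{[\mathrm{O_2}]},

where v_c and v_o are the carboxylation and oxygenation rates and S_{c/o} is RuBisCO's relative specificity for CO2 over O2. High CO2 concentrations (or a high specificity) favour carboxylation, while low CO2 and high O2 shift flux toward photorespiration.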
RuBisCO oxygenase activity is disadvantageous to plants for several reasons:
One product of oxygenase activity is phosphoglycolate (2 carbon) instead of 3-phosphoglycerate (3 carbon). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle.
Phosphoglycolate is quickly metabolized to glycolate that is toxic to a plant at a high concentration; it inhibits photosynthesis.
Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen.
A highly simplified summary is:
2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3
The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
See also
Jan Anderson (scientist)
Artificial photosynthesis
Calvin-Benson cycle
Carbon fixation
Cellular respiration
Chemosynthesis
Daily light integral
Hill reaction
Integrated fluorometer
Light-dependent reaction
Organic reaction
Photobiology
Photoinhibition
Photosynthetic reaction center
Photosynthetically active radiation
Photosystem
Photosystem I
Photosystem II
Quantasome
Quantum biology
Radiosynthesis
Red edge
Vitamin D
References
Further reading
Books
Papers
External links
A collection of photosynthesis pages for all levels from a renowned expert (Govindjee)
In depth, advanced treatment of photosynthesis, also from Govindjee
Science Aid: Photosynthesis Article appropriate for high school science
Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry and Cell Biology
Overall examination of Photosynthesis at an intermediate level
Overall Energetics of Photosynthesis
The source of oxygen produced by photosynthesis Interactive animation, a textbook tutorial
Photosynthesis – Light Dependent & Light Independent Stages
Khan Academy, video introduction
Agronomy
Biological processes
Botany
Cellular respiration
Ecosystems
Metabolism
Plant nutrition
Plant physiology
Quantum biology
Biological activity
In pharmacology, biological activity or pharmacological activity describes the beneficial or adverse effects of a drug on living matter. When a drug is a complex chemical mixture, this activity is exerted by the substance's active ingredient or pharmacophore but can be modified by the other constituents. Among the various properties of chemical compounds, pharmacological/biological activity plays a crucial role since it suggests uses of the compounds in medical applications. However, chemical compounds may show adverse and toxic effects which may prevent their use in medical practice.
Biological activity is usually measured by a bioassay and the activity is generally dosage-dependent, which is investigated via dose-response curves. Further, it is common to have effects ranging from beneficial to adverse for one substance when going from low to high doses. Activity depends critically on fulfillment of the ADME criteria. To be an effective drug, a compound not only must be active against a target, but also possess the appropriate ADME (Absorption, Distribution, Metabolism, and Excretion) properties necessary to make it suitable for use as a drug. Because of the costs of the measurement, biological activities are often predicted with computational methods, so-called QSAR models.
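As an illustrative sketch of how a dose–response curve is commonly modelled, the four-parameter logistic (Hill) equation below maps dose to response; the functional form and every parameter value are assumptions for demonstration, not results from any particular bioassay.

    import numpy as np

    def four_parameter_logistic(dose, bottom=0.0, top=100.0, ec50=1e-6, hill=1.0):
        """Four-parameter logistic (Hill) dose-response model.
        Returns the predicted response for each dose (same units as ec50)."""
        dose = np.asarray(dose, dtype=float)
        return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

    doses = np.logspace(-9, -3, 7)                       # 1 nM to 1 mM, illustrative
    print(np.round(four_parameter_logistic(doses), 1))   # response is 50 at dose = ec50

In practice such a curve is fitted to bioassay data (for example by nonlinear least squares) to estimate the EC50 and Hill slope that characterize a substance's potency.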
Bioactivity is a key property that promotes osseointegration for bonding and better stability of dental implants. Bioglass coatings have a high surface area and reactivity, leading to an effective interaction between the coating material and surrounding bone tissues. In the biological environment, the formation of a layer of carbonated hydroxyapatite (CHA) initiates bonding to the bone tissues. The bioglass surface coating undergoes leaching/exchange of ions, dissolution of glass, and formation of the CHA layer that promotes the cellular response of tissues. The high specific surface area of bioactive glasses is likely to induce quicker solubility of the material, availability of ions in the surrounding area, and enhanced protein adsorption ability. These factors altogether contribute toward the bioactivity of bioglass coatings. In addition, tissue mineralization (bone, teeth) is promoted while tissue-forming cells are in direct contact with bioglass materials.
Whereas a material is considered bioactive if it has an interaction with or effect on any cell or tissue in the human body, pharmacological activity is usually taken to describe the effects of a substance on the body, covering both the beneficial effects of drug candidates and a substance's toxicity.
In the study of biomineralisation, bioactivity is often meant to mean the formation of calcium phosphate deposits on the surface of objects placed in simulated body fluid, a buffer solution with ion content similar to blood.
See also
Chemical property
Chemical structure
Lipinski's rule of five, describing molecular properties of drugs
Molecular property
Physical property
QSAR, quantitative structure-activity relationship
References
Pharmacodynamics
Bioactivity
Noether's theorem
Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems (see Noether's second theorem) published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space.
Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Basic illustrations and background
As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric.
As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.
Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory.
There are numerous versions of Noether's theorem, with varying degrees of generality. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist.
Informal statement of the theorem
All fine technical points aside, Noether's theorem can be stated informally: if a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.
A more sophisticated version of the theorem involving fields states that to every continuous symmetry generated by local actions there corresponds a conserved current, and vice versa.
The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation.
The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field.
In the context of gravitation, Felix Klein's statement of Noether's theorem for action I stipulates for the invariants:
Brief illustration and overview of the concept
The main idea behind Noether's theorem is most easily illustrated by a system with one coordinate and a continuous symmetry (gray arrows on the diagram).
Consider any trajectory (bold on the diagram) that satisfies the system's laws of motion. That is, the action governing this system is stationary on this trajectory, i.e. does not change under any local variation of the trajectory. In particular it would not change under a variation that applies the symmetry flow on a time segment and is motionless outside that segment. To keep the trajectory continuous, we use "buffering" periods of small time to transition between the segments gradually.
The total change in the action now comprises changes brought by every interval in play. Parts, where variation itself vanishes, i.e outside bring no . The middle part does not change the action either, because its transformation is a symmetry and thus preserves the Lagrangian and the action . The only remaining parts are the "buffering" pieces. In these regions both the coordinate and velocity change, but changes by , and the change in the coordinate is negligible by comparison since the time span of the buffering is small (taken to the limit of 0), so . So the regions contribute mostly through their "slanting" .
That changes the Lagrangian by , which integrates to
These last terms, evaluated around the endpoints and , should cancel each other in order to make the total change in the action be zero, as would be expected if the trajectory is a solution. That is
meaning that the quantity ∂L/∂q̇ multiplied by the generator of the symmetry is conserved, which is the conclusion of Noether's theorem. For instance, if pure translations of the coordinate q by a constant are the symmetry, then the conserved quantity becomes just ∂L/∂q̇ = p, the canonical momentum.
More general cases follow the same idea:
Historical context
A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion – it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero: dX/dt = 0.
Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws.
The earliest constants of motion discovered were momentum and kinetic energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's laws of motion. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector.
In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L: I = ∫ L(q, q̇, t) dt,
where the dot over q signifies the rate of change of the coordinates q: q̇ = dq/dt.
Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations: d/dt (∂L/∂q̇k) = ∂L/∂qk.
Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that d/dt (∂L/∂q̇k) = dpk/dt = 0,
where the momentum pk = ∂L/∂q̇k
is conserved throughout the motion (on the physical path).
Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem.
Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation.
Emmy Noether's work on the invariance theorem began in 1915 when she was helping Felix Klein and David Hilbert with their work related to Albert Einstein's theory of general relativity By March 1918 she had most of the key ideas for the paper which would be published later in the year.
Mathematical expression
Simple form using perturbations
The essence of Noether's theorem is generalizing the notion of ignorable coordinates.
One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write t → t + δt and q → q + δq,
where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged; labelled by an index r = 1, 2, 3, ..., N.
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations: δt = Σr εr Tr and δq = Σr εr Qr,
where εr are infinitesimal parameter coefficients corresponding to each:
generator Tr of time evolution, and
generator Qr of the generalized coordinates.
For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle.
Using these definitions, Noether showed that the N quantities
are conserved (constants of motion).
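Written out in the notation just introduced (and up to an overall sign, which differs between treatments), these conserved quantities are

    \left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \dot{\mathbf{q}} - L \right) T_r - \frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \mathbf{Q}_r , \qquad r = 1, \ldots, N,

which reduce to the familiar energy and momentum expressions in the examples below.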
Examples
I. Time invariance
For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H
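In terms of the Lagrangian, this conserved energy is the standard Legendre-transform expression

    H = \frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \dot{\mathbf{q}} - L = \sum_k \dot{q}_k \frac{\partial L}{\partial \dot{q}_k} - L .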
II. Translational invariance
Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding linear momentum pk
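Explicitly, the conserved momentum conjugate to the ignorable coordinate is

    p_k = \frac{\partial L}{\partial \dot{q}_k} ,

consistent with the treatment of ignorable coordinates above.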
In special and general relativity, these two conservation laws can be expressed either globally (as it is done above), or locally as a continuity equation. The global versions can be united into a single global conservation law: the conservation of the energy-momentum 4-vector. The local versions of energy and momentum conservation (at any point in space-time) can also be united, into the conservation of a quantity defined locally at the space-time point: the stress–energy tensor (this will be derived in the next section).
III. Rotational invariance
The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart. It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation
Since time is not being transformed, T = 0, and N = 1. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by
Then Noether's theorem states that the following quantity is conserved,
In other words, the component of the angular momentum L along the n axis is conserved. And if n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved.
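In formulas (with signs fixed by convention, and identifying ∂L/∂ṙ with the momentum p for a standard kinetic term), the rotation acts as δr = δθ n × r, so Q = n × r, and the conserved quantity is

    \frac{\partial L}{\partial \dot{\mathbf{r}}} \cdot (\mathbf{n} \times \mathbf{r}) = \mathbf{p} \cdot (\mathbf{n} \times \mathbf{r}) = \mathbf{n} \cdot (\mathbf{r} \times \mathbf{p}) = \mathbf{n} \cdot \mathbf{L} .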
Field theory version
Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of Noether's theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (or most often implemented) version of Noether's theorem.
Let there be a set of differentiable fields defined over all space and time; for example, the temperature would be representative of such a field, being a number defined at every place and time. The principle of least action can be applied to such fields, but the action is now an integral over space and time
(the theorem can be further generalized to the case where the Lagrangian depends on up to the nth derivative, and can also be formulated using jet bundles).
A continuous transformation of the fields can be written infinitesimally as
where is in general a function that may depend on both and . The condition for to generate a physical symmetry is that the action is left invariant. This will certainly be true if the Lagrangian density is left invariant, but it will also be true if the Lagrangian changes by a divergence,
since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by so the most general symmetry transformation would be written as
with the consequence
For such systems, Noether's theorem states that there are conserved current densities
(where the dot product is understood to contract the field indices, not the index or index).
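Writing the infinitesimal transformation of the fields as δφ = εr Ψr and the corresponding change of the Lagrangian density as the divergence εr ∂μ Λμr (this notation restates the construction described above), the currents take the standard form, up to an overall sign convention,

    j^{\mu}_r = \frac{\partial \mathcal{L}}{\partial \varphi_{,\mu}} \cdot \Psi_r - \Lambda^{\mu}_r .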
In such cases, the conservation law is expressed in a four-dimensional way as a continuity equation, ∂μ jμ = 0,
which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere.
For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, is constant in its third argument. In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space, (with denoting the Kronecker delta), affects the fields as : that is, relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it which would be mapped onto by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as
The Lagrangian density transforms in the same way, , so
and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν, where we have used in place of . To wit, by using the expression given earlier, and collecting the four conserved currents (one for each ) into a tensor , Noether's theorem gives
with
(we relabelled as at an intermediate step to avoid conflict). (However, the obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.)
The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives. In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as ψ → e^{iθ}ψ, ψ* → e^{−iθ}ψ*,
a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively. A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density
In this case, Noether's theorem states that the conserved (∂ ⋅ j = 0) current equals
which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics.
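Up to normalization and sign conventions, this conserved current has the familiar form

    j^{\mu} \propto i \left( \psi^{*}\, \partial^{\mu} \psi - \psi \,\partial^{\mu} \psi^{*} \right) ,

the charge (probability) current of the complex Klein–Gordon field.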
Derivations
One independent variable
Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral
is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations
And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows
where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time.
The action integral flows to
which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get
Notice that the Euler–Lagrange equations imply
Substituting this into the previous equation, one gets
Again using the Euler–Lagrange equations we get
Substituting this into the previous equation, one gets
From which one can see that
is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, we get and so the conserved quantity simplifies to
To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case.
Field-theoretic derivation
Noether's theorem may also be derived for tensor fields where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written
whereas the transformation of the field variables is expressed as
By this definition, the field variations result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined
If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively.
Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as
where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g.
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form
The difference in Lagrangians can be written to first-order in the infinitesimal variations as
However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute
Using the Euler–Lagrange field equations
the difference in Lagrangians can be written neatly as
Thus, the change in the action can be written as
Since this holds for any region Ω, the integrand must be zero
For any combination of the various symmetry transformations, the perturbation can be written
where is the Lie derivative of
in the Xμ direction. When is a scalar or ,
These equations imply that the field variation taken at one point equals
Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law
where the conserved current equals
Manifold/fiber bundle derivation
Suppose we have an n-dimensional oriented Riemannian manifold, M and a target manifold T. Let be the configuration space of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle over M.)
Examples of this M in physics include:
In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold , representing time and the target space is the cotangent bundle of space of generalized positions.
In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, , then the target manifold is . If the field is a real vector field, then the target manifold is isomorphic to .
Now suppose there is a functional
called the action. (It takes values into , rather than ; this is for physical reasons, and is unimportant for this proof.)
To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume is the integral over M of a function
called the Lagrangian density, depending on , its derivative and the position. In other words, for in
Suppose we are given boundary conditions, i.e., a specification of the value of at the boundary if M is compact, or some limit on as x approaches ∞. Then the subspace of consisting of functions such that all functional derivatives of at are zero, that is:
and that satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action)
Now, suppose we have an infinitesimal transformation on , generated by a functional derivation, Q such that
for all compact submanifolds N or in other words,
for all x, where we set
If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one parameter symmetry Lie group.
Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have
Since this is true for any N, we have
But this is the continuity equation for the current defined by:
which is called the Noether current associated with the symmetry. The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity).
Comments
Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that
The quantum analogs of Noether's theorem involving expectation values (e.g., ) probing off shell quantities as well are the Ward–Takahashi identities.
Generalization to Lie algebras
Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let us see this explicitly. Let us say
and
Then,
where f12 = Q1[f2μ] − Q2[f1μ]. So,
This shows we can extend Noether's theorem to larger Lie algebras in a natural way.
Generalization of the proof
This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.
To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on and its first derivatives. Also, assume
Then,
for all .
More generally, if the Lagrangian depends on higher derivatives, then
Examples
Example 1: Conservation of energy
Consider the specific case of a Newtonian particle of mass m with coordinate x, moving under the influence of a potential V and coordinatized by time t. The action, S, is: S = ∫ [ (m/2) ẋ² − V(x) ] dt.
The first term in the brackets is the kinetic energy of the particle, while the second is its potential energy. Consider the generator of time translations Q = d/dt. In other words, . The coordinate x has an explicit dependence on time, whilst V does not; consequently:
so we can set
Then,
The right hand side is the energy, and Noether's theorem states that (i.e. the principle of conservation of energy is a consequence of invariance under time translations).
More generally, if the Lagrangian does not depend explicitly on time, the quantity Σi q̇i ∂L/∂q̇i − L
(called the Hamiltonian) is conserved.
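A small numerical sketch makes this concrete. The code below is an assumption-laden illustration (unit mass, an assumed harmonic potential V(x) = ½kx², and a leapfrog integrator); it checks that the energy computed along the trajectory stays constant to within integrator error, as time-translation invariance requires.

    import numpy as np

    m, k, dt = 1.0, 1.0, 1e-3      # illustrative mass, spring constant, time step
    x, v = 1.0, 0.0                # initial position and velocity

    def force(x):
        return -k * x              # F = -dV/dx for V(x) = 0.5*k*x**2

    energies = []
    for _ in range(20000):
        # velocity Verlet (leapfrog) step
        v_half = v + 0.5 * dt * force(x) / m
        x = x + dt * v_half
        v = v_half + 0.5 * dt * force(x) / m
        energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

    # The spread is of order dt**2: energy is conserved up to integrator error.
    print(max(energies) - min(energies))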
Example 2: Conservation of center of momentum
Still considering 1-dimensional time, let
for Newtonian particles where the potential only depends pairwise upon the relative displacement.
For , consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words,
And
This has the form of so we can set
Then,
where is the total momentum, M is the total mass and is the center of mass. Noether's theorem states:
Example 3: Conformal transformation
Both examples 1 and 2 are over a 1-dimensional manifold (time). An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime.
For Q, consider the generator of a spacetime rescaling. In other words,
The second term on the right hand side is due to the "conformal weight" of . And
This has the form of
(where we have performed a change of dummy indices) so set
Then
Noether's theorem states that (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side).
If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies.
Applications
Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example:
Invariance of an isolated system with respect to spatial translation (in other words, that the laws of physics are the same at all locations in space) gives the law of conservation of linear momentum (which states that the total linear momentum of an isolated system is constant)
Invariance of an isolated system with respect to time translation (i.e. that the laws of physics are the same at all points in time) gives the law of conservation of energy (which states that the total energy of an isolated system is constant)
Invariance of an isolated system with respect to rotation (i.e., that the laws of physics are the same with respect to all angular orientations in space) gives the law of conservation of angular momentum (which states that the total angular momentum of an isolated system is constant)
Invariance of an isolated system with respect to Lorentz boosts (i.e., that the laws of physics are the same with respect to all inertial reference frames) gives the center-of-mass theorem (which states that the center-of-mass of an isolated system moves at a constant velocity).
In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential.
The Noether charge is also used in calculating the entropy of stationary black holes.
See also
Conservation law
Charge (physics)
Gauge symmetry
Gauge symmetry (mathematics)
Invariant (physics)
Goldstone boson
Symmetry (physics)
References
Further reading
External links
(Original in Gott. Nachr. 1918:235–257)
Noether's Theorem at MathPages.
Articles containing proofs
Calculus of variations
Conservation laws
Concepts in physics
Eponymous theorems of physics
Partial differential equations
Physics theorems
Quantum field theory
Symmetry
Phenomenology (physics)
In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data.
Applications in particle physics
Standard Model consequences
Within the well-tested and generally accepted Standard Model, phenomenology is the calculating of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections).
Examples include:
Next-to-leading order calculations of particle production rates and distributions.
Monte Carlo simulation studies of physics processes at colliders.
Extraction of parton distribution functions from data.
CKM matrix calculations
The CKM matrix is useful in these predictions:
Application of heavy quark effective field theory to extract CKM matrix elements.
Using lattice QCD to extract quark masses and CKM matrix elements from experiment.
Theoretical models
In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
Phenomenological analysis
Phenomenological analyses, in which one studies the experimental consequences of adding the most general set of beyond-the-Standard-Model effects in a given sector of the Standard Model, usually parameterized in terms of anomalous couplings and higher-dimensional operators. In this case, the term "phenomenological" is being used more in its philosophy of science sense.
See also
Effective theory
Phenomenological model
Phenomenological quantum gravity
References
External links
Papers on phenomenology are available on the hep-ph archive of the ArXiv.org e-print archive
List of topics on phenomenology from IPPP, the Institute for Particle Physics Phenomenology at University of Durham, UK
Collider Phenomenology: Basic knowledge and techniques, lectures by Tao Han
Pheno '08 Symposium on particle physics phenomenology, including slides from the talks linked from the symposium program.
Condensed matter physics
Experimental particle physics
Experimental physics
Particle physics
Phenomenological methodology
Theoretical physics
MADNESS
MADNESS (Multiresolution Adaptive Numerical Environment for Scientific Simulation) is a high-level software environment for the solution of integral and differential equations in many dimensions using adaptive and fast harmonic analysis methods with guaranteed precision, based on multiresolution analysis and separated representations.
There are three main components to MADNESS. At the lowest level is a petascale parallel programming environment that aims to increase programmer productivity and code performance/scalability while maintaining backward compatibility with current programming tools such as the message-passing interface and Global Arrays. The numerical capabilities built upon the parallel tools provide a high-level environment for composing and solving numerical problems in many (1–6+) dimensions. Finally, built upon the numerical tools are new applications with initial focus upon chemistry, atomic and molecular physics, material science, and nuclear structure. It is open-source, has an object-oriented design, and is designed to be a parallel processing program for computers with up to millions of cores, running already on the Cray XT5 at Oak Ridge National Laboratory and the IBM Blue Gene at Argonne National Laboratory. Small matrix multiplication (relative to large, BLAS-optimized matrices) is the primary computational kernel in MADNESS; thus, an efficient implementation on modern CPUs is an ongoing research effort. Adapting the irregular computation in MADNESS to heterogeneous platforms is nontrivial because the kernel is too small to be offloaded via compiler directives (e.g. OpenACC), but this has been demonstrated for CPU–GPU systems.
Intel has publicly stated that MADNESS is one of the codes running on the Intel MIC architecture, but no performance data has been published yet.
MADNESS' chemistry capability includes Hartree–Fock and density functional theory (including analytic derivatives, response properties and time-dependent density functional theory with asymptotically corrected potentials), as well as nuclear density functional theory and Hartree–Fock–Bogoliubov theory. MADNESS and BigDFT are the two most widely known codes that perform DFT and TDDFT using wavelets.
Many-body wavefunctions requiring six-dimensional spatial representations are also implemented (e.g. MP2). The parallel runtime inside of MADNESS has been used to implement a wide variety of features, including graph optimization.
From a mathematical perspective, MADNESS emphasizes rigorous numerical precision without loss of computational performance. This is useful not only in quantum chemistry and nuclear physics, but also in the modeling of partial differential equations.
MADNESS was recognized by the R&D 100 Awards in 2011. It is an important code at Department of Energy supercomputing sites and is used by the leadership computing facilities at both Argonne National Laboratory and Oak Ridge National Laboratory to evaluate the stability and performance of their latest supercomputers. It has users around the world, including in the United States and Japan. MADNESS has been a workhorse code for computational chemistry in the DOE INCITE program at the Oak Ridge Leadership Computing Facility and is noted as one of the important codes to run on the Cray Cascade architecture.
See also
List of numerical analysis software
List of quantum chemistry and solid state physics software
References
External links
MADNESS Homepage on Google Code
Computational chemistry software
Free mathematics software
Mathematical software
Numerical software
Parallel computing
Design of experiments
The design of experiments (DOE or DOX), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
History
Statistical experiments, following Charles S. Peirce
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.
Optimal designs for regression models
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.
Fisher's principles
A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
Comparison
In some fields of study it is not possible to have independent measurements traceable to a metrology standard. Comparisons between treatments are then much more valuable and are usually preferable, often made against a scientific control or traditional treatment that acts as a baseline.
Randomization
Random assignment is the process of assigning individuals at random to groups or to different groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which makes effects due to factors other than the treatment appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
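A minimal sketch of simple random assignment (in Python; the unit labels, group names and seed are arbitrary illustrative choices) is:

    import random

    def randomly_assign(units, groups=("treatment", "control"), seed=42):
        """Shuffle the experimental units and deal them round-robin into groups."""
        rng = random.Random(seed)
        shuffled = list(units)
        rng.shuffle(shuffled)
        return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

    print(randomly_assign(range(1, 13)))

For stratified designs, the same procedure is applied separately within each subpopulation.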
Statistical replication
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information to the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
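A brief numerical sketch (Python, with assumed treatment means and equal group sizes) of a full set of T − 1 = 2 orthogonal contrasts for T = 3 treatments:

    import numpy as np

    means = np.array([10.0, 12.0, 17.0])   # assumed treatment means, for illustration
    c1 = np.array([1.0, -1.0, 0.0])        # treatment 1 versus treatment 2
    c2 = np.array([0.5, 0.5, -1.0])        # average of 1 and 2 versus treatment 3

    print(np.dot(c1, c2))      # 0.0 -> the contrasts are orthogonal
    print(np.dot(c1, means))   # -2.0, estimate of the first comparison
    print(np.dot(c2, means))   # -6.0, estimate of the second comparison

Because the contrasts are orthogonal, the two comparisons draw on non-overlapping pieces of the information in the experiment.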
Multifactorial experiments
Multifactorial experiments are used instead of the one-factor-at-a-time method. They are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
Example
This example of a designed experiment is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.
Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, θ2, ..., θ8.
We consider two different experiments:
Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8.
Do the eight weighings according to the following schedule—a weighing matrix:
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is
Similar estimates can be found for the weights of the other items:
The question of design of experiments is: which experiment is better?
The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. Note, however, that each estimate obtained in the second experiment depends on all eight measurements.
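The variance argument can be checked numerically. The sketch below does not reproduce the original weighing schedule (which is omitted here); instead it uses a standard 8 × 8 Hadamard matrix of ±1 entries (+1 meaning the object is placed in the left pan, −1 the right pan), which shares the key property that its columns are orthogonal, and compares the spread of the two estimators by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(1.0, 10.0, size=8)   # hypothetical true weights
sigma = 0.1                               # standard deviation of a single weighing
n_sim = 20000

# 8x8 Hadamard matrix via the Sylvester construction: entries are +1 or -1,
# and H.T @ H = 8 * I, so the weighing design is orthogonal.
H2 = np.array([[1, 1], [1, -1]])
H = np.kron(H2, np.kron(H2, H2)).astype(float)

# Experiment 1: weigh each object separately.
X = theta + rng.normal(0, sigma, size=(n_sim, 8))
est1 = X[:, 0]                            # estimate of theta_1

# Experiment 2: eight combined weighings according to the weighing matrix;
# each measurement is the signed sum of the true weights plus error,
# and the estimates are H.T @ Y / 8.
Y = theta @ H.T + rng.normal(0, sigma, size=(n_sim, 8))
est2 = (Y @ H) / 8.0

print("variance, separate weighings:", est1.var())        # ~ sigma^2
print("variance, combined weighings:", est2[:, 0].var())  # ~ sigma^2 / 8
```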
Many problems of the design of experiments involve combinatorial designs, as in this example and others.
Avoiding false positives
False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.
Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.
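The inflation of false positives caused by undisclosed analytic flexibility is easy to demonstrate by simulation. In the hypothetical sketch below, the null hypothesis is true by construction, yet testing several arbitrary outcome variants and reporting the smallest p-value pushes the false-positive rate far above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group, n_variants = 2000, 30, 5

false_pos_single, false_pos_hacked = 0, 0
for _ in range(n_experiments):
    # The null is true: both groups are drawn from the same distribution.
    a = rng.normal(0, 1, size=(n_variants, n_per_group))
    b = rng.normal(0, 1, size=(n_variants, n_per_group))
    pvals = [stats.ttest_ind(a[k], b[k]).pvalue for k in range(n_variants)]
    false_pos_single += pvals[0] < 0.05          # one pre-specified test
    false_pos_hacked += min(pvals) < 0.05        # "try several, report the best"

print("single pre-specified test:", false_pos_single / n_experiments)  # ~0.05
print("best of five analyses:   ", false_pos_hacked / n_experiments)   # ~0.2 or more
```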
P-hacking can be prevented by preregistering studies, in which researchers must submit their data analysis plan to the journal in which they wish to publish their paper before they even start their data collection, so that no data-driven manipulation is possible.
Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the group labels so that there is no way to know which participants belong to which condition before any are potentially removed as outliers.
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
Discussion topics when setting up an experimental design
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
How many factors does the design have, and are the levels of these factors fixed or random?
Are control conditions needed, and what should they be?
Manipulation checks: did the manipulation really work?
What are the background variables?
What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
What is the relevance of interactions between factors?
What is the influence of delayed effects of substantive factors on outcomes?
How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
What about using a proxy pretest?
Are there lurking variables?
Should the client/patient, researcher or even the analyst of the data be blind to conditions?
What is the feasibility of subsequent application of different conditions to the same units?
How many of each control and noise factors should be taken into account?
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group except for the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved by using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
Causal attributions
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal claims when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it may be that something other than the differences between the conditions causes the differences in outcomes – that is, a third variable. The same goes for studies with correlational design (Adér & Mellenbergh, 2008).
Statistical control
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Experimental designs after Fisher
Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification.
Human participant constraints
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
See also
Adversarial collaboration
Bayesian experimental design
Block design
Box–Behnken design
Central composite design
Clinical trial
Clinical study design
Computer experiment
Control variable
Controlling for a variable
Experimetrics (econometrics-related experiments)
Factor analysis
Fractional factorial design
Glossary of experimental design
Grey box model
Industrial engineering
Instrument effect
Law of large numbers
Manipulation checks
Multifactor design of experiments software
One-factor-at-a-time method
Optimal design
Plackett–Burman design
Probabilistic design
Protocol (natural sciences)
Quasi-experimental design
Randomized block design
Randomized controlled trial
Research design
Robust parameter design
Sample size determination
Supersaturated design
Royal Commission on Animal Magnetism
Survey sampling
System identification
Taguchi methods
References
Sources
Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers:
(1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint.
(1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint.
(1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217.Internet Archive Eprint.
(1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint.
(1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company, )
External links
A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Detailed mathematical developments of most common DoE in the Opera Magistris v3.6 online reference Chapter 15, section 7.4, .
Experiments
Industrial engineering
Metascience
Quantitative research
Statistical process control
Statistical theory
Systems engineering
Mathematics in medicine | 0.766366 | 0.997487 | 0.76444 |
Web of Science | The Web of Science (WoS; previously known as Web of Knowledge) is a paid-access platform that provides (typically via the internet) access to multiple databases that provide reference and citation data from academic journals, conference proceedings, and other documents in various academic disciplines.
Until 1997, it was produced by the Institute for Scientific Information. It is currently owned by Clarivate.
Web of Science currently contains 79 million records in the core collection and 171 million records on the platform.
History
A citation index is built on the fact that citations in science serve as linkages between similar research items, and lead to matching or related scientific literature, such as journal articles, conference proceedings, abstracts, etc. In addition, literature that shows the greatest impact in a particular field, or more than one discipline, can be located through a citation index. For example, a paper's influence can be determined by linking to all the papers that have cited it. In this way, current trends, patterns, and emerging fields of research can be assessed. Eugene Garfield, the "father of citation indexing of academic literature", who launched the Science Citation Index, which in turn led to the Web of Science, wrote:
Search answer
Web of Science "is a unifying research tool which enables the user to acquire, analyze, and disseminate database information in a timely manner". This is accomplished because of the creation of a common vocabulary, called ontology, for varied search terms and varied data. Moreover, search terms generate related information across categories.
Acceptable content for Web of Science is determined by an evaluation and selection process based on the following criteria: impact, influence, timeliness, peer review, and geographic representation.
Web of Science employs various search and analysis capabilities. First, citation indexing is employed, which is enhanced by the capability to search for results across disciplines. The influence, impact, history, and methodology of an idea can be followed from its first instance, notice, or referral to the present day. This technology points to a deficiency with the keyword-only method of searching.
Second, subtle trends and patterns relevant to the literature or research of interest, become apparent. Broad trends indicate significant topics of the day, as well as the history relevant to both the work at hand, and particular areas of study.
Third, trends can be graphically represented.
Coverage
Expanding the coverage of Web of Science, in November 2009 Thomson Reuters introduced Century of Social Sciences. This service contains files which trace social science research back to the beginning of the 20th century, and Web of Science now has indexing coverage from the year 1900 to the present. As of February 2017, the multidisciplinary coverage of the Web of Science encompasses: over a billion cited references, 90 million records, covering over 12 thousand high-impact journals, and 8.2 million records across 160 thousand conference proceedings, with 15 thousand proceedings added each year. The selection is made on the basis of impact evaluations and comprises academic journals, spanning multiple academic disciplines. The coverage includes: the sciences, social sciences, the arts, and humanities, and goes across disciplines. However, Web of Science does not index all journals.
There is a significant and positive correlation between the impact factor and CiteScore. However, an analysis by Elsevier, who created the journal evaluation metric CiteScore, has identified 216 journals from 70 publishers to be in the top 10 percent of the most-cited journals in their subject category based on the CiteScore while they did not have an impact factor. It appears that the impact factor does not provide comprehensive and unbiased coverage of high-quality journals. Similar results can be observed by comparing the impact factor with the SCImago Journal Rank.
Furthermore, as of September 2014, the total file count of the Web of Science was over 90 million records, which included over 800 million cited references, covering 5.3 thousand social science publications in 55 disciplines.
Titles of foreign-language publications are translated into English and so cannot be found by searches in the original language.
In 2018, the Web of Science started embedding partial information about the open access status of works, using Unpaywall data.
While marketed as a global point of reference, Scopus and WoS have been characterised as "structurally biased against research produced in non-Western countries, non-English language research, and research from the arts, humanities, and social sciences".
After the 2022 Russian invasion of Ukraine, on March 11, 2022, Clarivate – which owns Web of Science – announced that it would cease all commercial activity in Russia and immediately close an office there.
Citation databases
The Web of Science Core Collection consists of six online indexing databases:
Science Citation Index Expanded (SCIE), previously entitled Science Citation Index, covers more than 9,200 journals across 178 scientific disciplines. Coverage is from 1900 to present day, with over 53 million records
Social Sciences Citation Index (SSCI) covers more than 3,400 journals in the social sciences. Coverage is from 1900 to present, with over 9.3 million records
Arts & Humanities Citation Index (AHCI) covers more than 1,800 journals in the arts and humanities. Coverage is from 1975 to present, with over 4.9 million records
Emerging Sources Citation Index (ESCI) covers more than 7,800 journals in all disciplines. Coverage is from 2005 to present, with over 3 million records
Book Citation Index (BCI) covers more than 116,000 editorially selected books. Coverage is from 2005 to present, with over 53.2 million records
Conference Proceedings Citation Index (CPCI) covers more than 205,000 conference proceedings. Coverage is from 1990 to present, with over 70.1 million records
Regional databases
Since 2008, the Web of Science hosts a number of regional citation indices:
Chinese Science Citation Database, produced in partnership with the Chinese Academy of Sciences, was the first indexing database in a language other than English
SciELO Citation Index, established in 2013, covering Brazil, Spain, Portugal, the Caribbean and South Africa, and an additional 12 countries of Latin America
Korea Citation Index in 2014, with updates from the National Research Foundation of Korea
Russian Science Citation Index in 2015
Arabic Regional Citation Index in 2020
Contents
The citation indices listed above contain references which have been cited by other articles. One may use them to undertake a cited reference search, that is, to locate articles that cite an earlier or current publication. One may search citation databases by topic, by author, by source title, and by location. Two chemistry databases, Index Chemicus and Current Chemical Reactions, allow for the creation of structure drawings, thus enabling users to locate chemical compounds and reactions.
Abstracting and indexing
The following types of literature are indexed: scholarly books, peer reviewed journals, original research articles, reviews, editorials, chronologies, abstracts, as well as other items. Disciplines included in this index are agriculture, biological sciences, engineering, medical and life sciences, physical and chemical sciences, anthropology, law, library sciences, architecture, dance, music, film, and theater. Together, these citation databases encompass coverage of the above disciplines.
Other databases and products
Among other WoS databases are BIOSIS and The Zoological Record, an electronic index of zoological literature that also serves as the unofficial register of scientific names in zoology.
Clarivate owns and markets numerous other products that provide data and analytics, workflow tools, and professional services to researchers, universities, research institutions, and other organizations, such as:
InCites
Journal Citation Reports
Essential Science Indicators
ScholarOne
Converis
Limitations in the use of citation analysis
As with other scientific approaches, scientometrics and bibliometrics have their own limitations. In 2010, a criticism was voiced pointing toward certain deficiencies of the journal impact factor calculation process, based on Thomson Reuters Web of Science, such as: journal citation distributions usually are highly skewed towards established journals; journal impact factor properties are field-specific and can be easily manipulated by editors, or even by changing the editorial policies; this makes the entire process essentially non-transparent.
Regarding the more objective journal metrics, there is a growing view that for greater accuracy it must be supplemented with article-level metrics and peer-review. Studies of methodological quality and reliability have found that "reliability of published research works in several fields may be decreasing with increasing journal rank". Thomson Reuters replied to criticism in general terms by stating that "no one metric can fully capture the complex contributions scholars make to their disciplines, and many forms of scholarly achievement should be considered."
Journal Citation Reports
Journal Citation Reports is an annual publication, based on Web of Science citation data, that provides journal-level metrics such as the journal impact factor.
See also
arXiv
CiteSeerX
CSA databases
Dialog (online database)
Energy Citations Database
Energy Science and Technology Database
ETDEWEB
Google Scholar
h-index
Indian Citation Index
J-Gate
List of academic journal search engines
Materials Science Citation Index
PASCAL database
PubMed Central
Répertoire International de Littérature Musicale
ResearchGate
Serbian Citation Index
VINITI Database RAS
References
External links
Further reading
Bibliographic databases and indexes
Online databases
Clarivate | 0.765547 | 0.998551 | 0.764438 |
Biorobotics | Biorobotics is an interdisciplinary science that combines the fields of biomedical engineering, cybernetics, and robotics to develop new technologies that integrate biology with mechanical systems to develop more efficient communication, alter genetic information, and create machines that imitate biological systems.
Cybernetics
Cybernetics focuses on the communication and system of living organisms and machines that can be applied and combined with multiple fields of study such as biology, mathematics, computer science, engineering, and much more.
This discipline falls under the branch of biorobotics because of its combined study of biological bodies and mechanical systems. Studying these two systems allows for advanced analysis of the functions and processes of each system, as well as the interactions between them.
History
Cybernetic theory is a concept that has existed for centuries, dating back to the era of Plato, who applied the term to refer to the "governance of people". The term cybernétique was used in the mid-1800s by the physicist André-Marie Ampère. The term cybernetics was popularized in the late 1940s to refer to a discipline that touched on, but was separate from, established disciplines such as electrical engineering, mathematics, and biology.
Science
Cybernetics is often misunderstood because of the breadth of disciplines it covers. In the mid-20th century, the term was coined for an interdisciplinary field of study that combines biology, network theory, and engineering. Today, it covers all scientific fields with system-related processes. The goal of cybernetics is to analyze the systems and processes of any system or systems in an attempt to make them more efficient and effective.
Applications
Cybernetics is used as an umbrella term so applications extend to all systems related scientific fields such as biology, mathematics, computer science, engineering, management, psychology, sociology, art, and more. Cybernetics is used amongst several fields to discover principles of systems, adaptation of organisms, information analysis and much more.
Genetic engineering
Genetic engineering is a field that uses advances in technology to modify biological organisms. Through different methods, scientists are able to alter the genetic material of microorganisms, plants and animals to provide them with desirable traits. For example, making plants grow bigger, better, and faster. Genetic engineering is included in biorobotics because it uses new technologies to alter biology and change an organism's DNA for their and society's benefit.
History
Although humans have modified genetic material of animals and plants through artificial selection for millennia (such as the genetic mutations that developed teosinte into corn and wolves into dogs), genetic engineering refers to the deliberate alteration or insertion of specific genes to an organism's DNA. The first successful case of genetic engineering occurred in 1973 when Herbert Boyer and Stanley Cohen were able to transfer a gene with antibiotic resistance to a bacterium.
Science
There are three main techniques used in genetic engineering: The plasmid method, the vector method and the biolistic method.
Plasmid method
This technique is used mainly for microorganisms such as bacteria. Through this method, DNA molecules called plasmids are extracted from bacteria and placed in a lab where restriction enzymes break them down. As the enzymes break the molecules down, some develop a rough edge that resembles that of a staircase, which is considered 'sticky' and capable of reconnecting. These 'sticky' molecules are inserted into another bacterium, where they will connect to the DNA rings with the altered genetic material.
Vector method
The vector method is considered a more precise technique than the plasmid method as it involves the transfer of a specific gene instead of a whole sequence. In the vector method, a specific gene from a DNA strand is isolated through restriction enzymes in a laboratory and is inserted into a vector. Once the vector accepts the genetic code, it is inserted into the host cell where the DNA will be transferred.
Biolistic method
The biolistic method is typically used to alter the genetic material of plants. This method embeds the desired DNA with a metallic particle such as gold or tungsten in a high speed gun. The particle is then bombarded into the plant. Due to the high velocities and the vacuum generated during bombardment, the particle is able to penetrate the cell wall and inserts the new DNA into the cell.
Applications
Genetic engineering has many uses in the fields of medicine, research and agriculture. In the medical field, genetically modified bacteria are used to produce drugs such as insulin, human growth hormones and vaccines. In research, scientists genetically modify organisms to observe physical and behavioral changes to understand the function of specific genes. In agriculture, genetic engineering is extremely important as it is used by farmers to grow crops that are resistant to herbicides and to insects, such as Bt corn.
Bionics
Bionics is a medical engineering field and a branch of biorobotics consisting of electrical and mechanical systems that imitate biological systems, such as prosthetics and hearing aids. The term is a portmanteau of biology and electronics.
History
The history of bionics goes as far back in time as ancient Egypt. A prosthetic toe made out of wood and leather was found on the foot of a mummy, dated to around the fifteenth century B.C. Bionics can also be witnessed in ancient Greece and Rome, where prosthetic legs and arms were made for amputee soldiers. In the early 16th century, a French military surgeon by the name of Ambroise Paré became a pioneer in the field of bionics. He was known for making various types of upper and lower prosthetics. One of his most famous prosthetics, Le Petit Lorrain, was a mechanical hand operated by catches and springs. During the early 19th century, Alessandro Volta further advanced bionics. His experiments laid the foundation for the creation of hearing aids: he found that electrical stimulation could restore hearing by inserting an electrical implant into the saccular nerve of a patient's ear. In 1945, the National Academy of Sciences created the Artificial Limb Program, which focused on improving prosthetics since there were a large number of World War II amputee soldiers. Since this creation, prosthetic materials, computer design methods, and surgical procedures have improved, creating modern-day bionics.
Science
Prosthetics
The important components that make up modern-day prosthetics are the pylon, the socket, and the suspension system. The pylon is the internal frame of the prosthetic that is made up of metal rods or carbon-fiber composites. The socket is the part of the prosthetic that connects the prosthetic to the person's residual limb. The socket consists of a soft liner that makes the fit comfortable, but also snug enough to stay on the limb. The suspension system is important in keeping the prosthetic on the limb. The suspension system is usually a harness system made up of straps, belts or sleeves that are used to keep the prosthetic attached.
The operation of a prosthetic could be designed in various ways. The prosthetic could be body-powered, externally-powered, or myoelectrically powered. Body-powered prosthetics consist of cables attached to a strap or harness, which is placed on the person's functional shoulder, allowing the person to manipulate and control the prosthetic as he or she deems fit. Externally-powered prosthetics consist of motors to power the prosthetic and buttons and switches to control the prosthetic. Myoelectrically powered prosthetics are new, advanced forms of prosthetics where electrodes are placed on the muscles above the limb. The electrodes will detect the muscle contractions and send electrical signals to the prosthetic to move the prosthetic. The downside to this type of prosthetic is that if the sensors are not placed correctly on the limb then the electrical impulses will fail to move the prosthetic. TrueLimb is a specific brand of prosthetics that uses myoelectrical sensors which enable a person to have control of their bionic limb.
Hearing aids
Four major components make up the hearing aid: the microphone, the amplifier, the receiver, and the battery. The microphone takes in outside sound, turns that sound to electrical signals, and sends those signals to the amplifier. The amplifier increases the sound and sends that sound to the receiver. The receiver changes the electrical signal back into sound and sends the sound into the ear. Hair cells in the ear will sense the vibrations from the sound, convert the vibrations into nerve signals, and send it to the brain so the sounds can become coherent to the person. The battery simply powers the hearing aid.
Applications
Cochlear Implant
Cochlear implants are a type of hearing aid for those who are deaf. Cochlear implants send electrical signals straight to the auditory nerve, the nerve responsible for sound signals, instead of just sending the signals to the ear canal like normal hearing aids.
Bone-Anchored Hearing Aids
These hearing aids are also used for people with severe hearing loss. They are anchored to the bone of the skull and conduct sound vibrations through the bone to the cochlea.
Artificial sensing skin
This artificial sensing skin detects any pressure put on it and is meant for people who have lost any sense of feeling on parts of their bodies, such as diabetics with peripheral neuropathy.
Bionic eye
The bionic eye is a bioelectronic implant that restores vision for people with blindness.
Although the bionic eye is not yet perfect, it has helped five individuals classified as legally blind to make out letters again.
As the retina has millions of photoreceptors, and the human eye has extraordinary capabilities in lensing and dynamic range, it is very hard to replicate with technology. Neural integration is another major challenge. Despite these hurdles, intense research and prototyping are ongoing, with many major accomplishments in recent times.
Orthopedic bionics
Orthopedic bionics consist of advanced bionic limbs that use a person's neuromuscular system to control the bionic limb. Advances in the understanding of brain function have led to the development and implementation of brain-machine interfaces (BMIs). BMIs process neural signals passing between motor regions of the brain and the muscles of a specific limb in order to initiate movement. BMIs contribute greatly to the restoration of independent movement for a person who has a bionic limb or an exoskeleton.
Endoscopic robotics
Endoscopic robots can, for example, remove a polyp during a colonoscopy.
See also
Android (robot)
Bio-inspired robotics
Molecular machine#Biological
Biological devices
Biomechatronics
Biomimetics
Cultured neural networks
Cyborg
Cylon (reimagining)
Nanobot
Nanomedicine
Plantoid
Remote control animal
Replicant
Roborat
Technorganic
References
External links
The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
The BioRobotics Lab. Robotics Institute, Carnegie Mellon University *
Bioroïdes - A timeline of the popularization of the idea (in French)
Harvard BioRobotics Laboratory, Harvard University
Locomotion in Mechanical and Biological Systems (LIMBS) Laboratory, Johns Hopkins University
BioRobotics Lab in Korea
Laboratory of Biomedical Robotics and Biomicrosystems, Italy
Tiny backpacks for cells (MIT News)
Biologically Inspired Robotics Lab, Case Western Reserve University
Bio-Robotics and Human Modeling Laboratory - Georgia Institute of Technology
Biorobotics Laboratory at École Polytechnique Fédérale de Lausanne (Switzerland)
BioRobotics Laboratory, Free University of Berlin (Germany)
Biorobotics research group, Institute of Movement Science, CNRS/Aix-Marseille University (France)
Center for Biorobotics, Tallinn University of Technology (Estonia)
Biopunk
Biotechnology
Cyberpunk
Cybernetics
Fictional technology
Postcyberpunk
Health care robotics
Science fiction themes
Robotics | 0.775465 | 0.985754 | 0.764418 |
Nucleation | In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) significantly below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay.
Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately.
Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface.
Characteristics
Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase, microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large that it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts the system to this phase. The standard theory that describes this behaviour for the nucleation of a new thermodynamic phase is called classical nucleation theory (CNT). However, CNT fails, by several orders of magnitude, to describe experimental results for vapour-to-liquid nucleation even for model substances such as argon.
For nucleation of a new thermodynamic phase, such as the formation of ice in water below 0°C, if the system is not evolving with time and nucleation occurs in one step, then the probability that nucleation has not occurred should undergo exponential decay. This is seen for example in the nucleation of ice in supercooled small water droplets. The decay rate of the exponential gives the nucleation rate. Classical nucleation theory is a widely used approximate theory for estimating these rates, and how they vary with variables such as temperature. It correctly predicts that the time you have to wait for nucleation decreases extremely rapidly as the degree of supersaturation increases.
It is not just new phases such as liquids and crystals that form via nucleation followed by growth. The self-assembly process that forms objects like the amyloid aggregates associated with Alzheimer's disease also starts with nucleation. Energy consuming self-organising systems such as the microtubules in cells also show nucleation and growth.
Heterogeneous nucleation often dominates homogeneous nucleation
Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation.
For example, in the nucleation of ice from supercooled water droplets, purifying the water to remove all or almost all impurities results in water droplets that freeze below around −35°C, whereas water that contains impurities may freeze at −5°C or warmer.
This observation that heterogeneous nucleation can occur when the rate of homogeneous nucleation is essentially zero is often understood using classical nucleation theory. This predicts that the nucleation slows exponentially with the height of a free energy barrier ΔG*. This barrier comes from the free energy penalty of forming the surface of the growing nucleus. For homogeneous nucleation the nucleus is approximated by a sphere, but as we can see in the schematic of macroscopic droplets to the right, droplets on surfaces are not complete spheres, and so the area of the interface between the droplet and the surrounding fluid is less than that of a complete sphere. This reduction in surface area of the nucleus reduces the height of the barrier to nucleation and so speeds nucleation up exponentially.
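A minimal numerical sketch of this argument, using the standard classical-nucleation-theory expressions (the material parameters below are invented round numbers, not data for any particular substance): the homogeneous barrier for a spherical nucleus is ΔG* = 16πσ³/(3|Δg|²), and a nucleus forming as a spherical cap on a flat surface with contact angle θ has this barrier reduced by the geometric factor f(θ) = (2 + cos θ)(1 − cos θ)²/4, which enters the nucleation rate exponentially:

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K

# Invented illustrative parameters (not for a specific material):
sigma   = 0.03            # interfacial tension, J/m^2
delta_g = 3.0e7           # free-energy difference per unit volume, J/m^3
T       = 260.0           # temperature, K

# Classical nucleation theory: barrier for a spherical (homogeneous) nucleus.
dG_homo = 16.0 * np.pi * sigma**3 / (3.0 * delta_g**2)

# Spherical-cap geometry on a flat substrate reduces the barrier by f(theta).
def f(theta_deg):
    c = np.cos(np.radians(theta_deg))
    return (2.0 + c) * (1.0 - c)**2 / 4.0

for theta in (180.0, 90.0, 40.0):          # 180 degrees recovers the homogeneous case
    dG = f(theta) * dG_homo
    # Rates scale as exp(-dG/kT); the prefactor cancels in the ratio below.
    speedup = np.exp((dG_homo - dG) / (k_B * T))
    print(f"theta={theta:5.1f}  barrier/kT={dG/(k_B*T):8.2f}  rate boost vs homogeneous ~ {speedup:.3g}")
```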
Nucleation can also start at the surface of a liquid. For example, computer simulations of gold nanoparticles show that the crystal phase sometimes nucleates at the liquid-gold surface.
Computer simulation studies of simple models
Classical nucleation theory makes a number of assumptions, for example it treats a microscopic nucleus as if it is a macroscopic droplet with a well-defined surface whose free energy is estimated using an equilibrium property: the interfacial tension σ. For a nucleus that may be only of order ten molecules across it is not always clear that we can treat something so small as a volume plus a surface. Also, nucleation is an inherently out-of-equilibrium phenomenon, so it is not always obvious that its rate can be estimated using equilibrium properties.
However, modern computers are powerful enough to calculate essentially exact nucleation rates for simple models. These have been compared with the classical theory, for example for the case of nucleation of the crystal phase in the model of hard spheres. This is a model of perfectly hard spheres in thermal motion, and is a simple model of some colloids. For the crystallization of hard spheres the classical theory is a very reasonable approximate theory. So for the simple models we can study, classical nucleation theory works quite well, but we do not know if it works equally well for (say) complex molecules crystallising out of solution.
The spinodal region
Phase-transition processes can also be explained in terms of spinodal decomposition, where phase separation is delayed until the system enters the unstable region where a small perturbation in composition leads to a decrease in energy and, thus, spontaneous growth of the perturbation. This region of a phase diagram is known as the spinodal region and the phase separation process is known as spinodal decomposition and may be governed by the Cahn–Hilliard equation.
The nucleation of crystals
In many cases, liquids and solutions can be cooled down or concentrated up to conditions where the liquid or solution is significantly less thermodynamically stable than the crystal, but where no crystals will form for minutes, hours, weeks or longer; this process is called supercooling. Nucleation of the crystal is then being prevented by a substantial barrier. This has consequences, for example cold high altitude clouds may contain large numbers of small liquid water droplets that are far below 0°C.
In small volumes, such as in small droplets, only one nucleation event may be needed for crystallisation. In these small volumes, the time until the first crystal appears is usually defined to be the nucleation time. Calcium carbonate crystal nucleation depends not only on the degree of supersaturation but also on the ratio of calcium to carbonate ions in aqueous solutions. In larger volumes many nucleation events will occur. A simple model for crystallisation in that case, one that combines nucleation and growth, is the KJMA or Avrami model. Although existing theories, including classical nucleation theory, explain well the steady nucleation state, in which the crystal nucleation rate is not time dependent, the initial non-steady-state (transient) nucleation, and the even more mysterious incubation period, require more attention from the scientific community. Chemical ordering of the undercooled liquid prior to crystal nucleation has been suggested to be responsible for this behaviour, by reducing the energy barrier for nucleation.
Primary and secondary nucleation
The time until the appearance of the first crystal is also called primary nucleation time, to distinguish it from secondary nucleation times. Primary here refers to the first nucleus to form, while secondary nuclei are crystal nuclei produced from a preexisting crystal. Primary nucleation describes the transition to a new phase that does not rely on the new phase already being present, either because it is the very first nucleus of that phase to form, or because the nucleus forms far from any pre-existing piece of the new phase. Particularly in the study of crystallisation, secondary nucleation can be important. This is the formation of nuclei of a new crystal directly caused by pre-existing crystals.
For example, if the crystals are in a solution and the system is subject to shearing forces, small crystal nuclei could be sheared off a growing crystal, thus increasing the number of crystals in the system. So both primary and secondary nucleation increase the number of crystals in the system but their mechanisms are very different, and secondary nucleation relies on crystals already being present.
Experimental observations on the nucleation times for the crystallisation of small volumes
It is typically difficult to experimentally study the nucleation of crystals. The nucleus is microscopic, and thus too small to be directly observed. In large liquid volumes there are typically multiple nucleation events, and it is difficult to disentangle the effects of nucleation from those of growth of the nucleated phase. These problems can be overcome by working with small droplets. As nucleation is stochastic, many droplets are needed so that statistics for the nucleation events can be obtained.
To the right is shown an example set of nucleation data. It is for the nucleation at constant temperature and hence supersaturation of the crystal phase in small droplets of supercooled liquid tin; this is the work of Pound and La Mer.
Nucleation occurs in different droplets at different times, hence the fraction is not a simple step function that drops sharply from one to zero at one particular time. The red curve is a fit of a Gompertz function to the data. This is a simplified version of the model Pound and La Mer used to model their data. The model assumes that nucleation occurs due to impurity particles in the liquid tin droplets, and it makes the simplifying assumption that all impurity particles produce nucleation at the same rate. It also assumes that these particles are Poisson distributed among the liquid tin droplets. The fit values are that the nucleation rate due to a single impurity particle is 0.02/s, and the average number of impurity particles per droplet is 1.2. Note that about 30% of the tin droplets never freeze; the data plateaus at a fraction of about 0.3. Within the model this is assumed to be because, by chance, these droplets do not have even one impurity particle and so there is no heterogeneous nucleation. Homogeneous nucleation is assumed to be negligible on the timescale of this experiment. The remaining droplets freeze in a stochastic way, at rates 0.02/s if they have one impurity particle, 0.04/s if they have two, and so on.
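The model just described has a simple closed form: if the impurity particles are Poisson distributed among the droplets with mean m, and each particle independently triggers nucleation at rate λ, the fraction of droplets still liquid at time t is exp(−m(1 − e^(−λt))), a Gompertz-type curve. The sketch below evaluates this with the fitted values quoted above (λ = 0.02/s, m = 1.2) and reproduces the plateau of roughly 30% of droplets that never freeze:

```python
import numpy as np

lam = 0.02   # nucleation rate per impurity particle, 1/s (fitted value quoted above)
m   = 1.2    # mean number of impurity particles per droplet (fitted value quoted above)

def fraction_liquid(t):
    """Fraction of droplets not yet frozen at time t.

    A droplet containing k impurity particles survives with probability
    exp(-k*lam*t); averaging over a Poisson(m) number of particles gives
    exp(-m*(1 - exp(-lam*t))), a Gompertz-type curve.
    """
    return np.exp(-m * (1.0 - np.exp(-lam * t)))

for t in (0.0, 30.0, 60.0, 120.0, 300.0, 1e6):
    print(f"t = {t:>9.0f} s   fraction still liquid = {fraction_liquid(t):.3f}")

# The long-time plateau is exp(-m): the droplets that, by chance, contain
# no impurity particle and therefore never freeze heterogeneously.
print("plateau exp(-m) =", np.exp(-m))   # ~0.301, i.e. about 30% never freeze
```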
These data are just one example, but they illustrate common features of the nucleation of crystals in that there is clear evidence for heterogeneous nucleation, and that nucleation is clearly stochastic.
Ice
The freezing of small water droplets to ice is an important process, particularly in the formation and dynamics of clouds. Water (at atmospheric pressure) does not freeze at 0°C, but rather at temperatures that tend to decrease as the volume of the water decreases and as the concentration of dissolved chemicals in the water increases. Thus small droplets of water, as found in clouds, may remain liquid far below 0°C.
An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets, that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19°C, while the last droplet to freeze does so at almost -35°C.
Examples
Nucleation of fluids (gases and liquids)
Clouds form when wet air cools (often because the air rises) and many small water droplets nucleate from the supersaturated air. The amount of water vapour that air can carry decreases with lower temperatures. The excess vapor begins to nucleate and to form small water droplets which form a cloud. Nucleation of the droplets of liquid water is heterogeneous, occurring on particles referred to as cloud condensation nuclei. Cloud seeding is the process of adding artificial condensation nuclei to quicken the formation of clouds.
Bubbles of carbon dioxide nucleate shortly after the pressure is released from a container of carbonated liquid.
Nucleation in boiling can occur in the bulk liquid if the pressure is reduced so that the liquid becomes superheated with respect to the pressure-dependent boiling point. More often, nucleation occurs on the heating surface, at nucleation sites. Typically, nucleation sites are tiny crevices where free gas-liquid surface is maintained or spots on the heating surface with lower wetting properties. Substantial superheating of a liquid can be achieved after the liquid is de-gassed and if the heating surfaces are clean, smooth and made of materials well wetted by the liquid.
Some champagne stirrers operate by providing many nucleation sites via high surface-area and sharp corners, speeding the release of bubbles and removing carbonation from the wine.
The Diet Coke and Mentos eruption offers another example. The surface of Mentos candy provides nucleation sites for the formation of carbon-dioxide bubbles from carbonated soda.
Both the bubble chamber and the cloud chamber rely on nucleation, of bubbles and droplets, respectively.
Nucleation of crystals
The most common crystallisation process on Earth is the formation of ice. Liquid water does not freeze at 0°C unless there is ice already present; cooling significantly below 0°C is required to nucleate ice and for the water to freeze. For example, small droplets of very pure water can remain liquid down to below -30 °C although ice is the stable state below 0°C.
Many of the materials we make and use are crystalline, but are made from liquids, e.g. crystalline iron made from liquid iron cast into a mold, so the nucleation of crystalline materials is widely studied in industry. It is used heavily in the chemical industry for cases such as in the preparation of metallic ultradispersed powders that can serve as catalysts. For example, platinum deposited onto TiO2 nanoparticles catalyses the decomposition of water. It is an important factor in the semiconductor industry, as the band gap energy in semiconductors is influenced by the size of nanoclusters.
Nucleation in solids
In addition to the nucleation and growth of crystals, e.g. in non-crystalline glasses, the nucleation and growth of impurity precipitates in crystals at, and between, grain boundaries is quite important industrially. For example, in metals, solid-state nucleation and precipitate growth play an important role in modifying mechanical properties such as ductility, while in semiconductors they play an important role in trapping impurities during integrated circuit manufacture.
References
Articles containing video clips
Chemistry
Materials science
Physics | 0.769098 | 0.993912 | 0.764416 |
Pyruvic acid | Pyruvic acid (CH3COCOOH) is the simplest of the alpha-keto acids, with a carboxylic acid and a ketone functional group. Pyruvate, the conjugate base, CH3COCOO−, is an intermediate in several metabolic pathways throughout the cell.
Pyruvic acid can be made from glucose through glycolysis, converted back to carbohydrates (such as glucose) via gluconeogenesis, or converted to fatty acids through a reaction with acetyl-CoA. It can also be used to construct the amino acid alanine and can be converted into ethanol or lactic acid via fermentation.
Pyruvic acid supplies energy to cells through the citric acid cycle (also known as the Krebs cycle) when oxygen is present (aerobic respiration), and alternatively ferments to produce lactate when oxygen is lacking.
Chemistry
In 1834, Théophile-Jules Pelouze distilled tartaric acid and isolated glutaric acid and another unknown organic acid. Jöns Jacob Berzelius characterized this other acid the following year and named it pyruvic acid because it was distilled using heat. The correct molecular structure was deduced by the 1870s.
Pyruvic acid is a colorless liquid with a smell similar to that of acetic acid and is miscible with water. In the laboratory, pyruvic acid may be prepared by heating a mixture of tartaric acid and potassium hydrogen sulfate, by the oxidation of propylene glycol by a strong oxidizer (e.g., potassium permanganate or bleach), or by the hydrolysis of acetyl cyanide, formed by reaction of acetyl chloride with potassium cyanide:
CH3COCl + KCN → CH3COCN + KCl
CH3COCN → CH3COCOOH
Biochemistry
Pyruvate is an important chemical compound in biochemistry. It is the output of the metabolism of glucose known as glycolysis. One molecule of glucose breaks down into two molecules of pyruvate, which are then used to provide further energy, in one of two ways. Pyruvate is converted into acetyl-coenzyme A, which is the main input for a series of reactions known as the Krebs cycle (also known as the citric acid cycle or tricarboxylic acid cycle). Pyruvate is also converted to oxaloacetate by an anaplerotic reaction, which replenishes Krebs cycle intermediates; also, the oxaloacetate is used for gluconeogenesis.
These reactions are named after Hans Adolf Krebs, the biochemist awarded the 1953 Nobel Prize for physiology, jointly with Fritz Lipmann, for research into metabolic processes. The cycle is also known as the citric acid cycle or tricarboxylic acid cycle, because citric acid is one of the intermediate compounds formed during the reactions.
If insufficient oxygen is available, the acid is broken down anaerobically, creating lactate in animals and ethanol in plants and microorganisms (and in carp). Pyruvate from glycolysis is converted by fermentation to lactate using the enzyme lactate dehydrogenase and the coenzyme NADH in lactate fermentation, or to acetaldehyde (with the enzyme pyruvate decarboxylase) and then to ethanol in alcoholic fermentation.
Pyruvate is a key intersection in the network of metabolic pathways. Pyruvate can be converted into carbohydrates via gluconeogenesis, to fatty acids or energy through acetyl-CoA, to the amino acid alanine, and to ethanol. Therefore, it unites several key metabolic processes.
Pyruvic acid production by glycolysis
In the last step of glycolysis, phosphoenolpyruvate (PEP) is converted to pyruvate by pyruvate kinase. This reaction is strongly exergonic and irreversible; in gluconeogenesis, it takes two enzymes, pyruvate carboxylase and PEP carboxykinase, to catalyze the reverse transformation of pyruvate to PEP.
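For reference, the overall reaction catalysed by pyruvate kinase in this step can be written in the same style as the equations above:
PEP + ADP → pyruvate + ATP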
Decarboxylation to acetyl CoA
Pyruvate decarboxylation by the pyruvate dehydrogenase complex produces acetyl-CoA.
Carboxylation to oxaloacetate
Carboxylation by pyruvate carboxylase produces oxaloacetate.
Transamination to alanine
Transamination by alanine transaminase produces alanine.
Reduction to lactate
Reduction by lactate dehydrogenase produces lactate.
Environmental chemistry
Pyruvic acid is an abundant carboxylic acid in secondary organic aerosols.
Uses
Pyruvate is sold as a weight-loss supplement, though credible science has yet to back this claim. A systematic review of six trials found a statistically significant difference in body weight with pyruvate compared to placebo. However, all of the trials had methodological weaknesses and the magnitude of the effect was small. The review also identified adverse events associated with pyruvate such as diarrhea, bloating, gas, and increase in low-density lipoprotein (LDL) cholesterol. The authors concluded that there was insufficient evidence to support the use of pyruvate for weight loss.
There is also in vitro as well as in vivo evidence in hearts that pyruvate improves metabolism by stimulating NADH production and increases cardiac function.
See also
Pyruvate scale
Uvitonic acid
Notes
References
External links
Pyruvic acid mass spectrum
Alpha-keto acids
Cellular respiration
Exercise physiology
Metabolism
Glycolysis | 0.767958 | 0.995354 | 0.76439 |
Model selection | Model selection is the task of selecting a model from among various candidates on the basis of performance criterion to choose the best one.
In the context of machine learning and more generally statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).
state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.
In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.
Introduction
In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model.
Of the countless number of possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially. Several authors emphasize the importance of choosing candidate models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.
Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.
A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g. points are a result of i.i.d. samples), we must select a curve that describes the function that generated the points.
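As an illustration of the curve-fitting setting, the following sketch compares polynomial fits of increasing degree on synthetic data. The data, variable names, and the Gaussian-error form of the Akaike information criterion, AIC = n ln(RSS/n) + 2k, are assumptions of this example rather than part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=x.size)  # data actually generated by a straight line

def aic(degree):
    # Least-squares polynomial fit; k counts the fitted coefficients plus the noise variance.
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 2
    return n * np.log(rss / n) + 2 * k

for degree in range(6):
    print(degree, round(aic(degree), 2))
# Higher degrees always reduce RSS on the sample, but the 2k penalty
# typically selects the simple (linear) model that generated the data.
```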
Two directions of model selection
There are two main objectives in inference and learning from data. One is for scientific discovery, also called statistical inference: understanding of the underlying data-generating mechanism and interpretation of the nature of the data. The other objective of learning from data is for predicting future or unseen observations, also called statistical prediction. In the second objective, the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions.
In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction. The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model not be too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.
The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading. Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.
Methods to assist in choosing the set of candidate models
Data transformation (statistics)
Exploratory data analysis
Model specification
Scientific method
Criteria
Below is a list of criteria for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see the literature for reviews.
Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
Bayes factor
Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
Bridge criterion (BC), a statistical criterion that can attain the better of the performances of AIC and BIC regardless of whether the model specification is appropriate.
Cross-validation
Deviance information criterion (DIC), another Bayesian oriented model selection criterion
False discovery rate
Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
Hannan–Quinn information criterion, an alternative to the Akaike and Bayesian criteria
Kashyap information criterion (KIC) is a powerful alternative to AIC and BIC, because KIC uses the Fisher information matrix
Likelihood-ratio test
Mallows's Cp
Minimum description length
Minimum message length (MML)
PRESS statistic, also known as the PRESS criterion
Structural risk minimization
Stepwise regression
Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
Extended Bayesian Information Criterion (EBIC) is an extension of the ordinary Bayesian information criterion (BIC) for models with high-dimensional parameter spaces.
Extended Fisher Information Criterion (EFIC) is a model selection criterion for linear regression models.
Constrained Minimum Criterion (CMC) is a frequentist criterion for selecting regression models with a geometric underpinning.
Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.
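A minimal sketch of cross-validated model selection under the same kind of assumptions as the earlier example (synthetic data, polynomial candidates; all names are illustrative): each candidate degree is scored by its mean squared prediction error on held-out folds, and the degree with the lowest score is selected.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

folds = np.array_split(rng.permutation(x.size), 5)  # fixed 5-fold split shared by all candidates

def cv_mse(degree):
    # Fit on the training folds, evaluate on the held-out fold, and average over folds.
    errors = []
    for fold in folds:
        train = np.setdiff1d(np.arange(x.size), fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        errors.append(np.mean((np.polyval(coeffs, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errors))

best_degree = min(range(1, 10), key=cv_mse)
print("degree selected by 5-fold cross-validation:", best_degree)
```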
See also
All models are wrong
Analysis of competing hypotheses
Automated machine learning (AutoML)
Bias-variance dilemma
Feature selection
Freedman's paradox
Grid search
Identifiability Analysis
Log-linear analysis
Model identification
Occam's razor
Optimal design
Parameter identification problem
Scientific modelling
Statistical model validation
Stein's paradox
Notes
References
Regression variable selection
Mathematical and quantitative methods (economics)
Management science
CLaMS
CLaMS (Chemical Lagrangian Model of the Stratosphere) is a modular chemistry transport model (CTM) system developed at Forschungszentrum Jülich, Germany. CLaMS was first described by McKenna et al. (2002a,b) and was expanded into three dimensions by Konopka et al. (2004). CLaMS has been employed in recent European field campaigns THESEO, EUPLEX, TROCCINOX, SCOUT-O3, and RECONCILE with a focus on simulating ozone depletion and water vapour transport.
Major strengths of CLaMS in comparison to other CTMs are
its applicability for reverse domain filling studies
its anisotropic mixing scheme
its integrability with arbitrary observational data
its comprehensive chemistry scheme
CLaMS gridding
Unlike other CTMs (e.g. SLIMCAT, REPROBUS), CLaMS operates on a Lagrangian model grid (see the section about model grids in general circulation model): an air parcel is described by three space coordinates and a time coordinate. The time evolution path that an air parcel traces in space is called a trajectory. A specialised mixing scheme ensures that physically realistic diffusion is imposed on an ensemble of trajectories in regions of high wind shear.
CLaMS operates on arbitrarily resolved horizontal grids. The space coordinates are latitude, longitude and potential temperature.
CLaMS hierarchy
CLaMS is composed of four modules and several preprocessors. The four modules are
a trajectory module
a box chemistry module
a Lagrangian mixing module
a Lagrangian sedimentation scheme
Trajectory module
Trajectories are integrated with a 4th-order Runge–Kutta method using an integration time step of 30 minutes. The vertical displacement of trajectories is calculated from the radiation budget.
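The following fragment is only a generic illustration of one 4th-order Runge–Kutta step for a single air parcel; the wind function, coordinate handling, and names are placeholders and do not represent the actual CLaMS code or its interfaces.

```python
import numpy as np

def rk4_step(wind, t, x, dt):
    """Advance a parcel state x (e.g. longitude, latitude, potential temperature)
    by one 4th-order Runge-Kutta step of length dt.

    `wind` must return dx/dt at time t and state x; in a real CTM it would be
    interpolated from meteorological analyses rather than given analytically."""
    k1 = wind(t, x)
    k2 = wind(t + 0.5 * dt, x + 0.5 * dt * k1)
    k3 = wind(t + 0.5 * dt, x + 0.5 * dt * k2)
    k4 = wind(t + dt, x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy example: a constant, purely zonal "wind" and a 30-minute (1800 s) step.
toy_wind = lambda t, x: np.array([1.0e-3, 0.0, 0.0])
print(rk4_step(toy_wind, 0.0, np.array([10.0, 50.0, 400.0]), dt=1800.0))
```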
Box chemistry module
Chemistry is based on the ASAD chemistry code of the University of Cambridge. More than 100 chemical reactions involving more than 40 chemical species are considered. The integration time step is 10 minutes, and species can be combined into chemical families to facilitate integration. The module includes a radiative transfer model for the determination of photolysis rates. The module also includes heterogeneous reactions on NAT (nitric acid trihydrate), ice and liquid particle surfaces.
Lagrangian mixing
Mixing is based on grid deformation of quasi-uniform air parcel distributions. The contraction or elongation factors of the distances to neighboring air parcels are examined: if a critical elongation (contraction) is reached, new air parcels are introduced (taken away). This way, anisotropic diffusion is simulated in a physically realistic manner.
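A deliberately simplified sketch of the elongation/contraction test described above is given below; the thresholds, neighbor bookkeeping, and parcel geometry are illustrative assumptions, not the CLaMS implementation.

```python
import numpy as np

def adapt_parcels(positions, neighbor_pairs, r0, r_crit=1.5, r_min=0.7):
    """Flag neighbor pairs whose separation has grown (shrunk) beyond a critical
    factor of the nominal spacing r0, so that a parcel can be inserted (removed).

    positions: (N, 2) array of horizontal parcel coordinates.
    neighbor_pairs: iterable of (i, j) index pairs that were adjacent previously."""
    to_insert, to_merge = [], []
    for i, j in neighbor_pairs:
        ratio = np.linalg.norm(positions[i] - positions[j]) / r0
        if ratio > r_crit:      # strong elongation: resolve the stretched filament
            to_insert.append((i, j))
        elif ratio < r_min:     # strong contraction: the pair can be merged
            to_merge.append((i, j))
    return to_insert, to_merge
```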
Lagrangian sedimentation
Lagrangian sedimentation is calculated by following individual nitric acid trihydrate (NAT) particles that may grow or shrink by the uptake or release of HNO3 from or to the gas phase. These particle parcels are simulated independently from the Lagrangian air parcels. Their trajectories are determined using the horizontal winds and their vertical settling velocity, which depends on the size of the individual particles. NAT particles are nucleated assuming a constant nucleation rate, and they evaporate where temperatures rise too high. In this way, a vertical redistribution of HNO3 (denitrification and renitrification) is determined.
CLaMS data sets
A chemical transport model does not simulate the dynamics of the atmosphere. For CLaMS, the following meteorological data sets have been used
European Centre for Medium-Range Weather Forecasts (ECMWF), Predictions, Analyses, ERA-15, ERA-40
United Kingdom Met Office (UKMO)
European Centre Hamburg Atmospheric Model (ECHAM4), in the DLR version
To initialize the chemical fields in CLaMS, data from a large variety of instruments have been used:
on satellite (CRISTA, MIPAS, MLS, HALOE, ILAS, ...),
on aircraft and balloons (HALOX, FISH, Mark IV, BONBON...)
If no observations are present, the chemical fields can be initialised from two-dimensional chemical models, chemistry-climate models, climatologies, or from correlations between chemical species or between chemical species and dynamical variables.
See also
Forschungszentrum Jülich
Ozone depletion
Meteorology
External links
CLaMS at Forschungszentrum Jülich
Current field campaign SCOUT-O3
References
The details of the model CLaMS are well documented and published in the scientific literature.
Formulation of advection and mixing by McKenna et al., 2002a
Formulation of chemistry-scheme and initialisation by McKenna et al., 2002b
Comparison of the chemistry module with other stratospheric models by Krämer et al., 2003
Calculation of photolysis rates by Becker et al., 2000
Extension to 3-dimension model version by Konopka et al., 2004
Lagrangian sedimentation by Grooß et al., 2005
Numerical climate and weather models
Ozone depletion
Mycology
Mycology is the branch of biology concerned with the study of fungi, including their taxonomy, genetics, biochemical properties, and use by humans. Fungi can be a source of tinder, food, traditional medicine, as well as entheogens, poison, and infection. Mycology branches into the field of phytopathology, the study of plant diseases. The two disciplines are closely related, because the vast majority of plant pathogens are fungi. A biologist specializing in mycology is called a mycologist.
Overview
Although mycology was historically considered a branch of botany, the 1969 discovery of fungi's close evolutionary relationship to animals resulted in the study's reclassification as an independent field. Pioneer mycologists included Elias Magnus Fries, Christiaan Hendrik Persoon, Heinrich Anton de Bary, Elizabeth Eaton Morse, and Lewis David de Schweinitz. Beatrix Potter, author of The Tale of Peter Rabbit, also made significant contributions to the field.
Pier Andrea Saccardo developed a system for classifying the imperfect fungi by spore color and form, which became the primary system used before classification by DNA analysis. He is most famous for his Sylloge Fungorum, which was a comprehensive list of all of the names that had been used for mushrooms. Sylloge is still the only work of this kind that was both comprehensive for the botanical kingdom Fungi and reasonably modern.
Many fungi produce toxins, antibiotics, and other secondary metabolites. For example, the cosmopolitan genus Fusarium and their toxins associated with fatal outbreaks of alimentary toxic aleukia in humans were extensively studied by Abraham Z. Joffe.
Fungi are fundamental for life on earth in their roles as symbionts, e.g. in the form of mycorrhizae, insect symbionts, and lichens. Many fungi are able to break down complex organic biomolecules such as lignin, the more durable component of wood, and pollutants such as xenobiotics, petroleum, and polycyclic aromatic hydrocarbons. By decomposing these molecules, fungi play a critical role in the global carbon cycle.
Fungi and other organisms traditionally recognized as fungi, such as oomycetes and myxomycetes (slime molds), often are economically and socially important, as some cause diseases of animals (including humans) and of plants.
Apart from pathogenic fungi, many fungal species are very important in controlling the plant diseases caused by different pathogens. For example, species of the filamentous fungal genus Trichoderma are considered one of the most important biological control agents as an alternative to chemical-based products for effective crop diseases management.
Field meetings to find interesting species of fungi are known as 'forays', after the first such meeting organized by the Woolhope Naturalists' Field Club in 1868 and entitled "A foray among the funguses".
Some fungi can cause disease in humans and other animals; the study of pathogenic fungi that infect animals is referred to as medical mycology.
History
It is believed that humans started collecting mushrooms as food in prehistoric times. Mushrooms were first written about in the works of Euripides (480–406 BC). The Greek philosopher Theophrastos of Eresos (371–288 BC) was perhaps the first to try to systematically classify plants; mushrooms were considered to be plants missing certain organs. It was later Pliny the Elder (23–79 AD) who wrote about truffles in his encyclopedia Natural History. The word mycology comes from the Ancient Greek μύκης (mukēs), meaning "fungus", and the suffix -λογία (-logia), meaning "study".
The Middle Ages saw little advancement in the body of knowledge about fungi. However, the invention of the printing press allowed authors to dispel superstitions and misconceptions about the fungi that had been perpetuated by the classical authors.
The start of the modern age of mycology begins with Pier Antonio Micheli's 1737 publication of Nova plantarum genera. Published in Florence, this seminal work laid the foundations for the systematic classification of grasses, mosses and fungi. He originated the still current genus names Polyporus and Tuber, both dated 1729 (though the descriptions were later amended as invalid by modern rules).
The founding nomenclaturist Carl Linnaeus included fungi in his binomial naming system in 1753, where each type of organism has a two-word name consisting of a genus and species (whereas up to then organisms were often designated with Latin phrases containing many words). He originated the scientific names of numerous well-known mushroom taxa, such as Boletus and Agaricus, which are still in use today. During this period, fungi were still considered to belong to the plant kingdom, so they were categorized in his Species Plantarum. Linnaeus' fungal taxa were not nearly as comprehensive as his plant taxa, however, grouping together all gilled mushrooms with a stem in genus Agaricus. Thousands of gilled species exist, which were later divided into dozens of diverse genera; in its modern usage, Agaricus only refers to mushrooms closely related to the common shop mushroom, Agaricus bisporus. For example, Linnaeus gave the name Agaricus deliciosus to the saffron milk-cap, but its current name is Lactarius deliciosus. On the other hand, the field mushroom Agaricus campestris has kept the same name ever since Linnaeus's publication. The English word "agaric" is still used for any gilled mushroom, which corresponds to Linnaeus's use of the word.
The term mycology and the complementary term mycologist are traditionally attributed to M.J. Berkeley in 1836. However, mycologist appeared in writings by English botanist Robert Kaye Greville as early as 1823 in reference to Schweinitz.
Mycology and drug discovery
For centuries, certain mushrooms have been documented as a folk medicine in China, Japan, and Russia. Although the use of mushrooms in folk medicine is centered largely on the Asian continent, people in other parts of the world like the Middle East, Poland, and Belarus have been documented using mushrooms for medicinal purposes.
Mushrooms produce large amounts of vitamin D when exposed to ultraviolet (UV) light. Penicillin, ciclosporin, griseofulvin, cephalosporin and psilocybin are examples of drugs that have been isolated from molds or other fungi.
See also
Ethnomycology
Glossary of mycology
Fungal biochemical test
List of mycologists
List of mycology journals
Marine fungi
Mushroom hunting
Mycotoxicology
Pathogenic fungi
Protistology
References
Cited literature
External links
Professional organizations
BMS: British Mycological Society (United Kingdom)
MSA: Mycological Society of America (North America)
Amateur organizations
MSSF: Mycological Society of San Francisco
North American Mycological Association (list of amateur organizations in North America)
Puget Sound Mycological Society
Oregon Mycological Society
IMA Illinois Mycological Association
Miscellaneous links
Online lectures in mycology University of South Carolina
The WWW Virtual Library: Mycology
MykoWeb links page
Mycological Glossary at the Illinois Mycological Association
FUNGI Magazine for professionals and amateurs – largest circulating U.S. publication concerning all things mycological
Fungal Cell Biology Group at University of Edinburgh, UK.
Mycological Marvels Cornell University, Mann Library
Branches of biology
Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, ANOVA is used to test the difference between two or more means.
History
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. Around 1800, Laplace and Gauss developed the least-squares method for combining observations, which improved upon methods then used in astronomy and geodesy. It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827, Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800, astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article on theoretical population genetics, The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance to data analysis was published in 1921, Studies in Crop Variation I. This divided the variation of a time series into components representing annual causes and slow deterioration. Fisher's next piece, Studies in Crop Variation II, written with Winifred Mackenzie and published in 1923, studied the variation in yield across plots sown with different varieties and subjected to different fertiliser treatments. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
Randomization models were developed by several researchers. The first was published in Polish by Jerzy Neyman in 1923.
Example
The analysis of variance can be used to describe otherwise complex relations among variables. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show is likely to be rather complicated, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. One way to do that is to explain the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn't reasonable to conclude that the groups are, in fact, separate in any meaningful way).
In the illustrations to the right, groups are identified as X1, X2, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) have a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn't allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange).
An attempt to explain the weight distribution by grouping dogs as pet vs working breed and less athletic vs more athletic would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguish X1 and X2 reliably. Grouping dogs according to a coin flip might produce distributions that look similar.
An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment in the confidence in an explanatory relationship.
Classes of models
There are three classes of models used in the analysis of variance, and these are outlined here.
Fixed-effects models
The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random-effects models
Random-effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
Mixed-effects models
A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example
Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with multiple competing definitions.
Assumptions
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.
Textbook analysis using a normal distribution
The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses:
Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
Normality – the distributions of the residuals are normal.
Equality (or "homogeneity") of variances, called homoscedasticity—the variance of data in groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models; that is, the errors are independent and $\varepsilon \sim N(0, \sigma^2)$.
Randomization-based analysis
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
Unit-treatment additivity
In its simplest form, the assumption of unit-treatment additivity states that the observed response $y_{ij}$ from experimental unit $i$ when receiving treatment $j$ can be written as the sum of the unit's response $y_i$ and the treatment effect $t_j$, that is, $y_{ij} = y_i + t_j$.
The assumption of unit-treatment additivity implies that, for every treatment $j$, the $j$th treatment has exactly the same effect $t_j$ on every experimental unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model
Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald Fisher and his followers. In practice, the estimates of treatment-effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.
Summary of assumptions
The normal-model based ANOVA analysis assumes the independence, normality, and homogeneity of variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.
However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions.
The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses which are believed to follow a multiplicative model.
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
Characteristics
ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.
Algorithm
The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean".
Partitioning of the sum of squares
ANOVA uses traditional standardized terminology. The definitional equation of sample variance is $s^2 = \frac{1}{n-1}\sum_i (y_i - \bar{y})^2$, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means.
The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, for a simplified ANOVA with one type of treatment at different levels, $SS_{\text{Total}} = SS_{\text{Error}} + SS_{\text{Treatments}}$.
The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect.
The F-test
The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic
$$F = \frac{\text{variance between treatments}}{\text{variance within treatments}} = \frac{MS_{\text{Treatments}}}{MS_{\text{Error}}} = \frac{SS_{\text{Treatments}}/(I-1)}{SS_{\text{Error}}/(n_T - I)},$$
where MS is mean square, $I$ is the number of treatments and $n_T$ is the total number of cases, to the F-distribution with $I-1$ being the numerator degrees of freedom and $n_T - I$ the denominator degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.
The expected value of F is $1 + n\sigma^2_{\text{Treatment}}/\sigma^2_{\text{Error}}$ (where $n$ is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.
There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result:
The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (α). If F ≥ FCritical, the null hypothesis is rejected.
The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α).
The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum. The ANOVA F-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.
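A small worked example of the one-way F-test described above (the group data are made up; the manual sums of squares are checked against SciPy's implementation):

```python
import numpy as np
from scipy import stats

groups = [np.array([6.1, 5.8, 6.4, 6.0]),
          np.array([7.2, 6.9, 7.5, 7.1]),
          np.array([5.5, 5.9, 5.4, 5.7])]

n_total = sum(g.size for g in groups)
n_treatments = len(groups)
grand_mean = np.concatenate(groups).mean()

ss_treat = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_treat = ss_treat / (n_treatments - 1)          # treatment mean square
ms_error = ss_error / (n_total - n_treatments)    # error mean square
F = ms_treat / ms_error
p = stats.f.sf(F, n_treatments - 1, n_total - n_treatments)  # upper-tail F probability

print(F, p)
print(stats.f_oneway(*groups))  # SciPy's one-way ANOVA should reproduce F and p
```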
Extended algorithm
ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."
For a single factor
The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors.
There are some alternatives to conventional one-way analysis of variance, e.g.: Welch's heteroscedastic F test, Welch's heteroscedastic F test with trimmed means and Winsorized variances, Brown-Forsythe test, Alexander-Govern test, James second order test and Kruskal-Wallis test, available in the onewaytests R package.
It is useful to represent each data point in the following form, called a statistical model:
$$y_{ij} = \mu + \tau_j + \varepsilon_{ij}$$
where
i = 1, 2, 3, ..., R
j = 1, 2, 3, ..., C
μ = overall average (mean)
τj = differential effect (response) associated with the j-th level of X; this assumes that overall the values of τj add to zero (that is, $\sum_{j=1}^{C} \tau_j = 0$)
εij = noise or error associated with the particular ij data value
That is, we envision an additive model that says every data point can be represented by summing three quantities: the true mean, averaged over all factor levels being investigated, plus an incremental component associated with the particular column (factor level), plus a final component associated with everything else affecting that specific data value.
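Under this model, the usual least-squares estimates (stated here as a sketch for balanced data, using the sum-to-zero convention above) are the grand mean, the deviations of the column means from it, and the within-column residuals:

$$\hat{\mu} = \bar{y}_{\cdot\cdot}, \qquad \hat{\tau}_j = \bar{y}_{\cdot j} - \bar{y}_{\cdot\cdot}, \qquad \hat{\varepsilon}_{ij} = y_{ij} - \bar{y}_{\cdot j}$$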
For multiple factors
ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used.
The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz).
All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare.
The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.
Caution is advised when encountering interactions; test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Associated analysis
Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
Preparatory analysis
The number of experimental units
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.
Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.
Power analysis
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
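A brief sketch of such a calculation is shown below; the effect size, group count, and per-group sample size are hypothetical planning values, and the noncentrality parameter is taken as Cohen's f squared times the total sample size.

```python
from scipy import stats

def one_way_anova_power(effect_size_f, k_groups, n_per_group, alpha=0.05):
    """Power of the one-way ANOVA F-test for Cohen's f, k groups, n subjects per group."""
    df1 = k_groups - 1
    df2 = k_groups * (n_per_group - 1)
    noncentrality = effect_size_f ** 2 * k_groups * n_per_group  # f^2 times total N
    f_crit = stats.f.ppf(1.0 - alpha, df1, df2)
    # Power = probability that a noncentral F variate exceeds the critical value.
    return stats.ncf.sf(f_crit, df1, df2, noncentrality)

# Hypothetical planning values: medium effect (f = 0.25), 3 groups, 20 subjects per group.
print(one_way_anova_power(0.25, 3, 20))
```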
Effect size
Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately "meaningful" units may be preferable for reporting purposes.
Model confirmation
Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything, including time and modeled data values. Trends hint at interactions among factors or among observations.
Follow-up tests
A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are "planned" (a priori) or "post hoc." Planned tests are determined before looking at the data, and post hoc tests are conceived only after looking at the data (though the term "post hoc" is inconsistently used).
The follow-up tests may be "simple" pairwise comparisons of individual group means or may be "compound" comparisons (e.g., comparing the mean pooling across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for the multiple comparisons problem.
Follow-up tests to identify which specific groups, variables, or factors have statistically different means include the Tukey's range test, and Duncan's new multiple range test. In turn, these tests are often followed with a Compact Letter Display (CLD) methodology in order to render the output of the mentioned tests more transparent to a non-statistician audience.
Study designs
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.
Some popular designs use the following types of ANOVA:
One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop, or different levels of antibiotic action on several different bacterial species, or different levels of effect of some medicine on groups of patients. However, should these groups not be independent, and there is an order in the groups (such as mild, moderate and severe disease), or in the dose of a drug (such as 5 mg/mL, 10 mg/mL, 20 mg/mL) given to the same group of patients, then a linear trend estimation should be used. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by $F = t^2$.
Factorial ANOVA is used when there is more than one factor.
Repeated measures ANOVA is used when the same subjects are used for each factor (e.g., in a longitudinal study).
Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
Cautions
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered."
ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred.
Generalizations
ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
The Kruskal-Wallis test and the Friedman test are nonparametric tests which do not rely on an assumption of normality.
Connection to linear regression
Below we make clear the connection between multi-way ANOVA and linear regression.
Linearly re-order the data so that the $k$-th observation is associated with a response $y_k$ and factors $Z_{k,b}$, where $b \in \{1, 2, \ldots, B\}$ indexes the different factors and $B$ is the total number of factors. In one-way ANOVA $B = 1$, and in two-way ANOVA $B = 2$. Furthermore, we assume the $b$-th factor has $I_b$ levels, namely $\{1, 2, \ldots, I_b\}$. Now, we can one-hot encode the factors into the $\sum_{b=1}^{B} I_b$-dimensional vector $v_k$.
The one-hot encoding function $g_b : \{1, 2, \ldots, I_b\} \to \{0, 1\}^{I_b}$ is defined such that the $i$-th entry of $g_b(Z_{k,b})$ is
$$g_b(Z_{k,b})_i = \begin{cases} 1 & \text{if } i = Z_{k,b}, \\ 0 & \text{otherwise.} \end{cases}$$
The vector $v_k$ is the concatenation of all of the above vectors for all $b$. Thus, $v_k = [g_1(Z_{k,1}), g_2(Z_{k,2}), \ldots, g_B(Z_{k,B})]$. In order to obtain a fully general $B$-way interaction ANOVA we must also concatenate every additional interaction term in the vector $v_k$ and then add an intercept term. Let that vector be $X_k$.
With this notation in place, we now have the exact connection with linear regression. We simply regress the response $y_k$ against the vector $X_k$. However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use F-statistics or other methods to determine the relevance of the individual factors.
Example
We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels.
Define $a_i = 1$ if $Z_{k,1} = i$ (and $a_i = 0$ otherwise) and $b_i = 1$ if $Z_{k,2} = i$ (and $b_i = 0$ otherwise); i.e., $a$ is the one-hot encoding of the first factor and $b$ is the one-hot encoding of the second factor.
With that,
$$X_k = [a_1, a_2, b_1, b_2, b_3, a_1 b_1, a_1 b_2, a_1 b_3, a_2 b_1, a_2 b_2, a_2 b_3, 1],$$
where the last term is an intercept term. For a more concrete example suppose that $Z_{k,1} = 2$ and $Z_{k,2} = 1$. Then,
$$X_k = [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1].$$
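The same 2 × 3 encoding can be sketched in Python (the function and variable names here are illustrative only, chosen to mirror the notation above):

```python
import numpy as np

def one_hot(level, n_levels):
    v = np.zeros(n_levels)
    v[level - 1] = 1.0
    return v

def design_row(z1, z2, levels=(2, 3)):
    """Build X_k for one observation of a 2-level by 3-level design:
    main effects, all pairwise interaction terms, then an intercept."""
    a = one_hot(z1, levels[0])
    b = one_hot(z2, levels[1])
    interactions = np.outer(a, b).ravel()  # a1*b1, a1*b2, a1*b3, a2*b1, a2*b2, a2*b3
    return np.concatenate([a, b, interactions, [1.0]])

print(design_row(2, 1))  # -> [0. 1. 1. 0. 0. 0. 0. 0. 1. 0. 0. 1.]
```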
See also
ANOVA on ranks
ANOVA-simultaneous component analysis
Analysis of covariance (ANCOVA)
Analysis of molecular variance (AMOVA)
Analysis of rhythmic variance (ANORVA)
Expected mean squares
Explained variation
Linear trend estimation
Mixed-design analysis of variance
Multivariate analysis of covariance (MANCOVA)
Permutational analysis of variance
Variance decomposition
Footnotes
Notes
References
Pre-publication chapters are available on-line.
Cohen, Jacob (1988). Statistical power analysis for the behavior sciences (2nd ed.). Routledge
Cox, David R. (1958). Planning of experiments. Reprinted as
Freedman, David A.(2005). Statistical Models: Theory and Practice, Cambridge University Press.
Lehmann, E.L. (1959) Testing Statistical Hypotheses. John Wiley & Sons.
Moore, David S. & McCabe, George P. (2003). Introduction to the Practice of Statistics (4e). W H Freeman & Co.
Rosenbaum, Paul R. (2002). Observational Studies (2nd ed.). New York: Springer-Verlag.
Further reading
External links
SOCR: ANOVA Activity
Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R (University of Southampton)
NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
Analysis of variance: Introduction
Design of experiments
Statistical tests
Parametric statistics
Glutamine
Glutamine (symbol Gln or Q) is an α-amino acid that is used in the biosynthesis of proteins. Its side chain is similar to that of glutamic acid, except the carboxylic acid group is replaced by an amide. It is classified as a charge-neutral, polar amino acid. It is non-essential and conditionally essential in humans, meaning the body can usually synthesize sufficient amounts of it, but in some instances of stress, the body's demand for glutamine increases, and glutamine must be obtained from the diet. It is encoded by the codons CAA and CAG. It is named after glutamic acid, which in turn is named after its discovery in cereal proteins, gluten.
In human blood, glutamine is the most abundant free amino acid.
Dietary sources of glutamine include protein-rich foods such as beef, chicken, fish, dairy products and eggs; vegetables such as beans, beets, cabbage, spinach, carrots and parsley; vegetable juices; and also wheat, papaya, Brussels sprouts, celery, kale and fermented foods such as miso.
The one-letter symbol Q for glutamine was assigned in alphabetical sequence to N for asparagine, being larger by merely one methylene –CH2– group. Note that P was used for proline, and O was avoided due to similarity with D. The mnemonic Qlutamine was also proposed.
Functions
Glutamine plays a role in a variety of biochemical functions:
Protein synthesis, as any other of the 20 proteinogenic amino acids
Lipid synthesis, especially by cancer cells.
Regulation of acid-base balance in the kidney by producing ammonium
Cellular energy, as a source, next to glucose
Nitrogen donation for many anabolic processes, including the synthesis of purines
Carbon donation, as a source, refilling the citric acid cycle
Nontoxic transporter of ammonia in the blood circulation.
Integrity of healthy intestinal mucosa, though small randomized trials have shown no benefit in Crohn's disease.
Roles in metabolism
Glutamine maintains redox balance by participating in glutathione synthesis and contributing to anabolic processes such as lipid synthesis by reductive carboxylation.
Glutamine provides a source of carbon and nitrogen for use in other metabolic processes. Glutamine is present in serum at higher concentrations than other amino acids and is essential for many cellular functions. Examples include the synthesis of nucleotides and non-essential amino acids. One of the most important functions of glutamine is its ability to be converted into α-ketoglutarate (α-KG), which helps to maintain the flow of the tricarboxylic acid cycle, generating ATP via the electron carriers NADH and FADH2. The highest consumption of glutamine occurs in the cells of the intestines, kidney cells (where it is used for acid-base balance), activated immune cells, and many cancer cells.
Production
Glutamine is produced industrially using mutants of Brevibacterium flavum, which gives ca. 40 g/L in 2 days using glucose as a carbon source.
Biosynthesis
Glutamine synthesis from glutamate and ammonia is catalyzed by the enzyme glutamine synthetase. The majority of glutamine production occurs in muscle tissue, accounting for about 90% of all glutamine synthesized. Glutamine is also released, in small amounts, by the lungs and brain. Although the liver is capable of glutamine synthesis, its role in glutamine metabolism is more regulatory than productive, as the liver takes up glutamine derived from the gut via the hepatic portal system.
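The glutamine synthetase reaction referred to above is conventionally written as:

$$\text{L-glutamate} + \text{NH}_3 + \text{ATP} \longrightarrow \text{L-glutamine} + \text{ADP} + \text{P}_i$$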
Uses
Nutrition
Glutamine is the most abundant naturally occurring, nonessential amino acid in the human body, and one of the few amino acids that can directly cross the blood–brain barrier. Humans obtain glutamine through catabolism of proteins in foods they eat. In states where tissue is being built or repaired, like growth of babies, or healing from wounds or severe illness, glutamine becomes conditionally essential.
Sickle cell disease
In 2017, the U.S. Food and Drug Administration (FDA) approved L-glutamine oral powder, marketed as Endari, to reduce severe complications of sickle cell disease in people aged five years and older with the disorder.
The safety and efficacy of L-glutamine oral powder were studied in a randomized trial of subjects ages five to 58 years old with sickle cell disease who had two or more painful crises within the 12 months prior to enrollment in the trial. Subjects were assigned randomly to treatment with L-glutamine oral powder or placebo, and the effect of treatment was evaluated over 48 weeks. Subjects who were treated with L-glutamine oral powder experienced fewer hospital visits for pain treated with a parenterally administered narcotic or ketorolac (sickle cell crises), on average, compared to subjects who received a placebo (median 3 vs. median 4), fewer hospitalizations for sickle cell pain (median 2 vs. median 3), and fewer days in the hospital (median 6.5 days vs. median 11 days). Subjects who received L-glutamine oral powder also had fewer occurrences of acute chest syndrome (a life-threatening complication of sickle cell disease) compared with patients who received a placebo (8.6 percent vs. 23.1 percent).
Common side effects of L-glutamine oral powder include constipation, nausea, headache, abdominal pain, cough, pain in the extremities, back pain and chest pain.
L-glutamine oral powder received orphan drug designation. The FDA granted the approval of Endari to Emmaus Medical Inc.
Medical food
Glutamine is marketed as medical food and is prescribed when a medical professional believes a person in their care needs supplementary glutamine due to metabolic demands beyond what can be met by endogenous synthesis or diet.
Safety
Glutamine is safe in adults and in preterm infants. Although glutamine is metabolized to glutamate and ammonia, both of which have neurological effects, their concentrations are not increased much, and no adverse neurological effects were detected. The observed safe level for supplemental L-glutamine in normal healthy adults is 14 g/day.
Adverse effects of glutamine have been described for people receiving home parenteral nutrition and those with liver-function abnormalities.
Although glutamine has no effect on the proliferation of tumor cells, it is still possible that glutamine supplementation may be detrimental in some cancer types.
Ceasing glutamine supplementation in people adapted to very high consumption may initiate a withdrawal effect, raising the risk of health problems such as infections or impaired integrity of the intestine.
Structure
Glutamine can exist in either of two enantiomeric forms, L-glutamine and D-glutamine. The L-form is found in nature. Glutamine contains an α-amino group which is in the protonated −NH3+ form under biological conditions and a carboxylic acid group which is in the deprotonated −COO− form, known as carboxylate, under physiological conditions.
Research
Glutamine mouthwash may be useful to prevent oral mucositis in people undergoing chemotherapy but intravenous glutamine does not appear useful to prevent mucositis in the GI tract.
Glutamine supplementation was thought to have potential to reduce complications in people who are critically ill or who have had abdominal surgery but this was based on poor quality clinical trials. Supplementation does not appear to be useful in adults or children with Crohn's disease or inflammatory bowel disease, but clinical studies as of 2016 were underpowered. Supplementation does not appear to have an effect in infants with significant problems of the stomach or intestines.
Some athletes use L-glutamine as a supplement. Studies support the positive effects of the chronic oral administration of the supplement on the injury and inflammation induced by intense aerobic and exhaustive exercise, but the effects on muscle recovery from weight training are unclear.
Stress conditions for plants (drought, injury, soil salinity) cause the synthesis of such plant enzymes as superoxide dismutase, L-ascorbate oxidase, and Delta 1 DNA polymerase. Limiting this process, initiated by conditions of strong soil salinity, can be achieved by administering exogenous glutamine to plants. The decrease in the level of expression of genes responsible for the synthesis of superoxide dismutase increases with the increase in glutamine concentration.
See also
Isoglutamine
Trinucleotide repeat disorder
PolyQ tract
References
External links
Glutamine spectra acquired through mass spectroscopy
Carboxamides
Dietary supplements
Glucogenic amino acids
Proteinogenic amino acids
Medical food
Orphan drugs
Glutamate (neurotransmitter)
X
Neurotransmitter precursors
Fluid mosaic model
The fluid mosaic model explains various characteristics regarding the structure of functional cell membranes. According to this biological model, there is a lipid bilayer (two molecules thick layer consisting primarily of amphipathic phospholipids) in which protein molecules are embedded. The phospholipid bilayer gives fluidity and elasticity to the membrane. Small amounts of carbohydrates are also found in the cell membrane. The biological model, which was devised by Seymour Jonathan Singer and Garth L. Nicolson in 1972, describes the cell membrane as a two-dimensional liquid where embedded proteins are generally randomly distributed. For example, it is stated that "A prediction of the fluid mosaic model is that the two-dimensional long-range distribution of any integral protein in the plane of the membrane is essentially random."
Chemical makeup
Experimental evidence
The fluid property of functional biological membranes had been determined through labeling experiments, x-ray diffraction, and calorimetry. These studies showed that integral membrane proteins diffuse at rates affected by the viscosity of the lipid bilayer in which they were embedded, and demonstrated that the molecules within the cell membrane are dynamic rather than static.
Previous models of biological membranes included the Robertson Unit Membrane Model and the Davson-Danielli Tri-Layer model. These models had proteins present as sheets neighboring a lipid layer, rather than incorporated into the phospholipid bilayer. Other models described repeating, regular units of protein and lipid. These models were not well supported by microscopy and thermodynamic data, and did not accommodate evidence for dynamic membrane properties.
An important experiment that provided evidence supporting fluid and dynamic biological membranes was performed by Frye and Edidin. They used Sendai virus to force human and mouse cells to fuse and form a heterokaryon. Using antibody staining, they were able to show that the mouse and human proteins remained segregated to separate halves of the heterokaryon a short time after cell fusion. However, the proteins eventually diffused and over time the border between the two halves was lost. Lowering the temperature slowed the rate of this diffusion by causing the membrane phospholipids to transition from a fluid to a gel phase. Singer and Nicolson rationalized the results of these experiments using their fluid mosaic model.
The fluid mosaic model explains changes in structure and behavior of cell membranes under different temperatures, as well as the association of membrane proteins with the membranes. While Singer and Nicolson had substantial evidence drawn from multiple subfields to support their model, recent advances in fluorescence microscopy and structural biology have validated the fluid mosaic nature of cell membranes.
Subsequent developments
Membrane asymmetry
Additionally, the two leaflets of biological membranes are asymmetric and divided into subdomains composed of specific proteins or lipids, allowing spatial segregation of biological processes associated with membranes. Cholesterol and cholesterol-interacting proteins can concentrate into lipid rafts and constrain cell signaling processes to only these rafts. Another form of asymmetry was shown by the work of Mouritsen and Bloom in 1984, where they proposed a Mattress Model of lipid-protein interactions to address the biophysical evidence that membranes can range in thickness and in how well they match the hydrophobic regions of embedded proteins.
Non-bilayer membranes
The existence of non-bilayer lipid formations with important biological functions was confirmed subsequent to publication of the fluid mosaic model. These membrane structures may be useful when the cell needs to propagate a non-bilayer form, which occurs during cell division and the formation of a gap junction.
Membrane curvature
The membrane bilayer is not always flat. Local curvature of the membrane can be caused by the asymmetry and non-bilayer organization of lipids as discussed above. More dramatic and functional curvature is achieved through BAR domains, which bind to phosphatidylinositol on the membrane surface, assisting in vesicle formation, organelle formation and cell division. Curvature development is in constant flux and contributes to the dynamic nature of biological membranes.
Lipid movement within the membrane
During the 1970s, it was acknowledged that individual lipid molecules undergo free lateral diffusion within each of the layers of the lipid membrane. Diffusion occurs at a high speed, with an average lipid molecule diffusing ~2μm, approximately the length of a large bacterial cell, in about 1 second. It has also been observed that individual lipid molecules rotate rapidly around their own axis. Moreover, phospholipid molecules can, although they seldom do, migrate from one side of the lipid bilayer to the other (a process known as flip-flop). However, flip-flop movement is enhanced by flippase enzymes. The processes described above influence the disordered nature of lipid molecules and interacting proteins in the lipid membranes, with consequences to membrane fluidity, signaling, trafficking and function.
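The ~2 μm figure is consistent with simple two-dimensional Brownian motion if one assumes a lateral diffusion coefficient of roughly 1 μm²/s for a membrane lipid (an assumed, typical order-of-magnitude value, not a number stated above); a minimal sketch of the arithmetic:

```python
import math

def rms_displacement_2d(diffusion_um2_per_s: float, time_s: float) -> float:
    """Root-mean-square displacement for two-dimensional Brownian motion: sqrt(4*D*t)."""
    return math.sqrt(4.0 * diffusion_um2_per_s * time_s)

# With D ~ 1 um^2/s (assumed), a lipid wanders about 2 um in 1 s, matching the figure quoted above.
print(rms_displacement_2d(diffusion_um2_per_s=1.0, time_s=1.0))  # -> 2.0
```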
Restrictions to lateral diffusion
There are restrictions to the lateral mobility of the lipid and protein components in the fluid membrane imposed by zonation. Early attempts to explain the assembly of membrane zones include the formation of lipid rafts and "cytoskeletal fences", corrals wherein lipids and membrane proteins can diffuse freely but which they can seldom leave. These ideas remain controversial, and alternative explanations are available such as the proteolipid code.
Lipid rafts
Lipid rafts are nanometre-scale membrane platforms with a particular lipid and protein composition that diffuse laterally within the liquid lipid bilayer. Sphingolipids and cholesterol are important building blocks of the lipid rafts.
Protein complexes
Cell membrane proteins and glycoproteins do not exist as single elements of the lipid membrane, as first proposed by Singer and Nicolson in 1972. Rather, they occur as diffusing complexes within the membrane. The assembly of single molecules into these macromolecular complexes has important functional consequences for the cell, such as ion and metabolite transport, signaling, cell adhesion, and migration.
Cytoskeletal fences (corrals) and binding to the extracellular matrix
Some proteins embedded in the bilipid layer interact with the extracellular matrix outside the cell, cytoskeleton filaments inside the cell, and septin ring-like structures. These interactions have a strong influence on shape and structure, as well as on compartmentalization. Moreover, they impose physical constraints that restrict the free lateral diffusion of proteins and at least some lipids within the bilipid layer.
When integral proteins of the lipid bilayer are tethered to the extracellular matrix, they are unable to diffuse freely. Proteins with a long intracellular domain may collide with a fence formed by cytoskeleton filaments. Both processes restrict the diffusion of proteins and lipids directly involved, as well as of other interacting components of the cell membranes.
Septins are a family of GTP-binding proteins highly conserved among eukaryotes. Prokaryotes have similar proteins called paraseptins. They form compartmentalizing ring-like structures strongly associated with the cell membranes. Septins are involved in the formation of structures such as cilia and flagella, dendritic spines, and yeast buds.
Historical timeline
1895 – Ernest Overton hypothesized that cell membranes are made out of lipids.
1925 – Evert Gorter and François Grendel found that red blood cell membranes are formed by a fatty layer two molecules thick, i.e. they described the bilipid nature of the cell membrane.
1935 – Hugh Davson and James Danielli proposed that lipid membranes are layers composed by proteins and lipids with pore-like structures that allow specific permeability for certain molecules. Then, they suggested a model for the cell membrane, consisting of a lipid layer surrounded by protein layers at both sides of it.
1957 – J. David Robertson, based on electron microscopy studies, established the "Unit Membrane Hypothesis". This states that all membranes in the cell, i.e. plasma and organelle membranes, have the same structure: a bilayer of phospholipids with monolayers of proteins at both sides of it.
1972 – SJ Singer and GL Nicolson proposed the fluid mosaic model as an explanation for the data and latest evidence regarding the structure and thermodynamics of cell membranes.
1997 – K Simons and E Ikonen proposed the lipid raft theory as an initial explanation of membrane zonation.
2024 – TA Kervin and M Overduin proposed the proteolipid code to fully explain membrane zonation as the lipid raft theory became increasingly controversial.
Notes and references
Membrane biology
Organelles
Cell anatomy
Cyclic voltammetry
In electrochemistry, cyclic voltammetry (CV) is a type of potentiodynamic measurement. In a cyclic voltammetry experiment, the working electrode potential is ramped linearly versus time. Unlike in linear sweep voltammetry, after the set potential is reached in a CV experiment, the working electrode's potential is ramped in the opposite direction to return to the initial potential. These cycles of ramps in potential may be repeated as many times as needed. The current at the working electrode is plotted versus the applied voltage (that is, the working electrode's potential) to give the cyclic voltammogram trace. Cyclic voltammetry is generally used to study the electrochemical properties of an analyte in solution or of a molecule that is adsorbed onto the electrode.
Experimental method
In cyclic voltammetry (CV), the electrode potential ramps linearly versus time in cyclical phases (blue trace in Figure 2). The rate of voltage change over time during each of these phases is known as the experiment's scan rate (V/s). The potential is measured between the working electrode and the reference electrode, while the current is measured between the working electrode and the counter electrode. These data are plotted as current density (j) versus applied potential (E, often referred to as just 'potential'). In Figure 2, during the initial forward scan (from t0 to t1) an increasingly oxidizing potential is applied; thus the anodic current will, at least initially, increase over this time period, assuming that there are oxidizable analytes in the system. At some point after the oxidation potential of the analyte is reached, the anodic current will decrease as the concentration of oxidizable analyte is depleted. If the redox couple is reversible, then during the reverse scan (from t1 to t2), the oxidized analyte will start to be re-reduced, giving rise to a current of opposite polarity (a cathodic current). The more reversible the redox couple is, the more similar the oxidation peak will be in shape to the reduction peak. Hence, CV data can provide information about redox potentials and electrochemical reaction rates.
For instance, if the electron transfer at the working electrode surface is fast and the current is limited by the diffusion of analyte species to the electrode surface, then the peak current will be proportional to the square root of the scan rate. This relationship is described by the Randles–Sevcik equation. In this situation, the CV experiment only samples a small portion of the solution, i.e., the diffusion layer at the electrode surface.
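For reference, the Randles–Sevcik equation for a reversible, diffusion-controlled couple at a planar electrode is commonly written as follows (a standard textbook form, where n is the number of electrons transferred, F the Faraday constant, A the electrode area, C the bulk concentration, ν the scan rate, D the diffusion coefficient, R the gas constant and T the temperature):

```latex
% Randles–Sevcik equation: peak current for a reversible, diffusion-controlled couple
i_p = 0.4463\, n F A C \sqrt{\frac{n F \nu D}{R T}}
```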
Characterization
The utility of cyclic voltammetry is highly dependent on the analyte being studied. The analyte has to be redox active within the potential window to be scanned.
The analyte is in solution
Reversible couples
Often the analyte displays a reversible CV wave (such as that depicted in Figure 1), which is observed when all of the initial analyte can be recovered after a forward and reverse scan cycle. Although such reversible couples are simpler to analyze, they contain less information than more complex waveforms.
The waveform of even reversible couples is complex owing to the combined effects of polarization and diffusion. The difference between the two peak potentials (Ep), ΔEp, is of particular interest.
ΔEp = Epa - Epc > 0
This difference mainly results from the effects of analyte diffusion rates. In the ideal case of a reversible 1e- couple, ΔEp is 57 mV and the full-width half-max of the forward scan peak is 59 mV. Typical values observed experimentally are greater, often approaching 70 or 80 mV. The waveform is also affected by the rate of electron transfer, usually discussed as the activation barrier for electron transfer. A theoretical description of polarization overpotential is in part described by the Butler–Volmer equation and Cottrell equation. In an ideal system the relationship reduces to ΔEp = 2.22RT/nF ≈ 57 mV/n for an n-electron process.
Focusing on current, reversible couples are characterized by ipa/ipc = 1.
When a reversible peak is observed, thermodynamic information in the form of a half cell potential E01/2 can be determined. When waves are semi-reversible (ipa/ipc is close but not equal to 1), it may be possible to determine even more specific information (see electrochemical reaction mechanism).
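A minimal sketch of how these diagnostics are commonly extracted from measured peak values (illustrative only; the midpoint estimate of E1/2 assumes roughly equal diffusion coefficients for the oxidized and reduced species, and the numbers in the example are hypothetical):

```python
def cv_diagnostics(E_pa: float, E_pc: float, i_pa: float, i_pc: float):
    """Reversibility diagnostics from anodic/cathodic peak potentials (V) and peak currents (A)."""
    delta_Ep = E_pa - E_pc            # peak separation; ~0.057 V expected for an ideal 1-electron couple
    E_half = (E_pa + E_pc) / 2.0      # midpoint estimate of the half-wave potential E1/2
    peak_ratio = abs(i_pa / i_pc)     # ~1 for a chemically reversible couple
    return delta_Ep, E_half, peak_ratio

# Hypothetical peak readings, for illustration only
print(cv_diagnostics(E_pa=0.285, E_pc=0.215, i_pa=1.02e-5, i_pc=-1.00e-5))
```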
The current maxima for oxidation and reduction themselves depend on the scan rate; see the figure.
To study the nature of the electrochemical reaction mechanism it is useful to perform a power fit of the peak current against the scan rate, ip = a·ν^b.
A fit with b = 0.5, as in the figure, shows the proportionality of the peak currents to the square root of the scan rate when additionally ipa/ipc ≈ 1 is fulfilled.
This leads to the so-called Randles–Sevcik equation, and the rate-determining step of this electrochemical redox reaction can be assigned to diffusion.
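A minimal sketch of such a power fit, assuming numpy is available and that scan rates and baseline-corrected peak currents have already been extracted from the voltammograms (variable names are illustrative): the exponent b comes from a linear fit in log–log space, with b ≈ 0.5 pointing to diffusion control and b ≈ 1 to a surface-adsorbed species.

```python
import numpy as np

def fit_peak_current_exponent(scan_rates, peak_currents):
    """Fit i_p = a * nu**b by linear regression of log10(i_p) against log10(nu); returns (a, b)."""
    log_nu = np.log10(np.asarray(scan_rates, dtype=float))
    log_ip = np.log10(np.abs(np.asarray(peak_currents, dtype=float)))
    b, log_a = np.polyfit(log_nu, log_ip, 1)  # slope = b, intercept = log10(a)
    return 10.0 ** log_a, b
```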
Nonreversible couples
Many redox processes observed by CV are quasi-reversible or non-reversible. In such cases the thermodynamic potential E01/2 is often deduced by simulation. The irreversibility is indicated by ipa/ipc ≠ 1. Deviations from unity are attributable to a subsequent chemical reaction that is triggered by the electron transfer. Such EC processes can be complex, involving isomerization, dissociation, association, etc.
The analyte is adsorbed onto the electrode surface
Adsorbed species give simple voltammetric responses: ideally, at slow scan rates, there is no peak separation, the peak width is 90 mV for a one-electron redox couple, and the peak current and peak area are proportional to scan rate (observing that the peak current is proportional to scan rate proves that the redox species that gives the peak is actually immobilised). The effect of increasing the scan rate can be used to measure the rate of interfacial electron transfer and/or the rates of reactions that are coupled to electron transfer. This technique has been useful to study redox proteins, some of which readily adsorb on various electrode materials, but the theory for biological and non-biological redox molecules is the same (see the page about protein film voltammetry).
Experimental setup
CV experiments are conducted on a solution in a cell fitted with electrodes. The solution consists of the solvent, in which the supporting electrolyte and the species to be studied are dissolved.
The cell
A standard CV experiment employs a cell fitted with three electrodes: reference electrode, working electrode, and counter electrode. This combination is sometimes referred to as a three-electrode setup. Electrolyte is usually added to the sample solution to ensure sufficient conductivity. The solvent, electrolyte, and material composition of the working electrode will determine the potential range that can be accessed during the experiment.
The electrodes are immobile and sit in unstirred solutions during cyclic voltammetry. This "still" solution method gives rise to cyclic voltammetry's characteristic diffusion-controlled peaks. This method also allows a portion of the analyte to remain after reduction or oxidation so that it may display further redox activity. Stirring the solution between cyclic voltammetry traces is important in order to supply the electrode surface with fresh analyte for each new experiment. The solubility of an analyte can change drastically with its overall charge; as such it is common for reduced or oxidized analyte species to precipitate out onto the electrode. This layering of analyte can insulate the electrode surface, display its own redox activity in subsequent scans, or otherwise alter the electrode surface in a way that affects the CV measurements. For this reason it is often necessary to clean the electrodes between scans.
Common materials for the working electrode include glassy carbon, platinum, and gold. These electrodes are generally encased in a rod of inert insulator with a disk exposed at one end. A regular working electrode has a radius within an order of magnitude of 1 mm. Having a controlled surface area with a well-defined shape is necessary for being able to interpret cyclic voltammetry results.
To run cyclic voltammetry experiments at very high scan rates a regular working electrode is insufficient. High scan rates create peaks with large currents and increased resistances, which result in distortions. Ultramicroelectrodes can be used to minimize the current and resistance.
The counter electrode, also known as the auxiliary or second electrode, can be any material that conducts current easily, will not react with the bulk solution, and has a surface area much larger than the working electrode. Common choices are platinum and graphite. Reactions occurring at the counter electrode surface are unimportant as long as it continues to conduct current well. To maintain the observed current the counter electrode will often oxidize or reduce the solvent or bulk electrolyte.
Solvents
CV can be conducted using a variety of solutions. Solvent choice for cyclic voltammetry takes into account several requirements. The solvent must dissolve the analyte and high concentrations of the supporting electrolyte. It must also be stable in the potential window of the experiment with respect to the working electrode. It must not react with either the analyte or the supporting electrolyte. It must be pure to prevent interference.
Electrolyte
The electrolyte ensures good electrical conductivity and minimizes iR drop such that the recorded potentials correspond to actual potentials. For aqueous solutions, many electrolytes are available, but typical ones are alkali metal salts of perchlorate and nitrate. In nonaqueous solvents, the range of electrolytes is more limited, and a popular choice is tetrabutylammonium hexafluorophosphate.
Related potentiometric techniques
Potentiodynamic techniques also exist that add low-amplitude AC perturbations to a potential ramp and measure variable response in a single frequency (AC voltammetry) or in many frequencies simultaneously (potentiodynamic electrochemical impedance spectroscopy). The response in alternating current is two-dimensional, characterized by both amplitude and phase. These data can be analyzed to determine information about different chemical processes (charge transfer, diffusion, double layer charging, etc.). Frequency response analysis enables simultaneous monitoring of the various processes that contribute to the potentiodynamic AC response of an electrochemical system.
Whereas cyclic voltammetry is not hydrodynamic voltammetry, useful electrochemical methods are. In such cases, flow is achieved at the electrode surface by stirring the solution, pumping the solution, or rotating the electrode as is the case with rotating disk electrodes and rotating ring-disk electrodes. Such techniques target steady state conditions and produce waveforms that appear the same when scanned in either the positive or negative directions, thus limiting them to linear sweep voltammetry.
Applications
Cyclic voltammetry (CV) has become an important and widely used electroanalytical technique in many areas of chemistry. It is often used to study a variety of redox processes, to determine the stability of reaction products, the presence of intermediates in redox reactions, electron transfer kinetics, and the reversibility of a reaction. It can be used for electrochemical deposition of thin films or for determining suitable reduction potential range of the ions present in electrolyte for electrochemical deposition. CV can also be used to determine the electron stoichiometry of a system, the diffusion coefficient of an analyte, and the formal reduction potential of an analyte, which can be used as an identification tool. In addition, because concentration is proportional to current in a reversible, Nernstian system, the concentration of an unknown solution can be determined by generating a calibration curve of current vs. concentration.
In cellular biology it is used to measure the concentrations of electroactive species in living organisms. In organometallic chemistry, it is used to evaluate redox mechanisms.
Measuring antioxidant capacity
Cyclic voltammetry can be used to determine the antioxidant capacity in food and even skin. Low molecular weight antioxidants, molecules that prevent other molecules from being oxidized by acting as reducing agents, are important in living cells because they inhibit cell damage or death caused by oxidation reactions that produce radicals. Examples of antioxidants include flavonoids, whose antioxidant activity is greatly increased with more hydroxyl groups. Because traditional methods to determine antioxidant capacity involve tedious steps, techniques to increase the rate of the experiment are continually being researched. One such technique involves cyclic voltammetry because it can measure the antioxidant capacity by quickly measuring the redox behavior over a complex system without the need to measure each component's antioxidant capacity. Furthermore, antioxidants are quickly oxidized at inert electrodes, so the half-wave potential can be utilized to determine antioxidant capacity. It is important to note that whenever cyclic voltammetry is utilized, it is usually compared to spectrophotometry or high-performance liquid chromatography (HPLC). Applications of the technique extend to food chemistry, where it is used to determine the antioxidant activity of red wine, chocolate, and hops. Additionally, it even has uses in the world of medicine in that it can determine antioxidants in the skin.
Evaluation of a technique
The technique being evaluated uses voltammetric sensors combined in an electronic tongue (ET) to observe the antioxidant capacity in red wines. These electronic tongues (ETs) consist of multiple sensing units like voltammetric sensors, which will have unique responses to certain compounds. This approach is optimal to use since samples of high complexity can be analyzed with high cross-selectivity. Thus, the sensors can be sensitive to pH and antioxidants. As usual, the voltage in the cell was monitored using a working electrode and a reference electrode (silver/silver chloride electrode). Furthermore, a platinum counter electrode allows the current to continue to flow during the experiment. The Carbon Paste Electrodes sensor (CPE) and the Graphite-Epoxy Composite (GEC) electrode are tested in a saline solution before the scanning of the wine so that a reference signal can be obtained. The wines are then ready to be scanned, once with CPE and once with GEC. While cyclic voltammetry was successfully used to generate currents using the wine samples, the signals were complex and needed an additional extraction stage. It was found that the ET method could successfully analyze wine's antioxidant capacity as it agreed with traditional methods like TEAC, Folin-Ciocalteu, and I280 indexes. Additionally, the time was reduced, the sample did not have to be pretreated, and other reagents were unnecessary, all of which diminished the popularity of traditional methods. Thus, cyclic voltammetry successfully determines the antioxidant capacity and even improves previous results.
Antioxidant capacity of chocolate and hops
The phenolic antioxidants for cocoa powder, dark chocolate, and milk chocolate can also be determined via cyclic voltammetry. In order to achieve this, the anodic peaks are calculated and analyzed with the knowledge that the first and third anodic peaks can be assigned to the first and second oxidation of flavonoids, while the second anodic peak represents phenolic acids. Using the graph produced by cyclic voltammetry, the total phenolic and flavonoid content can be deduced in each of the three samples. It was observed that cocoa powder and dark chocolate had the highest antioxidant capacity since they had high total phenolic and flavonoid content. Milk chocolate had the lowest capacity as it had the lowest phenolic and flavonoid content. While the antioxidant content was given using the cyclic voltammetry anodic peaks, HPLC must then be used to determine the purity of catechins and procyanidin in cocoa powder, dark chocolate, and milk chocolate.
Hops, the flowers used in making beer, contain antioxidant properties due to the presence of flavonoids and other polyphenolic compounds. In this cyclic voltammetry experiment, the working electrode voltage was determined using a ferricinium/ferrocene reference electrode. By comparing different hop extract samples, it was observed that the sample containing polyphenols that were oxidized at less positive potentials proved to have better antioxidant capacity.
See also
Current–voltage characteristic
Electroanalytical methods
Fast-scan cyclic voltammetry
Randles–Sevcik equation
Voltammetry
References
Further reading
External links
Electroanalytical methods
Fatty acid
In chemistry, particularly in biochemistry, a fatty acid is a carboxylic acid with an aliphatic chain, which is either saturated or unsaturated. Most naturally occurring fatty acids have an unbranched chain of an even number of carbon atoms, from 4 to 28. Fatty acids are a major component of the lipids (up to 70% by weight) in some species such as microalgae but in some other organisms are not found in their standalone form, but instead exist as three main classes of esters: triglycerides, phospholipids, and cholesteryl esters. In any of these forms, fatty acids are both important dietary sources of fuel for animals and important structural components for cells.
History
The concept of fatty acid (acide gras) was introduced in 1813 by Michel Eugène Chevreul, though he initially used some variant terms: graisse acide and acide huileux ("acid fat" and "oily acid").
Types of fatty acids
Fatty acids are classified in many ways: by length, by saturation vs unsaturation, by even vs odd carbon content, and by linear vs branched.
Length of fatty acids
Short-chain fatty acids (SCFAs) are fatty acids with aliphatic tails of five or fewer carbons (e.g. butyric acid).
Medium-chain fatty acids (MCFAs) are fatty acids with aliphatic tails of 6 to 12 carbons, which can form medium-chain triglycerides.
Long-chain fatty acids (LCFAs) are fatty acids with aliphatic tails of 13 to 21 carbons.
Very long chain fatty acids (VLCFAs) are fatty acids with aliphatic tails of 22 or more carbons.
Saturated fatty acids
Saturated fatty acids have no C=C double bonds. They have the formula CH3(CH2)nCOOH, for different n. An important saturated fatty acid is stearic acid (n = 16), which when neutralized with sodium hydroxide gives sodium stearate, the most common form of soap.
Unsaturated fatty acids
Unsaturated fatty acids have one or more C=C double bonds. The C=C double bonds can give either cis or trans isomers.
cis A cis configuration means that the two hydrogen atoms adjacent to the double bond stick out on the same side of the chain. The rigidity of the double bond freezes its conformation and, in the case of the cis isomer, causes the chain to bend and restricts the conformational freedom of the fatty acid. The more double bonds the chain has in the cis configuration, the less flexibility it has. When a chain has many cis bonds, it becomes quite curved in its most accessible conformations. For example, oleic acid, with one double bond, has a "kink" in it, whereas linoleic acid, with two double bonds, has a more pronounced bend. α-Linolenic acid, with three double bonds, favors a hooked shape. The effect of this is that, in restricted environments, such as when fatty acids are part of a phospholipid in a lipid bilayer or triglycerides in lipid droplets, cis bonds limit the ability of fatty acids to be closely packed, and therefore can affect the melting temperature of the membrane or of the fat. Cis unsaturated fatty acids, however, increase cellular membrane fluidity, whereas trans unsaturated fatty acids do not.
trans A trans configuration, by contrast, means that the adjacent two hydrogen atoms lie on opposite sides of the chain. As a result, they do not cause the chain to bend much, and their shape is similar to straight saturated fatty acids.
In most naturally occurring unsaturated fatty acids, each double bond has three (n−3), six (n−6), or nine (n−9) carbon atoms after it, and all double bonds have a cis configuration. Most fatty acids in the trans configuration (trans fats) are not found in nature and are the result of human processing (e.g., hydrogenation). Some trans fatty acids also occur naturally in the milk and meat of ruminants (such as cattle and sheep). They are produced, by fermentation, in the rumen of these animals. They are also found in dairy products from milk of ruminants, and may be also found in breast milk of women who obtained them from their diet.
The geometric differences between the various types of unsaturated fatty acids, as well as between saturated and unsaturated fatty acids, play an important role in biological processes, and in the construction of biological structures (such as cell membranes).
Even- vs odd-chained fatty acids
Most fatty acids are even-chained, e.g. stearic (C18) and oleic (C18), meaning they are composed of an even number of carbon atoms. Some fatty acids have odd numbers of carbon atoms; they are referred to as odd-chained fatty acids (OCFA). The most common OCFA are the saturated C15 and C17 derivatives, pentadecanoic acid and heptadecanoic acid respectively, which are found in dairy products. On a molecular level, OCFAs are biosynthesized and metabolized slightly differently from the even-chained relatives.
Branching
Most common fatty acids are straight-chain compounds, with no additional carbon atoms bonded as side groups to the main hydrocarbon chain. Branched-chain fatty acids contain one or more methyl groups bonded to the hydrocarbon chain.
Nomenclature
Carbon atom numbering
Most naturally occurring fatty acids have an unbranched chain of carbon atoms, with a carboxyl group (–COOH) at one end, and a methyl group (–CH3) at the other end.
The position of each carbon atom in the backbone of a fatty acid is usually indicated by counting from 1 at the −COOH end. Carbon number x is often abbreviated C-x (or sometimes Cx), with x = 1, 2, 3, etc. This is the numbering scheme recommended by the IUPAC.
Another convention uses letters of the Greek alphabet in sequence, starting with the first carbon after the carboxyl group. Thus carbon α (alpha) is C-2, carbon β (beta) is C-3, and so forth.
Although fatty acids can be of diverse lengths, in this second convention the last carbon in the chain is always labelled as ω (omega), which is the last letter in the Greek alphabet. A third numbering convention counts the carbons from that end, using the labels "ω", "ω−1", "ω−2". Alternatively, the label "ω−x" is written "n−x", where the "n" is meant to represent the number of carbons in the chain.
In either numbering scheme, the position of a double bond in a fatty acid chain is always specified by giving the label of the carbon closest to the carboxyl end. Thus, in an 18 carbon fatty acid, a double bond between C-12 (or ω−6) and C-13 (or ω−5) is said to be "at" position C-12 or ω−6. The IUPAC naming of the acid, such as "octadec-12-enoic acid" (or the more pronounceable variant "12-octadecenoic acid") is always based on the "C" numbering.
The notation Δx,y,... is traditionally used to specify a fatty acid with double bonds at positions x,y,.... (The capital Greek letter "Δ" (delta) corresponds to Roman "D", for Double bond). Thus, for example, the 20-carbon arachidonic acid is Δ5,8,11,14, meaning that it has double bonds between carbons 5 and 6, 8 and 9, 11 and 12, and 14 and 15.
In the context of human diet and fat metabolism, unsaturated fatty acids are often classified by the position of the double bond closest to the ω carbon (only), even in the case of multiple double bonds such as the essential fatty acids. Thus linoleic acid (18 carbons, Δ9,12), γ-linolenic acid (18-carbon, Δ6,9,12), and arachidonic acid (20-carbon, Δ5,8,11,14) are all classified as "ω−6" fatty acids; meaning that their formula ends with –CH=CH–CH2–CH2–CH2–CH2–CH3.
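As a small illustration of this bookkeeping (a sketch using the conventions above, not a standard library routine), the ω−x class follows from the chain length and the Δ position of the double bond nearest the methyl end:

```python
def omega_class(chain_length: int, delta_positions: list) -> int:
    """Return x in 'omega-x': chain length minus the Delta position of the double bond nearest the methyl end."""
    return chain_length - max(delta_positions)

# Linoleic acid (18:2, Delta-9,12), arachidonic acid (20:4, Delta-5,8,11,14), alpha-linolenic acid (18:3, Delta-9,12,15)
print(omega_class(18, [9, 12]))         # 6 -> omega-6
print(omega_class(20, [5, 8, 11, 14]))  # 6 -> omega-6
print(omega_class(18, [9, 12, 15]))     # 3 -> omega-3
```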
Fatty acids with an odd number of carbon atoms are called odd-chain fatty acids, whereas the rest are even-chain fatty acids. The difference is relevant to gluconeogenesis.
Naming of fatty acids
The following table describes the most common systems of naming fatty acids.
Free fatty acids
When circulating in the plasma (plasma fatty acids), rather than in their ester form, fatty acids are known as non-esterified fatty acids (NEFAs) or free fatty acids (FFAs). FFAs are always bound to a transport protein, such as albumin.
FFAs also form from triglyceride food oils and fats by hydrolysis, contributing to the characteristic rancid odor. An analogous process happens in biodiesel, with a risk of corrosion of engine parts.
Production
Industrial
Fatty acids are usually produced industrially by the hydrolysis of triglycerides, with the removal of glycerol (see oleochemicals). Phospholipids represent another source. Some fatty acids are produced synthetically by hydrocarboxylation of alkenes.
By animals
In animals, fatty acids are formed from carbohydrates predominantly in the liver, adipose tissue, and the mammary glands during lactation.
Carbohydrates are converted into pyruvate by glycolysis as the first important step in the conversion of carbohydrates into fatty acids. Pyruvate is then decarboxylated to form acetyl-CoA in the mitochondrion. However, this acetyl CoA needs to be transported into cytosol where the synthesis of fatty acids occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate. The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids.
Malonyl-CoA is then involved in a repeating series of reactions that lengthens the growing fatty acid chain by two carbons at a time. Almost all natural fatty acids, therefore, have even numbers of carbon atoms. When synthesis is complete the free fatty acids are nearly always combined with glycerol (three fatty acids to one glycerol molecule) to form triglycerides, the main storage form of fatty acids, and thus of energy in animals. However, fatty acids are also important components of the phospholipids that form the phospholipid bilayers out of which all the membranes of the cell are constructed (the plasma membrane, and the membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus).
The "uncombined fatty acids" or "free fatty acids" found in the circulation of animals come from the breakdown (or lipolysis) of stored triglycerides. Because they are insoluble in water, these fatty acids are transported bound to plasma albumin. The levels of "free fatty acids" in the blood are limited by the availability of albumin binding sites. They can be taken up from the blood by all cells that have mitochondria (with the exception of the cells of the central nervous system). Fatty acids can only be broken down in mitochondria, by means of beta-oxidation followed by further combustion in the citric acid cycle to CO2 and water. Cells in the central nervous system, although they possess mitochondria, cannot take free fatty acids up from the blood, as the blood–brain barrier is impervious to most free fatty acids, excluding short-chain fatty acids and medium-chain fatty acids. These cells have to manufacture their own fatty acids from carbohydrates, as described above, in order to produce and maintain the phospholipids of their cell membranes, and those of their organelles.
Variation between animal species
Studies on the cell membranes of mammals and reptiles discovered that mammalian cell membranes are composed of a higher proportion of polyunsaturated fatty acids (DHA, omega−3 fatty acid) than reptiles. Studies on bird fatty acid composition have noted similar proportions to mammals but with 1/3rd less omega−3 fatty acids as compared to omega−6 for a given body size. This fatty acid composition results in a more fluid cell membrane but also one that is permeable to various ions, resulting in cell membranes that are more costly to maintain. This maintenance cost has been argued to be one of the key causes for the high metabolic rates and concomitant warm-bloodedness of mammals and birds. However polyunsaturation of cell membranes may also occur in response to chronic cold temperatures as well. In fish increasingly cold environments lead to increasingly high cell membrane content of both monounsaturated and polyunsaturated fatty acids, to maintain greater membrane fluidity (and functionality) at the lower temperatures.
Fatty acids in dietary fats
The following table gives the fatty acid, vitamin E and cholesterol composition of some common dietary fats.
Reactions of fatty acids
Fatty acids exhibit reactions like other carboxylic acids, i.e. they undergo esterification and acid-base reactions.
Acidity
Fatty acids do not show a great variation in their acidities, as indicated by their respective pKa. Nonanoic acid, for example, has a pKa of 4.96, being only slightly weaker than acetic acid (4.76). As the chain length increases, the solubility of the fatty acids in water decreases, so that the longer-chain fatty acids have minimal effect on the pH of an aqueous solution. Near neutral pH, fatty acids exist as their conjugate bases, i.e. oleate, etc.
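To make the pH behaviour concrete, a minimal Henderson–Hasselbalch sketch (standard acid–base arithmetic, not a calculation taken from any source discussed here) gives the fraction of a fatty acid present as its carboxylate at a given pH:

```python
def fraction_deprotonated(pH: float, pKa: float) -> float:
    """Henderson-Hasselbalch: fraction of the acid present as its conjugate base (carboxylate)."""
    ratio = 10.0 ** (pH - pKa)   # [A-]/[HA]
    return ratio / (1.0 + ratio)

# Nonanoic acid (pKa ~4.96) near physiological pH is almost entirely in the carboxylate form.
print(fraction_deprotonated(pH=7.4, pKa=4.96))  # ~0.996
```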
Solutions of fatty acids in ethanol can be titrated with sodium hydroxide solution using phenolphthalein as an indicator. This analysis is used to determine the free fatty acid content of fats; i.e., the proportion of the triglycerides that have been hydrolyzed.
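A minimal sketch of that free fatty acid determination (plain titration stoichiometry; reporting the result "as oleic acid", with its molar mass of about 282.5 g/mol, is a common convention and an assumption here rather than something specified above):

```python
OLEIC_ACID_MW = 282.46  # g/mol; free fatty acid content is conventionally reported "as oleic acid"

def free_fatty_acid_percent(v_naoh_ml: float, naoh_molarity: float, sample_mass_g: float) -> float:
    """Percent free fatty acid (as oleic acid) from an NaOH titration of a fat or oil sample."""
    mol_naoh = v_naoh_ml / 1000.0 * naoh_molarity  # 1:1 stoichiometry with R-COOH
    return mol_naoh * OLEIC_ACID_MW / sample_mass_g * 100.0

# Hypothetical titration: 2.0 mL of 0.1 M NaOH to neutralize a 5.0 g oil sample
print(free_fatty_acid_percent(2.0, 0.1, 5.0))  # ~1.1 % FFA as oleic acid
```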
Neutralization of fatty acids, one form of saponification (soap-making), is a widely practiced route to metallic soaps.
Hydrogenation and hardening
Hydrogenation of unsaturated fatty acids is widely practiced. Typical conditions involve 2.0–3.0 MPa of H2 pressure, 150 °C, and nickel supported on silica as a catalyst. This treatment affords saturated fatty acids. The extent of hydrogenation is indicated by the iodine number. Hydrogenated fatty acids are less prone toward rancidification. Since the saturated fatty acids are higher melting than the unsaturated precursors, the process is called hardening. Related technology is used to convert vegetable oils into margarine. The hydrogenation of triglycerides (vs fatty acids) is advantageous because the carboxylic acids degrade the nickel catalysts, affording nickel soaps. During partial hydrogenation, unsaturated fatty acids can be isomerized from cis to trans configuration.
More forcing hydrogenation, i.e. using higher pressures of H2 and higher temperatures, converts fatty acids into fatty alcohols. Fatty alcohols are, however, more easily produced from fatty acid esters.
In the Varrentrapp reaction certain unsaturated fatty acids are cleaved in molten alkali, a reaction which was, at one point of time, relevant to structure elucidation.
Auto-oxidation and rancidity
Unsaturated fatty acids and their esters undergo auto-oxidation, which involves replacement of a C-H bond with a C-O bond. The process requires oxygen (air) and is accelerated by the presence of traces of metals, which serve as catalysts. Doubly unsaturated fatty acids are particularly prone to this reaction. Vegetable oils resist this process to a small degree because they contain antioxidants, such as tocopherol. Fats and oils often are treated with chelating agents such as citric acid to remove the metal catalysts.
Ozonolysis
Unsaturated fatty acids are susceptible to degradation by ozone. This reaction is practiced in the production of azelaic acid ((CH2)7(CO2H)2) from oleic acid.
Circulation
Digestion and intake
Short- and medium-chain fatty acids are absorbed directly into the blood via intestinal capillaries and travel through the portal vein just as other absorbed nutrients do. However, long-chain fatty acids are not directly released into the intestinal capillaries. Instead they are absorbed into the fatty walls of the intestinal villi and reassembled into triglycerides. The triglycerides are coated with cholesterol and protein (protein coat) into a compound called a chylomicron.
From within the cell, the chylomicron is released into a lymphatic capillary called a lacteal, which merges into larger lymphatic vessels. It is transported via the lymphatic system and the thoracic duct up to a location near the heart (where the arteries and veins are larger). The thoracic duct empties the chylomicrons into the bloodstream via the left subclavian vein. At this point the chylomicrons can transport the triglycerides to tissues where they are stored or metabolized for energy.
Metabolism
Fatty acids are broken down to CO2 and water by the intra-cellular mitochondria through beta oxidation and the citric acid cycle. In the final step (oxidative phosphorylation), reactions with oxygen release a lot of energy, captured in the form of large quantities of ATP. Many cell types can use either glucose or fatty acids for this purpose, but fatty acids release more energy per gram. Fatty acids (provided either by ingestion or by drawing on triglycerides stored in fatty tissues) are distributed to cells to serve as a fuel for muscular contraction and general metabolism.
Essential fatty acids
Fatty acids that are required for good health but cannot be made in sufficient quantity from other substrates, and therefore must be obtained from food, are called essential fatty acids. There are two series of essential fatty acids: one has a double bond three carbon atoms away from the methyl end; the other has a double bond six carbon atoms away from the methyl end. Humans lack the ability to introduce double bonds in fatty acids beyond carbons 9 and 10, as counted from the carboxylic acid side. Two essential fatty acids are linoleic acid (LA) and alpha-linolenic acid (ALA). These fatty acids are widely distributed in plant oils. The human body has a limited ability to convert ALA into the longer-chain omega-3 fatty acids — eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which can also be obtained from fish. Omega−3 and omega−6 fatty acids are biosynthetic precursors to endocannabinoids with antinociceptive, anxiolytic, and neurogenic properties.
Distribution
Blood fatty acids adopt distinct forms in different stages in the blood circulation. They are taken in through the intestine in chylomicrons, but also exist in very low density lipoproteins (VLDL) and low density lipoproteins (LDL) after processing in the liver. In addition, when released from adipocytes, fatty acids exist in the blood as free fatty acids.
It is proposed that the blend of fatty acids exuded by mammalian skin, together with lactic acid and pyruvic acid, is distinctive and enables animals with a keen sense of smell to differentiate individuals.
Skin
The stratum corneum, the outermost layer of the epidermis, is composed of terminally differentiated and enucleated corneocytes within a lipid matrix. Together with cholesterol and ceramides, free fatty acids form a water-impermeable barrier that prevents evaporative water loss. Generally, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (about 50% by weight), cholesterol (25%), and free fatty acids (15%). Saturated fatty acids 16 and 18 carbons in length are the dominant types in the epidermis, while unsaturated fatty acids and saturated fatty acids of various other lengths are also present. The relative abundance of the different fatty acids in the epidermis is dependent on the body site the skin is covering. There are also characteristic epidermal fatty acid alterations that occur in psoriasis, atopic dermatitis, and other inflammatory conditions.
Analysis
The chemical analysis of fatty acids in lipids typically begins with an interesterification step that breaks down their original esters (triglycerides, waxes, phospholipids etc.) and converts them to methyl esters, which are then separated by gas chromatography or analyzed by gas chromatography and mid-infrared spectroscopy.
Separation of unsaturated isomers is possible by silver ion-complexed thin-layer chromatography. Other separation techniques include high-performance liquid chromatography (with short columns packed with silica gel with bonded phenylsulfonic acid groups whose hydrogen atoms have been exchanged for silver ions). The role of silver lies in its ability to form complexes with unsaturated compounds.
Industrial uses
Fatty acids are mainly used in the production of soap, both for cosmetic purposes and, in the case of metallic soaps, as lubricants. Fatty acids are also converted, via their methyl esters, to fatty alcohols and fatty amines, which are precursors to surfactants, detergents, and lubricants. Other applications include their use as emulsifiers, texturizing agents, wetting agents, anti-foam agents, or stabilizing agents.
Esters of fatty acids with simpler alcohols (such as methyl-, ethyl-, n-propyl-, isopropyl- and butyl esters) are used as emollients in cosmetics and other personal care products and as synthetic lubricants. Esters of fatty acids with more complex alcohols, such as sorbitol, ethylene glycol, diethylene glycol, and polyethylene glycol are consumed in food, or used for personal care and water treatment, or used as synthetic lubricants or fluids for metal working.
See also
Fatty acid synthase
Fatty acid synthesis
Fatty aldehyde
List of saturated fatty acids
List of unsaturated fatty acids
List of carboxylic acids
Vegetable oil
Lactobacillic acid
References
External links
Lipid Library
Prostaglandins, Leukotrienes & Essential Fatty Acids journal
Fatty blood acids
Commodity chemicals
E-number additives
Edible oil chemistry
Sexual fluidity
Sexual fluidity is one or more changes in sexuality or sexual identity (sometimes known as sexual orientation identity). Sexual orientation is stable for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is slightly more likely for women than for men. There is no scientific evidence that sexual orientation can be changed through psychotherapy. Sexual identity can change throughout an individual's life, and does not have to align with biological sex, sexual behavior, or actual sexual orientation.
According to scientific consensus, sexual orientation is not a choice. There is no consensus on the exact cause of developing a sexual orientation, but genetic, hormonal, social, and cultural influences have been examined. Scientists believe that it is caused by a complex interplay of genetic, hormonal, and environmental influences. Although no single theory on the cause of sexual orientation has yet gained widespread support, scientists favor biologically-based theories. Research over several decades has demonstrated that sexual orientation can be at any point along a continuum, from exclusive attraction to the opposite sex to exclusive attraction to the same sex.
The results of a large-scale, longitudinal study by Savin-Williams, Joyner, and Rieger (2012) indicated that stability of sexual orientation identity over a six-year period was more common than change, and that stability was greatest among men and those identifying as heterosexual. While stability is more common than change, change in sexual orientation identity does occur and the vast majority of research indicates that female sexuality is more fluid than male sexuality. This could be attributed to females' higher erotic plasticity or to sociocultural factors that socialize women to be more open to change. Due to the gender differences in the stability of sexual orientation identity, male and female sexuality may not function via the same mechanisms. Researchers continue to analyze sexual fluidity to better determine its relationship to sexual orientation subgroups (i.e., bisexual, lesbian, gay, etc.).
Use of the term sexual fluidity has been attributed to Lisa M. Diamond. The term and the concept gained recognition in the psychological profession and in the media.
Background
Often, sexual orientation and sexual identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. While the Centre for Addiction and Mental Health and American Psychiatric Association state that sexual orientation is innate, continuous or fixed throughout their lives for some people, but is fluid or changes over time for others, the American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life). Scientists and mental health professionals generally do not believe that sexual orientation is a choice.
The American Psychological Association states that "sexual orientation is not a choice that can be changed at will, and that sexual orientation is most likely the result of a complex interaction of environmental, cognitive and biological factors...is shaped at an early age...[and evidence suggests] biological, including genetic or inborn hormonal factors, play a significant role in a person's sexuality." They say that "sexual orientation identity—not sexual orientation—appears to change via psychotherapy, support groups, and life events." The American Psychiatric Association says individuals may "become aware at different points in their lives that they are heterosexual, gay, lesbian, or bisexual" and "opposes any psychiatric treatment, such as 'reparative' or 'conversion' therapy, which is based upon the assumption that homosexuality per se is a mental disorder, or based upon a prior assumption that the patient should change his/her homosexual orientation". They do, however, encourage gay affirmative psychotherapy.
In the first decade of the 2000s, psychologist Lisa M. Diamond studied 80 non-heterosexual women over several years. She found that in this group, changes in sexual identity were common, although they were typically between adjacent identity categories (such as 'lesbian' and 'bisexual'). Some change in self-reported sexual feeling occurred among many of the women, but it was small, averaging about 1 point on the Kinsey scale. The range of these women's potential attractions was limited by their sexual orientations, but sexual fluidity permitted movement within that range.
In her book Sexual Fluidity, which received the 2009 Lesbian, Gay, Bisexual, and Transgender Issues Distinguished Book Award from Division 44 of the American Psychological Association, Diamond discusses female sexuality and tries to go beyond the language of "phases" and "denial", arguing that traditional labels for sexual desire are inadequate. For some of the 100 non-heterosexual women she followed in her study over a period of 10 years, the word bisexual did not truly express the versatile nature of their sexuality. Diamond calls "for an expanded understanding of same-sex sexuality."
Diamond, when reviewing research on lesbian and bisexual women's sexual identities, stated that studies find "change and fluidity in same-sex sexuality that contradict conventional models of sexual orientation as a fixed and uniformly early-developing trait." She suggested that sexual orientation is a phenomenon more connected with female non-heterosexual sexuality, stating, "whereas sexual orientation in men appears to operate as a stable erotic 'compass' reliably channeling sexual arousal and motivation toward one gender or the other, sexual orientation in women does not appear to function in this fashion... As a result of these phenomena, women's same-sex sexuality expresses itself differently from men's same-sex sexuality at every stage of the life course."
Biology and stability
Conversion therapy (attempts to change sexual orientation) is rarely successful. In Maccio's (2011) review of sexual reorientation therapy attempts, she lists two studies that claim to have successfully converted gay men and lesbians to heterosexuals and four that demonstrate the contrary. She sought to settle the debate using a sample that was not recruited from religious organizations. The study consisted of 37 former conversion therapy participants (62.2% were male) from various cultural and religious backgrounds who currently or previously identified as lesbian, gay, or bisexual. The results indicated that there were no statistically significant shifts in sexual orientation from pre- to post-treatment. In follow-up sessions, the few changes in sexual orientation that did occur following therapy did not last. This study stands as support for the biological origin of sexual orientation, but the largely male sample population confounds the findings.
Further support for the biological origin of sexual orientation is that gender atypical behavior in childhood (e.g., a young boy playing with dolls) appears to predict homosexuality in adulthood (see childhood gender nonconformity). A longitudinal study by Drummond et al. (2008) looked at young girls with gender dysphoria (a significant example of gender atypical behavior) and found that the majority of these girls grew up to identify as bisexual or lesbian. Many retrospective studies looking at childhood behavior are criticized for potential memory errors; so a study by Rieger, Linsenmeier, Gygax, & Bailey (2008) used home videos to investigate the relationship between childhood behaviors and adult sexual orientation. The results of this study support biological causation, but an understanding of how cultural assumptions about sexuality can affect sexual identity formation is also considered.
There is strong evidence for a relationship between fraternal birth order and male sexual orientation, and there has been biological research done to investigate potential biological determinants of sexual orientation in men and women. One theory is the second to fourth finger ratio (2D:4D) theory. Some studies have discovered that heterosexual women had higher 2D:4D ratios than did lesbian women but the difference was not found between heterosexual and gay men. Similarly, a study has shown that homosexual men have a sexually dimorphic nucleus in the anterior hypothalamus that is the size of females'. Twin and family studies have also found a genetic influence.
Changes in sexuality
Demographics
General
One study by Steven E. Mock and Richard P. Eibach from 2011 shows that 2% of 2,560 adult participants in the National Survey of Midlife Development in the United States reported a change of sexual orientation identity after a 10-year period: 0.78% of men and 1.36% of women who identified as heterosexual at the beginning of the 10-year period, as well as 63.6% of lesbians, 64.7% of bisexual women, 9.52% of gay men, and 47% of bisexual men. According to the study, "this pattern was consistent with the hypothesis that heterosexuality is a more stable sexual orientation identity, perhaps because of its normative status. However, male homosexual identity, although less stable than heterosexual identity, was relatively stable compared to the other sexual minority identities". Because only adults were included in the examined group, the authors did not find differences in fluidity attributable to the age of the participants. However, they stated that "research on attitude stability and change suggests most change occurs in adolescence and young adulthood (Alwin & Krosnick, 1991; Krosnick & Alwin, 1989), which could explain the diminished impact of age after that point".
Males versus females
Research generally indicates that, while the vast majority of men and women are stable and unchanging in their orientation and identity, among those who are fluid, female sexuality is more fluid than male sexuality. In a seminal review of the sexual orientation literature, stimulated by the findings that the 1970s sexual revolution affected female sexuality more than male sexuality, research by Baumeister et al. indicated that, compared to males, females have lower concordance between sexual attitudes and behaviors, and that sociocultural factors affect female sexuality to a greater degree; it also found that personal change in sexuality is more common for females than for males. Female sexuality (lesbian and heterosexual) changes significantly more than male sexuality on both dimensional and categorical measures of sexual orientation. Furthermore, the majority of homosexual women who previously identified as a different sexual orientation had identified as heterosexual; whereas for males, the majority had previously identified as bisexual, which the authors believe supports the idea of greater fluidity in female sexuality. Females also report having identified with more than one sexual orientation more often than males and are found to have higher levels of sexual orientation mobility. Females also report being bisexual or unsure of their sexuality more often than males, who more commonly report being exclusively gay or heterosexual. Over a six-year period, women have also been found to display more shifts in sexual orientation identity and were more likely to define their sexual orientation with non-exclusive terms.
The social constructivist view suggests that sexual desire is a product of cultural and psychosocial processes and that men and women are socialized differently. This difference in socialization can explain differences in sexual desire and in the stability of sexual orientation. Male sexuality is centered around physical factors, whereas female sexuality is centered around sociocultural factors, making female sexuality inherently more open to change. The greater effect of the 1970s sexual revolution on female sexuality suggests that female shifts in sexual orientation identity may be due to greater exposure to moderating factors (such as the media). In Western culture, women are also expected to be more emotionally expressive and intimate towards both males and females. This socialization is a plausible cause of greater female sexual fluidity. Whether female sexuality is inherently more fluid and therefore more responsive to social factors, or whether social factors themselves make female sexuality less stable, is unknown.
An evolutionary psychology hypothesis proposes that bisexuality enables women to reduce conflict with other women, by promoting each others' mothering contributions, thus ensuring their reproductive success. According to this view, women are capable of forming romantic bonds with both sexes and sexual fluidity may be explained as a reproductive strategy that ensures the survival of offspring.
A longitudinal study concluded that stability of sexual orientation was more common than change. Gender differences in the stability of sexual orientation may vary by subgroup and could possibly be related to individual differences more than gender-wide characteristics.
Youth (age 14–21)
One study that did compare the stability of youth sexual orientation identity across genders found results opposite to most studies done with adult samples. The study compared non-heterosexual male and female sexual orientation over a single year and concluded that female youth were more likely to report consistent sexual identities than males.
Youth appears to be when most change in sexual orientation identity occurs for females. A 10-year study compared sexual orientation as measured at four times during the study. The most change was found between the first (taken at 18 years of age) and second (taken at 20 years of age) measurements which was the only time bracket that fell during adolescence.
A population-based study conducted over 6 years found that nonheterosexual (gay/lesbian/bisexual) male and female participants were more likely to change sexual orientation identity than heterosexual participants. A yearlong study found that sexual identity was more stable for gay and lesbian youth participants when compared to bisexual participants.
The identity integration process that individuals go through during adolescence appears to be associated with changes in sexual identity; adolescents who score higher on identity integration measures are more consistent in their sexual orientation. Bisexual youths seem to take longer to form their sexual identities than do consistently homosexual or heterosexual identifying youths so bisexuality may be seen as a transitional phase during adolescence. Rosario et al. (2006) conclude that "acceptance, commitment, and integration of a gay/lesbian identity is an ongoing developmental process that, for many youths, may extend through adolescence and beyond."
Sabra L. Katz-Wise and Janet S. Hyde report, in an article published in 2014 in "Archives of Sexual Behavior", on their study of 188 female and male young adults in the United States with a same-gender orientation, aged 18–26 years. In that cohort, sexual fluidity in attractions was reported by 63% of females and 50% of males, with 48% of those females and 34% of those males reporting fluidity in sexual orientation identity.
Bisexuality as a transitional phase
Bisexuality as a transitional phase on the way to identifying as exclusively lesbian or gay has also been studied. In a large-scale, longitudinal study, participants who identified as bisexual at one point in time were especially likely to change sexual orientation identity throughout the six-year study. A second longitudinal study found conflicting results. If bisexuality is a transitional phase, as people grow older the number identifying as bisexual should decline. Over the 10-year span of this study (using a female-only sample), the overall number of individuals identifying as bisexual remained relatively constant (hovering between 50 and 60%), suggesting that bisexuality is a third orientation, distinct from homosexuality and heterosexuality and can be stable. A third longitudinal study by Kinnish, Strassberg, and Turner (2005) supports this theory. While sex differences in sexual orientation stability were found for heterosexuals and gays/lesbians, no sex difference was found for bisexual men and women.
Bisexuality remains "undertheorized and underinvestigated".
Cultural debate
The exploration of sexual fluidity initiated by Lisa M. Diamond presented a cultural challenge to the LGBT community: although researchers usually emphasize that sexual orientation is unlikely to change, even with conversion therapy attempts, sexual identity can change over time. That sexual orientation is not always stable challenges the views of many within the LGBT community, who believe that sexual orientation is fixed and immutable.
There is some level of cultural debate regarding the question of how (and if) fluidity exists among men, including questions regarding fluctuations in attractions and arousal in male bisexuals.
Sexual fluidity may overlap with the label abrosexual, which has been used to refer to regular changes in one's sexuality.
See also
Aceflux
Bambi effect (slang)
Bi-curious
Biology and sexual orientation
Environment and sexual orientation
List of people who identify as sexually fluid
Pansexuality
Questioning (sexuality and gender)
Situational sexual behavior
Unlabeled sexuality
References
Interpersonal relationships
LGBTQ
Plurisexuality
Thiamine pyrophosphate
Thiamine pyrophosphate (TPP or ThPP), or thiamine diphosphate (ThDP), or cocarboxylase is a thiamine (vitamin B1) derivative which is produced by the enzyme thiamine diphosphokinase. Thiamine pyrophosphate is a cofactor that is present in all living systems, in which it catalyzes several biochemical reactions.
Thiamine pyrophosphate is synthesized in the cytosol and is required in the cytosol for the activity of transketolase and in the mitochondria for the activity of pyruvate-, oxoglutarate- and branched chain keto acid dehydrogenases. To date, the yeast ThPP carrier (Tpc1p), the human Tpc and the Drosophila melanogaster carrier have been identified as being responsible for the mitochondrial transport of ThPP and ThMP. Thiamine was first discovered as an essential nutrient (vitamin) in humans through its link with the peripheral nervous system disease beriberi, which results from a deficiency of thiamine in the diet.
TPP works as a coenzyme in many enzymatic reactions, such as:
Pyruvate dehydrogenase complex
Pyruvate decarboxylase in ethanol fermentation
Alpha-ketoglutarate dehydrogenase complex
Branched-chain amino acid dehydrogenase complex
2-hydroxyphytanoyl-CoA lyase
Transketolase
Chemistry
Chemically, TPP consists of a pyrimidine ring which is connected to a thiazole ring, which is in turn connected to a pyrophosphate (diphosphate) functional group.
The part of TPP molecule that is most commonly involved in reactions is the thiazole ring, which contains nitrogen and sulfur. Thus, the thiazole ring is the "reagent portion" of the molecule. The C2 of this ring is capable of acting as an acid by donating its proton and forming a carbanion. Normally, reactions that form carbanions are highly unfavorable, but the positive charge on the tetravalent nitrogen just adjacent to the carbanion stabilizes the negative charge, making the reaction much more favorable. A compound with positive and negative charges on adjacent atoms is called an ylide, so sometimes the carbanion form of TPP is referred to as the "ylide form".
Reaction mechanisms
In several reactions, including those of pyruvate dehydrogenase, alpha-ketoglutarate dehydrogenase, and transketolase, TPP catalyses the reversible decarboxylation reaction (that is, cleavage of a substrate compound at a carbon-carbon bond connecting a carbonyl group to an adjacent reactive group, usually a carboxylic acid or an alcohol). It achieves this in four basic steps:
The carbanion of the TPP ylid nucleophilically attacks the carbonyl group on the substrate. (This forms a single bond between the TPP and the substrate.)
The target bond on the substrate is broken, and its electrons are pushed towards the TPP. This creates a double bond between the substrate carbon and the TPP carbon and pushes the electrons in the N-C double bond in TPP entirely onto the nitrogen atom, reducing it from a positive to neutral form.
In what is essentially the reverse of step two, the electrons push back in the opposite direction forming a new bond between the substrate carbon and another atom. (In the case of the decarboxylases, this creates a new carbon-hydrogen bond. In the case of transketolase, this attacks a new substrate molecule to form a new carbon-carbon bond.)
In what is essentially the reverse of step one, the TPP-substrate bond is broken, reforming the TPP ylid and the substrate carbonyl.
The TPP thiazolium ring can be deprotonated at C2 to become an ylid:
(Figure: a full view of TPP, with the acidic proton indicated.)
See also
TPP riboswitch
References
External links
UIC.edu
Cofactors
Thiazoles
Pyrimidines
Thiamine
Pyrophosphate esters
Level of measurement
Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables. Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio. This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others. Other classifications include those by Mosteller and Tukey, and by Chrisman.
Stevens's typology
Overview
Stevens proposed his typology in a 1946 Science article titled "On the theory of scales of measurement". In that article, Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval", and "ratio", unifying both "qualitative" (which are described by his "nominal" type) and "quantitative" (to a different degree, all the rest of his scales). The concept of scale types later received the mathematical rigour that it lacked at its inception with the work of mathematical psychologists Theodore Alper (1985, 1987), Louis Narens (1981a, b), and R. Duncan Luce (1986, 1987, 2001). As Luce (1997, p. 395) wrote:
Comparison
Nominal level
A nominal scale consists only of a number of distinct classes or categories, for example: [Cat, Dog, Rabbit]. Unlike the other scales, no kind of relationship between the classes can be relied upon. Thus measuring with the nominal scale is equivalent to classifying.
Nominal measurement may differentiate between items or subjects based only on their names or (meta-)categories and other qualitative classifications they belong to. Thus it has been argued that even dichotomous data relies on a constructivist epistemology. In this case, discovery of an exception to a classification can be viewed as progress.
Numbers may be used to represent the variables but the numbers do not have numerical value or relationship: for example, a globally unique identifier.
Examples of these classifications include gender, nationality, ethnicity, language, genre, style, biological species, and form. In a university one could also use residence hall or department affiliation as examples. Other concrete examples are
in grammar, the parts of speech: noun, verb, preposition, article, pronoun, etc.
in politics, power projection: hard power, soft power, etc.
in biology, the three domains of life: Archaea, Bacteria, and Eukarya
in software engineering, type of faults: specification faults, design faults, and code faults
Nominal scales were often called qualitative scales, and measurements made on qualitative scales were called qualitative data. However, the rise of qualitative research has made this usage confusing. If numbers are assigned as labels in nominal measurement, they have no specific numerical value or meaning. No form of arithmetic computation (+, −, ×, etc.) may be performed on nominal measures. The nominal level is the lowest measurement level used from a statistical point of view.
Mathematical operations
Equality and other operations that can be defined in terms of equality, such as inequality and set membership, are the only non-trivial operations that generically apply to objects of the nominal type.
Central tendency
The mode, i.e. the most common item, is allowed as the measure of central tendency for the nominal type. On the other hand, the median, i.e. the middle-ranked item, makes no sense for the nominal type of data since ranking is meaningless for the nominal type.
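As a minimal sketch (the category labels below are made-up examples), the mode of nominal data can be obtained simply by counting labels; no ordering of or arithmetic on the values is involved:

from collections import Counter

# Nominal data: plain labels with no order or numeric meaning
pets = ["Cat", "Dog", "Rabbit", "Dog", "Cat", "Dog"]

mode_label, mode_count = Counter(pets).most_common(1)[0]
print(mode_label, mode_count)   # Dog 3 -- the only meaningful measure of central tendency here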
Ordinal scale
The ordinal type allows for rank order (1st, 2nd, 3rd, etc.) by which data can be sorted but still does not allow for a relative degree of difference between them. Examples include, on one hand, dichotomous data with dichotomous (or dichotomized) values such as "sick" vs. "healthy" when measuring health, "guilty" vs. "not-guilty" when making judgments in courts, "wrong/false" vs. "right/true" when measuring truth value, and, on the other hand, non-dichotomous data consisting of a spectrum of values, such as "completely agree", "mostly agree", "mostly disagree", "completely disagree" when measuring opinion.
The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. Rank orders represent ordinal scales and are frequently used in research relating to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10 and Ganga's position is 40, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
Central tendency
The median, i.e. middle-ranked, item is allowed as the measure of central tendency; however, the mean (or average) as the measure of central tendency is not allowed. The mode is allowed.
In 1946, Stevens observed that psychological measurement, such as measurement of opinions, usually operates on ordinal scales; thus means and standard deviations have no validity, but they can be used to get ideas for how to improve operationalization of variables used in questionnaires. Most psychological data collected by psychometric instruments and tests, measuring cognitive and other abilities, are ordinal, although some theoreticians have argued they can be treated as interval or ratio scales. However, there is little prima facie evidence to suggest that such attributes are anything more than ordinal (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008). In particular, IQ scores reflect an ordinal scale, in which all scores are meaningful for comparison only. There is no absolute zero, and a 10-point difference may carry different meanings at different points of the scale.
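A minimal sketch (with assumed survey responses) of why the median is the natural summary for ordinal data: the integer codes below encode only order, so the median rank is meaningful while arithmetic on the codes is not guaranteed to be:

import statistics

# Ordinal responses coded by rank only; the gaps between codes carry no fixed meaning
scale = ["completely disagree", "mostly disagree", "mostly agree", "completely agree"]
responses = ["mostly agree", "completely agree", "mostly disagree",
             "mostly agree", "completely agree"]

codes = [scale.index(r) for r in responses]
median_code = statistics.median(codes)      # 2
print(scale[int(median_code)])              # "mostly agree"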
Interval scale
The interval type allows for defining the degree of difference between measurements, but not the ratio between measurements. Examples include temperature scales with the Celsius scale, which has two defined points (the freezing and boiling point of water at specific conditions) and then separated into 100 intervals, date when measured from an arbitrary epoch (such as AD), location in Cartesian coordinates, and direction measured in degrees from true or magnetic north. Ratios are not meaningful since 20 °C cannot be said to be "twice as hot" as 10 °C (unlike temperature in kelvins), nor can multiplication/division be carried out between any two dates directly. However, ratios of differences can be expressed; for example, one difference can be twice another; for example, the ten degree difference between 15 °C and 25 °C is twice the five degree difference between 17 °C and 22 °C. Interval type variables are sometimes also called "scaled variables", but the formal mathematical term is an affine space (in this case an affine line).
Central tendency and statistical dispersion
The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
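A small numeric sketch of the distinction (the temperatures are made-up values): on an interval scale, ratios of differences are meaningful but ratios of the raw values are not, which is easy to see because only the former survive an admissible change of origin such as converting Celsius to Fahrenheit:

# Celsius readings (interval scale): the zero point is arbitrary
t1, t2, t3, t4 = 15.0, 25.0, 17.0, 22.0

print((t2 - t1) / (t4 - t3))   # 2.0  -- ratio of differences: meaningful
print(t2 / t1)                 # ~1.67 -- ratio of raw values: not meaningful

# An admissible affine rescaling (Celsius -> Fahrenheit) preserves the first, not the second
f = lambda c: c * 9 / 5 + 32
print((f(t2) - f(t1)) / (f(t4) - f(t3)))   # still 2.0
print(f(t2) / f(t1))                       # ~1.31 -- changed, confirming it carried no meaning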
Ratio scale
The ratio type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit of measurement of the same kind (Michell, 1997, 1999). Most measurement in the physical sciences and engineering is done on ratio scales. Examples include mass, length, duration, plane angle, energy and electric charge. In contrast to interval scales, ratios can be compared using division. Very informally, many ratio scales can be described as specifying "how much" of something (i.e. an amount or magnitude). Ratio scale is often used to express an order of magnitude such as for temperature in Orders of magnitude (temperature).
Central tendency and statistical dispersion
The geometric mean and the harmonic mean are allowed to measure the central tendency, in addition to the mode, median, and arithmetic mean. The studentized range and the coefficient of variation are allowed to measure statistical dispersion. All statistical measures are allowed because all necessary mathematical operations are defined for the ratio scale.
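For ratio-scale data every arithmetic operation is admissible, so summaries built from products and ratios are well defined; a brief sketch with made-up mass measurements:

import statistics

masses_kg = [2.0, 4.0, 8.0]                   # ratio scale: true zero, ratios meaningful

print(statistics.geometric_mean(masses_kg))   # 4.0
print(statistics.harmonic_mean(masses_kg))    # ~3.43
mean = statistics.mean(masses_kg)
sd = statistics.pstdev(masses_kg)
print(sd / mean)                              # coefficient of variation, ~0.53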
Debate on Stevens's typology
While Stevens's typology is widely adopted, it is still being challenged by other theoreticians, particularly in the cases of the nominal and ordinal types (Michell, 1986). Duncan (1986), for example, objected to the use of the word measurement in relation to the nominal type and Luce (1997) disagreed with Stevens's definition of measurement.
On the other hand, Stevens (1975) said of his own definition of measurement that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". Hand says, "Basic psychology texts often begin with Stevens's framework and the ideas are ubiquitous. Indeed, the essential soundness of his hierarchy has been established for representational measurement by mathematicians, determining the invariance properties of mappings from empirical systems to real number continua. Certainly the ideas have been revised, extended, and elaborated, but the remarkable thing is his insight given the relatively limited formal apparatus available to him and how many decades have passed since he coined them."
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data, anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such as SPSS requires the user to select the appropriate measurement class for each variable. This ensures that subsequent user errors cannot inadvertently perform meaningless analyses (for example correlation analysis with a variable on a nominal level).
L. L. Thurstone made progress toward developing a justification for obtaining the interval type, based on the law of comparative judgment. A common application of the law is the analytic hierarchy process. Further progress was made by Georg Rasch (1960), who developed the probabilistic Rasch model that provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
Other proposed typologies
Typologies aside from Stevens's typology have been proposed. For instance, Mosteller and Tukey (1977), Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998), van den Berg (1991).
Mosteller and Tukey's typology (1977)
Mosteller and Tukey noted that the four levels are not exhaustive and proposed:
Names
Grades (ordered labels like beginner, intermediate, advanced)
Ranks (orders with 1 being the smallest or largest, 2 the next smallest or largest, and so on)
Counted fractions (bound by 0 and 1)
Counts (non-negative integers)
Amounts (non-negative real numbers)
Balances (any real number)
For example, percentages (a variation on fractions in the Mosteller–Tukey framework) do not fit well into Stevens's framework: No transformation is fully admissible.
Chrisman's typology (1998)
Nicholas R. Chrisman introduced an expanded list of levels of measurement to account for various measurements that do not necessarily fit with the traditional notions of levels of measurement. Measurements bound to a range and repeating (like degrees in a circle, clock time, etc.), graded membership categories, and other types of measurement do not fit to Stevens's original work, leading to the introduction of six new levels of measurement, for a total of ten:
Nominal
Gradation of membership
Ordinal
Interval
Log-interval
Extensive ratio
Cyclical ratio
Derived ratio
Counts
Absolute
While some claim that the extended levels of measurement are rarely used outside of academic geography, graded membership is central to fuzzy set theory, while absolute measurements include probabilities and the plausibility and ignorance in Dempster–Shafer theory. Cyclical ratio measurements include angles and times. Counts appear to be ratio measurements, but the scale is not arbitrary and fractional counts are commonly meaningless. Log-interval measurements are commonly displayed in stock market graphics. All these types of measurements are commonly used outside academic geography, and do not fit well to Stevens' original work.
Scale types and Stevens's "operational theory of measurement"
The theory of scale types is the intellectual handmaiden to Stevens's "operational theory of measurement", which was to become definitive within psychology and the behavioral sciences, despite Michell's characterization as its being quite at odds with measurement in the natural sciences (Michell, 1999). Essentially, the operational theory of measurement was a reaction to the conclusions of a committee established in 1932 by the British Association for the Advancement of Science to investigate the possibility of genuine scientific measurement in the psychological and behavioral sciences. This committee, which became known as the Ferguson committee, published a Final Report (Ferguson, et al., 1940, p. 245) in which Stevens's sone scale (Stevens & Davis, 1938) was an object of criticism:
That is, if Stevens's sone scale genuinely measured the intensity of auditory sensations, then evidence for such sensations as being quantitative attributes needed to be produced. The evidence needed was the presence of additive structure – a concept comprehensively treated by the German mathematician Otto Hölder (Hölder, 1901). Given that the physicist and measurement theorist Norman Robert Campbell dominated the Ferguson committee's deliberations, the committee concluded that measurement in the social sciences was impossible due to the lack of concatenation operations. This conclusion was later rendered false by the discovery of the theory of conjoint measurement by Debreu (1960) and independently by Luce & Tukey (1964). However, Stevens's reaction was not to conduct experiments to test for the presence of additive structure in sensations, but instead to render the conclusions of the Ferguson committee null and void by proposing a new theory of measurement:
Stevens was greatly influenced by the ideas of another Harvard academic, the Nobel laureate physicist Percy Bridgman (1927), whose doctrine of operationalism Stevens used to define measurement. In Stevens's definition, for example, it is the use of a tape measure that defines length (the object of measurement) as being measurable (and so by implication quantitative). Critics of operationism object that it confuses the relations between two objects or events for properties of one of those objects or events (Moyer, 1981a, b; Rogers, 1989).
The Canadian measurement theorist William Rozeboom was an early and trenchant critic of Stevens's theory of scale types.
Same variable may be different scale type depending on context
Another issue is that the same variable may be a different scale type depending on how it is measured and on the goals of the analysis. For example, hair color is usually thought of as a nominal variable, since it has no apparent ordering. However, it is possible to order colors (including hair colors) in various ways, including by hue; this is known as colorimetry. Hue is an interval level variable.
See also
Cohen's kappa
Coherence (units of measurement)
Hume's principle
Inter-rater reliability
Logarithmic scale
Ramsey–Lewis method
Set theory
Statistical data type
Transition (linguistics)
References
Further reading
Briand, L. & El Emam, K. & Morasca, S. (1995). On the Application of Measurement Theory in Software Engineering. Empirical Software Engineering, 1, 61–88. [On line] https://web.archive.org/web/20070926232755/http://www2.umassd.edu/swpi/ISERN/isern-95-04.pdf
Cliff, N. (1996). Ordinal Methods for Behavioral Data Analysis. Mahwah, NJ: Lawrence Erlbaum.
Cliff, N. & Keats, J. A. (2003). Ordinal Measurement in the Behavioral Sciences. Mahwah, NJ: Erlbaum.
See also reprints in:
Readings in Statistics, Ch. 3, (Haber, A., Runyon, R. P., and Badia, P.) Reading, Mass: Addison–Wesley, 1970
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison–Wesley.
Luce, R. D. (2000). Utility of uncertain gains and losses: measurement theoretic and experimental approaches. Mahwah, N.J.: Lawrence Erlbaum.
Michell, J. (1999). Measurement in Psychology – A critical history of a methodological concept. Cambridge: Cambridge University Press.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research.
Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1–49). New York: Wiley.
Stevens, S. S. (1975). Psychophysics. New York: Wiley.
Scientific method
Statistical data types
Measurement
Cognitive science
Bayesian network
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
Graphical model
Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if a node has m Boolean parent variables, then the probability function could be represented by a table of 2^m entries, one entry for each of the 2^m possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
Example
Let us use an illustration to reinforce the concepts of a Bayesian network. Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state, i.e. whether it is on or not), the presence or absence of rain and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false).
The joint probability function is, by the chain rule of probability,
Pr(G, S, R) = Pr(G | S, R) Pr(S | R) Pr(R),
where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)".
The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:
Pr(R = T | G = T) = Pr(G = T, R = T) / Pr(G = T) = Σ_{S ∈ {T, F}} Pr(G = T, S, R = T) / Σ_{S, R ∈ {T, F}} Pr(G = T, S, R)
Using the expansion for the joint probability function and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,
Then the numerical results (subscripted by the associated variable values) are
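Since the conditional probability tables are given in the diagram rather than in the text, the sketch below uses illustrative, assumed CPT values; it shows how such numerical results can be obtained by brute-force enumeration over the joint distribution:

# Rain/Sprinkler/GrassWet network; the CPT numbers below are illustrative assumptions,
# not the values from the original diagram.
P_R = {True: 0.2, False: 0.8}                              # P(R)
P_S_given_R = {True: 0.01, False: 0.4}                     # P(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # P(G=T | S, R)

def joint(g, s, r):
    """P(G=g, S=s, R=r) = P(G | S, R) * P(S | R) * P(R)."""
    pg = P_G_given_SR[(s, r)] if g else 1.0 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1.0 - P_S_given_R[r]
    return pg * ps * P_R[r]

# P(R=T | G=T) = sum_s P(G=T, S=s, R=T) / sum_{s,r} P(G=T, S=s, R=r)
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(True, s, r) for s in (True, False) for r in (True, False))
print(numerator / denominator)   # about 0.36 with these assumed numbers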
To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function
Pr(S, R | do(G = T)) = Pr(S | R) Pr(R)
obtained by removing the factor Pr(G | S, R) from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:
Pr(R | do(G = T)) = Pr(R).
To predict the impact of turning the sprinkler on:
Pr(G, R | do(S = T)) = Pr(G | R, S = T) Pr(R)
with the term Pr(S = T | R) removed, showing that the action affects the grass but not the rain.
These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y, then
P(Y | do(X = x)) = Σ_z P(Y | X = x, Z = z) P(Z = z).
A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, one cannot tell whether the observed dependence between S and G is due to a causal connection or is spurious (apparent dependence arising from a common cause, R) (see Simpson's paradox).
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.
Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^10 = 1024 values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most 10 · 2^3 = 80 values.
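A quick arithmetic check of this comparison:

# Full joint table vs factored CPTs for 10 binary variables with at most 3 parents each
print(2 ** 10)       # 1024 entries in the exhaustive joint table
print(10 * 2 ** 3)   # 80 entries across the ten conditional probability tables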
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
Inference and learning
Bayesian networks perform three main inference tasks:
Inferring unobserved variables
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space–time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation, generalized belief propagation and variational methods.
Parameter learning
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on a distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters.
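When every variable is observed in every record, the maximum likelihood estimates of the conditional probability tables reduce to simple count ratios; a minimal sketch (the records below are made-up data for the sprinkler example):

from collections import Counter

# Fully observed records of (rain, sprinkler); invented data for illustration
records = [(True, False), (True, False), (True, True),
           (False, True), (False, False), (False, True)]

pair_counts = Counter(records)                 # N(R=r, S=s)
rain_counts = Counter(r for r, _ in records)   # N(R=r)

# MLE of P(S=T | R=r) = N(R=r, S=T) / N(R=r)
for r in (True, False):
    print(r, pair_counts[(r, True)] / rain_counts[r])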
A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable.
Structure learning
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data.
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:
X → Y → Z (a causal chain)
X ← Y → Z (a common cause, or fork)
X → Y ← Z (a common effect, or collider)
The first 2 represent the same dependencies (X and Z are independent given Y) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed.
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is posterior probability of the structure given the training data, like the BIC or the BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such methods can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in literature when the number of variables is huge.
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-tree for effective learning.
Statistical introduction
Given data x and parameter θ, a simple Bayesian analysis starts with a prior probability (prior) p(θ) and likelihood p(x | θ) to compute a posterior probability p(θ | x) ∝ p(x | θ) p(θ).
Often the prior on θ depends in turn on other parameters φ that are not mentioned in the likelihood. So, the prior p(θ) must be replaced by a likelihood p(θ | φ), and a prior p(φ) on the newly introduced parameters φ is required, resulting in a posterior probability
p(θ, φ | x) ∝ p(x | θ) p(θ | φ) p(φ).
This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters φ may depend in turn on additional parameters ψ, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
Introductory examples
Given the measured quantities x_1, …, x_n, each with normally distributed errors of known standard deviation σ,
x_i ~ N(θ_i, σ²).
Suppose we are interested in estimating the θ_i. An approach would be to estimate the θ_i using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
θ̂_i = x_i.
However, if the quantities are related, so that for example the individual θ_i have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
x_i ~ N(θ_i, σ²),
θ_i ~ N(φ, τ²),
with improper priors φ ~ flat, τ ~ flat ∈ (0, ∞). When n ≥ 3, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual θ_i will tend to move, or shrink away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
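A minimal numerical sketch of this shrinkage, under the simplifying assumption that the group-level mean φ and both variances are known (so the posterior mean of each θ_i has a closed form as a precision-weighted average):

import numpy as np

x = np.array([2.0, 4.5, 9.0, 1.0])     # observed x_i (made-up values)
sigma, phi, tau = 2.0, 4.0, 1.5        # assumed known error s.d., group mean, group s.d.

# Conjugate normal-normal result: E[theta_i | x_i] = w * x_i + (1 - w) * phi
w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
posterior_means = w * x + (1 - w) * phi
print(posterior_means)                 # each estimate is pulled toward the common mean phi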
Restrictions on priors
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable τ in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.
Definitions and concepts
Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
Factorization definition
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:
p(x) = ∏_{v ∈ V} p(x_v | x_pa(v))
where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:
P(X_1 = x_1, …, X_n = x_n) = ∏_{v=1}^{n} P(X_v = x_v | X_{v+1} = x_{v+1}, …, X_n = x_n)
Using the definition above, this can be written as:
P(X_1 = x_1, …, X_n = x_n) = ∏_{v=1}^{n} P(X_v = x_v | X_j = x_j for each X_j that is a parent of X_v)
The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
Local Markov property
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:
X_v ⊥ X_(V \ de(v)) | X_pa(v) for all v ∈ V,
where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can be expressed in terms similar to the first definition, as
The set of parents is a subset of the set of non-descendants because the graph is acyclic.
Marginal independence structure
In general, learning a Bayesian network from data is known to be NP-hard. This is due in part to the combinatorial explosion of enumerating DAGs as the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure: while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements—the conditional independence statements in which the conditioning set is empty—are encoded by a simple undirected graph with special properties such as equal intersection and independence numbers.
Developing Bayesian networks
Developing a Bayesian network often begins with creating a DAG G such that X satisfies the local Markov property with respect to G. Sometimes this is a causal DAG. The conditional probability distributions of each variable given its parents in G are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of X is the product of these conditional distributions, then X is a Bayesian network with respect to G.
Markov blanket
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.
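A small sketch of the definition, representing the DAG as a mapping from each node to the set of its parents (this representation is an assumption for illustration, not a standard API):

def markov_blanket(node, parents):
    """Parents of the node, its children, and its children's other parents."""
    children = {v for v, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents

# Sprinkler example: R -> S, R -> G, S -> G
parents = {"R": set(), "S": {"R"}, "G": {"S", "R"}}
print(markov_blanket("S", parents))   # {'R', 'G'}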
d-separation
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional. We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that.
Let P be a trail from node u to v. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:
P contains (but does not need to be entirely) a directed chain, … → m → … or … ← m ← …, such that the middle node m is in Z,
P contains a fork, … ← m → …, such that the middle node m is in Z, or
P contains an inverted fork (or collider), … → m ← …, such that the middle node m is not in Z and no descendant of m is in Z.
The nodes u and v are d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are d-connected.
X is a Bayesian network with respect to G if, for any two nodes u, v:
X_u ⊥ X_v | X_Z,
where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
Causal networks
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs
a → b and a ← b
are equivalent: that is they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to that of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x. Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted.
Inference complexity and approximation algorithms
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2 with confidence probability greater than 1/2.
At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form (CNF) formula) and that approximate inference within a factor 2^(n^(1−ɛ)) for every ɛ > 0, even for Bayesian networks with restricted architecture, is NP-hard.
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the approximation error. This powerful algorithm required the minor restriction that the conditional probabilities of the Bayesian network be bounded away from zero and one by 1/p(n), where p(n) is any polynomial in the number of nodes in the network, n.
Software
Notable software for Bayesian networks include:
Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling.
OpenBUGS – Open-source development of WinBUGS.
SPSS Modeler – Commercial software that includes an implementation for Bayesian networks.
Stan (software) – Stan is an open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo.
PyMC – A Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS).
WinBUGS – One of the first computational implementations of MCMC samplers. No longer maintained.
History
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize:
the often subjective nature of the input information
the reliance on Bayes' conditioning as the basis for updating information
the distinction between causal and evidential modes of reasoning
In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems and Neapolitan's Probabilistic Reasoning in Expert Systems summarized their properties and established them as a field of study.
See also
Notes
References
Further reading
External links
An Introduction to Bayesian Networks and their Contemporary Applications
On-line Tutorial on Bayesian nets and probability
Web-App to create Bayesian nets and run it with a Monte Carlo method
Continuous Time Bayesian Networks
Bayesian Networks: Explanation and Analogy
A live tutorial on learning Bayesian networks
A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
Hierarchical Naive Bayes Model for handling sample uncertainty, shows how to perform classification and learning with continuous and discrete variables with replicated measurements.
Graphical models
Causal inference
Cognitive model
A cognitive model is a representation of one or more cognitive processes in humans or other animals for the purposes of comprehension and prediction. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard). In terms of information processing, cognitive modeling is modeling of human perception, reasoning, memory and action.
Relationship to cognitive architectures
Cognitive models can be developed within or without a cognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture. Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling include ACT-R, Clarion, LIDA, and Soar.
History
Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence among others.
Box-and-arrow models
A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task.
In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990). In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information-processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program. Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain. Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen-McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. p 685–702.)
Computational models
A computational model is a mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments.
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, and neural network models.
Symbolic
A symbolic model is expressed in characters, usually non-numeric ones, that require translation before they can be used.
Subsymbolic
A cognitive model is subsymbolic if it is made by constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, signal samples; subsymbolic units in neural networks can be considered particular cases of this category.
Hybrid
Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details at hybrid intelligent system.
Dynamical systems
In the traditional computational approach, representations are viewed as static structures of discrete symbols. Cognition takes place by transforming static symbol structures in discrete, sequential steps. Sensory information is transformed into symbolic inputs, which produce symbolic outputs that get transformed into motor outputs. The entire system operates in an ongoing cycle.
What is missing from this traditional view is that human cognition happens continuously and in real time. Breaking down the processes into discrete time steps may not fully capture this behavior. An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set or state space, representing the totality of overall states the system could be in. The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.
A typical dynamical model is formalized by several differential equations that describe how the system's state changes over time. The explanatory force is then carried by the form of the space of possible trajectories and by the internal and external forces that shape a specific trajectory as it unfolds over time, rather than by the physical nature of the underlying mechanisms that produce these dynamics. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
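As a toy example of such a formalization (purely illustrative, not a model from the literature), the one-dimensional system dx/dt = x − x³ has attractors at x = ±1 and a repeller at x = 0; integrating it numerically shows how trajectories through the state space, rather than any particular mechanism, carry the explanation.

```python
import numpy as np

def simulate(x0: float, dt: float = 0.01, steps: int = 2000) -> np.ndarray:
    """Euler-integrate dx/dt = x - x**3, a system with attractors at +/-1
    and a repeller at 0; the returned array is the trajectory through state space."""
    xs = np.empty(steps + 1)
    xs[0] = x0
    for t in range(steps):
        xs[t + 1] = xs[t] + dt * (xs[t] - xs[t] ** 3)
    return xs

print(simulate(0.1)[-1])    # ~ 1.0: a small positive start is pulled to the +1 attractor
print(simulate(-0.1)[-1])   # ~ -1.0: a small negative start is pulled to the -1 attractor
```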
Early dynamical systems
Associative memory
Early work in the application of dynamical systems to cognition can be found in the model of Hopfield networks. These networks were proposed as a model for associative memory. They represent the neural level of memory, modeling systems of around 30 neurons which can be in either an on or off state. By letting the network learn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled with vectors which can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
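A compact sketch of the idea follows: a Hopfield network over ±1 states, trained with the Hebbian rule and queried with a corrupted cue. The network size, random patterns, and number of flipped units are illustrative choices, not values from the original model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random +/-1 patterns over 30 "neurons" with the Hebbian rule.
n = 30
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

def recall(state, sweeps=10):
    """Asynchronously update units until the network settles near a stored memory."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Cue with a corrupted copy of the first memory: flip 5 of its 30 units.
cue = patterns[0].copy()
cue[:5] *= -1
print(np.array_equal(recall(cue), patterns[0]))   # typically True: the memory is completed
```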
Language acquisition
By taking into account the evolutionary development of the human nervous system and the similarity of the brain to other organs, Elman proposed that language and cognition should be treated as a dynamical system rather than a digital symbol processor. Neural networks of the type Elman implemented have come to be known as Elman networks. Instead of treating language as a collection of static lexical items and grammar rules that are learned and then used according to fixed rules, the dynamical systems view defines the lexicon as regions of state space within a dynamical system. Grammar is made up of attractors and repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.
Cognitive development
A classic developmental error has been investigated in the context of dynamical systems: The A-not-B error is proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.
Locomotion
One proposed mechanism of a dynamical system comes from analysis of continuous-time recurrent neural networks (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron central pattern generator (CPG) can be used to represent systems such as leg movements during walking. This CPG contains three motor neurons to control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate torque in the leg joint. One feature of this pattern is that neuron outputs are either off or on most of the time. Another feature is that the states are quasi-stable, meaning that they will eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
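The sketch below shows the standard CTRNN state equation itself rather than a tuned walking circuit: the weights, biases, and time constants are placeholders, and producing an actual locomotion rhythm would require parameters found by search or optimization, as in the literature.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, theta, tau, I, dt=0.01):
    """One Euler step of the CTRNN equation
    tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i."""
    return y + dt * (-y + W @ sigma(y + theta) + I) / tau

# Three fully interconnected "motor neurons" (foot, backward swing, forward swing).
# Placeholder parameters only; NOT the tuned values that generate a walking rhythm.
W = np.array([[ 4.0, -2.0, -2.0],
              [-2.0,  4.0, -2.0],
              [-2.0, -2.0,  4.0]])
theta = np.array([-1.0, -1.0, -1.0])
tau = np.ones(3)
I = np.zeros(3)

y = np.array([0.1, 0.0, -0.1])
outputs = [sigma(y + theta)]
for _ in range(1000):
    y = ctrnn_step(y, W, theta, tau, I)
    outputs.append(sigma(y + theta))      # motor-neuron outputs over time
```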
Modern dynamical systems
Behavioral dynamics
Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as "behavioral dynamics", treats the agent and the environment as a pair of coupled dynamical systems based on classical dynamical systems theory. In this formalization, the information from the environment informs the agent's behavior and the agent's actions modify the environment. In the specific case of perception-action cycles, the coupling of the environment and the agent is formalized by two functions. The first transforms the representation of the agent's action into specific patterns of muscle activation that in turn produce forces in the environment. The second function transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous system, the agent's body, and the environment are coupled together.
Adaptive behaviors
Behavioral dynamics have been applied to locomotive behavior. Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors could arise from the interactions of an agent and the environment. According to this framework, adaptive behaviors can be captured by two levels of analysis. At the first level of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information provided by the environment. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than determined by the structure of either the agent or the environment.
Open dynamical systems
In an extension of classical dynamical systems theory, rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent's total system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the agent considered on its own corresponds to the agent system of an open dynamical system, and the agent coupled to its environment corresponds to the total system.
Embodied cognition
In the context of dynamical systems and embodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples:
Instances where the environment and agent must work together to achieve a goal, referred to as "intimacy". A classic example of intimacy is the behavior of simple agents working to achieve a goal (e.g., insects traversing the environment). The successful completion of the goal relies fully on the coupling of the agent to the environment.
Instances where the use of external artifacts improves the performance of tasks relative to performance without these artifacts. The process is referred to as "offloading". A classic example of offloading is the behavior of Scrabble players; people are able to create more words when playing Scrabble if they have the tiles in front of them and are allowed to physically manipulate their arrangement. In this example, the Scrabble tiles allow the agent to offload working memory demands onto the tiles themselves.
Instances where a functionally equivalent external artifact replaces functions that are normally performed internally by the agent, which is a special case of offloading. One famous example is that of human (specifically the agents Otto and Inga) navigation in a complex environment with or without assistance of an artifact.
Instances where there is not a single agent. The individual agent is part of a larger system that contains multiple agents and multiple artifacts. One famous example, formulated by Ed Hutchins in his book Cognition in the Wild, is that of navigating a naval ship.
The interpretations of these examples rely on the following logic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.
See also
Computational cognition
Computational models of language acquisition
Computational-representational understanding of mind
MindModeling@Home
Memory-prediction framework
Space mapping
References
External links
Cognitive modeling at CMU
Cognitive modeling at RPI (HCI)
Cognitive modeling at RPI (CLARION)
Cognitive modeling at the University of Memphis (LIDA)
Cognitive modeling at UMich
Enactive cognition
Atomism
Atomism (from Greek ἄτομον, atomon, i.e. "uncuttable, indivisible") is a natural philosophy proposing that the physical universe is composed of fundamental indivisible components known as atoms.
References to the concept of atomism and its atoms appeared in both ancient Greek and ancient Indian philosophical traditions. Leucippus is the earliest figure whose commitment to atomism is well attested and he is usually credited with inventing atomism. He and other ancient Greek atomists theorized that nature consists of two fundamental principles: atom and void. Clusters of different shapes, arrangements, and positions give rise to the various macroscopic substances in the world.
Indian Buddhists, such as Dharmakirti ( 6th or 7th century) and others, developed distinctive theories of atomism, for example, involving momentary (instantaneous) atoms (kalapas) that flash in and out of existence.
The particles of chemical matter for which chemists and other natural philosophers of the early 19th century found experimental evidence were thought to be indivisible, and therefore were given by John Dalton the name "atom", long used by the atomist philosophy. Although the connection to historical atomism is at best tenuous, elementary particles have become a modern analogue of philosophical atoms.
Reductionism
Philosophical atomism is a reductive argument, proposing not only that everything is composed of atoms and void, but that nothing they compose really exists: the only things that really exist are atoms ricocheting off each other mechanistically in an otherwise empty void. One proponent of this theory was the Greek philosopher Democritus.
Atomism stands in contrast to a substance theory wherein a prime material continuum remains qualitatively invariant under division (for example, the ratio of the four classical elements would be the same in any portion of a homogeneous material).
Antiquity
Greek atomism
Democritus
In the 5th century BC, Leucippus and his pupil Democritus proposed that all matter was composed of small indivisible particles which they called "atoms". Nothing whatsoever is known about Leucippus except that he was the teacher of Democritus. Democritus, by contrast, wrote prolifically, producing over eighty known treatises, none of which have survived to the present day complete. However, a massive number of fragments and quotations of his writings have survived. These are the main source of information on his teachings about atoms. Democritus's argument for the existence of atoms hinged on the idea that it is impossible to keep dividing matter infinitely - and that matter must therefore be made up of extremely tiny particles. The atomistic theory aimed to remove the "distinction which the Eleatic school drew between the Absolute, or the only real existence, and the world of change around us."
Democritus believed that atoms are too small for human senses to detect, that they are infinitely many, that they come in infinitely many varieties, and that they have always existed. They float in a vacuum, which Democritus called the "void", and they vary in form, order, and posture. Some atoms, he maintained, are convex, others concave, some shaped like hooks, and others like eyes. They are constantly moving and colliding into each other. Democritus wrote that atoms and void are the only things that exist and that all other things are merely said to exist by social convention. The objects humans see in everyday life are composed of many atoms united by random collisions and their forms and materials are determined by what kinds of atom make them up. Likewise, human perceptions are caused by atoms as well. Bitterness is caused by small, angular, jagged atoms passing across the tongue; whereas sweetness is caused by larger, smoother, more rounded atoms passing across the tongue.
Previously, Parmenides had denied the existence of motion, change and void. He believed all existence to be a single, all-encompassing and unchanging mass (a concept known as monism), and that change and motion were mere illusions. He explicitly rejected sensory experience as the path to an understanding of the universe and instead used purely abstract reasoning. He believed there is no such thing as void, equating it with non-being. This in turn meant that motion is impossible, because there is no void to move into. Parmenides does not mention or explicitly deny the existence of the void, stating instead that what is not does not exist. He also wrote that all that is must be an indivisible unity, for if it were manifold, then there would have to be a void that could divide it. Finally, he stated that the all-encompassing Unity is unchanging, for the Unity already encompasses all that is and can be.
Democritus rejected Parmenides' belief that change is an illusion. He believed change was real, and if it was not then at least the illusion had to be explained. He thus supported the concept of void, and stated that the universe is made up of many Parmenidean entities that move around in the void. The void is infinite and provides the space in which the atoms can pack or scatter differently. The different possible packings and scatterings within the void make up the shifting outlines and bulk of the objects that organisms feel, see, eat, hear, smell, and taste. While organisms may feel hot or cold, hot and cold actually have no real existence. They are simply sensations produced in organisms by the different packings and scatterings of the atoms in the void that compose the object that organisms sense as being "hot" or "cold".
The work of Democritus survives only in secondhand reports, some of which are unreliable or conflicting. Much of the best evidence of Democritus' theory of atomism is reported by Aristotle (384–322 BCE) in his discussions of Democritus' and Plato's contrasting views on the types of indivisibles composing the natural world.
Unit-point atomism
According to some twentieth-century philosophers, unit-point atomism was the philosophy of the Pythagoreans, a conscious repudiation of Parmenides and the Eleatics. It stated that atoms were infinitesimally small ("point") yet possessed corporeality. It was a predecessor of Democritean atomism. Most recent students of presocratic philosophy, such as Kurt von Fritz, Walter Burkert, Gregory Vlastos, Jonathan Barnes, and Daniel W. Graham have rejected that any form of atomism can be applied to the early Pythagoreans (before Ecphantus of Syracuse).
Unit-point atomism was invoked in order to make sense of a statement ascribed to Zeno of Elea in Plato's Parmenides: "these writings of mine were meant to protect the arguments of Parmenides against those who make fun of him. . . My answer is addressed to the partisans of the many. . ." The anti-Parmenidean pluralists were supposedly unit-point atomists whose philosophy was essentially a reaction against the Eleatics. This hypothesis, however, to explain Zeno's paradoxes, has been thoroughly discredited.
Geometry and atoms
Plato (c. 427 – 347 BCE) argued that atoms just crashing into other atoms could never produce the beauty and form of the world. In Plato's Timaeus (28b–29a) the character of Timaeus insisted that the cosmos was not eternal but was created, although its creator framed it after an eternal, unchanging model.
One part of that creation were the four simple bodies of fire, air, water, and earth. But Plato did not consider these corpuscles to be the most basic level of reality, for in his view they were made up of an unchanging level of reality, which was mathematical. These simple bodies were geometric solids, the faces of which were, in turn, made up of triangles. The square faces of the cube were each made up of four isosceles right-angled triangles and the triangular faces of the tetrahedron, octahedron, and icosahedron were each made up of six right-angled triangles.
Plato postulated the geometric structure of the simple bodies of the four elements as summarized in the adjacent table. The cube, with its flat base and stability, was assigned to earth; the tetrahedron was assigned to fire because its penetrating points and sharp edges made it mobile. The points and edges of the octahedron and icosahedron were blunter and so these less mobile bodies were assigned to air and water. Since the simple bodies could be decomposed into triangles, and the triangles reassembled into atoms of different elements, Plato's model offered a plausible account of changes among the primary substances.
Rejection in Aristotelianism
Sometime before 330 BC Aristotle asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. Aristotle considered the existence of a void, which was required by atomic theories, to violate physical principles. Change took place not by the rearrangement of atoms to make new structures, but by transformation of matter from what it was in potential to a new actuality. A piece of wet clay, when acted upon by a potter, takes on its potential to be an actual drinking mug. Aristotle has often been criticized for rejecting atomism, but in ancient Greece the atomic theories of Democritus remained "pure speculations, incapable of being put to any experimental test".
Aristotle theorized minima naturalia as the smallest parts into which a homogeneous natural substance (e.g., flesh, bone, or wood) could be divided and still retain its essential character. Unlike the atomism of Democritus, these Aristotelian "natural minima" were not conceptualized as physically indivisible.
Instead, Aristotle's concept was rooted in his hylomorphic worldview, which held that every physical thing is a compound of matter (Greek hyle) and of an immaterial substantial form (Greek morphe) that imparts its essential nature and structure. Consider the analogy of a rubber ball: the rubber is the matter that gives the ball the ability to take on another form, and the spherical shape is the form that gives it its identity as a "ball". Even in this analogy, though, the rubber itself would already be considered a composite of form and matter, since it has identity and determinacy to a certain extent; pure or primary matter, by contrast, is completely unformed, unintelligible, and has infinite potential to undergo change.
Aristotle's intuition was that there is some smallest size beyond which matter could no longer be structured as flesh, or bone, or wood, or some other such organic substance that for Aristotle (living before the invention of the microscope) could be considered homogeneous. For instance, if flesh were divided beyond its natural minimum, what would be left might be a large amount of the element water, and smaller amounts of the other elements. But whatever water or other elements were left, they would no longer have the "nature" of flesh: in hylomorphic terms, they would no longer be matter structured by the form of flesh; instead the remaining water, e.g., would be matter structured by the form of water, not by the form of flesh.
Epicurus
Epicurus (341–270 BCE) studied atomism with Nausiphanes who had been a student of Democritus. Although Epicurus was certain of the existence of atoms and the void, he was less sure we could adequately explain specific natural phenomena such as earthquakes, lightning, comets, or the phases of the Moon. Few of Epicurus' writings survive, and those that do reflect his interest in applying Democritus' theories to assist people in taking responsibility for themselves and for their own happiness—since he held there are no gods around that can help them. (Epicurus regarded the role of gods as exemplifying moral ideals.)
Ancient Indian atomism
In ancient Indian philosophy, preliminary instances of atomism are found in the works of the Vedic sage Aruni, who lived in the 8th century BCE, especially his proposition that "particles too small to be seen mass together into the substances and objects of experience", known as kaṇa. However, kaṇa refers to "particles", not atoms (paramāṇu). Some scholars such as Hermann Jacobi and Randall Collins have compared Aruni to Thales of Miletus in their scientific methodology, calling them both "primitive physicists" or "proto-materialist thinkers". Later, the Charvaka and Ajivika schools of atomism originated as early as the 7th century BCE. Bhattacharya posits that Charvaka may have been one of several atheistic, materialist schools that existed in ancient India. Kanada founded the Vaisheshika school of Indian philosophy that also represents the earliest Indian natural philosophy. The Nyaya and Vaisheshika schools developed theories on how kaṇas combined into more complex objects.
Several of these doctrines of atomism are, in some respects, "suggestively similar" to that of Democritus. McEvilley (2002) assumes that such similarities are due to extensive cultural contact and diffusion, probably in both directions.
The Nyaya–Vaisesika school developed one of the earliest forms of atomism; scholars date the Nyaya and Vaisesika texts from the 9th to 4th centuries BCE. Vaisesika atomists posited the four elemental atom types, but in Vaisesika physics atoms had 25 different possible qualities, divided between general extensive properties and specific (intensive) properties. The Nyaya–Vaisesika atomists had elaborate theories of how atoms combine. In Vaisesika atomism, atoms first combine into dyads (dvyaṇuka) and triads (tryaṇuka) before they aggregate into bodies of a kind that can be perceived.
Late Roman Republic
Lucretius revives Epicurus
Epicurus' ideas re-appear in the works of his Roman follower Lucretius ( 99 BC – 55 BC), who wrote On the Nature of Things. This Classical Latin scientific work in poetic form illustrates several segments of Epicurean theory on how the universe came into its current stage; it shows that the phenomena we perceive are actually composite forms. The atoms and the void are eternal and in constant motion. Atomic collisions create objects, which are still composed of the same eternal atoms whose motion for a while is incorporated into the created entity. Lucretius also explains human sensations and meteorological phenomena in terms of atomic motion.
"Atoms" and "vacuum" vs. religion
In his epic poem On the Nature of Things, Lucretius depicts Epicurus as the hero who crushed the monster Religion through educating the people in what was possible in atoms and what was not possible in atoms. However, Epicurus expressed a non-aggressive attitude characterized by his statement:
However, according to science historian Charles Coulston Gillispie:
The possibility of a vacuum was accepted—or rejected—together with atoms and atomism, for the vacuum was part of that same theory.
Roman Empire
Galen
While Aristotelian philosophy eclipsed the importance of the atomists in late Roman and medieval Europe, their work was still preserved and exposited through commentaries on the works of Aristotle. In the 2nd century, Galen (AD 129–216) presented extensive discussions of the Greek atomists, especially Epicurus, in his Aristotle commentaries.
Middle Ages
Medieval Hinduism
Ajivika is a "Nastika" school of thought whose metaphysics included a theory of atoms or atomism which was later adapted in the Vaiśeṣika school, which postulated that all objects in the physical universe are reducible to paramāṇu (atoms), and one's experiences are derived from the interplay of substance (a function of atoms, their number and their spatial arrangements), quality, activity, commonness, particularity and inherence. Everything was composed of atoms, qualities emerged from aggregates of atoms, but the aggregation and nature of these atoms was predetermined by cosmic forces. The school founder's traditional name Kanada means 'atom eater', and he is known for developing the foundations of an atomistic approach to physics and philosophy in the Sanskrit text Vaiśeṣika Sūtra. His text is also known as Kanada Sutras, or Aphorisms of Kanada.
Medieval Buddhism
Medieval Buddhist atomism, flourishing around the 7th century, was very different from the atomist doctrines taught in early Buddhism. Medieval Buddhist philosophers Dharmakirti and Dignāga considered atoms to be point-sized, durationless, and made of energy. In discussing the two systems, Fyodor Shcherbatskoy (1930) stresses their commonality, the postulate of "absolute qualities" (guna-dharma) underlying all empirical phenomena.
Still later, the Abhidhammattha-sangaha, a text dated to the 11th or 12th century, postulates the existence of rupa-kalapa, imagined as the smallest units of the physical world, of varying elementary composition. Invisible under normal circumstances, the rupa-kalapa are said to become visible as a result of meditative samadhi.
Medieval Islam
Atomistic philosophies are found very early in Islamic philosophy and were influenced originally by earlier Greek and, to some extent, Indian philosophy. Islamic speculative theology in general approached issues in physics from an atomistic framework.
Al-Ghazali and Asharite atomism
The most successful form of Islamic atomism was in the Asharite school of Islamic theology, most notably in the work of the theologian al-Ghazali (1058–1111). In Asharite atomism, atoms are the only perpetual, material things in existence, and all else in the world is "accidental" meaning something that lasts for only an instant. Nothing accidental can be the cause of anything else, except perception, as it exists for a moment. Contingent events are not subject to natural physical causes, but are the direct result of God's constant intervention, without which nothing could happen. Thus nature is completely dependent on God, which meshes with other Asharite Islamic ideas on causation, or the lack thereof (Gardet 2001). Al-Ghazali also used the theory to support his theory of occasionalism. In a sense, the Asharite theory of atomism has far more in common with Indian atomism than it does with Greek atomism.
Averroes rejects atomism
Other traditions in Islam rejected the atomism of the Asharites and expounded on many Greek texts, especially those of Aristotle. An active school of philosophers in Al-Andalus, including the noted commentator Averroes (1126–1198 CE) explicitly rejected the thought of al-Ghazali and turned to an extensive evaluation of the thought of Aristotle. Averroes commented in detail on most of the works of Aristotle and his commentaries became very influential in Jewish and Christian scholastic thought.
Medieval Christendom
According to historian of atomism Joshua Gregory, there was no serious work done with atomism from the time of Galen until Isaac Beeckman, Gassendi and Descartes resurrected it in the 17th century; "the gap between these two 'modern naturalists' and the ancient Atomists marked "the exile of the atom" and "it is universally admitted that the Middle Ages had abandoned Atomism, and virtually lost it."
Scholasticism
Although the ancient atomists' works were unavailable, scholastic thinkers gradually became aware of Aristotle's critiques of atomism as Averroes's commentaries were translated into Latin. Although the atomism of Epicurus had fallen out of favor in the centuries of Scholasticism, the minima naturalia of Aristotelianism received extensive consideration. Speculation on minima naturalia provided philosophical background for the mechanistic philosophy of early modern thinkers such as Descartes, and for the alchemical works of Geber and Daniel Sennert, who in turn influenced the corpuscularian alchemist Robert Boyle, one of the founders of modern chemistry.
A chief theme in late Roman and Scholastic commentary on this concept was reconciling minima naturalia with the general Aristotelian principle of infinite divisibility. Commentators like John Philoponus and Thomas Aquinas reconciled these aspects of Aristotle's thought by distinguishing between mathematical and "natural" divisibility. With few exceptions, much of the curriculum in the universities of Europe was based on such Aristotelianism for most of the Middle Ages.
Nicholas of Autrecourt
In medieval universities there were, however, expressions of atomism. For example, in the 14th century Nicholas of Autrecourt considered that matter, space, and time were all made up of indivisible atoms, points, and instants and that all generation and corruption took place by the rearrangement of material atoms. The similarities of his ideas with those of al-Ghazali suggest that Nicholas may have been familiar with Ghazali's work, perhaps through Averroes' refutation of it.
Atomist renaissance
17th century
In the 17th century, a renewed interest arose in Epicurean atomism and corpuscularianism as a hybrid or an alternative to Aristotelian physics. The main figures in the rebirth of atomism were Isaac Beeckman, René Descartes, Pierre Gassendi, and Robert Boyle, as well as other notable figures.
Northumberland circle
One of the first groups of atomists in England was a cadre of amateur scientists known as the Northumberland circle, led by Henry Percy, 9th Earl of Northumberland (1564–1632). Although they published little of account, they helped to disseminate atomistic ideas among the burgeoning scientific culture of England, and may have been particularly influential to Francis Bacon, who became an atomist around 1605, though he later rejected some of the claims of atomism. Though they revived the classical form of atomism, this group was among the scientific avant-garde: the Northumberland circle contained nearly half of the confirmed Copernicans prior to 1610 (the year of Galileo's The Starry Messenger). Other influential atomists of late 16th and early 17th centuries include Giordano Bruno, Thomas Hobbes (who also changed his stance on atomism late in his career), and Thomas Hariot. A number of different atomistic theories were blossoming in France at this time, as well (Clericuzio 2000).
Galileo Galilei
Galileo Galilei (1564–1642) was an advocate of atomism in his 1612 Discourse on Floating Bodies (Redondi 1969). In The Assayer, Galileo offered a more complete physical system based on a corpuscular theory of matter, in which all phenomena—with the exception of sound—are produced by "matter in motion".
Perceived vs. real properties
Atomism was associated by its leading proponents with the idea that some of the apparent properties of objects are artifacts of the perceiving mind, that is, "secondary" qualities as distinguished from "primary" qualities.
Galileo identified some basic problems with Aristotelian physics through his experiments. He utilized a theory of atomism as a partial replacement, but he was never unequivocally committed to it. For example, his experiments with falling bodies and inclined planes led him to the concepts of circular inertial motion and accelerating free-fall. The current Aristotelian theories of impetus and terrestrial motion were inadequate to explain these. While atomism did not explain the law of fall either, it was a more promising framework in which to develop an explanation because motion was conserved in ancient atomism (unlike Aristotelian physics).
René Descartes
René Descartes' (1596–1650) "mechanical" philosophy of corpuscularism had much in common with atomism, and is considered, in some senses, to be a different version of it. Descartes thought everything physical in the universe to be made of tiny vortices of matter. Like the ancient atomists, Descartes claimed that sensations, such as taste or temperature, are caused by the shape and size of tiny pieces of matter. In Principles of Philosophy (1644) he writes: "The nature of body consists just in extension—not in weight, hardness, colour or the like." The main difference between atomism and Descartes' concept was the existence of the void. For him, there could be no vacuum, and all matter was constantly swirling to prevent a void as corpuscles moved through other matter. Another key distinction between Descartes' view and classical atomism is the mind/body duality of Descartes, which allowed for an independent realm of existence for thought, soul, and most importantly, God.
Pierre Gassendi
Pierre Gassendi (1592–1655) was a Catholic priest from France who was also an avid natural philosopher. Gassendi's concept of atomism was closer to classical atomism, but with no atheistic overtone. He was particularly intrigued by the Greek atomists, so he set out to "purify" atomism from its heretical and atheistic philosophical conclusions (Dijksterhuis 1969). Gassendi formulated his atomistic conception of mechanical philosophy partly in response to Descartes; he particularly opposed Descartes' reductionist view that only purely mechanical explanations of physics are valid, as well as the application of geometry to the whole of physics (Clericuzio 2000).
Johann Chrysostom Magnenus
Johann Chrysostom Magnenus ( – ) published his Democritus reviviscens in 1646. Magnenus was the first to arrive at a scientific estimate of the size of an "atom" (i.e. of what would today be called a molecule). Measuring how much incense had to be burned before it could be smelled everywhere in a large church, he calculated the number of molecules in a grain of incense to be of the order of 10^18, only about one order of magnitude below the actual figure.
Corpuscularianism
Corpuscularianism is similar to atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure, a step on the way towards transmutative production of gold. Corpuscularianism was associated by its leading proponents with the idea that some of the properties that objects appear to have are artifacts of the perceiving mind: 'secondary' qualities as distinguished from 'primary' qualities. Not all corpuscularianism made use of the primary-secondary quality distinction, however. An influential tradition in medieval and early modern alchemy argued that chemical analysis revealed the existence of robust corpuscles that retained their identity in chemical compounds (to use the modern term). William R. Newman has dubbed this approach to matter theory "chymical atomism," and has argued for its significance to both the mechanical philosophy and to the chemical atomism that emerged in the early 19th century.
Corpuscularianism stayed a dominant theory over the next several hundred years and retained its links with alchemy in the work of scientists such as Robert Boyle (1627–1692) and Isaac Newton in the 17th century. It was used by Newton, for instance, in his development of the corpuscular theory of light. The form that came to be accepted by most English scientists after Robert Boyle was an amalgam of the systems of Descartes and Gassendi. In The Sceptical Chymist (1661), Boyle demonstrates problems that arise from chemistry, and offers up atomism as a possible explanation. The unifying principle that would eventually lead to the acceptance of a hybrid corpuscular–atomism was mechanical philosophy, which became widely accepted by physical sciences.
Modern atomic theory
18th century
By the late 18th century, the useful practices of engineering and technology began to influence philosophical explanations of the composition of matter. Those who speculated on the ultimate nature of matter began to verify their "thought experiments" with some repeatable demonstrations, when they could.
Ragusan polymath Roger Boscovich (1711–1787) provided the first general mathematical theory of atomism based on the ideas of Newton and Leibniz, but transforming them so as to provide a programme for atomic physics.
19th century
John Dalton
In 1808, English physicist John Dalton (1766–1844) assimilated the known experimental work of many people to summarize the empirical evidence on the composition of matter. He noticed that distilled water everywhere analyzed to the same elements, hydrogen and oxygen. Similarly, other purified substances decomposed to the same elements in the same proportions by weight.
Therefore we may conclude that the ultimate particles of all homogeneous bodies are perfectly alike in weight, figure, etc. In other words, every particle of water is like every other particle of water; every particle of hydrogen is like every other particle of hydrogen, etc.
Furthermore, he concluded that there was a unique atom for each element, using Lavoisier's definition of an element as a substance that could not be analyzed into something simpler. Thus, Dalton concluded the following.
Chemical analysis and synthesis go no farther than to the separation of particles one from another, and to their reunion. No new creation or destruction of matter is within the reach of chemical agency. We might as well attempt to introduce a new planet into the solar system, or to annihilate one already in existence, as to create or destroy a particle of hydrogen. All the changes we can produce, consist in separating particles that are in a state of cohesion or combination, and joining those that were previously at a distance.
And then he proceeded to give a list of relative weights in the compositions of several common compounds, summarizing:
1st. That water is a binary compound of hydrogen and oxygen, and the relative weights of the two elementary atoms are as 1:7, nearly;
2nd. That ammonia is a binary compound of hydrogen and azote (nitrogen), and the relative weights of the two atoms are as 1:5, nearly...
Dalton concluded that the fixed proportions of elements by weight suggested that the atoms of one element combined with only a limited number of atoms of the other elements to form the substances that he listed.
Atomic theory debate
Dalton's atomic theory remained controversial throughout the 19th century. Whilst the Law of definite proportion was accepted, the hypothesis that this was due to atoms was not so widely accepted. For example, in 1826 when Sir Humphry Davy presented Dalton the Royal Medal from the Royal Society, Davy said that the theory only became useful when the atomic conjecture was ignored. English chemist Sir Benjamin Collins Brodie in 1866 published the first part of his Calculus of Chemical Operations as a non-atomic alternative to the atomic theory. He described atomic theory as a 'Thoroughly materialistic bit of joiners work'. English chemist Alexander Williamson used his Presidential Address to the London Chemical Society in 1869 to defend the atomic theory against its critics and doubters. This in turn led to further meetings at which the positivists again attacked the supposition that there were atoms. The matter was finally resolved in Dalton's favour in the early 20th century with the rise of atomic physics.
20th century
Experimental verification
Atoms and molecules had long been theorized as the constituents of matter, and Albert Einstein published a paper in 1905 that explained how the motion that Scottish botanist Robert Brown had observed was a result of the pollen being moved by individual water molecules, making one of his first contributions to science. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist, and was further verified experimentally by French physicist Jean Perrin (1870–1942) in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion.
See also
Eliminative materialism
First principle
History of chemistry
Mereological nihilism
Montonen–Olive duality#Philosophical implications
Ontological pluralism
Physicalism
Prima materia
Process philosophy
References
Citations
References
Clericuzio, Antonio. Elements, Principles, and Corpuscles; a study of atomism and chemistry in the seventeenth century. Dordrecht; Boston: Kluwer Academic Publishers, 2000.
Cornford, Francis MacDonald. Plato's Cosmology: The Timaeus of Plato. New York: Liberal Arts Press, 1957.
Dijksterhuis, E. The Mechanization of the World Picture. Trans. by C. Dikshoorn. New York: Oxford University Press, 1969.
Firth, Raymond. Religion: A Humanist Interpretation. Routledge, 1996.
Gangopadhyaya, Mrinalkanti. Indian Atomism: history and sources. Atlantic Highlands, New Jersey: Humanities Press, 1981.
Gardet, L. "djuz'" in Encyclopaedia of Islam CD-ROM Edition, v. 1.1. Leiden: Brill, 2001.
Gregory, Joshua C. A Short History of Atomism. London: A. and C. Black, Ltd, 1981.
Kargon, Robert Hugh. Atomism in England from Hariot to Newton. Oxford: Clarendon Press, 1966.
Lloyd, Geoffrey (1973). Greek Science After Aristotle. New York: W. W. Norton.
Marmura, Michael E. "Causation in Islamic Thought." Dictionary of the History of Ideas. New York: Charles Scribner's Sons, 1973–74
McEvilley, Thomas (2002). The Shape of Ancient Thought: Comparative Studies in Greek and Indian Philosophies. New York: Allworth Communications Inc.
Redondi, Pietro. Galileo Heretic. Translated by Raymond Rosenthal. Princeton, NJ: Princeton University Press, 1987.
External links
Dictionary of the History of Ideas: Atomism: Antiquity to the Seventeenth Century
Dictionary of the History of Ideas: Atomism in the Seventeenth Century
Jonathan Schaffer, "Is There a Fundamental Level?" Nous 37 (2003): 498–517. By a philosopher who opposes atomism
Article on traditional Greek atomism
Atomism from the 17th to the 20th Century at Stanford Encyclopedia of Philosophy
Metaphysical theories
Presocratic philosophy
Docking (molecular)
In the field of molecular modeling, docking is a method which predicts the preferred orientation of one molecule to a second when a ligand and a target are bound to each other to form a stable complex. Knowledge of the preferred orientation in turn may be used to predict the strength of association or binding affinity between two molecules using, for example, scoring functions.
The associations between biologically relevant molecules such as proteins, peptides, nucleic acids, carbohydrates, and lipids play a central role in signal transduction. Furthermore, the relative orientation of the two interacting partners may affect the type of signal produced (e.g., agonism vs antagonism). Therefore, docking is useful for predicting both the strength and type of signal produced.
Molecular docking is one of the most frequently used methods in structure-based drug design, due to its ability to predict the binding-conformation of small molecule ligands to the appropriate target binding site. Characterisation of the binding behaviour plays an important role in rational design of drugs as well as to elucidate fundamental biochemical processes.
Definition of problem
One can think of molecular docking as a problem of “lock-and-key”, in which one wants to find the correct relative orientation of the “key” which will open up the “lock” (where on the surface of the lock is the key hole, which direction to turn the key after it is inserted, etc.). Here, the protein can be thought of as the “lock” and the ligand can be thought of as a “key”. Molecular docking may be defined as an optimization problem, which would describe the “best-fit” orientation of a ligand that binds to a particular protein of interest. However, since both the ligand and the protein are flexible, a “hand-in-glove” analogy is more appropriate than “lock-and-key”. During the course of the docking process, the ligand and the protein adjust their conformation to achieve an overall "best-fit" and this kind of conformational adjustment resulting in the overall binding is referred to as "induced-fit".
Molecular docking research focuses on computationally simulating the molecular recognition process. It aims to achieve an optimized conformation for both the protein and ligand and relative orientation between protein and ligand such that the free energy of the overall system is minimized.
Docking approaches
Two approaches are particularly popular within the molecular docking community.
One approach uses a matching technique that describes the protein and the ligand as complementary surfaces.
The second approach simulates the actual docking process in which the ligand-protein pairwise interaction energies are calculated.
Both approaches have significant advantages as well as some limitations. These are outlined below.
Shape complementarity
Geometric matching/shape complementarity methods describe the protein and ligand as a set of features that make them dockable. These features may include molecular surface/complementary surface descriptors. In this case, the receptor's molecular surface is described in terms of its solvent-accessible surface area and the ligand's molecular surface is described in terms of its matching surface description. The complementarity between the two surfaces amounts to the shape matching description that may help finding the complementary pose of docking the target and the ligand molecules. Another approach is to describe the hydrophobic features of the protein using turns in the main-chain atoms. Yet another approach is to use a Fourier shape descriptor technique. Whereas the shape complementarity based approaches are typically fast and robust, they cannot usually model the movements or dynamic changes in the ligand/protein conformations accurately, although recent developments allow these methods to investigate ligand flexibility. Shape complementarity methods can quickly scan through several thousand ligands in a matter of seconds and actually figure out whether they can bind at the protein's active site, and are usually scalable to even protein-protein interactions. They are also much more amenable to pharmacophore based approaches, since they use geometric descriptions of the ligands to find optimal binding.
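As a hedged illustration of the grid-correlation idea behind many shape-complementarity methods (compare the Katchalski-Katzir algorithm under See also), the Python sketch below scores all translations of a ligand grid against a receptor grid with a single FFT correlation. The grid size, the surface and core weights, and the spherical "molecules" are illustrative assumptions, and the rotational search that real programs also perform is omitted.

# Minimal sketch of grid-based shape-complementarity scoring in the spirit of
# FFT correlation methods.  Shapes, grid size and the surface/core weights are
# illustrative assumptions only; rotations are not sampled.
import numpy as np

N = 32                                   # grid points per axis (assumed)

def ball(center, radius):
    """Boolean occupancy grid of a sphere, a stand-in for a real molecular shape."""
    z, y, x = np.indices((N, N, N))
    return (x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2 <= radius**2

# Receptor: interior penalised, thin surface layer rewarded (crude surface model).
receptor_core = ball((16, 16, 16), 8)
receptor_surf = ball((16, 16, 16), 10) & ~receptor_core
receptor_grid = np.where(receptor_surf, 1.0, 0.0) + np.where(receptor_core, -15.0, 0.0)

# Ligand: simply 1 wherever it occupies space.
ligand_grid = np.where(ball((5, 5, 5), 3), 1.0, 0.0)

# Correlation over all translations at once via FFT:
# score(t) = sum over r of receptor(r) * ligand(r - t)
score = np.real(np.fft.ifftn(np.fft.fftn(receptor_grid) *
                             np.conj(np.fft.fftn(ligand_grid))))

best = np.unravel_index(np.argmax(score), score.shape)
print("best translation (grid units):", best, "score:", round(score[best], 1))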
Simulation
Simulating the docking process is much more complicated. In this approach, the protein and the ligand are separated by some physical distance, and the ligand finds its position into the protein's active site after a certain number of “moves” in its conformational space. The moves incorporate rigid body transformations such as translations and rotations, as well as internal changes to the ligand's structure including torsion angle rotations. Each of these moves in the conformation space of the ligand induces a total energetic cost of the system. Hence, the system's total energy is calculated after every move.
The obvious advantage of docking simulation is that ligand flexibility is easily incorporated, whereas shape complementarity techniques must use ingenious methods to incorporate flexibility in ligands. Also, it more accurately models reality, whereas shape complementary techniques are more of an abstraction.
Clearly, simulation is computationally expensive, having to explore a large energy landscape. Grid-based techniques, optimization methods, and increased computer speed have made docking simulation more realistic.
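A drastically simplified Python sketch of the move-and-score loop described above is given below. The Lennard-Jones-style pairwise energy, the random coordinates, the step size and the temperature are placeholder assumptions rather than the force field or sampling scheme of any particular docking program, and only rigid-body translations are sampled.

# Minimal Metropolis Monte Carlo sketch of docking "moves" scored by a pairwise
# interaction energy.  All parameters and coordinates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
protein = rng.uniform(0, 20, size=(50, 3))    # fake protein atom coordinates
ligand  = rng.uniform(25, 30, size=(8, 3))    # fake ligand starts away from the protein

def pairwise_energy(lig, prot, eps=0.2, sigma=3.5):
    """Sum of Lennard-Jones-like terms over all ligand-protein atom pairs."""
    d = np.linalg.norm(lig[:, None, :] - prot[None, :, :], axis=-1)
    d = np.clip(d, 1e-6, None)
    return np.sum(4 * eps * ((sigma / d) ** 12 - (sigma / d) ** 6))

kT, step = 1.0, 0.5                            # assumed temperature and move size
energy = pairwise_energy(ligand, protein)
for _ in range(5000):
    trial = ligand + rng.normal(scale=step, size=3)   # rigid-body translation move
    e_trial = pairwise_energy(trial, protein)
    if e_trial < energy or rng.random() < np.exp(-(e_trial - energy) / kT):
        ligand, energy = trial, e_trial               # Metropolis acceptance
print("final interaction energy:", round(energy, 2))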
Mechanics of docking
To perform a docking screen, the first requirement is a structure of the protein of interest. Usually the structure has been determined using a biophysical technique such as
X-ray crystallography,
NMR spectroscopy or
cryo-electron microscopy (cryo-EM),
but can also derive from homology modeling construction. This protein structure and a database of potential ligands serve as inputs to a docking program. The success of a docking program depends on two components: the search algorithm and the scoring function.
Search algorithm
The search space in theory consists of all possible orientations and conformations of the protein paired with the ligand. However, in practice with current computational resources, it is impossible to exhaustively explore the search space — this would involve enumerating all possible distortions of each molecule (molecules are dynamic and exist in an ensemble of conformational states) and all possible rotational and translational orientations of the ligand relative to the protein at a given level of granularity. Most docking programs in use account for the whole conformational space of the ligand (flexible ligand), and several attempt to model a flexible protein receptor. Each "snapshot" of the pair is referred to as a pose.
A variety of conformational search strategies have been applied to the ligand and to the receptor. These include:
systematic or stochastic torsional searches about rotatable bonds
molecular dynamics simulations
genetic algorithms to "evolve" new low energy conformations and where the score of each pose acts as the fitness function used to select individuals for the next iteration.
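As an illustration of the genetic-algorithm strategy in the last item above, the toy Python sketch below evolves rigid-body ligand positions against an arbitrary scoring function used as the fitness. The population size, mutation scale and the score itself are assumptions made for the example only.

# Toy genetic algorithm over ligand poses (translations only), illustrating the
# "score as fitness" idea.  Settings and the scoring function are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([10.0, 5.0, 7.0])            # pretend binding-site centre

def score(pose):
    """Lower is better: here simply the distance to an assumed binding site."""
    return np.linalg.norm(pose - target)

population = rng.uniform(0, 20, size=(40, 3))  # initial random poses
for generation in range(100):
    fitness = np.array([score(p) for p in population])
    parents = population[np.argsort(fitness)[:10]]           # keep the best poses
    children = parents[rng.integers(0, 10, size=30)] \
               + rng.normal(scale=0.5, size=(30, 3))          # mutated offspring
    population = np.vstack([parents, children])

best = min(population, key=score)
print("best pose:", np.round(best, 2), "score:", round(score(best), 3))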
Ligand flexibility
Conformations of the ligand may be generated in the absence of the receptor and subsequently docked, or conformations may be generated on-the-fly in the presence of the receptor binding cavity, or with full rotational flexibility of every dihedral angle using fragment-based docking. Force field energy evaluations are most often used to select energetically reasonable conformations, but knowledge-based methods have also been used.
Peptides are both highly flexible and relatively large-sized molecules, which makes modeling their flexibility a challenging task. A number of methods were developed to allow for efficient modeling of flexibility of peptides during protein-peptide docking.
Receptor flexibility
Computational capacity has increased dramatically over the last decade, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. Neglecting receptor flexibility, however, may in some cases lead to poor docking results in terms of binding pose prediction.
Multiple static structures experimentally determined for the same protein in different conformations are often used to emulate receptor flexibility. Alternatively rotamer libraries of amino acid side chains that surround the binding cavity may be searched to generate alternate but energetically reasonable protein conformations.
Scoring function
Docking programs generate a large number of potential ligand poses, of which some can be immediately rejected due to clashes with the protein. The remainder are evaluated using some scoring function, which takes a pose as input and returns a number indicating the likelihood that the pose represents a favorable binding interaction and ranks one ligand relative to another.
Most scoring functions are physics-based molecular mechanics force fields that estimate the energy of the pose within the binding site. The various contributions to binding can be written as an additive equation:

ΔGbind = ΔGsolvent + ΔGconf + ΔGint + ΔGrot + ΔGt/r + ΔGvib
The components consist of solvent effects, conformational changes in the protein and ligand, free energy due to protein-ligand interactions, internal rotations, association energy of ligand and receptor to form a single complex and free energy due to changes in vibrational modes. A low (negative) energy indicates a stable system and thus a likely binding interaction.
Alternative approaches use modified scoring functions to include constraints based on known key protein-ligand interactions, or knowledge-based potentials derived from interactions observed in large databases of protein-ligand structures (e.g. the Protein Data Bank).
There are a large number of structures from X-ray crystallography for complexes between proteins and high affinity ligands, but comparatively fewer for low affinity ligands as the latter complexes tend to be less stable and therefore more difficult to crystallize. Scoring functions trained with this data can dock high affinity ligands correctly, but they will also give plausible docked conformations for ligands that do not bind. This gives a large number of false positive hits, i.e., ligands predicted to bind to the protein that actually don't when placed together in a test tube.
One way to reduce the number of false positives is to recalculate the energy of the top scoring poses using (potentially) more accurate but computationally more intensive techniques such as Generalized Born or Poisson-Boltzmann methods.
Docking assessment
The interdependence between sampling and scoring function affects the docking capability in predicting plausible poses or binding affinities for novel compounds. Thus, an assessment of a docking protocol is generally required (when experimental data is available) to determine its predictive capability. Docking assessment can be performed using different strategies, such as:
docking accuracy (DA) calculation;
the correlation between a docking score and the experimental response or determination of the enrichment factor (EF);
the distance between an ion-binding moiety and the ion in the active site;
the presence of induced-fit models.
Docking accuracy
Docking accuracy represents one measure to quantify the fitness of a docking program by rationalizing the ability to predict the right pose of a ligand with respect to that experimentally observed.
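Agreement with the experimentally observed pose is usually expressed as the root-mean-square deviation (RMSD) between the predicted and crystallographic ligand coordinates, with a pose within roughly 2 Å RMSD often counted as correct. A minimal Python calculation, assuming the two coordinate sets are already in the same frame and atom order, might look like this; the coordinates are invented for the example.

# Minimal RMSD between a docked pose and the experimentally observed pose.
# Assumes both coordinate arrays share the same reference frame and atom order
# (real workflows also handle symmetry and alignment).
import numpy as np

def rmsd(predicted, reference):
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return np.sqrt(np.mean(np.sum((predicted - reference) ** 2, axis=1)))

# Illustrative 4-atom ligand coordinates (made up for the example).
xray   = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0], [0.0, 1.5, 0.0]])
docked = xray + np.array([0.3, -0.2, 0.4])        # pose shifted by a small error

print("RMSD:", round(rmsd(docked, xray), 2))      # well under a 2 A cut-off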
Enrichment factor
Docking screens can also be evaluated by the enrichment of annotated ligands of known binders from among a large database of presumed non-binding, “decoy” molecules. In this way, the success of a docking screen is evaluated by its capacity to enrich the small number of known active compounds in the top ranks of a screen from among a much greater number of decoy molecules in the database. The area under the receiver operating characteristic (ROC) curve is widely used to evaluate its performance.
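A hedged sketch of both measures is shown below: given docking scores and known active/decoy labels, the enrichment factor in the top fraction of the ranked list and the ROC AUC can be computed directly. The scores, the 1% cut-off and the convention that lower scores rank better are assumptions made for the example.

# Enrichment factor (EF) at a chosen top fraction and ROC AUC for a docking screen.
# Scores and labels are invented for illustration; lower score = better rank here.
import numpy as np

rng = np.random.default_rng(2)
labels = np.array([1] * 20 + [0] * 980)                 # 20 actives, 980 decoys
scores = np.where(labels == 1,
                  rng.normal(-8, 1, labels.size),       # actives score better...
                  rng.normal(-6, 1, labels.size))       # ...than decoys, on average

order = np.argsort(scores)                              # best (lowest) scores first
ranked = labels[order]

top = max(1, int(0.01 * labels.size))                   # top 1% of the ranked list
ef = (ranked[:top].sum() / top) / (labels.sum() / labels.size)

# ROC AUC = probability that a random active outranks a random decoy.
active_scores = scores[labels == 1]
decoy_scores = scores[labels == 0]
auc = np.mean(active_scores[:, None] < decoy_scores[None, :])

print(f"EF(1%) = {ef:.1f}, ROC AUC = {auc:.2f}")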
Prospective
Resulting hits from docking screens are subjected to pharmacological validation (e.g. IC50, affinity or potency measurements). Only prospective studies constitute conclusive proof of the suitability of a technique for a particular target. In the case of G protein-coupled receptors (GPCRs), which are targets of more than 30% of marketed drugs, molecular docking led to the discovery of more than 500 GPCR ligands.
Benchmarking
The potential of docking programs to reproduce binding modes as determined by X-ray crystallography can be assessed by a range of docking benchmark sets.
For small molecules, several benchmark data sets for docking and virtual screening exist, e.g. the Astex Diverse Set consisting of high quality protein−ligand X-ray crystal structures, the Directory of Useful Decoys (DUD) for evaluation of virtual screening performance, or the LEADS-FRAG data set for fragments.
An evaluation of docking programs for their potential to reproduce peptide binding modes can be assessed by Lessons for Efficiency Assessment of Docking and Scoring (LEADS-PEP).
Applications
A binding interaction between a small molecule ligand and an enzyme protein may result in activation or inhibition of the enzyme. If the protein is a receptor, ligand binding may result in agonism or antagonism. Docking is most commonly used in the field of drug design — most drugs are small organic molecules, and docking may be applied to:
hit identification – docking combined with a scoring function can be used to quickly screen large databases of potential drugs in silico to identify molecules that are likely to bind to the protein target of interest (see virtual screening). Reverse pharmacology routinely uses docking for target identification.
lead optimization – docking can be used to predict where and in which relative orientation a ligand binds to a protein (also referred to as the binding mode or pose). This information may in turn be used to design more potent and selective analogs.
bioremediation – protein ligand docking can also be used to predict pollutants that can be degraded by enzymes.
See also
Drug design
Katchalski-Katzir algorithm
List of molecular graphics systems
Macromolecular docking
Molecular mechanics
Protein structure
Protein design
Software for molecular mechanics modeling
List of protein-ligand docking software
Molecular design software
Docking@Home
Exscalate4Cov
Ibercivis
ZINC database
Lead Finder
Virtual screening
Scoring functions for docking
References
External links
Docking@GRID Project of Conformational Sampling and Docking on Grids : one aim is to deploy some intrinsic distributed docking algorithms on computational Grids, download Docking@GRID open-source Linux version
Click2Drug.org - Directory of computational drug design tools.
Ligand:Receptor Docking with MOE (Molecular Operating Environment)
Molecular modelling
Computational chemistry
Protein structure
Medicinal chemistry
Bioinformatics
Drug discovery
Blackboard (design pattern) | In software engineering, the blackboard pattern is a behavioral design pattern that provides a computational framework for the design and implementation of systems that integrate large and diverse specialized modules, and implement complex, non-deterministic control strategies.
This pattern was identified by the members of the Hearsay-II project and first applied to speech recognition.
Structure
The blackboard model defines three main components:
blackboard—a structured global memory containing objects from the solution space
knowledge sources—specialized modules with their own representation
control component—selects, configures and executes modules.
Implementation
The first step is to design the solution space (i.e. potential solutions) that leads to the blackboard structure. Then, knowledge sources are identified. These two activities are closely related.
The next step is to specify the control component; it generally takes the form of a complex scheduler that makes use of a set of domain-specific heuristics to rate the relevance of executable knowledge sources.
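A minimal sketch of the three components in Python is shown below; the class and method names, the relevance heuristics and the toy problem are illustrative assumptions, not the interface of any particular blackboard framework.

# Minimal blackboard sketch: a shared data store, knowledge sources that act on it,
# and a control component that repeatedly picks the most relevant source.
# Names and the toy "problem" (upper-casing then reversing a string) are assumptions.

class Blackboard:
    def __init__(self, data):
        self.data = data
        self.solved = False

class KnowledgeSource:
    def relevance(self, board):      # heuristic rating used by the control component
        raise NotImplementedError
    def execute(self, board):
        raise NotImplementedError

class UpperCaser(KnowledgeSource):
    def relevance(self, board):
        return 1.0 if not board.data.isupper() else 0.0
    def execute(self, board):
        board.data = board.data.upper()

class Reverser(KnowledgeSource):
    def relevance(self, board):
        return 0.5 if board.data.isupper() else 0.0
    def execute(self, board):
        board.data = board.data[::-1]
        board.solved = True

class Controller:
    def __init__(self, sources):
        self.sources = sources
    def run(self, board):
        while not board.solved:
            source = max(self.sources, key=lambda s: s.relevance(board))
            if source.relevance(board) == 0.0:
                break                # no knowledge source can contribute anything
            source.execute(board)
        return board.data

board = Blackboard("hello blackboard")
print(Controller([UpperCaser(), Reverser()]).run(board))   # -> DRAOBKCALB OLLEH

The control loop repeatedly asks each knowledge source to rate its own relevance and executes the highest-rated one, which mirrors the scheduler role described above.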
Applications
Usage-domains include:
speech recognition
vehicle identification and tracking
protein structure identification
sonar signals interpretation.
Consequences
The blackboard pattern provides effective solutions for designing and implementing complex systems where heterogeneous modules have to be dynamically combined to solve a problem. This provides non-functional properties such as:
reusability
changeability
robustness.
The blackboard pattern allows multiple processes to work more closely together on separate threads, polling the blackboard and reacting when relevant new information appears.
See also
Blackboard system
Software design pattern
References
Software design patterns
Biologist | A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree, as well as an advanced degree such as a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, often referred to as the founder of experimental biology, is recognized as one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, noting that the plant structures he observed under the microscope resembled the cells of a honeycomb.
Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis Crick described the basic structure of DNA, the genetic material for expressing life in all its forms. Building on the work of Maurice Wilkins and Rosalind Franklin, they proposed that the structure of DNA was a double helix.
Ian Wilmut led a research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Education
An undergraduate degree in biology typically requires coursework in molecular and cellular biology, development, ecology, genetics, microbiology, anatomy, physiology, botany, and zoology. Additional requirements may include physics, chemistry (general, organic, and biochemistry), calculus, and statistics.
Students who aspire to a research-oriented career usually pursue a graduate degree such as a master's or a doctorate (e.g., PhD) whereby they would receive training from a research head based on an apprenticeship model that has been in existence since the 1800s. Students in these graduate programs often receive specialized training in a particular subdiscipline of biology.
Research
Biologists who work in basic research formulate theories and devise experiments to advance human knowledge on life including topics such as evolution, biochemistry, molecular biology, neuroscience and cell biology.
Biologists typically conduct laboratory experiments involving animals, plants, microorganisms or biomolecules. However, a small part of biological research also occurs outside the laboratory and may involve natural observation rather than experimentation. For example, a botanist may investigate the plant species present in a particular environment, while an ecologist might study how a forest area recovers after a fire.
Biologists who work in applied research use instead the accomplishments gained by basic research to further knowledge in particular fields or applications. For example, this applied research may be used to develop new pharmaceutical drugs, treatments and medical diagnostic tests. Biological scientists conducting applied research and product development in private industry may be required to describe their research plans or results to non-scientists who are in a position to veto or approve their ideas. These scientists must consider the business effects of their work.
Swift advances in knowledge of genetics and organic molecules spurred growth in the field of biotechnology, transforming the industries in which biological scientists work. Biological scientists can now manipulate the genetic material of animals and plants, attempting to make organisms (including humans) more productive or resistant to disease. Basic and applied research on biotechnological processes, such as recombining DNA, has led to the production of important substances, including human insulin and growth hormone. Many other substances not previously available in large quantities are now produced by biotechnological means. Some of these substances are useful in treating diseases.
Those working on various genome (chromosomes with their associated genes) projects isolate genes and determine their function. This work continues to lead to the discovery of genes associated with specific diseases and inherited health risks, such as sickle cell anemia. Advances in biotechnology have created research opportunities in almost all areas of biology, with commercial applications in areas such as medicine, agriculture, and environmental remediation.
Specializations
Most biological scientists specialize in the study of a certain type of organism or in a specific activity, although recent advances have blurred some traditional classifications.
Geneticists study genetics, the science of genes, heredity, and variation of organisms.
Neuroscientists study the nervous system.
Developmental biologists study the process of development and growth of organisms
Biochemists study the chemical composition of living things. They analyze the complex chemical combinations and reactions involved in metabolism, reproduction, and growth.
Molecular biologists study the biological activity between biomolecules.
Microbiologists investigate the growth and characteristics of microscopic organisms such as bacteria, algae, or fungi.
Physiologists study life functions of plants and animals, in the whole organism and at the cellular or molecular level, under normal and abnormal conditions. Physiologists often specialize in functions such as growth, reproduction, photosynthesis, respiration, or movement, or in the physiology of a certain area or system of the organism.
Biophysicists use experimental methods traditionally employed in physics to answer biological questions.
Computational biologists apply the techniques of computer science, applied mathematics and statistics to address biological problems. The main focus lies on developing mathematical modeling and computational simulation techniques. By these means, computational biology addresses theoretical and experimental research questions without requiring a laboratory.
Zoologists and wildlife biologists study animals and wildlife—their origin, behavior, diseases, and life processes. Some experiment with live animals in controlled or natural surroundings, while others dissect dead animals to study their structure. Zoologists and wildlife biologists also may collect and analyze biological data to determine the environmental effects of current and potential uses of land and water areas. Zoologists usually are identified by the animal group they study. For example, ornithologists study birds, mammalogists study mammals, herpetologists study reptiles and amphibians, ichthyologists study fish, cnidariologists study jellyfishes and entomologists study insects.
Botanists study plants and their environments. Some study all aspects of plant life, including algae, lichens, mosses, ferns, conifers, and flowering plants; others specialize in areas such as identification and classification of plants, the structure and function of plant parts, the biochemistry of plant processes, the causes and cures of plant diseases, the interaction of plants with other organisms and the environment, the geological record of plants and their evolution. Mycologists study fungi, such as yeasts, mold and mushrooms, which are a separate kingdom from plants.
Aquatic biologists study micro-organisms, plants, and animals living in water. Marine biologists study salt water organisms, and limnologists study fresh water organisms. Much of the work of marine biology centers on molecular biology, the study of the biochemical processes that take place inside living cells. Marine biology is a branch of oceanography, which is the study of the biological, chemical, geological, and physical characteristics of oceans and the ocean floor. (See the Handbook statements on environmental scientists and hydrologists and on geoscientists.)
Ecologists investigate the relationships among organisms and between organisms and their environments, examining the effects of population size, pollutants, rainfall, temperature, and altitude. Using knowledge of various scientific disciplines, ecologists may collect, study, and report data on the quality of air, food, soil, and water.
Evolutionary biologists investigate the evolutionary processes that produced the diversity of life on Earth, starting from a single common ancestor. These processes include natural selection, common descent, and speciation.
Employment
Biologists typically work regular hours but longer hours are not uncommon. Researchers may be required to work odd hours in laboratories or other locations (especially while in the field), depending on the nature of their research.
Many biologists depend on grant money to fund their research. They may be under pressure to meet deadlines and to conform to rigid grant-writing specifications when preparing proposals to seek new or extended funding.
Marine biologists encounter a variety of working conditions. Some work in laboratories; others work on research ships, and those who work underwater must practice safe diving while working around sharp coral reefs and hazardous marine life. Although some marine biologists obtain their specimens from the sea, many still spend a good deal of their time in laboratories and offices, conducting tests, running experiments, recording results, and compiling data.
Biologists are not usually exposed to unsafe or unhealthy conditions. Those who work with dangerous organisms or toxic substances in the laboratory must follow strict safety procedures to avoid contamination. Many biological scientists, such as botanists, ecologists, and zoologists, conduct field studies that involve strenuous physical activity and primitive living conditions. Biological scientists in the field may work in warm or cold climates, in all kinds of weather.
Honors and awards
The highest honor awarded to biologists is the Nobel Prize in Physiology or Medicine, awarded since 1901, by the Royal Swedish Academy of Sciences. Another significant award is the Crafoord Prize in Biosciences; established in 1980.
See also
Biology
Glossary of biology
List of biologists
Lists of biologists by author abbreviation
References
U.S. Department of Labor, Occupational Outlook Handbook
Science occupations
Nuclear chemistry | Nuclear chemistry is the sub-field of chemistry dealing with radioactivity, nuclear processes, and transformations in the nuclei of atoms, such as nuclear transmutation and nuclear properties.
It is the chemistry of radioactive elements such as the actinides, radium and radon together with the chemistry associated with equipment (such as nuclear reactors) which are designed to perform nuclear processes. This includes the corrosion of surfaces and the behavior under conditions of both normal and abnormal operation (such as during an accident). An important area is the behavior of objects and materials after being placed into a nuclear waste storage or disposal site.
It includes the study of the chemical effects resulting from the absorption of radiation within living animals, plants, and other materials. The radiation chemistry controls much of radiation biology as radiation has an effect on living things at the molecular scale. To explain it another way, the radiation alters the biochemicals within an organism, the alteration of the bio-molecules then changes the chemistry which occurs within the organism; this change in chemistry then can lead to a biological outcome. As a result, nuclear chemistry greatly assists the understanding of medical treatments (such as cancer radiotherapy) and has enabled these treatments to improve.
It includes the study of the production and use of radioactive sources for a range of processes. These include radiotherapy in medical applications; the use of radioactive tracers within industry, science and the environment, and the use of radiation to modify materials such as polymers.
It also includes the study and use of nuclear processes in non-radioactive areas of human activity. For instance, nuclear magnetic resonance (NMR) spectroscopy is commonly used in synthetic organic chemistry and physical chemistry and for structural analysis in macro-molecular chemistry.
History
After Wilhelm Röntgen discovered X-rays in 1895, many scientists began to work on ionizing radiation. One of these was Henri Becquerel, who investigated the relationship between phosphorescence and the blackening of photographic plates. When Becquerel (working in France) discovered that, with no external source of energy, the uranium generated rays which could blacken (or fog) the photographic plate, radioactivity was discovered. Marie Skłodowska-Curie (working in Paris) and her husband Pierre Curie isolated two new radioactive elements from uranium ore. They used radiometric methods to identify which stream the radioactivity was in after each chemical separation; they separated the uranium ore into each of the different chemical elements that were known at the time, and measured the radioactivity of each fraction. They then attempted to separate these radioactive fractions further, to isolate a smaller fraction with a higher specific activity (radioactivity divided by mass). In this way, they isolated polonium and radium. It was noticed in about 1901 that high doses of radiation could cause an injury in humans. Henri Becquerel had carried a sample of radium in his pocket and as a result he suffered a highly localized dose which resulted in a radiation burn. This injury resulted in the biological properties of radiation being investigated, which in time resulted in the development of medical treatment.
Ernest Rutherford, working in Canada and England, showed that radioactive decay can be described by a simple equation (a linear first degree derivative equation, now called first order kinetics), implying that a given radioactive substance has a characteristic "half-life" (the time taken for the amount of radioactivity present in a source to diminish by half). He also coined the terms alpha, beta and gamma rays, he converted nitrogen into oxygen, and most importantly he supervised the students who conducted the Geiger–Marsden experiment (gold foil experiment) which showed that the 'plum pudding model' of the atom was wrong. In the plum pudding model, proposed by J. J. Thomson in 1904, the atom is composed of electrons surrounded by a 'cloud' of positive charge to balance the electrons' negative charge. To Rutherford, the gold foil experiment implied that the positive charge was confined to a very small nucleus leading first to the Rutherford model, and eventually to the Bohr model of the atom, where the positive nucleus is surrounded by the negative electrons.
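The first-order kinetics referred to above correspond to N(t) = N0·exp(−λt), with the half-life given by t½ = ln 2/λ; a short Python illustration with an arbitrary example half-life follows.

# First-order radioactive decay as described by Rutherford: N(t) = N0 * exp(-lambda*t),
# with half-life t_half = ln(2)/lambda.  The 6-hour half-life is an arbitrary example.
import math

def remaining_fraction(t, t_half):
    """Fraction of a radioactive sample remaining after time t."""
    decay_constant = math.log(2) / t_half
    return math.exp(-decay_constant * t)

t_half = 6.0                                   # hours (illustrative)
for t in (0, 6, 12, 24):
    print(f"after {t:>2} h: {remaining_fraction(t, t_half):.3f} remaining")
# after 6 h exactly one half remains, after 12 h one quarter, and so on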
In 1934, Marie Curie's daughter (Irène Joliot-Curie) and son-in-law (Frédéric Joliot-Curie) were the first to create artificial radioactivity: they bombarded boron with alpha particles to make the neutron-poor isotope nitrogen-13; this isotope emitted positrons. In addition, they bombarded aluminium and magnesium with neutrons to make new radioisotopes.
In the early 1920s Otto Hahn created a new line of research. Using the "emanation method", which he had recently developed, and the "emanation ability", he founded what became known as "applied radiochemistry" for the researching of general chemical and physical-chemical questions. In 1936 Cornell University Press published a book in English (and later in Russian) titled Applied Radiochemistry, which contained the lectures given by Hahn when he was a visiting professor at Cornell University in Ithaca, New York, in 1933. This important publication had a major influence on almost all nuclear chemists and physicists in the United States, the United Kingdom, France, and the Soviet Union during the 1930s and 1940s, laying the foundation for modern nuclear chemistry.
Hahn and Lise Meitner discovered radioactive isotopes of radium, thorium, protactinium and uranium. He also discovered the phenomena of radioactive recoil and nuclear isomerism, and pioneered rubidium–strontium dating. In 1938, Hahn, Lise Meitner and Fritz Strassmann discovered nuclear fission, for which Hahn received the 1944 Nobel Prize for Chemistry. Nuclear fission was the basis for nuclear reactors and nuclear weapons. Hahn is referred to as the father of nuclear chemistry and godfather of nuclear fission.
Main areas
Radiochemistry
Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable).
For further details please see the page on radiochemistry.
Radiation chemistry
Radiation chemistry is the study of the chemical effects of radiation on matter; this is very different from radiochemistry as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide. Prior to radiation chemistry, it was commonly believed that pure water could not be destroyed.
Initial experiments were focused on understanding the effects of radiation on matter. Using an X-ray generator, Hugo Fricke studied the biological effects of radiation as it became a common treatment option and diagnostic method. Fricke proposed, and subsequently proved, that the energy from X-rays was able to convert water into activated water, allowing it to react with dissolved species.
Chemistry for nuclear power
Radiochemistry, radiation chemistry and nuclear chemical engineering play a very important role for uranium and thorium fuel precursors synthesis, starting from ores of these elements, fuel fabrication, coolant chemistry, fuel reprocessing, radioactive waste treatment and storage, monitoring of radioactive elements release during reactor operation and radioactive geological storage, etc.
Study of nuclear reactions
A combination of radiochemistry and radiation chemistry is used to study nuclear reactions such as fission and fusion. Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium (139Ba, with a half-life of 83 minutes and 140Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium. More recently, a combination of radiochemical methods and nuclear physics has been used to try to make new 'superheavy' elements; it is thought that islands of relative stability exist where the nuclides have half-lives of years, thus enabling weighable amounts of the new elements to be isolated. For more details of the original discovery of nuclear fission see the work of Otto Hahn.
The nuclear fuel cycle
This is the chemistry associated with any part of the nuclear fuel cycle, including nuclear reprocessing. The fuel cycle includes all the operations involved in producing fuel, from mining, ore processing and enrichment to fuel production (Front-end of the cycle). It also includes the 'in-pile' behavior (use of the fuel in a reactor) before the back end of the cycle. The back end includes the management of the used nuclear fuel in either a spent fuel pool or dry storage, before it is disposed of into an underground waste store or reprocessed.
Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas; one area is concerned with operation under the intended conditions, while the other area is concerned with maloperation conditions where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
Reprocessing
Law
In the United States, it is normal to use fuel once in a power reactor before placing it in a waste store. The long-term plan is currently to place the used civilian reactor fuel in a deep store. This non-reprocessing policy was started in March 1977 because of concerns about nuclear weapons proliferation. President Jimmy Carter issued a Presidential directive which indefinitely suspended the commercial reprocessing and recycling of plutonium in the United States. This directive was likely an attempt by the United States to lead other countries by example, but many other nations continue to reprocess spent nuclear fuels. The Russian government under President Vladimir Putin repealed a law which had banned the import of used nuclear fuel, which makes it possible for Russians to offer a reprocessing service for clients outside Russia (similar to that offered by BNFL).
PUREX chemistry
The current method of choice is to use the PUREX liquid-liquid extraction process which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction.
Pu4+(aq) + 4 NO3−(aq) + 2 S(organic) → [Pu(NO3)4S2](organic)
A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrate anions and two triethyl phosphate ligands has been characterised by X-ray crystallography.
When the nitric acid concentration is high, extraction into the organic phase is favoured; when the nitric acid concentration is low, the extraction is reversed (the organic phase is stripped of the metal). It is normal to dissolve the used fuel in nitric acid; after the removal of the insoluble matter, the uranium and plutonium are extracted from the highly active liquor. It is normal to then back-extract the loaded organic phase to create a medium active liquor which contains mostly uranium and plutonium with only small traces of fission products. This medium active aqueous mixture is then extracted again by tributyl phosphate/hydrocarbon to form a new organic phase; the metal-bearing organic phase is then stripped of the metals to form an aqueous mixture of only uranium and plutonium. The two stages of extraction are used to improve the purity of the actinide product, as the organic phase used for the first extraction will suffer a far greater dose of radiation. The radiation can degrade the tributyl phosphate into dibutyl hydrogen phosphate. The dibutyl hydrogen phosphate can act as an extraction agent for both the actinides and other metals such as ruthenium, and it can make the system behave in a more complex manner as it tends to extract metals by an ion exchange mechanism (extraction favoured by low acid concentration). To reduce the effect of the dibutyl hydrogen phosphate, it is common for the used organic phase to be washed with sodium carbonate solution to remove the acidic degradation products of the tributyl phosphate.
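The effect of repeated extraction contacts of this kind can be sketched with the standard distribution-ratio relation: with equal phase volumes and a constant distribution ratio D (organic over aqueous concentration), the fraction of metal remaining in the aqueous phase after n contacts with fresh solvent is 1/(1 + D)^n. The D values in the Python sketch below are arbitrary illustrations, not measured PUREX data.

# Fraction of metal remaining in the aqueous phase after n liquid-liquid extraction
# contacts with fresh organic phase, assuming equal phase volumes and a constant
# distribution ratio D = [M]org/[M]aq.  The D values are illustrative only.
def aqueous_fraction_remaining(D, n_stages, phase_ratio=1.0):
    return (1.0 / (1.0 + D * phase_ratio)) ** n_stages

for D in (0.1, 1.0, 10.0, 100.0):              # e.g. low vs. high nitric acid loading
    print(f"D = {D:>5}: {aqueous_fraction_remaining(D, 2):.4%} left after 2 stages")
# A high D (extraction favoured) strips the aqueous phase almost completely,
# while a low D (back-extraction conditions) leaves most of the metal behind.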
New methods being considered for future use
The PUREX process can be modified to make a UREX (URanium EXtraction) process which could be used to save space inside high level nuclear waste disposal sites, such as Yucca Mountain nuclear waste repository, by removing the uranium which makes up the vast majority of the mass and volume of used fuel and recycling it as reprocessed uranium.
The UREX process is a PUREX process which has been modified to prevent the plutonium being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of technetium are separated from each other and the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrubs sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing greater proliferation resistance than with the plutonium extraction stage of the PUREX process.
Adding a second extraction agent, octyl(phenyl)-N,N-dibutyl carbamoylmethyl phosphine oxide (CMPO), in combination with tributyl phosphate (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process. TRUEX was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease. In common with PUREX this process operates by a solvation mechanism.
As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMideEXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain. The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX this process operates by a solvation mechanism.
Selective Actinide Extraction (SANEX). As part of the management of minor actinides, it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. In order to allow the actinides such as americium to be either reused in industrial sources or used as fuel the lanthanides must be removed. The lanthanides have large neutron cross sections and hence they would poison a neutron-driven nuclear reaction. To date, the extraction system for the SANEX process has not been defined, but currently, several different research groups are working towards a process. For instance, the French CEA is working on a bis-triazinyl pyridine (BTP) based process.
Other systems such as the dithiophosphinic acids are being worked on by some other workers.
The UNEX (UNiversal EXtraction) process was developed in Russia and the Czech Republic; it is designed to remove all of the most troublesome radioisotopes (Sr, Cs and the minor actinides) from the raffinates left after the extraction of uranium and plutonium from used nuclear fuel. The chemistry is based upon the interaction of caesium and strontium with polyethylene oxide (polyethylene glycol) and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other diluents such as meta-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone have been suggested as well.
Absorption of fission products on surfaces
Another important area of nuclear chemistry is the study of how fission products interact with surfaces; this is thought to control the rate of release and migration of fission products both from waste containers under normal conditions and from power reactors under accident conditions. Like chromate and molybdate, the 99TcO4− anion can react with steel surfaces to form a corrosion resistant layer. In this way, these metal oxo anions act as anodic corrosion inhibitors. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and nuclear equipment which has been lost before decontamination (e.g. submarine reactors lost at sea). This 99TcO2 layer renders the steel surface passive, inhibiting the anodic corrosion reaction. The radioactive nature of technetium makes this corrosion protection impractical in almost all situations. It has also been shown that 99TcO4− anions react to form a layer on the surface of activated carbon (charcoal) or aluminium. A short review of the biochemical properties of a series of key long-lived radioisotopes can be read on line.
99Tc in nuclear waste may exist in chemical forms other than the 99TcO4− anion; these other forms have different chemical properties.
Similarly, the release of iodine-131 in a serious power reactor accident could be retarded by absorption on metal surfaces within the nuclear plant.
Education
Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training.
Nuclear and Radiochemistry (NRC) is mostly being taught at university level, usually first at the Master- and PhD-degree level. In Europe, a substantial effort is being made to harmonise and prepare NRC education for the industry's and society's future needs. This effort is being coordinated in a project funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program. Although NucWik is primarily aimed at teachers, anyone interested in nuclear and radiochemistry is welcome to use it and can find a lot of information and material explaining topics related to NRC.
Spinout areas
Some methods first developed within nuclear chemistry and physics have become so widely used within chemistry and other physical sciences that they may be best thought of as separate from normal nuclear chemistry. For example, the isotope effect is used so extensively to investigate chemical mechanisms, and cosmogenic and long-lived unstable isotopes are used so widely in geology, that it is best to consider much of isotopic chemistry as separate from nuclear chemistry.
Kinetics (use within mechanistic chemistry)
The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protons) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy. This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate.
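As a rough, hedged estimate: if the stretching zero-point energy of the bond to hydrogen is assumed to be completely lost in the transition state, the maximum semiclassical isotope effect follows from the difference between typical C–H and C–D stretching wavenumbers. The roughly 2900 and 2100 cm−1 values in the Python sketch below are textbook-style numbers, not data for a specific reaction; the result of about 7 at room temperature matches the commonly quoted semiclassical upper limit for a primary deuterium isotope effect.

# Rough semiclassical estimate of a maximum primary kinetic isotope effect,
# assuming the C-H/C-D stretching zero-point energy is fully lost in the
# transition state.  Wavenumbers are typical textbook values, not measured data.
import math

h  = 6.62607e-34        # Planck constant, J s
c  = 2.99792e10         # speed of light, cm/s
kB = 1.38065e-23        # Boltzmann constant, J/K

def max_kie(wavenumber_H, wavenumber_D, T=298.15):
    delta_zpe = 0.5 * h * c * (wavenumber_H - wavenumber_D)   # J per molecule
    return math.exp(delta_zpe / (kB * T))

print(f"k_H/k_D ~ {max_kie(2900.0, 2100.0):.1f} at room temperature")   # about 7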
Uses within geology, biology and forensic science
Cosmogenic isotopes are formed by the interaction of cosmic rays with the nucleus of an atom. These can be used for dating purposes and for use as natural tracers. In addition, by careful measurement of some ratios of stable isotopes it is possible to obtain new insights into the origin of bullets, ages of ice samples, ages of rocks, and the diet of a person can be identified from a hair or other tissue sample. (See Isotope geochemistry and Isotopic signature for further details).
Biology
Within living things, isotopic labels (both radioactive and nonradioactive) can be used to probe how the complex web of reactions which makes up the metabolism of an organism converts one substance to another. For instance a green plant uses light energy to convert water and carbon dioxide into glucose by photosynthesis. If the oxygen in the water is labeled, then the label appears in the oxygen gas formed by the plant and not in the glucose formed in the chloroplasts within the plant cells.
For biochemical and physiological experiments and medical methods, a number of specific isotopes have important applications.
Stable isotopes have the advantage of not delivering a radiation dose to the system being studied; however, a significant excess of them in the organ or organism might still interfere with its functionality, and the availability of sufficient amounts for whole-animal studies is limited for many isotopes. Measurement is also difficult, and usually requires mass spectrometry to determine how much of the isotope is present in particular compounds, and there is no means of localizing measurements within the cell.
2H (deuterium), the stable isotope of hydrogen, is a stable tracer, the concentration of which can be measured by mass spectrometry or NMR. It is incorporated into all cellular structures. Specific deuterated compounds can also be produced.
15N, a stable isotope of nitrogen, has also been used. It is incorporated mainly into proteins.
Radioactive isotopes have the advantages of being detectable in very low quantities, in being easily measured by scintillation counting or other radiochemical methods, and in being localizable to particular regions of a cell, and quantifiable by autoradiography. Many compounds with the radioactive atoms in specific positions can be prepared, and are widely available commercially. In high quantities they require precautions to guard the workers from the effects of radiation—and they can easily contaminate laboratory glassware and other equipment. For some isotopes the half-life is so short that preparation and measurement is difficult.
By organic synthesis it is possible to create a complex molecule with a radioactive label that can be confined to a small area of the molecule. For short-lived isotopes such as 11C, very rapid synthetic methods have been developed to permit the rapid addition of the radioactive isotope to the molecule. For instance a palladium catalysed carbonylation reaction in a microfluidic device has been used to rapidly form amides and it might be possible to use this method to form radioactive imaging agents for PET imaging.
3H (tritium), the radioisotope of hydrogen, is available at very high specific activities, and compounds with this isotope in particular positions are easily prepared by standard chemical reactions such as hydrogenation of unsaturated precursors. The isotope emits very soft beta radiation, and can be detected by scintillation counting.
11C, carbon-11 is usually produced by cyclotron bombardment of 14N with protons. The resulting nuclear reaction is 14N(p,α)11C. Carbon-11 can also be made in a cyclotron from boron in the form of boric oxide, which is reacted with protons in a (p,n) reaction. Another alternative route is to react 10B with deuterons. By rapid organic synthesis, the 11C compound formed in the cyclotron is converted into the imaging agent which is then used for PET.
14C, carbon-14 can be made (as above), and it is possible to convert the target material into simple inorganic and organic compounds. In most organic synthesis work it is normal to try to create a product out of two approximately equal sized fragments and to use a convergent route, but when a radioactive label is added, it is normal to try to add the label late in the synthesis in the form of a very small fragment to the molecule to enable the radioactivity to be localised in a single group. Late addition of the label also reduces the number of synthetic stages where radioactive material is used.
18F, fluorine-18 can be made by the reaction of neon with deuterons, 20Ne reacts in a (d,4He) reaction. It is normal to use neon gas with a trace of stable fluorine (19F2). The 19F2 acts as a carrier which increases the yield of radioactivity from the cyclotron target by reducing the amount of radioactivity lost by absorption on surfaces. However, this reduction in loss is at the cost of the specific activity of the final product.
Nuclear spectroscopy
Nuclear spectroscopy comprises methods that use the nucleus to obtain information on the local structure of matter. Important methods are NMR (see below), Mössbauer spectroscopy and perturbed angular correlation. These methods use the interaction of the hyperfine field with the nucleus' spin. The field can be magnetic and/or electric and is created by the electrons of the atom and its surrounding neighbours. Thus, these methods investigate the local structure in matter, mainly condensed matter in condensed matter physics and solid state chemistry.
Nuclear magnetic resonance (NMR)
NMR spectroscopy uses the net spin of nuclei in a substance upon energy absorption to identify molecules. This has now become a standard spectroscopic tool within synthetic chemistry. One major use of NMR is to determine the bond connectivity within an organic molecule.
NMR imaging also uses the net spin of nuclei (commonly protons) for imaging. This is widely used for diagnostic purposes in medicine, and can provide detailed images of the inside of a person without inflicting any radiation upon them. In a medical setting, NMR is often known simply as "magnetic resonance" imaging, as the word 'nuclear' has negative connotations for many people.
See also
Important publications in nuclear chemistry
Nuclear physics
Nuclear spectroscopy
References
Further reading
Handbook of Nuclear Chemistry
Comprehensive handbook in six volumes by 130 international experts. Edited by Attila Vértes, Sándor Nagy, Zoltán Klencsár, Rezső G. Lovas, Frank Rösch. Springer, 2011.
Radioactivity Radionuclides Radiation
Textbook by Magill, Galy. Springer, 2005.
Radiochemistry and Nuclear Chemistry, 3rd Ed
Comprehensive textbook by Choppin, Liljenzin and Rydberg. Butterworth-Heinemann, 2001.
Radiochemistry and Nuclear Chemistry, 4th Ed
Comprehensive textbook by Choppin, Liljenzin, Rydberg and Ekberg. Elsevier Inc., 2013.
Radioactivity, Ionizing Radiation and Nuclear Energy
Basic textbook for undergraduates by Jiri Hála and James D Navratil. Konvoj, Brno, 2003.
The Radiochemical Manual
Overview of the production and uses of both open and sealed sources. Edited by BJ Wilson and written by RJ Bayly, JR Catch, JC Charlton, CC Evans, TT Gorsuch, JC Maynard, LC Myerscough, GR Newbery, H Sheard, CBG Taylor and BJ Wilson. The Radiochemical Centre (Amersham), sold via HMSO, 1966 (second edition).
Chemistry
SDS-PAGE | SDS-PAGE (sodium dodecyl sulfate–polyacrylamide gel electrophoresis) is a discontinuous electrophoretic system developed by Ulrich K. Laemmli which is commonly used as a method to separate proteins with molecular masses between 5 and 250 kDa. The combined use of sodium dodecyl sulfate (SDS, also known as sodium lauryl sulfate) and polyacrylamide gel eliminates the influence of structure and charge, and proteins are separated by differences in their size. At least up to 2012, the publication describing it was the most frequently cited paper by a single author, and the second most cited overall.
Properties
SDS-PAGE is an electrophoresis method that allows protein separation by mass. The medium (also referred to as the ′matrix′) is a polyacrylamide-based discontinuous gel. The polyacrylamide gel is typically sandwiched between two glass plates in a slab gel. Although tube gels (in glass cylinders) were used historically, they were rapidly made obsolete with the invention of the more convenient slab gels. In addition, SDS (sodium dodecyl sulfate) is used. About 1.4 grams of SDS bind to a gram of protein, corresponding to roughly one SDS molecule, and hence one negative charge, per two amino acid residues. SDS acts as a surfactant, masking the proteins' intrinsic charge and conferring on them very similar charge-to-mass ratios. The intrinsic charges of the proteins are negligible in comparison to the SDS loading, and the positive charges are also greatly reduced in the basic pH range of a separating gel. Upon application of a constant electric field, the proteins migrate towards the anode, each with a different speed depending on its mass. This simple procedure allows precise protein separation by mass.
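In practice the mass of an unknown protein is read from a calibration: for marker proteins of known mass, log10(mass) is approximately linear in the relative migration distance (Rf) over a limited range, and the unknown band is interpolated. The marker masses and Rf values in the Python sketch below are invented for illustration.

# Estimating an unknown protein's molecular mass from an SDS-PAGE calibration:
# log10(mass) of marker proteins is fitted against relative migration (Rf) and the
# unknown's Rf is interpolated.  Marker masses and Rf values are invented here,
# and the relation is only approximately linear over a limited mass range.
import numpy as np

marker_kda = np.array([250, 150, 100, 75, 50, 37, 25, 20, 15, 10])   # assumed ladder
marker_rf  = np.array([0.10, 0.18, 0.26, 0.33, 0.45, 0.54, 0.68, 0.75, 0.85, 0.95])

slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)    # linear fit

def estimate_mass(rf):
    return 10 ** (slope * rf + intercept)

print(f"band at Rf = 0.50 -> ~{estimate_mass(0.50):.0f} kDa")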
SDS tends to form spherical micelles in aqueous solutions above a certain concentration called the critical micellar concentration (CMC). Above the CMC of 7 to 10 millimolar, SDS occurs simultaneously as single molecules (monomers) and as micelles; below the CMC it occurs only as monomers in aqueous solutions. At the critical micellar concentration, a micelle consists of about 62 SDS molecules. However, only SDS monomers bind to proteins via hydrophobic interactions, whereas the SDS micelles are anionic on the outside and do not adsorb any protein. SDS is amphipathic in nature, which allows it to unfold both polar and nonpolar sections of protein structure. At SDS concentrations above 0.1 millimolar the unfolding of proteins begins, and above 1 mM most proteins are denatured. Due to the strong denaturing effect of SDS and the subsequent dissociation of protein complexes, quaternary structures can generally not be determined with SDS. Exceptions are proteins that are stabilised by covalent cross-linking (e.g. -S-S- linkages) and SDS-resistant protein complexes, which are stable even in the presence of SDS (the latter, however, only at room temperature). To denature the SDS-resistant complexes a high activation energy is required, which is achieved by heating. SDS resistance is based on a metastability of the protein fold: although the native, fully folded, SDS-resistant protein does not have sufficient stability in the presence of SDS, the equilibrium of denaturation is reached only slowly at room temperature. Stable protein complexes are characterised not only by SDS resistance but also by stability against proteases and an increased biological half-life.
Alternatively, polyacrylamide gel electrophoresis can be performed with cationic surfactants, for example CTAB in CTAB-PAGE or 16-BAC in BAC-PAGE.
Procedure
The SDS-PAGE method is composed of gel preparation, sample preparation, electrophoresis, protein staining or western blotting and analysis of the generated banding pattern.
Gel production
When using different buffers in the gel (discontinuous gel electrophoresis), the gels are made up to one day prior to electrophoresis, so that diffusion does not lead to a mixing of the buffers. The gel is produced by free-radical polymerisation in a mould consisting of two sealed glass plates with spacers between them. In a typical mini-gel setting, the spacers have a thickness of 0.75 mm or 1.5 mm, which determines the loading capacity of the gel. For pouring the gel solution, the plates are usually clamped in a stand which temporarily seals the otherwise open underside of the glass plates with the two spacers. For the gel solution, acrylamide is mixed as the gel-former (usually 4% in the stacking gel and 10–12% in the separating gel) with methylenebisacrylamide as a cross-linker, stacking or separating gel buffer, water and SDS. By adding the catalyst TEMED and the radical initiator ammonium persulfate (APS), the polymerisation is started. The solution is then poured between the glass plates without creating bubbles. Depending on the amount of catalyst and radical initiator and on the temperature, the polymerisation lasts between a quarter of an hour and several hours. The lower gel (separating gel) is poured first and covered with a few drops of a barely water-soluble alcohol (usually buffer-saturated butanol or isopropanol), which eliminates bubbles from the meniscus and protects the gel solution from the radical scavenger oxygen. After the polymerisation of the separating gel, the alcohol is discarded and the residual alcohol is removed with filter paper. After addition of APS and TEMED to the stacking gel solution, it is poured on top of the solid separating gel. Afterwards, a suitable sample comb is inserted between the glass plates without creating bubbles. The sample comb is carefully pulled out after polymerisation, leaving pockets for the sample application. For later use of the proteins in protein sequencing, the gels are often prepared the day before electrophoresis to reduce reactions of unpolymerised acrylamide with cysteines in the proteins.
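As a rough planning aid for the mixing step described above, the sketch below computes component volumes for a single-percentage separating gel. The stock concentrations (a 30% acrylamide/bis stock, a 4x separating-gel buffer, 10% SDS and 10% APS) and the final SDS, APS and TEMED contents are typical values assumed purely for illustration, not prescriptions from the text.

    def separating_gel_recipe(total_ml, target_pct, stock_pct=30.0):
        """Volumes (ml) for a separating gel of target_pct acrylamide from typical stocks."""
        acrylamide = total_ml * target_pct / stock_pct  # 30% acrylamide/bis stock
        buffer_4x = total_ml / 4.0                      # 4x separating-gel buffer
        sds_10pct = total_ml * 0.1 / 10.0               # to ~0.1% SDS final
        aps_10pct = total_ml * 0.05 / 10.0              # to ~0.05% APS final
        temed = total_ml * 0.001                        # ~0.1% TEMED (v/v)
        water = total_ml - (acrylamide + buffer_4x + sds_10pct + aps_10pct + temed)
        return {"acrylamide/bis (30%)": acrylamide, "4x buffer": buffer_4x,
                "10% SDS": sds_10pct, "10% APS": aps_10pct,
                "TEMED": temed, "water": water}

    for name, ml in separating_gel_recipe(total_ml=10.0, target_pct=12).items():
        print(f"{name:>20}: {ml:5.2f} ml")

For a 10 ml, 12% gel this gives 4.0 ml of acrylamide stock, 2.5 ml of buffer and about 3.3 ml of water, in line with commonly published recipes.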
By using a gradient mixer, gradient gels with a gradient of acrylamide (usually from 4 to 12%) can be cast, which have a larger separation range of molecular masses. Commercial gel systems (so-called pre-cast gels) usually use the buffer substance Bis-tris methane with a pH value between 6.4 and 7.2 both in the stacking gel and in the separating gel. These gels are delivered cast and ready-to-use. Since they use only one buffer (continuous gel electrophoresis) and have a nearly neutral pH, they can be stored for several weeks. The more neutral pH slows the hydrolysis and thus the decomposition of the polyacrylamide. Furthermore, fewer acrylamide-modified cysteines occur in the proteins. Due to the constant pH in the stacking and separating gel there is no stacking effect. Proteins in Bis-Tris gels cannot be stained with ruthenium complexes. This gel system has a comparatively large separation range, which can be varied by using MES or MOPS in the running buffer.
Sample preparation
During sample preparation, the sample buffer, and thus SDS, is added in excess to the proteins, and the sample is then heated to 95 °C for five minutes, or alternatively to 70 °C for ten minutes. Heating disrupts the secondary and tertiary structures of the proteins by breaking hydrogen bonds and stretching the molecules. Optionally, disulfide bridges can be cleaved by reduction. For this purpose, reducing thiols such as β-mercaptoethanol (β-ME, 5% by volume), dithiothreitol (DTT, 10–100 millimolar), dithioerythritol (DTE, 10 millimolar), tris(2-carboxyethyl)phosphine or tributylphosphine are added to the sample buffer. After cooling to room temperature, each sample is pipetted into its own well in the gel, which was previously immersed in electrophoresis buffer in the electrophoresis apparatus.
In addition to the samples, a molecular-weight size marker is usually loaded onto the gel. This consists of proteins of known sizes and thereby allows the estimation (with an error of ± 10%) of the sizes of the proteins in the actual samples, which migrate in parallel in different tracks of the gel. The size marker is often pipetted into the first or last pocket of a gel.
Electrophoresis
For separation, the denatured samples are loaded onto a polyacrylamide gel, which is placed in an electrophoresis buffer with suitable electrolytes. Thereafter, a voltage (usually around 100 V, corresponding to 10–20 V per cm of gel length) is applied, which causes the negatively charged molecules to migrate through the gel towards the positively charged anode. The gel acts like a sieve: small proteins migrate relatively easily through its mesh, while larger proteins are more likely to be retained and therefore migrate more slowly, allowing proteins to be separated by molecular size. The electrophoresis lasts between half an hour and several hours, depending on the voltage and the length of the gel used.
The fastest-migrating proteins (with a molecular weight of less than 5 kDa) form the buffer front together with the anionic components of the electrophoresis buffer, which also migrate through the gel. The region of the buffer front is made visible by adding the comparatively small, anionic dye bromophenol blue to the sample buffer. Because of its relatively small molecular size, bromophenol blue migrates faster than the proteins. By visually monitoring the migrating coloured band, the electrophoresis can be stopped before the dye, and with it the samples, has completely migrated through the gel and left it.
The most commonly used method is discontinuous SDS-PAGE. In this method, the proteins first migrate into a stacking gel with neutral pH, in which they are concentrated, and then into a separating gel with basic pH, in which the actual separation takes place. Stacking and separating gels differ in pore size (4–6% T and 10–20% T), ionic strength and pH (pH 6.8 or pH 8.8). The electrolyte most frequently used is an SDS-containing Tris-glycine-chloride buffer system. At neutral pH, glycine predominantly exists in its zwitterionic form; at high pH the glycine molecules lose positive charges and become predominantly anionic. In the stacking gel, the smaller, negatively charged chloride ions migrate in front of the proteins (as leading ions) and the slightly larger, negatively and partially positively charged glycinate ions migrate behind the proteins (as initial trailing ions), whereas in the comparatively basic separating gel both ions migrate in front of the proteins. The pH gradient between the stacking- and separating-gel buffers leads to a stacking effect at the border between the two gels: as the pH increases, the glycinate partially loses its retarding positive charges, overtakes the proteins as the former trailing ion and becomes a leading ion, which causes the bands of the different proteins (visible after staining) to become narrower and sharper. For the separation of smaller proteins and peptides, the Tris-Tricine buffer system of Schägger and von Jagow is used, because it gives a greater spread of the proteins in the range of 0.5 to 50 kDa.
Gel staining
At the end of the electrophoretic separation, all proteins are sorted by size and can then be analysed by other methods, e.g. protein staining such as Coomassie staining (most common and easy to use), silver staining (highest sensitivity), Stains-All staining, Amido black 10B staining, Fast green FCF staining, fluorescent stains such as epicocconone stain and SYPRO orange stain, and immunological detection such as the western blot. Fluorescent dyes offer a comparatively wide linear range between protein quantity and signal intensity of about three orders of magnitude above the detection limit, over which the quantity of protein can be estimated from the signal intensity. When the fluorescent compound trichloroethanol has been added to the gel solution, a subsequent protein staining step can be omitted; the gel is simply irradiated with UV light after electrophoresis.
In Coomassie staining, the gel is fixed in a solution of 50% ethanol and 10% glacial acetic acid for 1 hour. The solution is then replaced with fresh fixative, and after 1 to 12 hours the gel is transferred to a staining solution (50% methanol, 10% glacial acetic acid, 0.1% Coomassie Brilliant Blue), followed by destaining with several changes of a destaining solution of 40% methanol and 10% glacial acetic acid.
Analysis
Protein staining in the gel creates a documentable banding pattern of the various proteins.
Glycoproteins carry differing levels of glycosylation and adsorb SDS unevenly at the glycosylated sites, resulting in broader and more blurred bands.
Membrane proteins, because of their transmembrane domains, are often composed of the more hydrophobic amino acids, have lower solubility in aqueous solutions, tend to bind lipids, and tend to precipitate in aqueous solutions due to hydrophobic effects when sufficient amounts of detergent are not present. In SDS-PAGE, this precipitation manifests itself as "tailing" above the band of the transmembrane protein. In this case, more SDS can be used (by using more, or more concentrated, sample buffer) and the amount of protein loaded can be reduced.
Overloading the gel with a soluble protein creates a semicircular band of that protein, which can obscure other proteins of similar molecular weight.
A low contrast between bands within a lane indicates either the presence of many proteins (low purity) or, for purified proteins, proteolytic degradation of the protein, which first causes discrete degradation bands and after further degradation produces a homogeneous colour ("smear") below a band.
The documentation of the banding pattern is usually done by photographing or scanning. For a subsequent recovery of the molecules in individual bands, a gel extraction can be performed.
Archiving
After protein staining and documentation of the banding pattern, the polyacrylamide gel can be dried for archival storage. Proteins can be extracted from it at a later date. The gel is either placed in a drying frame (with or without the use of heat) or in a vacuum dryer. The drying frame consists of two parts, one of which serves as a base for a wet cellophane film to which the gel and a one percent glycerol solution are added. Then a second wet cellophane film is applied bubble-free, the second frame part is put on top and the frame is sealed with clips. The removal of the air bubbles avoids a fragmentation of the gel during drying. The water evaporates through the cellophane film. In contrast to the drying frame, a vacuum dryer generates a vacuum and heats the gel to about 50 °C.
Molecular mass determination
For a more accurate determination of the molecular weight, the relative migration distances of the individual protein bands are measured in the separating gel. The measurements are usually performed in triplicate for increased accuracy. The relative mobility (called the Rf or Rm value) is the distance migrated by the protein band divided by the distance migrated by the buffer front, with both distances measured from the beginning of the separating gel. The migration of the buffer front roughly corresponds to the migration of the dye contained in the sample buffer. The Rf values of the size-marker proteins are plotted semi-logarithmically against their known molecular weights. By comparison with the linear part of the resulting graph, or by regression analysis, the molecular weight of an unknown protein can be determined from its relative mobility.
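A minimal sketch of the regression described above is given below; the marker masses and Rf values are invented for demonstration and would in practice be read off the gel and the marker datasheet.

    import numpy as np

    # Hypothetical size-marker data: known masses (kDa) and measured Rf values
    marker_kda = np.array([250, 150, 100, 75, 50, 37, 25, 20, 15, 10])
    marker_rf = np.array([0.03, 0.17, 0.28, 0.36, 0.47, 0.55, 0.66, 0.72, 0.80, 0.91])

    # Fit log10(mass) as a linear function of Rf (the semi-logarithmic plot)
    slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)

    def estimate_mass_kda(rf):
        """Estimate the molecular mass of an unknown band from its Rf value."""
        return 10 ** (slope * rf + intercept)

    print(f"Band with Rf = 0.42 -> approx. {estimate_mass_kda(0.42):.0f} kDa")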
Bands of glycosylated proteins can be blurred, as glycosylation is often heterogeneous. Proteins with many basic amino acids (e.g. histones) can lead to an overestimation of the molecular weight, or may not migrate into the gel at all, because their positive charges slow their electrophoretic migration or even drive it in the opposite direction. Conversely, many acidic amino acids can lead to accelerated migration of a protein and an underestimation of its molecular mass.
Applications
SDS-PAGE in combination with a protein stain is widely used in biochemistry for the quick and exact separation and subsequent analysis of proteins. It has comparatively low instrument and reagent costs and is an easy-to-use method. Because of its low scalability, it is mostly used for analytical rather than preparative purposes, especially when larger amounts of a protein are to be isolated.
Additionally, SDS-PAGE is used in combination with the western blot to determine the presence of a specific protein in a mixture of proteins, or to analyse post-translational modifications. Post-translational modifications of proteins can lead to a different relative mobility (i.e. a band shift) or to a change in the binding of a detection antibody used in the western blot (i.e. a band disappears or appears).
In mass spectrometry of proteins, SDS-PAGE is a widely used method for sample preparation prior to spectrometry, mostly using in-gel digestion. With regard to determining the molecular mass of a protein, SDS-PAGE is somewhat more exact than analytical ultracentrifugation, but less exact than mass spectrometry or, if post-translational modifications are ignored, a calculation of the protein's molecular mass from the DNA sequence.
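As a sketch of the sequence-based calculation mentioned above, the snippet below sums approximate average residue masses over an amino acid sequence and adds one water for the free termini. The mass table contains rounded average residue masses assumed for illustration, and the example sequence is an arbitrary peptide, not one drawn from the text.

    # Approximate average amino acid residue masses in daltons (rounded)
    RESIDUE_MASS = {
        "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
        "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
        "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
        "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
    }
    WATER = 18.02  # added once for the free N- and C-termini

    def protein_mass_da(sequence):
        """Approximate average molecular mass of an unmodified protein in Da."""
        return sum(RESIDUE_MASS[aa] for aa in sequence.upper()) + WATER

    print(f"{protein_mass_da('MKWVTFISLLFLFSSAYS'):.1f} Da")  # arbitrary example peptide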
In medical diagnostics, SDS-PAGE is used as part of the HIV test and to evaluate proteinuria. In the HIV test, HIV proteins are separated by SDS-PAGE and subsequently detected by western blot with HIV-specific antibodies from the patient, if such antibodies are present in the blood serum. SDS-PAGE for proteinuria evaluates the levels of various serum proteins in the urine, e.g. albumin, alpha-2-macroglobulin and IgG.
Variants
SDS-PAGE is the most widely used method for the gel electrophoretic separation of proteins. Two-dimensional gel electrophoresis sequentially combines isoelectric focusing or BAC-PAGE with SDS-PAGE. Native PAGE is used if native protein folding is to be maintained. For separation of membrane proteins, BAC-PAGE or CTAB-PAGE may be used as an alternative to SDS-PAGE. For electrophoretic separation of larger protein complexes, agarose gel electrophoresis can be used, e.g. SDD-AGE. Some enzymes can be detected via their enzymatic activity by zymography.
Alternatives
Although SDS-PAGE is one of the more precise and low-cost protein separation and analysis methods, it denatures proteins. Where non-denaturing conditions are necessary, proteins are separated by native PAGE or by chromatographic methods with subsequent photometric quantification, for example affinity chromatography (or even tandem affinity purification), size-exclusion chromatography or ion-exchange chromatography. Proteins can also be separated by size using tangential flow filtration or ultrafiltration. Single proteins can be isolated from a mixture by affinity chromatography or by a pull-down assay. Some historically early, cost-effective but crude separation methods are based upon a series of extractions and precipitations using kosmotropic molecules, for example ammonium sulfate precipitation and polyethylene glycol precipitation.
History
In 1948, Arne Tiselius was awarded the Nobel Prize in Chemistry for the discovery of the principle of electrophoresis as the migration of charged and dissolved atoms or molecules in an electric field. The use of a solid matrix (initially paper discs) in a zone electrophoresis improved the separation. The discontinuous electrophoresis of 1964 by L. Ornstein and B. J. Davis made it possible to improve the separation by the stacking effect. The use of cross-linked polyacrylamide hydrogels, in contrast to the previously used paper discs or starch gels, provided a higher stability of the gel and no microbial decomposition. The denaturing effect of SDS in continuous polyacrylamide gels and the consequent improvement in resolution was first described in 1965 by David F. Summers in the working group of James E. Darnell to separate poliovirus proteins. The current variant of the SDS-PAGE was described in 1970 by Ulrich K. Laemmli and initially used to characterise the proteins in the head of bacteriophage T4.
External links
Protocol for BisTris SDS-PAGE at OpenWetWare.org
Electrophoresis
Structuralism
Structuralism is an intellectual current and methodological approach, primarily in the social sciences, that interprets elements of human culture by way of their relationship to a broader system. It works to uncover the structural patterns that underlie all the things that humans do, think, perceive, and feel.
Alternatively, as summarized by philosopher Simon Blackburn, structuralism is "the belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure." The structuralist mode of reasoning has since been applied in a range of fields, including anthropology, sociology, psychology, literary criticism, economics, and architecture. Along with Claude Lévi-Strauss, the most prominent thinkers associated with structuralism include linguist Roman Jakobson and psychoanalyst Jacques Lacan.
History and background
The term structuralism is ambiguous, referring to different schools of thought in different contexts. The movement in the humanities and social sciences called structuralism is related to sociology: Émile Durkheim based his sociological concept on 'structure' and 'function', and from his work emerged the sociological approach of structural functionalism.
Apart from Durkheim's use of the term structure, the semiological concept of Ferdinand de Saussure became fundamental for structuralism. Saussure conceived language and society as a system of relations. His linguistic approach was also a refutation of evolutionary linguistics.
Structuralism in Europe developed in the early 20th century, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague, Moscow, and Copenhagen schools of linguistics. As an intellectual movement, structuralism became the heir to existentialism. After World War II, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields. French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism.
Throughout the 1940s and 1950s, existentialism, such as that propounded by Jean-Paul Sartre, was the dominant European intellectual movement. Structuralism rose to prominence in France in the wake of existentialism, particularly in the 1960s. The initial popularity of structuralism in France led to its spread across the globe. By the early 1960s, structuralism as a movement was coming into its own and some believed that it offered a single unified approach to human life that would embrace all disciplines.
By the late 1960s, many of structuralism's basic tenets came under attack from a new wave of predominantly French intellectuals/philosophers such as historian Michel Foucault, Jacques Derrida, Marxist philosopher Louis Althusser, and literary critic Roland Barthes. Though elements of their work necessarily relate to structuralism and are informed by it, these theorists eventually came to be referred to as post-structuralists. Many proponents of structuralism, such as Lacan, continue to influence continental philosophy and many of the fundamental assumptions of some of structuralism's post-structuralist critics are a continuation of structuralist thinking.
Russian functional linguist Roman Jakobson was a pivotal figure in the adaptation of structural analysis to disciplines beyond linguistics, including philosophy, anthropology, and literary theory. Jakobson was a decisive influence on anthropologist Claude Lévi-Strauss, in whose work the term structuralism first appeared in reference to the social sciences. Lévi-Strauss' work in turn gave rise to the structuralist movement in France, also called French structuralism, influencing the thinking of other writers, most of whom disavowed any affiliation with the movement. This included such writers as Louis Althusser and psychoanalyst Jacques Lacan, as well as the structural Marxism of Nicos Poulantzas. Roland Barthes and Jacques Derrida focused on how structuralism could be applied to literature.
Accordingly, the so-called "Gang of Four" of structuralism is considered to be Lévi-Strauss, Lacan, Barthes, and Michel Foucault.
Ferdinand de Saussure
The origins of structuralism are connected with the work of Ferdinand de Saussure on linguistics along with the linguistics of the Prague and Moscow schools. In brief, Saussure's structural linguistics propounded three related concepts.
Saussure argued for a distinction between langue (an idealized abstraction of language) and parole (language as actually used in daily life). He argued that a "sign" is composed of a "signified" (signifié, i.e. an abstract concept or idea) and a "signifier" (signifiant, i.e. the perceived sound/visual image).
Because different languages have different words to refer to the same objects or concepts, there is no intrinsic reason why a specific signifier is used to express a given concept or idea. It is thus "arbitrary."
Signs gain their meaning from their relationships and contrasts with other signs. As he wrote, "in language, there are only differences 'without positive terms'."
Lévi-Strauss
Structuralism rejected the concept of human freedom and choice, focusing instead on the way that human experience and behaviour are determined by various structures. The most important initial work on this score was Lévi-Strauss's 1949 volume The Elementary Structures of Kinship. Lévi-Strauss had known Roman Jakobson during their time together at the New School in New York during WWII and was influenced by both Jakobson's structuralism and the American anthropological tradition.
In Elementary Structures, he examined kinship systems from a structural point of view and demonstrated how apparently different social organizations were different permutations of a few basic kinship structures. In 1958, he published Structural Anthropology, a collection of essays outlining his program for structuralism.
Lacan and Piaget
Blending Freud and Saussure, French (post)structuralist Jacques Lacan applied structuralism to psychoanalysis. Similarly, Jean Piaget applied structuralism to the study of psychology, though in a different way. Piaget, who would better define himself as constructivist, considered structuralism as "a method and not a doctrine," because, for him, "there exists no structure without a construction, abstract or genetic."
'Third order'
Proponents of structuralism argue that a specific domain of culture may be understood by means of a structure that is modelled on language and is distinct both from the organizations of reality and those of ideas, or the imagination—the "third order." In Lacan's psychoanalytic theory, for example, the structural order of "the Symbolic" is distinguished both from "the Real" and "the Imaginary;" similarly, in Althusser's Marxist theory, the structural order of the capitalist mode of production is distinct both from the actual, real agents involved in its relations and from the ideological forms in which those relations are understood.
Althusser
Although French theorist Louis Althusser is often associated with structural social analysis, which helped give rise to "structural Marxism," such association was contested by Althusser himself in the Italian foreword to the second edition of Reading Capital. In this foreword Althusser states the following:
Despite the precautions we took to distinguish ourselves from the 'structuralist' ideology…, despite the decisive intervention of categories foreign to 'structuralism'…, the terminology we employed was too close in many respects to the 'structuralist' terminology not to give rise to an ambiguity. With a very few exceptions…our interpretation of Marx has generally been recognized and judged, in homage to the current fashion, as 'structuralist'.… We believe that despite the terminological ambiguity, the profound tendency of our texts was not attached to the 'structuralist' ideology.
Assiter
In a later development, feminist theorist Alison Assiter enumerated four ideas common to the various forms of structuralism:
a structure determines the position of each element of a whole;
every system has a structure;
structural laws deal with co-existence rather than change; and
structures are the "real things" that lie beneath the surface or the appearance of meaning.
In linguistics
In Ferdinand de Saussure's Course in General Linguistics, the analysis focuses not on the use of language (parole, 'speech'), but rather on the underlying system of language (langue). This approach examines how the elements of language relate to each other in the present, synchronically rather than diachronically. Saussure argued that linguistic signs were composed of two parts:
a signifiant ('signifier'): the "sound pattern" of a word, either in mental projection—as when one silently recites lines from a poem to oneself—or in actual, physical realization as part of a speech act.
a signifié ('signified'): the concept or meaning of the word.
This differed from previous approaches that focused on the relationship between words and the things in the world that they designate.
Although not fully developed by Saussure himself, other key notions in structural linguistics can be found in structural 'idealism': a class of linguistic units (lexemes, morphemes, or even constructions) that are possible in a certain position in a given syntagm, or linguistic environment (such as a given sentence). The different functional role of each of these members of the paradigm is called 'value' (French: valeur).
Prague School
In France, Antoine Meillet and Émile Benveniste continued Saussure's project, and members of the Prague school of linguistics such as Roman Jakobson and Nikolai Trubetzkoy conducted influential research. The clearest and most important example of Prague school structuralism lies in phonemics. Rather than simply compiling a list of which sounds occur in a language, the Prague school examined how they were related. They determined that the inventory of sounds in a language could be analysed as a series of contrasts.
Thus, in English, the sounds /p/ and /b/ represent distinct phonemes because there are cases (minimal pairs) where the contrast between the two is the only difference between two distinct words (e.g. 'pat' and 'bat'). Analysing sounds in terms of contrastive features also opens up comparative scope; for instance, it makes clear that the difficulty Japanese speakers have in differentiating /r/ and /l/ in English and other languages arises because these sounds are not contrastive in Japanese. Phonology would become the paradigmatic basis for structuralism in a number of different fields.
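To illustrate the notion of a minimal pair in a computable way, the sketch below scans a small word list for pairs of spellings that differ in exactly one position. It operates on orthography rather than on true phonemic transcription, and the word list is an arbitrary example rather than data from the text.

    from itertools import combinations

    def differs_in_one_position(a, b):
        """True if two equal-length strings differ at exactly one position."""
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

    words = ["pat", "bat", "bit", "pit", "pan", "ban"]  # arbitrary example list
    pairs = [(a, b) for a, b in combinations(words, 2)
             if differs_in_one_position(a, b)]
    print(pairs)  # e.g. ('pat', 'bat') illustrates the /p/-/b/ contrast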
Based on the Prague school concept, André Martinet in France, J. R. Firth in the UK and Louis Hjelmslev in Denmark developed their own versions of structural and functional linguistics.
In anthropology
According to structural theory in anthropology and social anthropology, meaning is produced and reproduced within a culture through various practices, phenomena, and activities that serve as systems of signification.
A structuralist approach may study activities as diverse as food preparation and serving rituals, religious rites, games, literary and non-literary texts, and other forms of entertainment to discover the deep structures by which meaning is produced and reproduced within the culture. In the 1950s, for example, Lévi-Strauss analysed cultural phenomena including mythology, kinship (the alliance theory and the incest taboo), and food preparation. In addition to these studies, he produced more linguistically focused writings in which he applied Saussure's distinction between langue and parole in his search for the fundamental structures of the human mind, arguing that the structures that form the "deep grammar" of society originate in the mind and operate in people unconsciously. Lévi-Strauss took inspiration from mathematics.
Another concept used in structural anthropology came from the Prague school of linguistics, where Roman Jakobson and others analysed sounds based on the presence or absence of certain features (e.g., voiceless vs. voiced). Lévi-Strauss included this in his conceptualization of the universal structures of the mind, which he held to operate based on pairs of binary oppositions such as hot-cold, male-female, culture-nature, cooked-raw, or marriageable vs. tabooed women.
A third influence came from Marcel Mauss (1872–1950), who had written on gift-exchange systems. Building on Mauss, for instance, Lévi-Strauss argued for an alliance theory—that kinship systems are based on the exchange of women between groups—as opposed to the 'descent'-based theory described by Edward Evans-Pritchard and Meyer Fortes. Lévi-Strauss, who succeeded Mauss in his chair at the École Pratique des Hautes Études, saw his writings become widely popular in the 1960s and 1970s, and they gave rise to the term "structuralism" itself.
In Britain, authors such as Rodney Needham and Edmund Leach were highly influenced by structuralism. Authors such as Maurice Godelier and Emmanuel Terray combined Marxism with structural anthropology in France. In the United States, authors such as Marshall Sahlins and James Boon built on structuralism to provide their own analysis of human society. Structural anthropology fell out of favour in the early 1980s for a number of reasons. D'Andrade suggests that this was because it made unverifiable assumptions about the universal structures of the human mind. Authors such as Eric Wolf argued that political economy and colonialism should be at the forefront of anthropology. More generally, criticisms of structuralism by Pierre Bourdieu led to a concern with how cultural and social structures were changed by human agency and practice, a trend which Sherry Ortner has referred to as 'practice theory'.
One example is Douglas E. Foley's Learning Capitalist Culture (2010), in which he applied a mixture of structural and Marxist theories to his ethnographic fieldwork among high school students in Texas. Foley analysed how the students reached a shared goal through the lens of social solidarity when he observed "Mexicanos" and "Anglo-Americans" come together on the same football team to defeat the school's rivals. However, he also continually applied a Marxist lens and stated that he "wanted to wow peers with a new cultural Marxist theory of schooling."
Some anthropological theorists, however, while finding considerable fault with Lévi-Strauss's version of structuralism, did not turn away from a fundamental structural basis for human culture. The Biogenetic Structuralism group for instance argued that some kind of structural foundation for culture must exist because all humans inherit the same system of brain structures. They proposed a kind of neuroanthropology which would lay the foundations for a more complete scientific account of cultural similarity and variation by requiring an integration of cultural anthropology and neuroscience—a program that theorists such as Victor Turner also embraced.
In literary criticism and theory
In literary theory, structuralist criticism relates literary texts to a larger structure, which may be a particular genre, a range of intertextual connections, a model of a universal narrative structure, or a system of recurrent patterns or motifs.
The field of structuralist semiotics argues that there must be a structure in every text, which explains why it is easier for experienced readers than for inexperienced readers to interpret a text. Everything that is written seems to be governed by rules, or a "grammar of literature", that one learns in educational institutions and that is to be unmasked.
A potential problem for a structuralist interpretation is that it can be highly reductive; as scholar Catherine Belsey puts it: "the structuralist danger of collapsing all difference." An example of such a reading might be if a student concludes that the authors of West Side Story did not write anything "really" new, because their work has the same structure as Shakespeare's Romeo and Juliet. In both texts a girl and a boy fall in love (a "formula" with a symbolic operator between them would be "Boy + Girl") despite the fact that they belong to two groups that hate each other ("Boy's Group - Girl's Group" or "Opposing forces"), and the conflict is resolved by their deaths. Structuralist readings focus on how the structures of the single text resolve inherent narrative tensions. If a structuralist reading focuses on multiple texts, there must be some way in which those texts unify themselves into a coherent system. The versatility of structuralism is such that a literary critic could make the same claim about a story of two friendly families ("Boy's Family + Girl's Family") that arrange a marriage between their children despite the fact that the children hate each other ("Boy - Girl"), and the children then commit suicide to escape the arranged marriage; the justification is that the second story's structure is an 'inversion' of the first story's structure: the relationship between the values of love and the two pairs of parties involved has been reversed.
Structuralist literary criticism argues that the "literary banter of a text" can lie only in new structure, rather than in the specifics of character development and voice in which that structure is expressed. Literary structuralism often follows the lead of Vladimir Propp, Algirdas Julien Greimas, and Claude Lévi-Strauss in seeking out basic deep elements in stories, myths, and more recently, anecdotes, which are combined in various ways to produce the many versions of the ur-story or ur-myth.
There is considerable similarity between structural literary theory and Northrop Frye's archetypal criticism, which is also indebted to the anthropological study of myths. Some critics have also tried to apply the theory to individual works, but the effort to find unique structures in individual literary works runs counter to the structuralist program and has an affinity with New Criticism.
In economics
Justin Yifu Lin criticizes early structuralist economic systems and theories, discussing their failures. He writes: "The structuralism believes that the failure to develop advanced capital-intensive industries spontaneously in a developing country is due to market failures caused by various structural rigidities..." and "According to neoliberalism, the main reason for the failure of developing countries to catch up with developed countries was too much state intervention in the market, causing misallocation of resources, rent seeking and so forth." Lin argues instead that these failures are centred on the unlikelihood of such advanced industries developing quickly within developing countries.
New Structural Economics (NSE)
New structural economics is an economic development strategy developed by World Bank Chief Economist Justin Yifu Lin. The strategy combines ideas from both neoclassical economics and structural economics.
NSE studies two parts: the base and the superstructure. The base is a combination of forces and relations of production, consisting of, but not limited to, industry and technology, while the superstructure consists of hard infrastructure and institutions. This yields an explanation of how the base shapes the superstructure, which in turn determines transaction costs.
Interpretations and general criticisms
Structuralism is less popular today than other approaches, such as post-structuralism and deconstruction. Structuralism has often been criticized for being ahistorical and for favouring deterministic structural forces over the ability of people to act. As the political turbulence of the 1960s and 1970s (particularly the student uprisings of May 1968) began affecting academia, issues of power and political struggle moved to the center of public attention.
In the 1980s, deconstruction—and its emphasis on the fundamental ambiguity of language rather than its logical structure—became popular. By the end of the century, structuralism was seen as a historically important school of thought, but the movements that it spawned, rather than structuralism itself, commanded attention.
Several social theorists and academics have strongly criticized structuralism or even dismissed it. French hermeneutic philosopher Paul Ricœur (1969) criticized Lévi-Strauss for overstepping the limits of validity of the structuralist approach, ending up in what Ricœur described as "a Kantianism without a transcendental subject."
Anthropologist Adam Kuper (1973) argued that "'structuralism' came to have something of the momentum of a millennial movement and some of its adherents felt that they formed a secret society of the seeing in a world of the blind. Conversion was not just a matter of accepting a new paradigm. It was, almost, a question of salvation." Philip Noel Pettit (1975) called for an abandonment of "the positivist dream which Lévi-Strauss dreamed for semiology," arguing that semiology is not to be placed among the natural sciences. Cornelius Castoriadis (1975) criticized structuralism as failing to explain symbolic mediation in the social world; he viewed structuralism as a variation on the "logicist" theme, arguing that, contrary to what structuralists advocate, language—and symbolic systems in general—cannot be reduced to logical organizations on the basis of the binary logic of oppositions.
Critical theorist Jürgen Habermas (1985) accused structuralists like Foucault of being positivists; Foucault, while not an ordinary positivist per se, paradoxically uses the tools of science to criticize science, according to Habermas. (See Performative contradiction and Foucault–Habermas debate.) Sociologist Anthony Giddens (1993) is another notable critic; while Giddens draws on a range of structuralist themes in his theorizing, he dismisses the structuralist view that the reproduction of social systems is merely "a mechanical outcome."
See also
Antihumanism
Engaged theory
Genetic structuralism
Holism
Isomorphism
Post-structuralism
Russian formalism
Structuralist film theory
Structuration theory
Émile Durkheim
Structural functionalism
Structuralism (philosophy of science)
Structuralism (philosophy of mathematics)
Structuralism (psychology)
Structural change
Structuralist economics
Further reading
Angermuller, Johannes. 2015. Why There Is No Poststructuralism in France: The Making of an Intellectual Generation. London: Bloomsbury.
Roudinesco, Élisabeth. 2008. Philosophy in Turbulent Times: Canguilhem, Sartre, Foucault, Althusser, Deleuze, Derrida. New York: Columbia University Press.
Primary sources
Althusser, Louis. Reading Capital.
Barthes, Roland. S/Z.
Deleuze, Gilles. 1973. "À quoi reconnaît-on le structuralisme?" Pp. 299–335 in Histoire de la philosophie, Idées, Doctrines. Vol. 8: Le XXe siècle, edited by F. Châtelet. Paris: Hachette
de Saussure, Ferdinand. 1916. Course in General Linguistics.
Foucault, Michel. The Order of Things.
Jakobson, Roman. Essais de linguistique générale.
Lacan, Jacques. The Seminars of Jacques Lacan.
Lévi-Strauss, Claude. The Elementary Structures of Kinship.
—— 1958. Structural Anthropology [Anthropologie structurale]
—— 1964–1971. Mythologiques
Wilcken, Patrick, ed. Claude Levi-Strauss: The Father of Modern Anthropology.
Linguistic theories and hypotheses
Literary criticism
Philosophical anthropology
Psychoanalytic theory
Sociological theories
Theories of language
Formylation
Formylation refers to any chemical process in which a compound is functionalized with a formyl group (-CH=O). In organic chemistry, the term is most commonly used with regard to aromatic compounds (for example the conversion of benzene to benzaldehyde in the Gattermann–Koch reaction). In biochemistry the reaction is catalysed by enzymes such as formyltransferases.
Formylation generally involves the use of formylation agents, reagents that give rise to the CHO group. Among the many formylation reagents, formic acid and carbon monoxide are particularly important. Formylation provides a route to aldehydes (C-CH=O), formamides (N-CH=O), and formate esters (O-CH=O).
Formylation agents
A reagent that delivers the formyl group is called a formylating agent.
Formic acid
Dimethylformamide and phosphorus oxychloride in the Vilsmeier-Haack reaction.
Hexamethylenetetramine in the Duff reaction and the Sommelet reaction
Carbon monoxide and hydrochloric acid in the Gattermann-Koch reaction
Cyanides in the Gattermann reaction. This method synthesizes aromatic aldehydes using hydrogen chloride and hydrogen cyanide (or a metal cyanide such as zinc cyanide) in the presence of Lewis acid catalysts.
Chloroform in the Reimer-Tiemann reaction
Dichloromethyl methyl ether in Rieche formylation
A particularly important formylation process is hydroformylation, which converts alkenes to the homologated aldehyde.
Aromatic formylation
Formylation reactions are a form of electrophilic aromatic substitution and therefore work best with electron-rich starting materials. Phenols are a common substrate, as they readily deprotonate to give excellent phenoxide nucleophiles. Other electron-rich substrates, such as mesitylene, pyrrole, or fused aromatic rings, can also be expected to react. Benzene will react under aggressive conditions, but deactivated rings such as pyridine are difficult to formylate effectively.
Many formylation reactions select for the ortho product (e.g. salicylaldehyde), which is attributed to attraction between the phenoxide and the formylating reagent. Ionic interactions have been invoked for the cationic nitrogen centres in the Vilsmeier–Haack and Duff reactions, and for the electron-deficient carbene in the Reimer-Tiemann reaction; coordination to metals in high oxidation states has been invoked in the Casiraghi and Rieche formylations (cf. the Kolbe–Schmitt reaction).
The direct reaction between phenol and paraformaldehyde is possible via the Casiraghi formylation, but other methods apply masked forms of formaldehyde, in part to limit the formation of phenol formaldehyde resins. Aldehydes are strongly deactivating and as such phenols typically only react once. However certain reactions, such as the Duff reaction, can give double addition.
Formylation can be applied to other aromatic rings. As it generally begins with nucleophilic attack by the aromatic group, the electron density of the ring is an important factor. Some aromatic compounds, such as pyrrole, are known to formylate regioselectively.
Formylation of benzene rings can be achieved via the Gattermann reaction and Gattermann-Koch reaction. These involve strong acid catalysis and proceed in a manner similar to the Friedel–Crafts reaction.
Aliphatic formylation
Hydroformylation of alkenes is the most important method for obtaining aliphatic aldehydes. The reaction is largely restricted to industrial settings. Several specialty methods exist for laboratory-scale synthesis, including the Sommelet reaction, the Bouveault aldehyde synthesis and the Bodroux–Chichibabin aldehyde synthesis.
Formylation reactions in biology
In biochemistry, the addition of a formyl functional group is termed "formylation". A formyl functional group consists of a carbonyl bonded to hydrogen. When attached to an R group, a formyl group is called an aldehyde.
Formylation has been identified in several critical biological processes. Methionine was first discovered to be formylated in E. coli by Marcker and Sanger in 1964 and was later identified to be involved in the initiation of protein synthesis in bacteria and organelles. The formation of N-formylmethionine is catalyzed by the enzyme methionyl-tRNA transformylase. Additionally, two formylation reactions occur in the de novo biosynthesis of purines. These reactions are catalyzed by the enzymes glycinamide ribonucleotide (GAR) transformylase and 5-aminoimidazole-4-carboxyamide ribotide (AICAR) transformylase. More recently, formylation has been discovered to be a histone modification, which may modulate gene expression.
Methanogenesis
Formylation of methanofuran initiates the methanogenesis cycle. The formyl group is derived from carbon dioxide and is converted to methane.
Formylation in protein synthesis
In bacteria and organelles, the initiation of protein synthesis is signaled by the formation of formyl-methionyl-tRNA (tRNAfMet). This reaction is dependent on 10-formyltetrahydrofolate, and the enzyme methionyl-tRNA formyltransferase.
This reaction is not used by eukaryotes or Archaea, as tRNAfMet in non-bacterial cells is treated as intrusive material and quickly eliminated. After its production, tRNAfMet is delivered to the 30S subunit of the ribosome in order to start protein synthesis. fMet has the same codon as methionine; however, fMet is used only for the initiation of protein synthesis and is thus found only at the N-terminus of the protein, while unformylated methionine is used during the rest of translation. In E. coli, tRNAfMet is specifically recognized by initiation factor IF-2, as the formyl group blocks peptide bond formation at the N-terminus of methionine.
Once protein synthesis is accomplished, the formyl group on methionine can be removed by peptide deformylase. The methionine residue can be further removed by the enzyme methionine aminopeptidase.
Formylation reactions in purine biosynthesis
Two formylation reactions are required in the eleven-step de novo synthesis of inosine monophosphate (IMP), the precursor of the purine ribonucleotides AMP and GMP. Glycinamide ribonucleotide (GAR) transformylase catalyzes the formylation of GAR to formylglycinamide ribotide (FGAR) in the fourth reaction of the pathway. In the penultimate step of de novo purine biosynthesis, 5-aminoimidazole-4-carboxyamide ribotide (AICAR) is formylated to 5-formaminoimidazole-4-carboxamide ribotide (FAICAR) by AICAR transformylase.
GAR transformylase
PurN GAR transformylase is found in both eukaryotes and prokaryotes. However, a second GAR transformylase, PurT GAR transformylase, has been identified in E. coli. While the two enzymes share no sequence conservation and require different formyl donors, the specific activity and the Km for GAR are the same for both PurT and PurN GAR transformylase.
PurN GAR transformylase
PurN GAR transformylase (PDB entry 1CDE) uses the coenzyme N10-formyltetrahydrofolate (N10-formyl-THF) as a formyl donor to formylate the α-amino group of GAR. In eukaryotes, PurN GAR transformylase is part of a large multifunctional protein, but it is found as a single protein in prokaryotes.
Mechanism
The formylation reaction is proposed to occur through a direct transfer in which the amine group of GAR nucleophilically attacks N10-formyl-THF, creating a tetrahedral intermediate. As the α-amino group of GAR is relatively reactive, deprotonation of the nucleophile is proposed to be carried out by solvent. In the active site, Asn106, His108, and Asp144 are positioned to assist with formyl transfer. However, mutagenesis studies have indicated that these residues are not individually essential for catalysis, as only mutations of two or more residues inhibit the enzyme. Based on the structure, the negatively charged Asp144 is believed to increase the pKa of His108, allowing the protonated imidazolium group of His108 to enhance the electrophilicity of the N10-formyl-THF formyl group. Additionally, His108 and Asn106 are believed to stabilize the oxyanion formed in the transition state.
PurT GAR transformylase
PurT GAR transformylase requires formate as the formyl donor and ATP for catalysis. It has been estimated that PurT GAR transformylase carries out 14-50% of GAR formylations in E. coli. The enzyme is a member of the ATP-grasp superfamily of proteins.
Mechanism
A sequential mechanism has been proposed for PurT GAR transformylase in which a short-lived formyl phosphate intermediate first forms. This formyl phosphate intermediate then undergoes nucleophilic attack by the GAR amine for transfer of the formyl group. A formyl phosphate intermediate has been detected in mutagenesis experiments in which the mutant PurT GAR transformylase had a weak affinity for formate. Incubating PurT GAR transformylase with formyl phosphate, ADP, and GAR yields both ATP and FGAR, further indicating that formyl phosphate may be an intermediate, as it is kinetically and chemically competent to carry out the formylation reaction in the enzyme. An enzyme-phosphate intermediate preceding the formyl phosphate intermediate has also been proposed to form, based on positional isotope exchange studies. However, structural data indicate that the formate may be positioned for a direct attack on the γ-phosphate of ATP in the enzyme's active site to form the formyl phosphate intermediate.
AICAR transformylase
AICAR transformylase requires the coenzyme N10-formyltetrahydrofolate (N10-formyl-THF) as the formyl donor for the formylation of AICAR to FAICAR. However, AICAR transformylase and GAR transformylase do not share a high sequence similarity or structural homology.
Mechanism
The amine on AICAR is much less nucleophilic than its counterpart on GAR due to delocalization of electrons in AICAR through conjugation. Therefore, the N5 nucleophile of AICAR must be activated for the formylation reaction to occur. Histidine 268 and lysine 267 have been found to be essential for catalysis and are conserved in all AICAR transformylases. Histidine 268 is involved in deprotonation of the N5 nucleophile of AICAR, whereas lysine 267 is proposed to stabilize the tetrahedral intermediate.
Formylation in histone proteins
ε-Formylation is one of many post-translational modifications that occur on histone proteins and that have been shown to modulate chromatin conformation and gene activation.
Formylation has been identified on the Nε of lysine residues in histones and other proteins. This modification has been observed in linker histones and high-mobility-group proteins; it is highly abundant and is believed to play a role in the epigenetics of chromatin function. Formylated lysines have been shown to play a role in DNA binding. Additionally, formylation has been detected on histone lysines that are also known to be acetylated and methylated; formylation may therefore block other post-translational modifications.
Formylation is detected most frequently at 19 different modification sites on histone H1. Formylation strongly perturbs the genetic expression of the cell, which may contribute to diseases such as cancer. The development of these modifications may be due to oxidative stress.
In histone proteins, lysine is typically modified by histone acetyltransferases (HATs) and histone deacetylases (HDACs, also called KDACs).
The acetylation of lysine is fundamental to the regulation and expression of certain genes. Oxidative stress creates a significantly different environment in which acetyl-lysine can be quickly outcompeted by the formation of formyl-lysine due to the high reactivity of formylphosphate species. This situation is currently believed to be caused by oxidative DNA damage.
A mechanism for the formation of formylphosphate has been proposed, which is highly dependent on oxidatively damaged DNA and mainly driven by radical chemistry within the cell. The formylphosphate produced can then be used to formylate lysine. Oxidative stress is believed to influence the availability of lysine residues on the surface of proteins and thus their likelihood of being formylated.
Formylation in medicine
Formylation reactions as a drug target
Inhibition of the enzymes involved in purine biosynthesis has been explored as a potential chemotherapy strategy.
Cancer cells require high concentrations of purines to facilitate division and tend to rely on de novo synthesis rather than the nucleotide salvage pathway. Several folate-based inhibitors have been developed to inhibit the formylation reactions catalysed by GAR transformylase and AICAR transformylase. The first GAR transformylase inhibitor, lometrexol [(6R)-5,10-dideazatetrahydrofolate], was developed in the 1980s through a collaboration between Eli Lilly and academic laboratories.
Although similar in structure to N10-formyl-THF, lometrexol is incapable of carrying out one-carbon transfer reactions. Additionally, several GAR-based inhibitors of GAR transformylase have also been synthesized.
Development of folate-based inhibitors has proven particularly challenging because the inhibitors also down-regulate the enzyme folylpolyglutamate synthase, which adds additional γ-glutamates to monoglutamate folates and antifolates after they enter the cell in order to increase enzyme affinity. Loss of this added affinity can lead to antifolate resistance.
Leigh syndrome
Leigh syndrome is a neurodegenerative disorder that has been linked to a defect in an enzymatic formylation reaction. Leigh syndrome is typically associated with defects in oxidative phosphorylation, which occurs in the mitochondria. Exome sequencing has been used to identify a mutation in the gene coding for mitochondrial methionyl-tRNA formyltransferase (MTFMT) in patients with Leigh syndrome. The c.626C>T mutation identified in MTFMT in patients with Leigh syndrome symptoms is believed to alter exon splicing, leading to a frameshift and a premature stop codon. Individuals with the MTFMT c.626C>T mutation were found to have reduced fMet-tRNAMet levels and changes in the formylation level of mitochondrially translated COX1. This link provides evidence for the necessity of formylated methionine in the initiation of expression of certain mitochondrial genes.
See also
Hydroformylation
Hydroacylation
N-Formylmethionine
Proteins
Post-translational modification
Iridology
Iridology (also known as iridodiagnosis or iridiagnosis) is an alternative medicine technique whose proponents claim that patterns, colors, and other characteristics of the iris can be examined to determine information about a patient's systemic health. Practitioners match their observations to iris charts, which divide the iris into zones that correspond to specific parts of the human body. Iridologists see the eyes as "windows" into the body's state of health.
Iridologists claim they can use the charts to distinguish between healthy systems and organs in the body and those that are overactive, inflamed, or distressed. Iridologists claim this information demonstrates a patient's susceptibility towards certain illnesses, reflects past medical problems, or predicts later health problems.
As opposed to evidence-based medicine, iridology is not supported by quality research studies and is considered pseudoscience. The features of the iris are one of the most stable features on the human body throughout life. The stability of iris structures is the foundation of the biometric technology which uses iris recognition for identification purposes.
Methods
Iridologists generally use equipment such as a flashlight and magnifying glass, cameras or slit-lamp microscopes to examine a patient's irises for tissue changes, as well as features such as specific pigment patterns and irregular stromal architecture. The markings and patterns are compared to an iris chart that correlates zones of the iris with parts of the body. Typical charts divide the iris into approximately 80–90 zones. For example, the zone corresponding to the kidney is in the lower part of the iris, just before 6 o'clock. There are minor variations between charts' associations between body parts and areas of the iris.
According to iridologists, details in the iris reflect changes in the tissues of the corresponding body organs. One prominent practitioner, Bernard Jensen, described it thus: "Nerve fibers in the iris respond to changes in body tissues by manifesting a reflex physiology that corresponds to specific tissue changes and locations." This would mean that a bodily condition translates to a noticeable change in the appearance of the iris, but this has been disproven through many studies. (See section on Scientific research.) For example, acute inflammatory, chronic inflammatory and catarrhal signs may indicate involvement, maintenance, or healing of corresponding distant tissues, respectively. Other features that iridologists look for are contraction rings and Klumpenzellen, which may indicate various other health conditions, as interpreted in context.
History
Medical practitioners have been searching the eyes for signs of illness since at least 3,000 BCE.
The first explicit description of iridological principles such as homolaterality (without using the word iridology) are found in Chiromatica Medica, a famous work published in 1665 and reprinted in 1670 and 1691 by Philippus Meyeus (Philip Meyen von Coburg).
The first use of the word Augendiagnostik ("eye diagnosis", loosely translated as iridology) began with Ignaz von Peczely, a 19th-century Hungarian physician who is recognised as its founding father. The most common story is that he got the idea for this diagnostic tool after seeing similar streaks in the eyes of a man he was treating for a broken leg and the eyes of an owl whose leg von Peczely had broken many years before. At the First International Iridological Congress, Ignaz von Peczely's nephew, August von Peczely, dismissed this myth as apocryphal, and maintained that such claims were irreproducible.
The second 'father' of iridology is thought to be Nils Liljequist from Sweden, who suffered greatly from swollen lymph nodes. After a round of medication made from iodine and quinine, he observed many changes in the colour of his iris. This observation inspired him to create and publish in 1893 an atlas of the iris, known as the Diagnosis of the Eye, which contained 258 black-and-white illustrations and 12 colour illustrations.
The German contribution to the field of natural healing came from Pastor Emanuel Felke, a minister who developed a form of homeopathy for treating specific illnesses and described new iris signs in the early 1900s. Felke was, however, subject to long and bitter litigation. The Felke Institute in Gerlingen, Germany, was established as a leading center of iridological research and training.
Iridology became better known in the United States in the 1950s, when Bernard Jensen, an American chiropractor, began giving classes in his own method. This is in direct relationship with P. Johannes Thiel, Eduard Lahn (who became an American under the name of Edward Lane) and J Haskell Kritzer. Jensen emphasized the importance of the body's exposure to toxins, and the use of natural foods as detoxifiers. In 1979, in collaboration with two other iridologists, Jensen failed to establish the basis of their practice when they examined photographs of the eyes of 143 patients in an attempt to determine which ones had kidney impairments. Of the patients, 48 had been diagnosed with kidney disease, and the rest had normal kidney function. Based on their analysis of the patients' irises, the three iridologists could not detect which patients had kidney disease and which did not.
Criticism
Scientists dismiss iridology given that published studies have indicated a lack of success for its claims. To date, clinical data does not support correlation between illness in the body and coinciding observable changes in the iris. In controlled experiments, practitioners of iridology have performed statistically no better than chance in determining the presence of a disease or condition solely through observation of the iris. James Randi notes that iridology is unfalsifiable because iridologists do not provide a distinction between current physical defects and "future" defects; thus, the iridologist cannot be proven wrong.
Iridology is based on a premise that is at odds with the fact that the iris does not undergo substantial changes in an individual's life. Iris texture is a phenotypical feature that develops during gestation and remains unchanged after infancy. There is no evidence for changes in the iris pattern other than variations in pigmentation in the first years of life and variations caused by glaucoma treatment. The stability of iris structures is the foundation of the biometric technology which uses iris recognition for identification purposes.
Scientific research into iridology
Well-controlled scientific evaluation of iridology has shown entirely negative results, with all rigorous double blind tests failing to find any statistical significance to its claims.
In 2015 the Australian Government's Department of Health published the results of a review of alternative therapies that sought to determine if any were suitable for being covered by health insurance. Iridology was one of 17 therapies evaluated for which no clear evidence of effectiveness was found.
A German study from 1957 which took more than 4,000 iris photographs of more than 1,000 people concluded that iridology was not useful as a diagnostic tool.
In 1979, Bernard Jensen, a leading American iridologist, and two other iridology proponents failed to establish the basis of their practice when they examined photographs of the eyes of 143 patients in an attempt to determine which ones had kidney impairments. Of the patients, 48 had been diagnosed with kidney disease, and the rest had normal kidney function. Based on their analysis of the patients' irises, the three iridologists could not detect which patients had kidney disease and which did not. One iridologist, for example, decided that 88% of the normal patients had kidney disease, while another judged through his iris analysis that 74% of patients who needed artificial kidney treatment were normal.
Another study was published in the British Medical Journal which selected 39 patients who were due to have their gall bladder removed the following day, because of suspected gallstones. The study also selected a group of people who did not have diseased gall bladders to act as a control. A group of five iridologists examined a series of slides of both groups' irises. The iridologists could not correctly identify which patients had gall bladder problems and which had healthy gall bladders. For example, one of the iridologists diagnosed 49% of the patients with gall stones as having them and 51% as not having them. The author concluded: "this study showed that iridology is not a useful diagnostic aid."
Edzard Ernst raised the question in 2000:Does iridology work? ... This search strategy resulted in 77 publications on the subject of iridology. ... All of the uncontrolled studies and several of the unmasked experiments suggested that iridology was a valid diagnostic tool. The discussion that follows refers to the 4 controlled, masked evaluations of the diagnostic validity of iridology. ... In conclusion, few controlled studies with masked evaluation of diagnostic validity have been published. None have found any benefit from iridology.A 2005 study tested the usefulness of iridology in diagnosing common forms of cancer. An experienced iridology practitioner examined the eyes of 110 total subjects, of which 68 people had proven cancers of the breast, ovary, uterus, prostate, or colorectum, and 42 for whom there was no medical evidence of cancer. The practitioner, who was unaware of their gender or medical details, was asked to suggest a diagnosis for each person and his results were then compared with each subject's known medical diagnosis. The study conclusion was that "Iridology was of no value in diagnosing the cancers investigated in this study."
Regulation, licensure, and certification
In Canada and the United States, iridology is not regulated or licensed by any governmental agency. Numerous organizations offer certification courses.
Possible harms
Medical errors—treatment for conditions diagnosed via this method which do not actually exist (false positive result) or a false sense of security when a serious condition is not diagnosed by this method (false negative result)—could lead to improper or delayed treatment and even loss of life.
See also
Moleosophy
Phrenology
References
External links
The Skeptics Dictionary
Quackwatch
Your-Doctor.com
James Randi Educational Foundation
Alternative medical diagnostic methods
Pseudoscience
Human iris
Eye color
Mathematical physics
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics.
Scope
There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods.
Classical mechanics
Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles).
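As a compact illustration of the two formulations just described (using generic notation for generalized coordinates q_i, conjugate momenta p_i, a Lagrangian L, and a Hamiltonian H, chosen here for exposition rather than taken from any particular text), the equations of motion read:

```latex
% Euler–Lagrange equations (Lagrangian mechanics)
\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i} = 0

% Hamilton's equations (Hamiltonian mechanics), with H = \sum_i p_i \dot{q}_i - L
\dot{q}_i = \frac{\partial H}{\partial p_i},
\qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}
```

Noether's theorem then attaches a conserved quantity to each continuous symmetry of L; for example, invariance under time translation yields conservation of H, the energy.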
Partial differential equations
Within mathematics proper, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics.
Quantum theory
The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty.
Relativity and quantum relativistic theories
The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important.
Statistical mechanics
Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon the Hamiltonian mechanics (or its quantum version) and it is closely related with the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics.
Usage
The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being
"the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature".
Mathematical vs. theoretical physics
The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics.
On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians.
Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation).
The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory.
Prominent mathematical physicists
Before Newton
There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance.
In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence, known in Greek as aether (roughly, "pure air")—which was the pure substance beyond the sublunary sphere, and thus was celestial entities' pure composition. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion.
An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having introduced experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, founding the central concepts of what would become today's classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object.
René Descartes famously developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance brought the demise of Aristotelian physics. Descartes sought to formalize mathematical reasoning in science, and developed Cartesian coordinates for geometrically plotting locations in 3D space and marking their progressions along the flow of time.
An older contemporary of Newton, Christiaan Huygens, was the first to idealize a physical problem by a set of parameters and the first to fully mathematize a mechanistic explanation of unobservable physical phenomena, and for these reasons Huygens is considered the first theoretical physicist and one of the founders of modern mathematical physics.
Descartes, Newtonian physics and post Newtonian
Descartes sought to formalize mathematical reasoning in science, and developed Cartesian coordinates for geometrically plotting locations in 3D space and marking their progressions along the flow of time. Before Descartes, geometry and the description of space followed the constructive model of the ancient Greek mathematicians: geometrical shapes formed the building blocks for describing and reasoning about space, with time treated as a separate entity. Descartes introduced a new way to describe space using algebra, until then a mathematical tool used mostly for commercial transactions. Cartesian coordinates also put time on a par with space, as just another coordinate axis. This essential mathematical framework underlies all modern physics and all further mathematical frameworks developed in subsequent centuries.
In this era, important concepts in calculus such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding extrema and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed some concepts in calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in physics. He was extremely successful in his application of calculus to the theory of motion. Newton's theory of motion, shown in his Mathematical Principles of Natural Philosophy, published in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity.
In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813) for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of analytical dynamics, called Hamiltonian dynamics, was also made by the Irish physicist, astronomer and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms.
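As a sketch of the approach Fourier introduced, written in modern notation (the rod length ℓ and the homogeneous boundary conditions are illustrative assumptions), the one-dimensional heat equation and its Fourier-series solution are:

```latex
% Heat equation for temperature u(x,t) on 0 < x < \ell, diffusivity \alpha
\frac{\partial u}{\partial t} = \alpha \frac{\partial^{2} u}{\partial x^{2}},
\qquad u(0,t) = u(\ell,t) = 0

% Separated-variables solution built from the initial profile u(x,0)
u(x,t) = \sum_{n=1}^{\infty} b_n\,
         e^{-\alpha \left(n\pi/\ell\right)^{2} t}
         \sin\frac{n\pi x}{\ell},
\qquad
b_n = \frac{2}{\ell}\int_{0}^{\ell} u(x,0)\,\sin\frac{n\pi x}{\ell}\,\mathrm{d}x
```

Each sine mode decays independently, which is the prototype for the later, more general technique of solving linear partial differential equations by integral transforms.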
Into the early 19th century, the following mathematicians in France, Germany, and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism.
A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Jean-Augustin Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. Mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found consequent of Maxwell's field. Later, radiation and then today's known electromagnetic spectrum were found also consequent of this electromagnetic field.
The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague mathematician Carl Gustav Jacobi (1804–1851) in particular referring to canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics.
Relativistic
By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928].
In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared.
Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object.
Cartesian coordinates used rectilinear axes. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones. Gauss also introduced another key tool of modern physics, the curvature. Gauss's work was limited to two dimensions; extending it to three or more dimensions introduced considerable complexity and required the (not yet invented) machinery of tensors. It was Riemann who extended curved geometry to N dimensions. In 1908, Einstein's former mathematics professor Hermann Minkowski applied this geometric construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor, in the vicinity of either mass or energy. The concept of Newton's gravity, "two masses attract each other", is replaced by the geometrical argument that mass deforms the curvature of spacetime and that freely falling particles with mass move along geodesic curves in that spacetime. (Riemannian geometry already existed before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) (Under special relativity—a special case of general relativity—even massless energy exerts a gravitational effect by its mass equivalence, locally "curving" the geometry of the four unified dimensions of space and time.)
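In modern notation, the geometric picture described above is usually summarized by two standard equations (a textbook presentation rather than anything specific to this article): the geodesic equation for free fall and the Einstein field equations relating curvature to energy and momentum.

```latex
% Geodesic equation: free fall in a spacetime with metric g_{\mu\nu}
\frac{\mathrm{d}^{2} x^{\mu}}{\mathrm{d}\tau^{2}}
  + \Gamma^{\mu}_{\ \alpha\beta}\,
    \frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\tau}\,
    \frac{\mathrm{d}x^{\beta}}{\mathrm{d}\tau} = 0

% Einstein field equations: geometry (left) sourced by energy–momentum (right)
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```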
Quantum
Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta. He introduced the first non-naïve definition of quantization in this paper. The development of early quantum physics was followed by a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space. That is called Hilbert space (introduced by the mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of a generalization of Euclidean space and the study of integral equations), and rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, the spectral theory in particular (introduced by David Hilbert, who investigated quadratic forms with infinitely many variables; many years later it was revealed that his spectral theory is associated with the spectrum of the hydrogen atom, an application that surprised him). Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron.
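The Hilbert-space framework sketched above can be condensed into two standard statements, written here in generic notation: time evolution of a state vector is generated by a self-adjoint Hamiltonian operator, and measurement outcomes for a self-adjoint observable are probabilistic, with expectation values given by inner products.

```latex
% Schrödinger equation: unitary time evolution of a state \psi(t) in Hilbert space
i\hbar\,\frac{\partial}{\partial t}\,\psi(t) = \hat{H}\,\psi(t)

% Expectation value of a self-adjoint observable \hat{A} in a normalized state \psi
\langle \hat{A} \rangle = \langle \psi \mid \hat{A}\,\psi \rangle
```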
List of prominent contributors to mathematical physics in the 20th century
Prominent contributors to the 20th century's mathematical physics include (ordered by birth date):
William Thomson (Lord Kelvin) (1824–1907)
Oliver Heaviside (1850–1925)
Jules Henri Poincaré (1854–1912)
David Hilbert (1862–1943)
Arnold Sommerfeld (1868–1951)
Constantin Carathéodory (1873–1950)
Albert Einstein (1879–1955)
Emmy Noether (1882–1935)
Max Born (1882–1970)
George David Birkhoff (1884–1944)
Hermann Weyl (1885–1955)
Louis de Broglie (1892–1987)
Satyendra Nath Bose (1894–1974)
Norbert Wiener (1894–1964)
John Lighton Synge (1897–1995)
Wolfgang Pauli (1900–1958)
Paul Dirac (1902–1984)
Eugene Wigner (1902–1995)
Andrey Kolmogorov (1903–1987)
Lars Onsager (1903–1976)
John von Neumann (1903–1957)
Sin-Itiro Tomonaga (1906–1979)
Hideki Yukawa (1907–1981)
Nikolay Nikolayevich Bogolyubov (1909–1992)
Subrahmanyan Chandrasekhar (1910–1995)
Mário Schenberg (1914–1990)
Mark Kac (1914–1984)
Julian Schwinger (1918–1994)
Richard Phillips Feynman (1918–1988)
Irving Ezra Segal (1918–1998)
Ryogo Kubo (1920–1995)
Arthur Strong Wightman (1922–2013)
Chen-Ning Yang (1922–)
Rudolf Haag (1922–2016)
Freeman John Dyson (1923–2020)
Martin Gutzwiller (1925–2014)
Abdus Salam (1926–1996)
Jürgen Moser (1928–1999)
Michael Francis Atiyah (1929–2019)
Joel Louis Lebowitz (1930–)
Roger Penrose (1931–)
Elliott Hershel Lieb (1932–)
Yakir Aharonov (1932–)
Sheldon Glashow (1932–)
Steven Weinberg (1933–2021)
Ludvig Dmitrievich Faddeev (1934–2017)
David Ruelle (1935–)
Yakov Grigorevich Sinai (1935–)
Vladimir Igorevich Arnold (1937–2010)
Arthur Michael Jaffe (1937–)
Roman Wladimir Jackiw (1939–)
Leonard Susskind (1940–)
Rodney James Baxter (1940–)
Michael Victor Berry (1941–)
Giovanni Gallavotti (1941–)
Stephen William Hawking (1942–2018)
Jerrold Eldon Marsden (1942–2010)
Michael C. Reed (1942–)
John Michael Kosterlitz (1943–)
Israel Michael Sigal (1945–)
Alexander Markovich Polyakov (1945–)
Barry Simon (1946–)
Herbert Spohn (1946–)
John Lawrence Cardy (1947–)
Giorgio Parisi (1948-)
Abhay Ashtekar (1949-)
Edward Witten (1951–)
F. Duncan Haldane (1951–)
Ashoke Sen (1956–)
Juan Martín Maldacena (1968–)
See also
International Association of Mathematical Physics
Notable publications in mathematical physics
List of mathematical physics journals
Gauge theory (mathematics)
Relationship between mathematics and physics
Theoretical, computational and philosophical physics
Notes
References
Further reading
Generic works
Textbooks for undergraduate studies
Mathematical Methods for Physicists (7th ed.), with Solutions for Mathematical Methods for Physicists (7th ed.), archive.org
Hassani, Sadri (2009), Mathematical Methods for Students of Physics and Related Fields, (2nd ed.), New York, Springer, eISBN 978-0-387-09504-2
Textbooks for graduate studies
Specialized texts in classical physics
Specialized texts in modern physics
External links
Hydrogen bond
In chemistry, a hydrogen bond (or H-bond) is primarily an electrostatic force of attraction between a hydrogen (H) atom which is covalently bonded to a more electronegative "donor" atom or group (Dn), and another electronegative atom bearing a lone pair of electrons—the hydrogen bond acceptor (Ac). Such an interacting system is generally denoted Dn−H···Ac, where the solid line denotes a polar covalent bond, and the dotted or dashed line indicates the hydrogen bond. The most frequent donor and acceptor atoms are the period 2 elements nitrogen (N), oxygen (O), and fluorine (F).
Hydrogen bonds can be intermolecular (occurring between separate molecules) or intramolecular (occurring among parts of the same molecule). The energy of a hydrogen bond depends on the geometry, the environment, and the nature of the specific donor and acceptor atoms and can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, and weaker than fully covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins. Hydrogen bonds are responsible for holding materials such as paper and felted wool together, and for causing separate sheets of paper to stick together after becoming wet and subsequently drying.
The hydrogen bond is also responsible for many of the physical and chemical properties of compounds of N, O, and F that seem unusual compared with other similar structures. In particular, intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group-16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids.
Bonding
Definitions and general characteristics
In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor. This nomenclature is recommended by the IUPAC. The hydrogen of the donor is protic and therefore can act as a Lewis acid and the acceptor is the Lewis base. Hydrogen bonds are represented as a Dn−H···Ac system, where the dots represent the hydrogen bond. Liquids that display hydrogen bonding (such as water) are called associated liquids.
Hydrogen bonds arise from a combination of electrostatics (multipole-multipole and multipole-induced multipole interactions), covalency (charge transfer by orbital overlap), and dispersion (London forces).
In weaker hydrogen bonds, hydrogen atoms tend to bond to elements such as sulfur (S) or chlorine (Cl); even carbon (C) can serve as a donor, particularly when the carbon or one of its neighbors is electronegative (e.g., in chloroform, aldehydes and terminal acetylenes). Gradually, it was recognized that there are many examples of weaker hydrogen bonding involving donors other than N, O, or F and/or acceptors Ac with electronegativity approaching that of hydrogen (rather than being much more electronegative). Although weak (≈1 kcal/mol), "non-traditional" hydrogen bonding interactions are ubiquitous and influence structures of many kinds of materials.
The definition of hydrogen bonding has gradually broadened over time to include these weaker attractive interactions. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal Pure and Applied Chemistry. This definition specifies that the hydrogen bond is an attractive interaction between a hydrogen atom from a molecule or molecular fragment X−H, in which X is more electronegative than H, and an atom or group of atoms in the same or a different molecule, in which there is evidence of bond formation.
Bond strength
Hydrogen bonds can vary in strength from weak (1–2 kJ/mol) to strong (161.5 kJ/mol in the bifluoride ion, [HF2]−). Typical enthalpies in vapor include:
F−H···:F (161.5 kJ/mol or 38.6 kcal/mol), illustrated uniquely by the bifluoride ion
O−H···:N (29 kJ/mol or 6.9 kcal/mol), illustrated by water–ammonia
O−H···:O (21 kJ/mol or 5.0 kcal/mol), illustrated by water–water and alcohol–alcohol
N−H···:N (13 kJ/mol or 3.1 kcal/mol), illustrated by ammonia–ammonia
N−H···:O (8 kJ/mol or 1.9 kcal/mol), illustrated by water–amide
HO−H···:OH3+ (18 kJ/mol or 4.3 kcal/mol)
The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution. The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds also in complicated molecules is crystallography, sometimes also NMR-spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii can be taken as indication of the hydrogen bond strength. One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moderate, and weak, respectively.
Hydrogen bonds involving C-H bonds are both very rare and weak.
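The enthalpies listed above are quoted both in kJ/mol and kcal/mol, and the two scales differ only by the factor 1 kcal = 4.184 kJ. A minimal sketch of the conversion and of the strong/moderate/weak binning described above (the dictionary keys simply restate the list above and are not additional data):

```python
# Convert the typical vapor-phase hydrogen-bond enthalpies quoted above from
# kJ/mol to kcal/mol and bin them with the (somewhat arbitrary) classification.
KJ_PER_KCAL = 4.184

bond_enthalpies_kj = {
    "F-H...F (bifluoride)": 161.5,
    "O-H...N (water-ammonia)": 29.0,
    "O-H...O (water-water)": 21.0,
    "N-H...N (ammonia-ammonia)": 13.0,
    "N-H...O (water-amide)": 8.0,
}

def classify(kcal_per_mol: float) -> str:
    """Strong / moderate / weak bins from the scheme quoted in the text."""
    if kcal_per_mol >= 15.0:
        return "strong"
    if kcal_per_mol >= 5.0:
        return "moderate"
    return "weak"

for bond, kj in bond_enthalpies_kj.items():
    kcal = kj / KJ_PER_KCAL
    print(f"{bond:28s} {kj:6.1f} kJ/mol = {kcal:5.1f} kcal/mol ({classify(kcal)})")
```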
Resonance assisted hydrogen bond
The resonance assisted hydrogen bond (commonly abbreviated as RAHB) is a strong type of hydrogen bond. It is characterized by the π-delocalization that involves the hydrogen and cannot be properly described by the electrostatic model alone. This description of the hydrogen bond has been proposed to describe the unusually short donor–acceptor distances generally observed in such bonds.
Structural details
The Dn−H distance is typically ≈110 pm, whereas the H···Ac distance is ≈160 to 200 pm. The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor; hydrogen bond angles between a hydrofluoric acid donor and various acceptors, for example, have been determined experimentally.
Spectroscopy
Strong hydrogen bonds are revealed by downfield shifts in the 1H NMR spectrum. For example, the acidic proton in the enol tautomer of acetylacetone appears at δ 15.5, which is about 10 ppm downfield of a conventional alcohol.
In the IR spectrum, hydrogen bonding shifts the X−H stretching frequency to lower energy (i.e. the vibration frequency decreases). This shift reflects a weakening of the X−H bond. Certain hydrogen bonds - improper hydrogen bonds - show a blue shift of the stretching frequency and a decrease in the bond length. H-bonds can also be measured by IR vibrational mode shifts of the acceptor. The amide I mode of backbone carbonyls in α-helices shifts to lower frequencies when they form H-bonds with side-chain hydroxyl groups. The dynamics of hydrogen bond structures in water can be probed by this OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.
Theoretical considerations
Hydrogen bonding is of persistent theoretical interest. According to one modern description, the O:H−O bond integrates both the intermolecular O:H lone pair ":" nonbond and the intramolecular H−O polar-covalent bond, associated with O−O repulsive coupling.
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue hydrogen bond between guanine and cytosine is much stronger in comparison to the bond between the adenine-thymine pair.
Theoretically, the bond strength of the hydrogen bonds can be assessed using NCI index, non-covalent interactions index, which allows a visualization of these non-covalent interactions, as its name indicates, using the electron density of the system.
Interpretations of the anisotropies in the Compton profile of ordinary ice claim that the hydrogen bond is partly covalent. However, this interpretation was challenged and subsequently clarified.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds. However, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This interpretation remained controversial until NMR techniques demonstrated information transfer between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character.
History
The concept of hydrogen bonding once was challenging. Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cited the work of a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."
Hydrogen bonds in small molecules
Water
A ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. The simplest case is a pair of water molecules with one hydrogen bond between them, which is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.
Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four.
The number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. Another study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. Defining and counting the hydrogen bonds is not straightforward however.
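Because defining and counting hydrogen bonds in simulations is not straightforward, analyses of water models such as TIP4P typically adopt a geometric criterion. The sketch below implements one commonly used convention, an O(donor)···O(acceptor) distance below about 3.5 Å together with an H–O(donor)···O(acceptor) angle below about 30°; the cutoff values and the sample coordinates are illustrative assumptions, not parameters taken from the studies cited above.

```python
import math

def is_hydrogen_bonded(donor_o, donor_h, acceptor_o,
                       r_cut=3.5, angle_cut=30.0):
    """Geometric hydrogen-bond test between two water molecules.

    donor_o, donor_h, acceptor_o: (x, y, z) coordinates in angstroms.
    r_cut: maximum O(donor)...O(acceptor) distance in angstroms.
    angle_cut: maximum H-O(donor)...O(acceptor) angle in degrees.
    """
    r_oo = math.dist(acceptor_o, donor_o)
    if r_oo > r_cut:
        return False
    oo = [a - d for a, d in zip(acceptor_o, donor_o)]
    oh = [h - d for h, d in zip(donor_h, donor_o)]
    r_oh = math.dist(donor_h, donor_o)
    cos_theta = sum(x * y for x, y in zip(oo, oh)) / (r_oo * r_oh)
    theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
    return theta < angle_cut

# Illustrative geometry: a donor O-H pointing roughly at an acceptor O ~2.8 A away.
print(is_hydrogen_bonded((0.00, 0.0, 0.0),    # donor oxygen
                         (0.96, 0.0, 0.0),    # donor hydrogen
                         (2.80, 0.2, 0.0)))   # acceptor oxygen -> True
```

In a full analysis this test is applied to every donor–acceptor pair in each simulation frame, and the per-molecule average gives numbers comparable to those quoted above.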
Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10−11 seconds, or 10 picoseconds.
Bifurcated and over-coordinated hydrogen bonds in water
A single hydrogen atom can participate in two hydrogen bonds. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.
Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens.
Other liquids
Hydrogen bonding is not limited to water. For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds; (ammonia has the opposite problem: three hydrogen atoms but only one lone pair). As a result, HF associates into zigzag chains:
H−F···H−F···H−F
Further manifestations of solvent hydrogen bonding
Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding.
Negative azeotropy of mixtures of HF and water.
The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds.
Dramatically higher boiling points of NH3, H2O, and HF compared to the heavier analogues PH3, H2S, and HCl, where hydrogen-bonding is absent.
Viscosity of anhydrous phosphoric acid and of glycerol.
Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law.
Pentamer formation of water and alcohols in apolar solvents.
Hydrogen bonds in polymers
Hydrogen bonding plays an important role in determining the three-dimensional structures and the properties adopted by many proteins. Compared to the C−C, C−O, and C−N bonds that comprise most polymers, hydrogen bonds are far weaker, perhaps 5%. Thus, hydrogen bonds can be broken by chemical or mechanical means while retaining the basic structure of the polymer backbone. This hierarchy of bond strengths (covalent bonds being stronger than hydrogen bonds, which are in turn stronger than van der Waals forces) is relevant in the properties of many materials.
DNA
In these macromolecules, bonding between parts of the same macromolecule causes it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.
Proteins
In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, then a 310 helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups. (See also protein folding).
Bifurcated H-bond systems are common in alpha-helical transmembrane proteins between the backbone amide of residue i as the H-bond acceptor and two H-bond donors from residue i + 4: the backbone amide and a side-chain hydroxyl or thiol. The energy preference of the bifurcated H-bond hydroxyl or thiol system is −3.4 kcal/mol or −2.6 kcal/mol, respectively. This type of bifurcated H-bond provides an intrahelical H-bonding partner for polar side-chains, such as serine, threonine, and cysteine within the hydrophobic membrane environments.
The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state, in a concentration dependent manner. While the prevalent explanation for osmolyte action relies on excluded volume effects that are entropic in nature, circular dichroism (CD) experiments have shown osmolyte to act through an enthalpic effect. The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Computer molecular dynamics simulations suggest that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.
Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.
A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of water through proteins or ligand binding. The exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.
Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape.
Other polymers
The properties of many polymers are affected by hydrogen bonds within and/or between the chains. Prominent examples include cellulose and its derived fibers, such as cotton and flax. In nylon, hydrogen bonds between carbonyl and the amide NH effectively link adjacent chains, which gives the material mechanical strength. Hydrogen bonds also affect the aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen-bond networks make both polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others. Thus nylons are more sensitive than aramids, and nylon 6 more sensitive than nylon-11.
Symmetric hydrogen bond
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion . Due to severe steric constraint, the protonated form of Proton Sponge (1,8-bis(dimethylamino)naphthalene) and its derivatives also have symmetric hydrogen bonds, although in the case of protonated Proton Sponge, the assembly is bent.
Dihydrogen bond
The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography; however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.
Application to drugs
The hydrogen bond is relevant to drug design. According to Lipinski's rule of five the majority of orally active drugs have no more than five hydrogen bond donors and fewer than ten hydrogen bond acceptors. These interactions exist between nitrogen–hydrogen and oxygen–hydrogen centers. Many drugs do not, however, obey these "rules".
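As an illustration of how the donor and acceptor counts behind Lipinski's rule of five are used in practice, the sketch below counts them for a single molecule. It assumes the third-party RDKit cheminformatics package is installed; the aspirin SMILES string is an illustrative input chosen for the example, not something taken from this text.

```python
# Hydrogen-bond donor/acceptor counting for a rule-of-five style check,
# using RDKit (assumed to be installed, e.g. via `pip install rdkit`).
from rdkit import Chem
from rdkit.Chem import Lipinski

def hbond_rule_check(smiles: str) -> dict:
    """Count H-bond donors/acceptors and apply the thresholds quoted above."""
    mol = Chem.MolFromSmiles(smiles)
    donors = Lipinski.NumHDonors(mol)        # O-H and N-H centers
    acceptors = Lipinski.NumHAcceptors(mol)  # N and O acceptor centers
    return {
        "donors": donors,
        "acceptors": acceptors,
        "within_rule": donors <= 5 and acceptors < 10,
    }

print(hbond_rule_check("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin: well within the rule
```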
References
Further reading
George A. Jeffrey. An Introduction to Hydrogen Bonding (Topics in Physical Chemistry). Oxford University Press, US (March 13, 1997).
External links
The Bubble Wall (Audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
isotopic effect on bond dynamics
Chemical bonding
Hydrogen physics
Supramolecular chemistry
Intermolecular forces
Reaction rate constant
In chemical kinetics, a reaction rate constant or reaction rate coefficient is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it to the concentration of reactants.
For a reaction between reactants A and B to form a product C,
a A + b B → c C
where
A and B are reactants
C is a product
a, b, and c are stoichiometric coefficients,
the reaction rate is often found to have the form:
r = k[A]^m[B]^n
Here k is the reaction rate constant, which depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.)
The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally.
The sum of m and n, that is (m + n), is called the overall order of the reaction.
Elementary steps
For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step

A → products

the reaction rate is described by r = k1[A], where k1 is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of k1 ≤ ~10^13 s−1.
For a bimolecular step

A + B → products

the reaction rate is described by r = k2[A][B], where k2 is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of k2 ≤ ~10^10 M−1s−1.
For a termolecular step

A + B + C → products

the reaction rate is described by r = k3[A][B][C], where k3 is a termolecular rate constant.

There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + N2 → O3 + N2. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen–iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas).
Relationship to other parameters
For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: t1/2 = ln 2 / k. Transition state theory gives a relationship between the rate constant and the Gibbs free energy of activation, ΔG‡, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both the enthalpic and entropic changes that need to be achieved for the reaction to take place. The result from transition state theory is k = (kB T / h) e^(−ΔG‡/RT), where h is the Planck constant, kB the Boltzmann constant, and R the molar gas constant. As useful rules of thumb, a first-order reaction with a rate constant of 10−4 s−1 will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol.
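As a quick numerical check of these rules of thumb, the two relationships can be evaluated directly. The following Python sketch is illustrative only: it assumes a single first-order step, takes the transmission coefficient as unity, and inverts the transition-state expression quoted above for ΔG‡ at k = 10^−4 s−1 and 25 °C.

```python
import math

R = 8.314            # J/(mol*K), molar gas constant
KB = 1.380649e-23    # J/K, Boltzmann constant
H = 6.62607015e-34   # J*s, Planck constant

def half_life(k):
    """Half-life of a first-order reaction with rate constant k (in s^-1)."""
    return math.log(2) / k

def gibbs_activation(k, T=298.15):
    """Invert k = (kB*T/h)*exp(-dG/(R*T)) for dG (J/mol), assuming kappa = 1."""
    return -R * T * math.log(k * H / (KB * T))

k = 1e-4  # s^-1, the rate constant used in the rule of thumb
print(f"t1/2 = {half_life(k) / 3600:.2f} h")                          # ~1.9 h
print(f"dG  = {gibbs_activation(k) / 4184:.1f} kcal/mol at 25 C")      # ~23 kcal/mol
```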
Dependence on temperature
The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the reaction rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by:

k(T) = A e^(−Ea/RT)

The reaction rate is given by:

r = A e^(−Ea/RT) [A]^m [B]^n

where Ea is the activation energy, R is the gas constant, and m and n are experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with e^(−Ea/RT). The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A); it takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below).
Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory:

k(T) = κ (kB T / h) (c⊖)^(1−M) e^(−ΔG‡/RT)
where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kBT/h gives the frequency of molecular collision.
The factor (c⊖)1-M ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. Here, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol L−1 = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory.
The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step.
Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations:

k(T) = P Z e^(−ΔE/RT)

where P is the steric (or probability) factor, Z is the collision frequency, and ΔE is the energy input required to overcome the activation barrier. Of note, Z ∝ T^(1/2), making the temperature dependence of k different from both the Arrhenius and Eyring models.
Comparison of models
All three theories model the temperature dependence of k using an equation of the form

k(T) = C T^α e^(−ΔE/RT)

for some constant C, where α = 0, 1/2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Hence, all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system.
Units
The units of the rate constant depend on the overall order of reaction.
If concentration is measured in units of mol·L−1 (sometimes abbreviated as M), then
For order (m + n), the rate constant has units of mol^(1−(m+n))·L^((m+n)−1)·s^(−1) (or M^(1−(m+n))·s^(−1))
For order zero, the rate constant has units of mol·L^(−1)·s^(−1) (or M·s^(−1))
For order one, the rate constant has units of s^(−1)
For order two, the rate constant has units of L·mol^(−1)·s^(−1) (or M^(−1)·s^(−1))
For order three, the rate constant has units of L^2·mol^(−2)·s^(−1) (or M^(−2)·s^(−1))
For order four, the rate constant has units of L^3·mol^(−3)·s^(−1) (or M^(−3)·s^(−1))
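A small helper can generate these units programmatically. This is only a sketch that restates the list above, assuming concentrations in mol/L (M) and time in seconds.

```python
def rate_constant_units(overall_order):
    """Return the units of a rate constant for a given overall reaction order,
    with concentration in mol/L (M) and time in seconds."""
    if overall_order == 1:
        return "s^-1"
    return f"M^{1 - overall_order}*s^-1"   # equivalently mol^(1-n)*L^(n-1)*s^-1

for n in range(5):
    print(f"order {n}: {rate_constant_units(n)}")
```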
Plasma and gases
Calculation of the rate constants for the generation and relaxation of electronically and vibrationally excited particles is of significant importance. Such rate constants are used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principles-based models should be used for such calculations, and the work can be done with the help of computer simulation software.
Rate constant calculations
Rate constants can be calculated for elementary reactions by molecular dynamics simulations.
One possible approach is to calculate the mean residence time of the molecule in the reactant state. Although this is feasible for small systems with short residence times, this approach is not widely applicable, as reactions are often rare events on the molecular scale.
One simple approach to overcome this problem is Divided Saddle Theory. Other methods, such as the Bennett–Chandler procedure and milestoning, have also been developed for rate constant calculations.
Divided saddle theory
The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that we can apply Boltzmann distribution at least in the reactant state.
A new, especially reactive segment of the reactant state, called the saddle domain, is introduced, and the rate constant is factored as

k = α · kSD

where α is the conversion factor between the reactant state and the saddle domain, while kSD is the rate constant from the saddle domain. The first can be simply calculated from the free energy surface; the latter is easily accessible from short molecular dynamics simulations.
See also
Reaction rate
Equilibrium constant
Molecularity
References
Chemical kinetics
Financial modeling

Financial modeling is the task of building an abstract representation (a model) of a real world financial situation. This is a mathematical model designed to represent (a simplified version of) the performance of a financial asset or portfolio of a business, project, or any other investment.
Typically, then, financial modeling is understood to mean an exercise in either asset pricing or corporate finance, of a quantitative nature. It is about translating a set of hypotheses about the behavior of markets or agents into numerical predictions. At the same time, "financial modeling" is a general term that means different things to different users; the reference usually relates either to accounting and corporate finance applications or to quantitative finance applications.
Accounting
In corporate finance and the accounting profession, financial modeling typically entails financial statement forecasting; usually the preparation of detailed company-specific models used for decision making purposes, valuation and financial analysis.
Applications include:
Business valuation and stock valuation - especially via discounted cash flow, but including other valuation approaches
Scenario planning and management decision making ("what is"; "what if"; "what has to be done")
Budgeting: revenue forecasting and analytics; production budgeting; operations budgeting
Capital budgeting, including cost of capital (i.e. WACC) calculations
Cash flow forecasting; working capital- and treasury management; asset and liability management
Financial statement analysis / ratio analysis (including of operating- and finance leases, and R&D)
Transaction analytics: M&A, PE, VC, LBO, IPO, Project finance, P3
Credit decisioning: Credit analysis, Consumer credit risk; impairment- and provision-modeling
Management accounting: Activity-based costing, Profitability analysis, Cost analysis, Whole-life cost, Managerial risk accounting
Public sector procurement
To generalize as to the nature of these models:
firstly, as they are built around financial statements, calculations and outputs are monthly, quarterly or annual;
secondly, the inputs take the form of "assumptions", where the analyst specifies the values that will apply in each period for external / global variables (exchange rates, tax percentage, etc....; may be thought of as the model parameters), and for internal / company specific variables (wages, unit costs, etc....). Correspondingly, both characteristics are reflected (at least implicitly) in the mathematical form of these models:
firstly, the models are in discrete time;
secondly, they are deterministic.
For discussion of the issues that may arise, and of the more sophisticated approaches sometimes employed, see below.
Modelers are often designated "financial analyst" (and are sometimes referred to, tongue in cheek, as "number crunchers"). Typically, the modeler will have completed an MBA or MSF with (optional) coursework in "financial modeling". Accounting qualifications and finance certifications such as the CIIA and CFA generally do not provide direct or explicit training in modeling. At the same time, numerous commercial training courses are offered, both through universities and privately.
Although purpose-built business software does exist, the vast proportion of the market is spreadsheet-based; this is largely since the models are almost always company-specific. Also, analysts will each have their own criteria and methods for financial modeling. Microsoft Excel now has by far the dominant position, having overtaken Lotus 1-2-3 in the 1990s. Spreadsheet-based modelling can have its own problems, and several standardizations and "best practices" have been proposed. "Spreadsheet risk" is increasingly studied and managed; see model audit.
One critique here is that model outputs, i.e. line items, often embed "unrealistic implicit assumptions" and "internal inconsistencies". (For example, a forecast for growth in revenue but without corresponding increases in working capital, fixed assets and the associated financing may embed unrealistic assumptions about asset turnover, debt level and/or equity financing.) What is required, but often lacking, is that all key elements are explicitly and consistently forecasted.
Related to this is that modellers often additionally "fail to identify crucial assumptions" relating to inputs, "and to explore what can go wrong". Here, in general, modellers "use point values and simple arithmetic instead of probability distributions and statistical measures"; i.e., as mentioned, the problems are treated as deterministic in nature, and a single value is thus calculated for the asset or project, without providing information on the range, variance and sensitivity of outcomes.
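To illustrate the point-estimate critique, a probabilistic treatment can be sketched in a few lines. The example below is a minimal, hypothetical Monte Carlo revenue forecast; the growth-rate distribution, margin range and five-year horizon are invented for illustration and are not taken from any real model. It replaces single-point assumptions with distributions and reports a range of outcomes rather than one number.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Hypothetical assumptions expressed as distributions rather than point values
base_revenue = 100.0                                              # current revenue, in $m
growth = rng.normal(loc=0.05, scale=0.03, size=(n_trials, 5))     # annual growth, 5 forecast years
margin = rng.uniform(0.10, 0.20, size=n_trials)                   # operating margin

revenue_year5 = base_revenue * np.prod(1 + growth, axis=1)
operating_income = revenue_year5 * margin

for q in (5, 50, 95):
    print(f"{q}th percentile year-5 operating income: {np.percentile(operating_income, q):.1f}")
```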
A further, more general critique relates to the lack of basic computer programming concepts amongst modelers,
with the result that their models are often poorly structured, and difficult to maintain. Serious criticism is also directed at the nature of budgeting, and its impact on the organization.
Quantitative finance
In quantitative finance, financial modeling entails the development of a sophisticated mathematical model. Models here deal with asset prices, market movements, portfolio returns and the like. A general distinction is between:
(i) "quantitative asset pricing", models of the returns of different stocks;
(ii) "financial engineering", models of the price or returns of derivative securities;
(iii) "quantitative portfolio management", models underpinning automated trading, high-frequency trading, algorithmic trading, and program trading.
Relatedly, applications include:
Option pricing and calculation of their "Greeks" (accommodating volatility surfaces - via local / stochastic volatility models - and multi-curves)
Other derivatives, especially interest rate derivatives, credit derivatives and exotic derivatives
Modeling the term structure of interest rates (bootstrapping / multi-curves, short-rate models, HJM framework) and any related credit spread
Credit valuation adjustment, CVA, as well as the various XVA
Credit risk, counterparty credit risk, and regulatory capital: EAD, PD, LGD, PFE, EE; Jarrow–Turnbull model, Merton model, KMV model
Structured product design and manufacture
Portfolio optimization and Quantitative investing more generally
Financial risk modeling: value at risk (parametric- and / or historical, CVaR, EVT), stress testing, "sensitivities" analysis (Greeks, duration, convexity, DV01, KRD, CS01, JTD)
Corporate finance applications: cash flow analytics, corporate financing activity prediction problems, and risk analysis in capital investment
Credit scoring and provisioning; credit scorecards
Real options
Actuarial applications: Dynamic financial analysis (DFA), UIBFM, investment modeling
These problems are generally stochastic and continuous in nature, and models here thus require complex algorithms, entailing computer simulation, advanced numerical methods (such as numerical differential equations, numerical linear algebra, dynamic programming) and/or the development of optimization models.
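As a small illustration of the stochastic, simulation-heavy character of these models, the sketch below prices a European call option by Monte Carlo under geometric Brownian motion (i.e., Black–Scholes assumptions). All parameter values are invented for illustration; production pricing libraries such as QuantLib implement far more elaborate models.

```python
import numpy as np

def mc_european_call(s0, strike, rate, sigma, maturity, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under the risk-neutral measure
    st = s0 * np.exp((rate - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(st - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Illustrative inputs only: spot 100, strike 105, 2% rate, 25% vol, 1-year maturity
print(f"call price = {mc_european_call(100, 105, 0.02, 0.25, 1.0):.2f}")
```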
For further discussion here see also: Brownian model of financial markets; Martingale pricing; Financial models with long-tailed distributions and volatility clustering; Extreme value theory; Historical simulation (finance).
Modellers are generally referred to as "quants", i.e. quantitative analysts, and typically have advanced (Ph.D. level) backgrounds in quantitative disciplines such as statistics, physics, engineering, computer science, mathematics or operations research.
Alternatively, or in addition to their quantitative background, they complete a finance masters with a quantitative orientation, such as the Master of Quantitative Finance, or the more specialized Master of Computational Finance or Master of Financial Engineering; the CQF certificate is increasingly common.
Although spreadsheets are widely used here also (almost always requiring extensive VBA),
custom C++, Fortran or Python, or numerical-analysis software such as MATLAB, are often preferred, particularly where stability or speed is a concern.
MATLAB is often used at the research or prototyping stage because of its intuitive programming, graphical and debugging tools, but C++/Fortran are preferred for conceptually simple but high computational-cost applications where MATLAB is too slow;
Python is increasingly used due to its simplicity, and large standard library / available applications, including QuantLib.
Additionally, for many (of the standard) derivative and portfolio applications, commercial software is available, and the choice as to whether the model is to be developed in-house, or whether existing products are to be deployed, will depend on the problem in question.
The complexity of these models may result in incorrect pricing or hedging or both. This Model risk is the subject of ongoing research by finance academics, and is a topic of great, and growing, interest in the risk management arena.
Criticism of the discipline (often preceding the financial crisis of 2007–08 by several years) emphasizes the differences between the mathematical and physical sciences, and finance, and the resultant caution to be applied by modelers, and by traders and risk managers using their models. Notable here are Emanuel Derman and Paul Wilmott, authors of the Financial Modelers' Manifesto. Some go further and question whether the mathematical- and statistical modeling techniques usually applied to finance are at all appropriate (see the assumptions made for options and for portfolios).
In fact, these critics may go so far as to question the "empirical and scientific validity... of modern financial theory".
Notable here are Nassim Taleb and Benoit Mandelbrot.
Competitive modeling
Several financial modeling competitions exist, emphasizing speed and accuracy in modeling. The Microsoft-sponsored ModelOff Financial Modeling World Championships were held annually from 2012 to 2019, with competitions throughout the year and a finals championship in New York or London. After its end in 2020, several other modeling championships have been started, including the Financial Modeling World Cup and Microsoft Excel Collegiate Challenge, also sponsored by Microsoft.
Philosophy of financial modeling
Philosophy of financial modeling is a branch of philosophy concerned with the foundations, methods, and implications of modeling science.
In the philosophy of financial modeling, scholars have more recently begun to question the generally-held assumption that financial modelers seek to represent any "real-world" or actually ongoing investment situation. Instead, it has been suggested that the task of the financial modeler resides in demonstrating the possibility of a transaction in a prospective investment scenario, from a limited base of possibility conditions initially assumed in the model.
See also
All models are wrong
Asset pricing model
Economic model
Financial engineering
Financial forecast
Financial Modelers' Manifesto
Financial models with long-tailed distributions and volatility clustering
Financial planning
Integrated business planning
Model audit
Modeling and analysis of financial markets
Profit model
Return on modeling effort
References
Bibliography
General
Corporate finance
Quantitative finance
Financial models
Actuarial science
Mathematical finance
Corporate finance
Computational fields of study
Predictive modelling

Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place.
In many cases, the model is chosen on the basis of detection theory to try to guess the probability of an outcome given a set amount of input data, for example, determining how likely it is that a given email is spam.
Models can use one or more classifiers in trying to determine the probability of a set of data belonging to another set. For example, a model might be used to determine whether an email is spam or "ham" (non-spam).
Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics.
Predictive modelling is often contrasted with causal modelling/analysis. In the former, one may be entirely satisfied to make use of indicators of, or proxies for, the outcome of interest. In the latter, one seeks to determine true cause-and-effect relationships. This distinction has given rise to a burgeoning literature in the fields of research methods and statistics and to the common statement that "correlation does not imply causation".
Models
Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and distributional form [than parametric models] but usually contain strong assumptions about independencies".
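As a concrete, hypothetical illustration of the parametric/non-parametric distinction, the sketch below fits a parametric model (logistic regression, which assumes a specific functional form with a fixed number of parameters) and a non-parametric model (k-nearest neighbours) to the same synthetic data using scikit-learn. The dataset and hyperparameters are arbitrary and chosen only for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary-classification data, purely for illustration
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

parametric = LogisticRegression(max_iter=1000).fit(X_train, y_train)      # assumes a functional form
nonparametric = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)  # structure driven by the data

print("logistic regression accuracy:", parametric.score(X_test, y_test))
print("k-nearest neighbours accuracy:", nonparametric.score(X_test, y_test))
```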
Applications
Uplift modelling
Uplift modelling is a technique for modelling the change in probability caused by an action. Typically this is a marketing action such as an offer to buy a product, to use a product more or to re-sign a contract. For example, in a retention campaign you wish to predict the change in probability that a customer will remain a customer if they are contacted. A model of the change in probability allows the retention campaign to be targeted at those customers on whom the change in probability will be beneficial. This allows the retention programme to avoid triggering unnecessary churn or customer attrition without wasting money contacting people who would act anyway.
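One common way to build such a model, though not the only one and not prescribed by the text above, is the "two-model" approach: fit one response model on treated (contacted) customers and one on untreated customers, then score uplift as the difference in predicted probabilities. The sketch below uses synthetic data and scikit-learn purely for illustration; the feature construction and treatment effect are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))                 # customer features (synthetic)
treated = rng.integers(0, 2, size=n)        # 1 if the customer was contacted
# Synthetic outcome: baseline retention plus a treatment effect that depends on X[:, 0]
p_retain = 0.5 + 0.1 * treated * (X[:, 0] > 0) - 0.1 * treated * (X[:, 0] <= 0)
retained = rng.binomial(1, np.clip(p_retain, 0, 1))

model_treated = LogisticRegression().fit(X[treated == 1], retained[treated == 1])
model_control = LogisticRegression().fit(X[treated == 0], retained[treated == 0])

# Uplift score: predicted change in retention probability if contacted
uplift = model_treated.predict_proba(X)[:, 1] - model_control.predict_proba(X)[:, 1]
print("customers with positive predicted uplift:", int((uplift > 0).sum()))
```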
Archaeology
Predictive modelling in archaeology gets its foundations from Gordon Willey's mid-fifties work in the Virú Valley of Peru. Complete, intensive surveys were performed then covariability between cultural remains and natural features such as slope and vegetation were determined. Development of quantitative methods and a greater availability of applicable data led to growth of the discipline in the 1960s and by the late 1980s, substantial progress had been made by major land managers worldwide.
Generally, predictive modelling in archaeology is establishing statistically valid causal or covariable relationships between natural proxies such as soil types, elevation, slope, vegetation, proximity to water, geology, geomorphology, etc., and the presence of archaeological features. Through analysis of these quantifiable attributes from land that has undergone archaeological survey, sometimes the "archaeological sensitivity" of unsurveyed areas can be anticipated based on the natural proxies in those areas. Large land managers in the United States, such as the Bureau of Land Management (BLM), the Department of Defense (DOD), and numerous highway and parks agencies, have successfully employed this strategy. By using predictive modelling in their cultural resource management plans, they are capable of making more informed decisions when planning for activities that have the potential to require ground disturbance and subsequently affect archaeological sites.
Customer relationship management
Predictive modelling is used extensively in analytical customer relationship management and data mining to produce customer-level models that describe the likelihood that a customer will take a particular action. The actions are usually sales, marketing and customer retention related.
For example, a large consumer organization such as a mobile telecommunications operator will have a set of predictive models for product cross-sell, product deep-sell (or upselling) and churn. It is also now more common for such an organization to have a model of savability using an uplift model. This predicts the likelihood that a customer can be saved at the end of a contract period (the change in churn probability) as opposed to the standard churn prediction model.
Auto insurance
Predictive modelling is utilised in vehicle insurance to assign risk of incidents to policy holders from information obtained from policy holders. This is extensively employed in usage-based insurance solutions where predictive models utilise telemetry-based data to build a model of predictive risk for claim likelihood. Black-box auto insurance predictive models utilise GPS or accelerometer sensor input only. Some models include a wide range of predictive input beyond basic telemetry including advanced driving behaviour, independent crash records, road history, and user profiles to provide improved risk models.
Health care
In 2009 Parkland Health & Hospital System began analyzing electronic medical records in order to use predictive modeling to help identify patients at high risk of readmission. Initially, the hospital focused on patients with congestive heart failure, but the program has expanded to include patients with diabetes, acute myocardial infarction, and pneumonia.
In 2018, Banerjee et al. proposed a deep learning model for estimating short-term life expectancy (>3 months) of patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). It achieved an area under the ROC (receiver operating characteristic) curve of 0.89. To provide explainability, they developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. The high accuracy and explainability of the PPES-Met model may enable it to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians.
The first clinical prediction model reporting guidelines were published in 2015 (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD)), and have since been updated.
Predictive modelling has been used to estimate surgery duration.
Algorithmic trading
Predictive modeling in trading is a modeling process wherein the probability of an outcome is predicted using a set of predictor variables. Predictive models can be built for different assets like stocks, futures, currencies, commodities etc. Predictive modeling is still extensively used by trading firms to devise strategies and trade. It utilizes mathematically advanced software to evaluate indicators on price, volume, open interest and other historical data, to discover repeatable patterns.
Lead tracking systems
Predictive modelling gives lead generators a head start by forecasting data-driven outcomes for each potential campaign. This method saves time and exposes potential blind spots to help clients make smarter decisions.
Notable failures of predictive modeling
Although not widely discussed by the mainstream predictive modeling community, predictive modeling is a methodology that has been widely used in the financial industry in the past, and some of the major failures contributed to the financial crisis of 2007–2008. These failures exemplify the danger of relying exclusively on models that are essentially backward-looking in nature. The following examples are by no means a complete list:
Bond rating. S&P, Moody's and Fitch quantify the probability of default of bonds with discrete variables called rating. The rating can take on discrete values from AAA down to D. The rating is a predictor of the risk of default based on a variety of variables associated with the borrower and historical macroeconomic data. The rating agencies failed with their ratings on the US$600 billion mortgage backed Collateralized Debt Obligation (CDO) market. Almost the entire AAA sector (and the super-AAA sector, a new rating the rating agencies provided to represent super safe investment) of the CDO market defaulted or severely downgraded during 2008, many of which obtained their ratings less than just a year previously.
So far, no statistical models that attempt to predict equity market prices based on historical data are considered to consistently make correct predictions over the long term. One particularly memorable failure is that of Long Term Capital Management, a fund that hired highly qualified analysts, including a Nobel Memorial Prize in Economic Sciences winner, to develop a sophisticated statistical model that predicted the price spreads between different securities. The models produced impressive profits until a major debacle that caused the then Federal Reserve chairman Alan Greenspan to step in to broker a rescue plan by the Wall Street broker dealers in order to prevent a meltdown of the bond market.
Possible fundamental limitations of predictive models based on data fitting
History cannot always accurately predict the future. Using relations derived from historical data to predict the future implicitly assumes there are certain lasting conditions or constants in a complex system. This almost always leads to some imprecision when the system involves people.
Unknown unknowns are an issue. In all data collection, the collector first defines the set of variables for which data is collected. However, no matter how extensive the collector considers his/her selection of the variables, there is always the possibility of new variables that have not been considered or even defined, yet are critical to the outcome.
Algorithms can be defeated adversarially. After an algorithm becomes an accepted standard of measurement, it can be taken advantage of by people who understand the algorithm and have the incentive to fool or manipulate the outcome. This is what happened to the CDO rating described above. The CDO dealers actively fulfilled the rating agencies' input to reach an AAA or super-AAA on the CDO they were issuing, by cleverly manipulating variables that were "unknown" to the rating agencies' "sophisticated" models.
See also
Calibration (statistics)
Prediction interval
Predictive analytics
Predictive inference
Statistical learning theory
Statistical model
References
Further reading
Statistical classification
Statistical models
Predictive analytics
Business intelligence
Pyrophosphate

In chemistry, pyrophosphates are phosphorus oxyanions that contain two phosphorus atoms in a P–O–P linkage. A number of pyrophosphate salts exist, such as disodium pyrophosphate and tetrasodium pyrophosphate, among others. Often pyrophosphates are called diphosphates. The parent pyrophosphates are derived from partial or complete neutralization of pyrophosphoric acid. The pyrophosphate bond is also sometimes referred to as a phosphoanhydride bond, a naming convention which emphasizes the loss of water that occurs when two phosphates form a new bond, and which mirrors the nomenclature for anhydrides of carboxylic acids. Pyrophosphates are found in ATP and other nucleotide triphosphates, which are important in biochemistry. The term pyrophosphate is also the name of esters formed by the condensation of a phosphorylated biological compound with inorganic phosphate, as for dimethylallyl pyrophosphate. This bond is also referred to as a high-energy phosphate bond.
Acidity
Pyrophosphoric acid is a tetraprotic acid, with four distinct pKa's:

H4P2O7 ⇌ [H3P2O7]− + H+, pKa1 = 0.85
[H3P2O7]− ⇌ [H2P2O7]2− + H+, pKa2 = 1.96
[H2P2O7]2− ⇌ [HP2O7]3− + H+, pKa3 = 6.60
[HP2O7]3− ⇌ [P2O7]4− + H+, pKa4 = 9.41
The pKa's occur in two distinct ranges because deprotonations occur on separate phosphate groups. For comparison, the pKa's for phosphoric acid are 2.14, 7.20, and 12.37.
At physiological pH's, pyrophosphate exists as a mixture of doubly and singly protonated forms.
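A rough speciation calculation makes this concrete. The sketch below assumes ideal behaviour, ignores ionic strength, and uses the pKa values listed above; at pH ≈ 7.4 it shows the singly protonated form [HP2O7]3− dominating, with a substantial fraction of the doubly protonated [H2P2O7]2−.

```python
import numpy as np

pkas = [0.85, 1.96, 6.60, 9.41]   # pKa1..pKa4 of pyrophosphoric acid (from above)
kas = [10**-p for p in pkas]
pH = 7.4
h = 10**-pH                        # proton activity, ideal-solution assumption

# Unnormalized weight of each species H(4-i)P2O7^(i-) for i = 0..4
weights = [h**(4 - i) * np.prod(kas[:i]) for i in range(5)]
fractions = np.array(weights) / sum(weights)

labels = ["H4P2O7", "H3P2O7(1-)", "H2P2O7(2-)", "HP2O7(3-)", "P2O7(4-)"]
for label, f in zip(labels, fractions):
    print(f"{label:12s} {100 * f:5.1f} %")
```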
Preparation
Disodium pyrophosphate is prepared by thermal condensation of sodium dihydrogen phosphate or by partial deprotonation of pyrophosphoric acid.
Pyrophosphates are generally white or colorless. The alkali metal salts are water-soluble. They are good complexing agents for metal ions (such as calcium and many transition metals) and have many uses in industrial chemistry. Pyrophosphate is the first member of an entire series of polyphosphates.
In biochemistry
The anion is abbreviated PPi, standing for inorganic pyrophosphate. It is formed by the hydrolysis of ATP into AMP in cells.
ATP → AMP + PPi
For example, when a nucleotide is incorporated into a growing DNA or RNA strand by a polymerase, pyrophosphate (PPi) is released. Pyrophosphorolysis is the reverse of the polymerization reaction in which pyrophosphate reacts with the 3′-nucleosidemonophosphate (NMP or dNMP), which is removed from the oligonucleotide to release the corresponding triphosphate (dNTP from DNA, or NTP from RNA).
The pyrophosphate anion has the structure [P2O7]4−, and is an acid anhydride of phosphate. It is unstable in aqueous solution and hydrolyzes into inorganic phosphate:

[P2O7]4− + H2O → 2 [HPO4]2−

or in biologists' shorthand notation:

PPi + H2O → 2 Pi
In the absence of enzymic catalysis, hydrolysis reactions of simple polyphosphates such as pyrophosphate, linear triphosphate, ADP, and ATP normally proceed extremely slowly in all but highly acidic media.
(The reverse of this reaction is a method of preparing pyrophosphates by heating phosphates.)
This hydrolysis to inorganic phosphate effectively renders the cleavage of ATP to AMP and PPi irreversible, and biochemical reactions coupled to this hydrolysis are irreversible as well.
PPi occurs in synovial fluid, blood plasma, and urine at levels sufficient to block calcification and may be a natural inhibitor of hydroxyapatite formation in extracellular fluid (ECF). Cells may channel intracellular PPi into ECF. ANK is a nonenzymatic plasma-membrane PPi channel that supports extracellular PPi levels. Defective function of the membrane PPi channel ANK is associated with low extracellular PPi and elevated intracellular PPi. Ectonucleotide pyrophosphatase/phosphodiesterase (ENPP) may function to raise extracellular PPi.
From the standpoint of high energy phosphate accounting, the hydrolysis of ATP to AMP and PPi requires two high-energy phosphates, as to reconstitute AMP into ATP requires two phosphorylation reactions.
AMP + ATP → 2 ADP
2 ADP + 2 Pi → 2 ATP
The plasma concentration of inorganic pyrophosphate has a reference range of 0.58–3.78 μM (95% prediction interval).
Terpenes
Isopentenyl pyrophosphate converts to geranyl pyrophosphate, the precursor to tens of thousands of terpenes and terpenoids.
As a food additive
Various diphosphates are used as emulsifiers, stabilisers, acidity regulators, raising agents, sequestrants, and water retention agents in food processing. They are classified in the E number scheme under E450:
E450(a): disodium dihydrogen diphosphate; trisodium diphosphate; tetrasodium diphosphate (TSPP); tetrapotassium diphosphate
E450(b): pentasodium and pentapotassium triphosphate
E450(c): sodium and potassium polyphosphates
In particular, various formulations of diphosphates are used to stabilize whipped cream.
See also
Adenosine monophosphate
Adenosine diphosphate
Adenosine triphosphate
ATPase
ATP hydrolysis
ATP synthase
Biochemistry
Bone
Calcium pyrophosphate
Calcium pyrophosphate dihydrate deposition disease
Catalysis
DNA
High energy phosphate
Inorganic pyrophosphatase
Nucleoside triphosphate
Nucleotide
Organophosphate
Oxidative phosphorylation
Phosphate
Phosphoric acid
Phosphoric acids and phosphates
RNA
Sodium pyrophosphate
Superphosphate
Thiamine pyrophosphate
Tooth
Zinc pyrophosphate
References
Further reading
External links
Anions
Dietary minerals
Molecular biology
Nucleotides
E-number additives
Denudation

Denudation is the geological process in which moving water, ice, wind, and waves erode the Earth's surface, leading to a reduction in elevation and in relief of landforms and landscapes. Although the terms erosion and denudation are used interchangeably, erosion is the transport of soil and rocks from one location to another, and denudation is the sum of processes, including erosion, that result in the lowering of Earth's surface. Endogenous processes such as volcanoes, earthquakes, and tectonic uplift can expose continental crust to the exogenous processes of weathering, erosion, and mass wasting. The effects of denudation have been recorded for millennia but the mechanics behind it have been debated for the past 200 years and have only begun to be understood in the past few decades.
Description
Denudation incorporates the mechanical, biological, and chemical processes of erosion, weathering, and mass wasting. Denudation can involve the removal of both solid particles and dissolved material. These include sub-processes of cryofracture, insolation weathering, slaking, salt weathering, bioturbation, and anthropogenic impacts.
Factors affecting denudation include:
Anthropogenic (human) activity, including agriculture, damming, mining, and deforestation;
Biosphere, via animals, plants, and microorganisms contributing to chemical and physical weathering;
Climate, most directly through chemical weathering from rain, but also because climate dictates what kind of weathering occurs;
Lithology or the type of rock;
Surface topography and changes to surface topography, such as mass wasting and erosion; and
Tectonic activity, such as deformation, the changing of rocks due to stress mainly from tectonic forces, and orogeny, the process that forms mountains.
Historical theories
The effects of denudation have been written about since antiquity, although the terms "denudation" and "erosion" have been used interchangeably throughout most of history. In the Age of Enlightenment, scholars began trying to understand how denudation and erosion occurred without mythical or biblical explanations. Throughout the 18th century, scientists theorized valleys are formed by streams running through them, not from floods or other cataclysms. In 1785, Scottish physician James Hutton proposed an Earth history based on observable processes over an unlimited amount of time, which marked a shift from assumptions based on faith to reasoning based on logic and observation. In 1802, John Playfair, a friend of Hutton, published a paper clarifying Hutton's ideas, explaining the basic process of water wearing down the Earth's surface, and describing erosion and chemical weathering. Between 1830 and 1833, Charles Lyell published three volumes of Principles of Geology, which describes the shaping of the surface of Earth by ongoing processes, and which endorsed and established gradual denudation in the wider scientific community.
As denudation came into the wider conscience, questions of how denudation occurs and what the result is began arising. Hutton and Playfair suggested over a period of time, a landscape would eventually be worn down to erosional planes at or near sea level, which gave the theory the name "planation". Charles Lyell proposed marine planation, oceans, and ancient shallow seas were the primary driving force behind denudation. While surprising given the centuries of observation of fluvial and pluvial erosion, this is more understandable given early geomorphology was largely developed in Britain, where the effects of coastal erosion are more evident and play a larger role in geomorphic processes. There was more evidence against marine planation than there was for it. By the 1860s, marine planation had largely fallen from favor, a move led by Andrew Ramsay, a former proponent of marine planation who recognized rain and rivers play a more important role in denudation. In North America during the mid-19th century, advancements in identifying fluvial, pluvial, and glacial erosion were made. The work being done in the Appalachians and American West that formed the basis for William Morris Davis to hypothesize peneplanation, despite the fact while peneplanation was compatible in the Appalachians, it did not work as well in the more active American West. Peneplanation was a cycle in which young landscapes are produced by uplift and denuded down to sea level, which is the base level. The process would be restarted when the old landscape was uplifted again or when the base level was lowered, producing a new, young landscape.
Publication of the Davisian cycle of erosion caused many geologists to begin looking for evidence of planation around the world. Unsatisfied with Davis's cycle due to evidence from the Western United States, Grove Karl Gilbert suggested backwearing of slopes would shape landscapes into pediplains, and W.J. McGee named these landscapes pediments. This later gave the concept the name pediplanation when L.C. King applied it on a global scale. The dominance of the Davisian cycle gave rise to several theories to explain planation, such as eolation and glacial planation, although only etchplanation survived time and scrutiny because it was based on observations and measurements done in different climates around the world and it also explained irregularities in landscapes. The majority of these concepts failed, partly because Joseph Jukes, a popular geologist and professor, separated denudation and uplift in an 1862 publication that had a lasting impact on geomorphology. These concepts also failed because the cycles, Davis's in particular, were generalizations and based on broad observations of the landscape rather than detailed measurements; many of the concepts were developed based on local or specific processes, not regional processes, and they assumed long periods of continental stability.
Some scientists opposed the Davisian cycle; one was Grove Karl Gilbert, who, based on measurements over time, realized denudation is nonlinear; he started developing theories based on fluid dynamics and equilibrium concepts. Another was Walther Penck, who devised a more complex theory that denudation and uplift occurred at the same time, and that landscape formation is based on the ratio between denudation and uplift rates. His theory proposed geomorphology is based on endogenous and exogenous processes. Penck's theory, while ultimately being ignored, returned to denudation and uplift occurring simultaneously and relying on continental mobility, even though Penck rejected continental drift. The Davisian and Penckian models were heavily debated for a few decades until Penck's was ignored and support for Davis's waned after his death as more critiques were made. One critic was John Leighly, who stated geologists did not know how landforms were developed, so Davis's theory was built upon a shaky foundation.
From 1945 to 1965, a change in geomorphology research saw a shift from mostly deductive work to detailed experimental designs that used improved technologies and techniques, although this led to research over details of established theories, rather than researching new theories. Through the 1950s and 1960s, as improvements were made in ocean geology and geophysics, it became clearer Wegener's theory on continental drift was correct and that there is constant movement of parts (the plates) of Earth's surface. Improvements were also made in geomorphology to quantify slope forms and drainage networks, and to find relationships between the form and process, and the magnitude and frequency of geomorphic processes. The final blow to peneplanation came in 1964 when a team led by Luna Leopold published Fluvial Processes in Geomorphology, which links landforms with measurable precipitation-infiltration runoff processes and concluded no peneplains exist over large areas in modern times, and any historical peneplains would have to be proven to exist, rather than inferred from modern geology. They also stated pediments could form across all rock types and regions, although through different processes. Through these findings and improvements in geophysics, the study of denudation shifted from planation to studying which relationships affect denudation–including uplift, isostasy, lithology, and vegetation–and measuring denudation rates around the world.
Measurement
Denudation is measured as the wearing down of Earth's surface in inches or centimeters per 1000 years. This rate is intended as an estimate and often assumes uniform erosion, among other things, to simplify calculations. Assumptions made are often only valid for the landscapes being studied. Measurements of denudation over large areas are performed by averaging the rates of subdivisions. Often, no adjustments are made for human impact, which causes the measurements to be inflated. Calculations have suggested that the soil loss caused by human activity will change previously calculated denudation rates by less than 30%.
Denudation rates are usually much lower than the rates of uplift and average orogeny rates can be eight times the maximum average denudation. The only areas at which there could be equal rates of denudation and uplift are active plate margins with an extended period of continuous deformation.
Denudation is measured in catchment-scale measurements and can use other erosion measurements, which are generally split into dating and survey methods. Techniques for measuring erosion and denudation include stream load measurement, cosmogenic exposure and burial dating, erosion tracking, topographic measurements, surveying the deposition in reservoirs, landslide mapping, chemical fingerprinting, thermochronology, and analysis of sedimentary records in deposition areas. The most common way of measuring denudation is from stream load measurements taken at gauging stations. The suspended load, bed load, and dissolved load are included in measurements. The weight of the load is converted to volumetric units and the load volume is divided by the area of the watershed above the gauging station. An issue with this method of measurement is the high annual variation in fluvial erosion, which can be up to a factor of five between successive years. An important equation for denudation is the stream power law: E = K A^m S^n, where E is the erosion rate, K is the erodibility constant, A is drainage area, S is channel gradient, and m and n are exponents that are usually given beforehand or assumed based on the location. Most denudation measurements are based on stream load measurements and analysis of the sediment or the water chemistry.
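The stream power law is straightforward to evaluate once its parameters are fixed. The snippet below computes an erosion rate for illustrative values only; K, m and n are site-specific calibration quantities, and the numbers used here are invented rather than measured.

```python
def stream_power_erosion(K, A, S, m=0.5, n=1.0):
    """Stream power law E = K * A**m * S**n (units depend on how K was calibrated)."""
    return K * A**m * S**n

# Illustrative values only: drainage area in m^2, dimensionless channel slope
E = stream_power_erosion(K=1e-6, A=5e7, S=0.05, m=0.5, n=1.0)
print(f"E = {E:.4f} (in whatever units K was calibrated for)")
```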
A more recent technique is cosmogenic isotope analysis, which is used in conjunction with stream load measurements and sediment analysis. This technique measures chemical weathering intensity by calculating chemical alteration in molecular proportions. Preliminary research into using cosmogenic isotopes to measure weathering was done by studying the weathering of feldspar and volcanic glass, which contain most of the material found in the Earth's upper crust. The most common isotopes used are 26Al and 10Be; however, 10Be is used more often in these analyses. 10Be is used due to its abundance and, while it is not stable, its half-life of 1.39 million years is relatively stable compared to the thousand or million-year scale in which denudation is measured. 26Al is used because of the low presence of Al in quartz, making it easy to separate, and because there is no risk of contamination of atmospheric 10Be. This technique was developed because previous denudation-rate studies assumed steady rates of erosion even though such uniformity is difficult to verify in the field and may be invalid for many landscapes; its use to help measure denudation and geologically date events was important. On average, the concentration of undisturbed cosmogenic isotopes in sediment leaving a particular basin is inversely related to the rate at which that basin is eroding. In a rapidly-eroding basin, most rock will be exposed to only a small number of cosmic rays before erosion and transport out of the basin; as a result, isotope concentration will be low. In a slowly-eroding basin, integrated cosmic ray exposure is much greater and isotope concentration will be much higher. Measuring isotopic reservoirs in most areas is difficult with this technique so uniform erosion is assumed. There is also variation in year-to-year measurements, which can be as high as a factor of three.
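The inverse relationship between nuclide concentration and erosion rate can be sketched with the standard steady-state approximation N = P / (λ + ρε/Λ), which rearranges to ε = (P/N − λ)·Λ/ρ. The sketch below assumes constant exposure, spallation-only production and no burial, and every numerical value is a typical illustrative figure rather than a site measurement.

```python
# Steady-state cosmogenic erosion-rate estimate (illustrative values only)
P = 4.0              # 10Be production rate, atoms per gram of quartz per year (assumed)
lam = 4.99e-7        # 10Be decay constant, 1/yr (half-life ~1.39 Myr)
rho = 2.7            # rock density, g/cm^3 (assumed)
attenuation = 160.0  # attenuation length, g/cm^2 (assumed)

N = 2.0e5            # measured nuclide concentration, atoms/g (hypothetical sample)

erosion_cm_per_yr = (P / N - lam) * attenuation / rho
print(f"erosion rate = {erosion_cm_per_yr * 1e3:.2f} cm per 1000 years")
```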
Problems in measuring denudation include both the technology used and the environment. Landslides can interfere with denudation measurements in mountainous regions, especially the Himalayas. The two main problems with dating methods are uncertainties in the measurements, both with the equipment used and with assumptions made during measurement, and the relationship between the measured ages and the histories of the markers. This relates to the problem of making assumptions based on the measurements being made and the area being measured. Environmental factors such as temperature, atmospheric pressure, humidity, elevation, wind, the speed of light at higher elevations if using lasers or time-of-flight measurements, instrument drift, chemical erosion, and, for cosmogenic isotopes, climate and snow or glacier coverage can all affect measurements. When studying denudation, the Sadler effect, which states that measurements over short time periods show higher accumulation rates than measurements over longer time periods, should be considered. In a study by James Gilluly, the presented data suggested the denudation rate has stayed roughly the same throughout the Cenozoic era based on geological evidence; however, given estimates of denudation rates at the time of Gilluly's study and the United States' elevation, it would take only 11–12 million years to erode North America, well before the 66 million years of the Cenozoic had elapsed.
The research on denudation is primarily done in river basins and in mountainous regions like the Himalayas because these are very geologically active regions, which allows for research on the relationship between uplift and denudation. There is also research on the effects of denudation on karst because only about 30% of chemical weathering from water occurs on the surface. Denudation has a large impact on karst and landscape evolution because the most-rapid changes to landscapes occur when there are changes to subterranean structures. Other research includes effects on denudation rates; this research is mostly studying how climate and vegetation impact denudation. Research is also being done to find the relationship between denudation and isostasy; the more denudation occurs, the lighter the crust becomes in an area, which allows for uplift. The work is primarily trying to determine a ratio between denudation and uplift so better estimates can be made on changes in the landscape. In 2016 and 2019, research was conducted that attempted to use denudation rates to improve the stream power law so it can be applied more effectively.
Examples
Denudation exposes deep subvolcanic structures on the present surface of the area where volcanic activity once occurred. Subvolcanic structures such as volcanic plugs and dikes are exposed by denudation.
Other examples include:
Earthquakes causing landslides;
Haloclasty, the build-up of salt in cracks in rocks leading to erosion and weathering;
Ice accumulating in the cracks of rocks; and
Microorganisms contributing to weathering through cellular respiration.
References
Geomorphology
Geological processes
Mechanochemistry

Mechanochemistry (or mechanical chemistry) is the initiation of chemical reactions by mechanical phenomena. Mechanochemistry thus represents a fourth way to cause chemical reactions, complementing thermal reactions in fluids, photochemistry, and electrochemistry. Conventionally mechanochemistry focuses on the transformations of covalent bonds by mechanical force. Not covered by the topic are many phenomena: phase transitions, dynamics of biomolecules (docking, folding), and sonochemistry.
Mechanochemistry is not the same as mechanosynthesis, which refers specifically to the machine-controlled construction of complex molecular products.
In natural environments, mechanochemical reactions are frequently induced by physical processes such as earthquakes, glacier movement or hydraulic action of rivers or waves. In extreme environments such as subglacial lakes, hydrogen generated by mechanochemical reactions involving crushed silicate rocks and water can support methanogenic microbial communities. Mechanochemistry may also have generated oxygen in the ancient Earth by water splitting on fractured mineral surfaces at high temperatures, potentially influencing life's origin or early evolution.
History
The primal mechanochemical project was to make fire by rubbing pieces of wood against each other, creating friction and hence heat, triggering combustion at the elevated temperature. Another method involves the use of flint and steel, during which a spark (a small particle of pyrophoric metal) spontaneously combusts in air, starting fire instantaneously.
Industrial mechanochemistry began with the grinding of two solid reactants. Mercuric sulfide (the mineral cinnabar) and copper metal thereby react to produce mercury and copper sulfide:

HgS + Cu → Hg + CuS
A special issue of Chemical Society Review was dedicated to mechanochemistry.
Scientists recognized that mechanochemical reactions occur in environments naturally due to various processes, and the reaction products have the potential to influence microbial communities in tectonically active regions. The field has garnered increasing attention recently as mechanochemistry has the potential to generate diverse molecules capable of supporting extremophilic microbes, influencing the early evolution of life, developing the systems necessary for the origin of life, or supporting alien life forms. The field has now inspired the initiation of a special research topic in the journal Frontiers in Geochemistry.
Mechanical Processes
Natural
Earthquakes crush rocks across Earth's subsurface and on other tectonically active planets. Rivers also frequently abrade rocks, revealing fresh mineral surfaces, and waves at a shore erode cliffs, fracture rocks, and abrade sediments.
Similarly to rivers and oceans, the mechanical power of glaciers is evidenced by their impact on landscapes. As glaciers move downslope, they abrade rocks, generating fractured mineral surfaces that can partake in mechanochemical reactions.
Unnatural
In laboratories, planetary ball mills are typically used to induce crushing to investigate natural processes.
Mechanochemical transformations are often complex and different from thermal or photochemical mechanisms. Ball milling is a widely used process in which mechanical force is used to achieve chemical transformations.
It eliminates the need for many solvents, offering the possibility that mechanochemistry could help make many industries more environmentally friendly. For example, the mechanochemical process has been used to synthesize pharmaceutically-attractive phenol hydrazones.
Chemical Reactions
Mechanochemical reactions encompass reactions between mechanically fractured solid materials and any other reactants present in the environment. However, natural mechanochemical reactions frequently involve the reaction of water with crushed rock, so called water-rock reactions. Mechanochemistry is typically initiated by the breakage of bonds between atoms within many different mineral types.
Silicates
Silicates are the most common minerals in the Earth's crust, and thus comprise the mineral type most commonly involved in natural mechanochemical reactions. Silicates are made up of silicon and oxygen atoms, typically arranged in silicon tetrahedra. Mechanical processes break the bonds between the silicon and oxygen atoms. If the bonds are broken by a homolytic cleavage, unpaired electrons are generated:
≡Si–O–Si≡ → ≡Si–O• + ≡Si•
≡Si–O–O–Si≡ → ≡Si–O• + ≡Si–O•
≡Si–O–O–Si≡ → ≡Si–O–O• + ≡Si•
Hydrogen Generation
The reaction of water with silicon radicals can generate hydrogen radicals:
2≡Si• + 2H2O → 2≡Si–O–H + 2H•
2H• → H2
This mechanism can generate H2 to support methanogens in environments with few other energy sources. However, at higher temperatures (above roughly 80 °C), hydrogen radicals react with siloxyl radicals rather than combining with each other, preventing the generation of H2 by this mechanism:
≡Si–O• + H• → ≡Si–O–H
2H• → H2
Oxidant Generation
When oxygen reacts with silicon or oxygen radicals at the surface of crushed rocks, it can chemically adsorb to the surface:
≡Si• + O2 → ≡Si–O–O•
≡Si–O• + O2 → ≡Si–O–O–O•
These oxygen radicals can then generate oxidants such as hydroxyl radicals and hydrogen peroxide:
≡Si–O–O• + H2O → ≡Si–O–O–H + •OH
2•OH → H2O2
Additionally, oxidants may be generated in the absence of oxygen at high temperatures:
≡Si–O• + H2O → ≡Si–O–H + •OH
2•OH → H2O2
H2O2 breaks down naturally in the environment to form water and oxygen gas:
2H2O2 → 2H2O + O2
Industry applications
Fundamentals and applications ranging from nanomaterials to technology have been reviewed. The approach has been used to synthesize metallic nanoparticles, catalysts, magnets, γ-graphyne, metal iodates, and nickel–vanadium carbide and molybdenum–vanadium carbide nanocomposite powders.
Ball milling has been used to separate hydrocarbon gases from crude oil. The process used 1-10% of the energy of conventional cryogenics. Differential absorption is affected by milling intensity, pressure and duration. The gases are recovered by heating, at a specific temperature for each gas type. The process has successfully processed alkyne, olefin and paraffin gases using boron nitride powder.
Storage
Mechanochemistry has potential for energy-efficient solid-state storage of hydrogen, ammonia and other fuel gases. The resulting powders are safer to handle than gases stored by conventional compression and liquefaction.
See also
Embryonic differentiation waves
Mechanoluminescence
Tribology
Further reading
Lenhardt, J. M.; Ong, M. T.; Choe, R.; Evenhuis, C. R.; Martinez, T. J.; Craig, S. L., Trapping a Diradical Transition State by Mechanochemical Polymer Extension. Science 2010, 329 (5995), 1057-1060
References
Chemistry
De facto | De facto ( ; ; ) describes practices that exist in reality, regardless of whether they are officially recognized by laws or other formal norms. It is commonly used to refer to what happens in practice, in contrast with de jure ('by law').
Jurisprudence and de facto law
In jurisprudence, a de facto law (also known as a de facto regulation) is a law or regulation that is followed but "is not specifically enumerated by a law." By definition, de facto contrasts with de jure, which means "as defined by law" or "as a matter of law." For example, if a particular law exists in one jurisdiction, but is followed in another where it has no legal effect (such as in another country), then the law could be considered a de facto regulation (a "de facto regulation" is not an officially prescribed legal classification for a type of law in a particular jurisdiction; rather, it is a concept about laws).
A de facto regulation may be followed by an organization as a result of the market size of the jurisdiction imposing the regulation as a proportion of the overall market: the market share is so large that the organization chooses to comply by implementing one standard of business with respect to the given de facto law instead of altering standards between different jurisdictions and markets (e.g. data protection, manufacturing, etc.). The decision to voluntarily comply may be the result of a desire to simplify manufacturing processes and improve cost-effectiveness (such as adopting a one-size-fits-all approach), consumer demand and expectation, or other factors known only to the complier.
In prison sentences, the term de facto life sentence (also known as a "virtual" life sentence) is used to describe a "non-life sentence" that is long enough to end after the convicted person would have likely died due to old age, or one long enough to cause the convicted person to "live out the vast majority of their life in jail prior to their release."
Technical standards
A de facto standard is a standard (formal or informal) that has achieved a dominant position by tradition, enforcement, or market dominance. It has not necessarily received formal approval by way of a standardization process, and may not have an official standards document.
Technical standards are usually voluntary, such as ISO 9000 requirements, but may be obligatory, enforced by government norms, such as drinking water quality requirements. The term "de facto standard" is used in both cases: to contrast with obligatory standards (also known as "de jure standards"), or to express a dominant standard when there is more than one proposed standard.
In the social sciences, a voluntary standard that is also a de facto standard is a typical solution to a coordination problem.
Government and culture
National languages
Several countries, including Australia, Japan, Mexico, the United Kingdom and the United States, have a de facto national language but no official, de jure national language.
Some countries have a de facto national language in addition to an official language. In Lebanon and Morocco, Arabic is an official language (in addition to Tamazight in the case of Morocco), but an additional de facto language is also French. In New Zealand, the official languages are Māori and New Zealand Sign Language; however, English is a third de facto language.
Russian was the de facto official language of the central government and, to a large extent, the republican governments of the former Soviet Union, but it was not declared the de jure state language until 1990. A short-lived law, effective April 24, 1990, installed Russian as the sole de jure official language of the Union prior to its dissolution in 1991.
In Hong Kong and Macau, the special administrative regions of China, the official languages are English and Portuguese respectively, together with Chinese. However, the law does not specify a particular variety of Chinese. Cantonese (Hong Kong Cantonese) written in traditional Chinese characters is the de facto standard in both territories.
Governance and sovereignty
A de facto government is a government wherein all the attributes of sovereignty have, by usurpation, been transferred from those who had been legally invested with them to others, who, sustained by a power above the forms of law, claim to act and do really act in their stead.
In politics, a de facto leader of a country or region is one who has assumed authority, regardless of whether by lawful, constitutional, or legitimate means; very frequently, the term is reserved for those whose power is thought by some faction to be held by unlawful, unconstitutional, or otherwise illegitimate means, often because it had deposed a previous leader or undermined the rule of a current one. De facto leaders sometimes do not hold a constitutional office and may exercise power informally.
Not all dictators are de facto rulers. For example, Augusto Pinochet of Chile initially came to power as the chairperson of a military junta, which briefly made him de facto leader of Chile, but he later amended the nation's constitution and made himself president until new elections were called, making him the formal and legal ruler of Chile. Similarly, Saddam Hussein's formal rule of Iraq is often recorded as beginning in 1979, the year he assumed the Presidency of Iraq. However, his de facto rule of the nation began earlier, during his time as vice president, when he exercised a great deal of power at the expense of the elderly Ahmed Hassan al-Bakr, the de jure president.
In Argentina, the successive military coups that overthrew constitutional governments installed de facto governments in 1930–1932, 1943–1946, 1955–1958, 1966–1973 and 1976–1983, the last of which combined the powers of the presidential office with those of the National Congress. The subsequent legal analysis of the validity of such actions led to the formulation of a doctrine of the de facto governments, a case law (precedential) formulation which essentially said that the actions and decrees of past de facto governments, although not rooted in legal legitimacy when taken, remained binding until and unless such time as they were revoked or repealed de jure by a subsequent legitimate government.
That doctrine was nullified by the constitutional reform of 1994. Article 36 states:
Two examples of de facto leaders are Deng Xiaoping of the People's Republic of China and general Manuel Noriega of Panama. Both of these men exercised nearly all control over their respective nations for many years despite not having either legal constitutional office or the legal authority to exercise power. These individuals are today commonly recorded as the "leaders" of their respective nations; recording their legal, correct title would not give an accurate assessment of their power.
Another example of a de facto ruler is someone who is not the actual ruler but exerts great or total influence over the true ruler, which is quite common in monarchies. Some examples of these de facto rulers are Empress Dowager Cixi of China (for son Tongzhi Emperor and nephew Guangxu Emperor), Prince Alexander Menshikov (for his former lover Empress Catherine I of Russia), Cardinal Richelieu of France (for Louis XIII), Queen Elisabeth of Parma (for her husband, King Philip V) and Queen Maria Carolina of Naples and Sicily (for her husband King Ferdinand I of the Two Sicilies).
Borders
The de facto boundaries of a country are defined by the area that its government is actually able to enforce its laws in, and to defend against encroachments by other countries that may also claim the same territory de jure. The Durand Line is an example of a de facto boundary. As well as cases of border disputes, de facto boundaries may also arise in relatively unpopulated areas in which the border was never formally established or in which the agreed border was never surveyed and its exact position is unclear. The same concepts may also apply to a boundary between provinces or other subdivisions of a federal state.
Segregation
In South Africa, although de jure apartheid formally began in 1948, de facto racist policies and practices discriminating against black South Africans, People of Colour, and Indians dated back decades before.
De facto racial discrimination and segregation in the United States (outside of the South) until the 1950s and 1960s was simply discrimination that was not segregation by law (de jure). "Jim Crow laws", which were enacted in the 1870s, brought legal racial segregation against black Americans residing in the American South. These laws were legally ended in 1964 by the Civil Rights Act of 1964.
De facto state of war
Most commonly used to describe large scale conflicts of the 20th century, the phrase de facto state of war refers to a situation where two nations are actively engaging, or are engaged, in aggressive military actions against the other without a formal declaration of war.
Marriage and domestic partnerships
Relationships
A domestic partner outside marriage is referred to as a de facto husband or wife by some authorities.
In Australia and New Zealand
In Australian law, a de facto relationship is a legally recognized, committed relationship of a couple living together (opposite-sex or same-sex). De facto unions are defined in the federal Family Law Act 1975. De facto relationships provide couples who are living together on a genuine domestic basis with many of the same rights and benefits as married couples. Two people can become a de facto couple by entering into a registered relationship (i.e.: civil union or domestic partnership) or by being assessed as such by the Family Court or Federal Circuit Court. Couples who are living together are generally recognised as a de facto union and thus able to claim many of the rights and benefits of a married couple, even if they have not registered or officially documented their relationship, although this may vary by state. It has been noted that it is harder to prove de facto relationship status, particularly in the case of the death of one of the partners.
In April 2014, an Australian federal court judge ruled that a heterosexual couple who had a child and lived together for 13 years were not in a de facto relationship and thus the court had no jurisdiction to divide up their property under family law following a request for separation. In his ruling, the judge stated "de facto relationship(s) may be described as 'marriage like' but it is not a marriage and has significant differences socially, financially and emotionally."
The above sense of de facto is related to the relationship between common law traditions and formal (statutory, regulatory, civil) law, and common-law marriages. Common law norms for settling disputes in practical situations, often worked out over many generations to establish precedent, are a core element informing decision making in legal systems around the world. Because its early forms originated in England in the Middle Ages, this is particularly true in Anglo-American legal traditions and in former colonies of the British Empire, while also playing a role in some countries that have mixed systems with significant admixtures of civil law.
Relationships not recognised outside Australia
Due to Australian federalism, de facto partnerships can only be legally recognised whilst the couple lives within a state in Australia. This is because the power to legislate on de facto matters relies on referrals by States to the Commonwealth in accordance with Section 51(xxxvii) of the Australian Constitution, where it states the new federal law can only be applied back within a state. There must be a nexus between the de facto relationship itself and the Australian state.
If an Australian de facto couple moves out of a state, they do not take the state with them and the new federal law is tied to the territorial limits of a state. The legal status and rights and obligations of the de facto or unmarried couple would then be recognised by the laws of the country where they are ordinarily resident.
This is unlike marriage and "matrimonial causes" which are recognised by sections 51(xxi) and (xxii) of the Constitution of Australia and internationally by marriage law and conventions, Hague Convention on Marriages (1978).
Non-marital relationship contract
A de facto relationship is comparable to non-marital relationship contracts (sometimes called "palimony agreements") and certain limited forms of domestic partnership, which are found in many jurisdictions throughout the world.
A de facto relationship is not comparable to common-law marriage, which is a fully legal marriage that has merely been contracted in an irregular way (including by habit and repute). Only nine U.S. states and the District of Columbia still permit common-law marriage, but common-law marriages are otherwise valid and recognised by and in all jurisdictions whose rules of comity mandate the recognition of any marriage that was legally formed in the jurisdiction where it was contracted.
Family law – custody
De facto joint custody is comparable to the joint legal decision-making authority a married couple has over their child(ren) in many jurisdictions (Canada as an example). Upon separation, each parent maintains de facto joint custody, until such time a court order awards custody, either sole or joint.
Business
Monopoly
A de facto monopoly is a system in which many suppliers of a product are allowed, but the market is so completely dominated by one that the other players are unable to compete or even survive. The related terms oligopoly and monopsony describe similar situations, which antitrust laws are intended to eliminate.
Finance
In finance, the World Bank has a pertinent definition:
A "de facto government" comes into, or remains in, power by means not provided for in the country's constitution, such as a coup d'état, revolution, usurpation, abrogation or suspension of the constitution.
Intellectual property
In engineering, a de facto technology is a system in which the intellectual property and know-how is privately held. Usually only the owner of the technology manufactures the related equipment. Meanwhile, a standard technology consists of systems that have been publicly released to a certain degree so that anybody can manufacture equipment supporting the technology. For instance, in cell phone communications, CDMA1X is a de facto technology, while GSM is a standard technology.
Sports
Examples of a de facto General Manager in sports include Syd Thrift who acted as the GM of the Baltimore Orioles between 1999 and 2002. Bill Belichick, the former head coach of the New England Patriots in the NFL did not hold the official title of GM, but served as de facto general manager as he had control over drafting and other personnel decisions.
See also
Convention (political norm)
Fact
Unenforced law
Notes
References
Latin legal terminology
Latin words and phrases
Serine/threonine-specific protein kinase | A serine/threonine protein kinase is a kinase enzyme, in particular a protein kinase, that phosphorylates the OH group of the amino-acid residues serine or threonine, which have similar side chains. At least 350 of the 500+ human protein kinases are serine/threonine kinases (STK).
In enzymology, the term serine/threonine protein kinase describes a class of enzymes in the family of transferases, that transfer phosphates to the oxygen atom of a serine or threonine side chain in proteins. This process is called phosphorylation. Protein phosphorylation in particular plays a significant role in a wide range of cellular processes and is a very important post-translational modification.
The chemical reaction performed by these enzymes can be written as
ATP + a protein ⇌ ADP + a phosphoprotein
Thus, the two substrates of this enzyme are ATP and a protein, whereas its two products are ADP and phosphoprotein.
The systematic name of this enzyme class is ATP:protein phosphotransferase (non-specific).
Function
Serine/threonine kinases play a role in the regulation of cell proliferation, programmed cell death (apoptosis), cell differentiation, and embryonic development.
Selectivity
While serine/threonine kinases all phosphorylate serine or threonine residues in their substrates, they select specific residues to phosphorylate on the basis of residues that flank the phosphoacceptor site, which together comprise the consensus sequence. Since the consensus sequence residues of a target substrate only make contact with several key amino acids within the catalytic cleft of the kinase (usually through hydrophobic forces and ionic bonds), a kinase is usually not specific to a single substrate, but instead can phosphorylate a whole "substrate family" which share common recognition sequences. While the catalytic domain of these kinases is highly conserved, the sequence variation that is observed in the kinome (the subset of genes in the genome that encode kinases) provides for recognition of distinct substrates. Many kinases are inhibited by a pseudosubstrate that binds to the kinase like a real substrate but lacks the amino acid to be phosphorylated. When the pseudosubstrate is removed, the kinase can perform its normal function.
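As an informal illustration of how a consensus sequence constrains substrate choice, the short Python sketch below scans a protein sequence for a simplified basophilic motif of the form R-x-x-[S/T]. The motif, the example sequence and the reported "candidate phosphosites" are invented for this illustration and do not represent the recognition rule of any particular kinase.
# Illustrative only: scan a sequence for a simplified R-x-x-[S/T] consensus motif.
# The motif and the sequence below are invented examples.
import re
consensus = re.compile(r"R..[ST]")
sequence = "MKRRASVAGTPLRQLSNRRNSTLPQE"
for match in consensus.finditer(sequence):
    site = match.start() + 3  # index of the phosphoacceptor Ser/Thr
    print(f"candidate phosphosite {sequence[site]}{site + 1} in context {match.group()}")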
EC numbers
Many serine/threonine protein kinases do not have their own individual EC numbers and use 2.7.11.1, "non-specific serine/threonine protein kinase". This entry is for any enzyme that phosphorylates proteins while converting ATP to ADP (i.e., ATP:protein phosphotransferases). 2.7.11.37 "protein kinase" was the former generic placeholder and was split into several entries (including 2.7.11.1) in 2005. 2.7.11.70 "protamine kinase" was merged into 2.7.11.1 in 2004.
2.7.11.- is the generic level under which all serine/threonine kinases sit.
Types
Types include those acting directly as membrane-bound receptors (receptor protein serine/threonine kinases) and intracellular kinases participating in signal transduction. Of the latter, types include:
Clinical significance
Serine/threonine kinase (STK) expression is altered in many types of cancer. Limited benefit of serine/threonine kinase inhibitors has been demonstrated in ovarian cancer, but studies are ongoing to evaluate their safety and efficacy.
Serine/threonine protein kinase p90-kDa ribosomal S6 kinase (RSK) is involved in the development of some prostate cancers.
Raf inhibition has become the target for new anti-metastatic cancer drugs as they inhibit the MAPK cascade and reduce cell proliferation.
See also
Protein serine/threonine phosphatase, enzyme for reverse process.
Pseudokinase, a protein without enzyme activity (pseudoenzyme). It can be related to proteins of this class.
ATM serine/threonine kinase, responsible for the disorder ataxia–telangiectasia.
References
External links
KinCore (Kinase Conformational Resource)
EC 2.7.11
Protein kinases
Enzymes of known structure
Hydrogen sulfide | Hydrogen sulfide is a chemical compound with the formula . It is a colorless chalcogen-hydride gas, and is poisonous, corrosive, and flammable, with trace amounts in ambient atmosphere having a characteristic foul odor of rotten eggs. Swedish chemist Carl Wilhelm Scheele is credited with having discovered the chemical composition of purified hydrogen sulfide in 1777.
Hydrogen sulfide is toxic to humans and most other animals by inhibiting cellular respiration in a manner similar to hydrogen cyanide. When it is inhaled or its salts are ingested in high amounts, damage to organs occurs rapidly with symptoms ranging from breathing difficulties to convulsions and death. Despite this, the human body produces small amounts of this sulfide and its mineral salts, and uses it as a signalling molecule.
Hydrogen sulfide is often produced from the microbial breakdown of organic matter in the absence of oxygen, such as in swamps and sewers; this process is commonly known as anaerobic digestion, which is done by sulfate-reducing microorganisms. It also occurs in volcanic gases, natural gas deposits, and sometimes in well-drawn water.
Properties
Hydrogen sulfide is slightly denser than air. A mixture of H2S and air can be explosive.
Oxidation
In general, hydrogen sulfide acts as a reducing agent, as indicated by its ability to reduce sulfur dioxide in the Claus process. Hydrogen sulfide burns in oxygen with a blue flame to form sulfur dioxide and water:
2 H2S + 3 O2 → 2 SO2 + 2 H2O
If an excess of oxygen is present, sulfur trioxide is formed, which quickly hydrates to sulfuric acid:
2 H2S + 4 O2 → 2 SO3 + 2 H2O
SO3 + H2O → H2SO4
Acid-base properties
It is slightly soluble in water and acts as a weak acid (pKa = 6.9 in 0.01–0.1 mol/litre solutions at 18 °C), giving the hydrosulfide ion HS−. Hydrogen sulfide and its solutions are colorless. When exposed to air, it slowly oxidizes to form elemental sulfur, which is not soluble in water. The sulfide anion S2− is not formed in aqueous solution.
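The quoted pKa can be used to estimate how dissolved sulfide partitions between H2S and HS− at a given pH via the Henderson–Hasselbalch relation. The short Python sketch below is a minimal illustration, not part of the original article; the pH values are arbitrary examples.
# Fraction of dissolved sulfide present as HS- at a given pH,
# using the first dissociation constant quoted above (pKa ~ 6.9).
def hs_fraction(pH, pKa=6.9):
    return 1.0 / (1.0 + 10 ** (pKa - pH))
for pH in (5.0, 6.9, 7.4, 8.0):
    f = hs_fraction(pH)
    print(f"pH {pH}: {100 * f:.0f}% HS-, {100 * (1 - f):.0f}% H2S")
At pH 6.9 the two forms are present in equal amounts, while near physiological pH roughly three quarters of the dissolved sulfide is HS−.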
Extreme temperatures and pressures
At pressures above 90 GPa (gigapascal), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature, this high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches 203 K, the highest accepted superconducting critical temperature as of 2015. By substituting a small part of the sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature further and achieve room-temperature superconductivity.
Hydrogen sulfide decomposes without the presence of a catalyst at atmospheric pressure at around 1200 °C into hydrogen and sulfur.
Tarnishing
Hydrogen sulfide reacts with metal ions to form metal sulfides, which are insoluble, often dark colored solids. Lead(II) acetate paper is used to detect hydrogen sulfide because it readily converts to lead(II) sulfide, which is black. Treating metal sulfides with strong acid or electrolysis often liberates hydrogen sulfide. Hydrogen sulfide is also responsible for tarnishing on various metals including copper and silver; the chemical responsible for black toning found on silver coins is silver sulfide, which is produced when the silver on the surface of the coin reacts with atmospheric hydrogen sulfide. Coins that have been subject to toning by hydrogen sulfide and other sulfur-containing compounds may have the toning add to the numismatic value of a coin based on aesthetics, as the toning may produce thin-film interference, resulting in the coin taking on an attractive coloration. Coins can also be intentionally treated with hydrogen sulfide to induce toning, though artificial toning can be distinguished from natural toning, and is generally criticised among collectors.
Production
Hydrogen sulfide is most commonly obtained by its separation from sour gas, which is natural gas with a high content of H2S. It can also be produced by treating hydrogen with molten elemental sulfur at about 450 °C. Hydrocarbons can serve as a source of hydrogen in this process.
The very favorable thermodynamics for the hydrogenation of sulfur implies that the dehydrogenation (or cracking) of hydrogen sulfide would require very high temperatures.
A standard lab preparation is to treat ferrous sulfide with a strong acid in a Kipp generator:
FeS + 2 HCl → FeCl2 + H2S
For use in qualitative inorganic analysis, thioacetamide is used to generate H2S:
CH3C(S)NH2 + H2O → CH3C(O)NH2 + H2S
Many metal and nonmetal sulfides, e.g. aluminium sulfide, phosphorus pentasulfide, and silicon disulfide, liberate hydrogen sulfide upon exposure to water:
Al2S3 + 6 H2O → 2 Al(OH)3 + 3 H2S
This gas is also produced by heating sulfur with solid organic compounds and by reducing sulfurated organic compounds with hydrogen.
It can also be produced by mixing ammonium thiocyanate to concentrated sulphuric acid and adding water to it.
Biosynthesis
Hydrogen sulfide can be generated in cells via enzymatic or non-enzymatic pathways. Three enzymes catalyze the formation of H2S: cystathionine γ-lyase (CSE), cystathionine β-synthetase (CBS), and 3-mercaptopyruvate sulfurtransferase (3-MST). CBS and CSE are the main proponents of H2S biogenesis, which follows the trans-sulfuration pathway. These enzymes have been identified in a breadth of biological cells and tissues, and their activity is induced by a number of disease states. These enzymes are characterized by the transfer of a sulfur atom from methionine to serine to form a cysteine molecule. 3-MST also contributes to hydrogen sulfide production by way of the cysteine catabolic pathway. Dietary amino acids, such as methionine and cysteine, serve as the primary substrates for the trans-sulfuration pathways and in the production of hydrogen sulfide. Hydrogen sulfide can also be derived from proteins such as ferredoxins and Rieske proteins.
Sulfate-reducing (resp. sulfur-reducing) bacteria generate usable energy under low-oxygen conditions by using sulfates (resp. elemental sulfur) to oxidize organic compounds or hydrogen; this produces hydrogen sulfide as a waste product.
Water heaters can aid the conversion of sulfate in water to hydrogen sulfide gas. This is because they provide a warm environment that sustains sulfur bacteria and maintains the reaction between sulfate in the water and the water heater anode, which is usually made from magnesium metal.
Signalling role
H2S in the body acts as a gaseous signaling molecule with implications for health and disease.
Hydrogen sulfide is involved in vasodilation in animals, as well as in increasing seed germination and stress responses in plants. Hydrogen sulfide signaling is moderated by reactive oxygen species (ROS) and reactive nitrogen species (RNS). H2S has been shown to interact with NO, resulting in several different cellular effects, as well as the formation of another signal called a nitrosothiol. Hydrogen sulfide is also known to increase the levels of glutathione, which acts to reduce or disrupt ROS levels in cells.
The field of H2S biology has advanced from environmental toxicology to investigating the roles of endogenously produced H2S in physiological conditions and in various pathophysiological states. H2S has been implicated in cancer, Down syndrome and vascular disease.
It inhibits Complex IV of the mitochondrial electron transport chain, which effectively reduces ATP generation and biochemical activity within cells.
Uses
Production of sulfur
Hydrogen sulfide is mainly consumed as a precursor to elemental sulfur. This conversion, called the Claus process, involves partial oxidation to sulfur dioxide. The latter reacts with hydrogen sulfide to give elemental sulfur. The conversion is catalyzed by alumina.
Production of thioorganic compounds
Many fundamental organosulfur compounds are produced using hydrogen sulfide. These include methanethiol, ethanethiol, and thioglycolic acid. Hydrosulfides can be used in the production of thiophenol.
Production of metal sulfides
Upon combining with alkali metal bases, hydrogen sulfide converts to alkali hydrosulfides such as sodium hydrosulfide and sodium sulfide:
H2S + NaOH → NaSH + H2O
H2S + 2 NaOH → Na2S + 2 H2O
Sodium sulfides are used in the paper making industry. Specifically, salts of SH− break bonds between lignin and cellulose components of pulp in the Kraft process.
As indicated above, many metal ions react with hydrogen sulfide to give the corresponding metal sulfides. Oxidic ores are sometimes treated with hydrogen sulfide to give the corresponding metal sulfides which are more readily purified by flotation. Metal parts are sometimes passivated with hydrogen sulfide. Catalysts used in hydrodesulfurization are routinely activated with hydrogen sulfide.
Hydrogen sulfide was a reagent in the qualitative inorganic analysis of metal ions. In these analyses, heavy metal (and nonmetal) ions (e.g., Pb(II), Cu(II), Hg(II), As(III)) are precipitated from solution upon exposure to H2S. The components of the resulting solid are then identified by their reactivity.
Miscellaneous applications
Hydrogen sulfide is used to separate deuterium oxide, or heavy water, from normal water via the Girdler sulfide process.
A suspended animation-like state has been induced in rodents with the use of hydrogen sulfide, resulting in hypothermia with a concomitant reduction in metabolic rate. Oxygen demand was also reduced, thereby protecting against hypoxia. In addition, hydrogen sulfide has been shown to reduce inflammation in various situations.
Occurrence
Volcanoes and some hot springs (as well as cold springs) emit some H2S. Hydrogen sulfide can be present naturally in well water, often as a result of the action of sulfate-reducing bacteria. Hydrogen sulfide is produced by the human body in small quantities through bacterial breakdown of proteins containing sulfur in the intestinal tract; therefore, it contributes to the characteristic odor of flatulence. It is also produced in the mouth (halitosis).
A portion of global H2S emissions is due to human activity. By far the largest industrial source of H2S is petroleum refineries: the hydrodesulfurization process liberates sulfur from petroleum by the action of hydrogen. The resulting H2S is converted to elemental sulfur by partial combustion via the Claus process, which is a major source of elemental sulfur. Other anthropogenic sources of hydrogen sulfide include coke ovens, paper mills (using the Kraft process), tanneries and sewerage. H2S arises from virtually anywhere where elemental sulfur comes in contact with organic material, especially at high temperatures. Depending on environmental conditions, it is responsible for the deterioration of material through the action of some sulfur-oxidizing microorganisms. This is called biogenic sulfide corrosion.
In 2011 it was reported that increased concentrations of H2S were observed in the Bakken formation crude, possibly due to oil field practices, and presented challenges such as "health and environmental risks, corrosion of wellbore, added expense with regard to materials handling and pipeline equipment, and additional refinement requirements".
Besides living near gas and oil drilling operations, ordinary citizens can be exposed to hydrogen sulfide by being near waste water treatment facilities, landfills and farms with manure storage. Exposure occurs through breathing contaminated air or drinking contaminated water.
In municipal waste landfill sites, the burial of organic material rapidly leads to anaerobic digestion within the waste mass and, with the humid atmosphere and relatively high temperature that accompany biodegradation, biogas is produced as soon as the air within the waste mass has been depleted. If there is a source of sulfate-bearing material, such as plasterboard or natural gypsum (calcium sulfate dihydrate), under anaerobic conditions sulfate-reducing bacteria convert this to hydrogen sulfide. These bacteria cannot survive in air, but the moist, warm, anaerobic conditions of buried waste that contains a rich source of carbon – in inert landfills, the paper and glue used in the fabrication of products such as plasterboard can provide this carbon – are an excellent environment for the formation of hydrogen sulfide.
In industrial anaerobic digestion processes, such as waste water treatment or the digestion of organic waste from agriculture, hydrogen sulfide can be formed from the reduction of sulfate and the degradation of amino acids and proteins within organic compounds. Sulfates are relatively non-inhibitory to methane-forming bacteria but can be reduced to H2S by sulfate-reducing bacteria, of which there are several genera.
Removal from water
A number of processes have been designed to remove hydrogen sulfide from drinking water.
Continuous chlorination: For levels up to 75 mg/L, chlorine is used in the purification process as an oxidizing chemical to react with hydrogen sulfide. This reaction yields insoluble solid sulfur. Usually the chlorine used is in the form of sodium hypochlorite.
Aeration: For concentrations of hydrogen sulfide less than 2 mg/L, aeration is an ideal treatment process. Oxygen is added to the water, and the oxygen and hydrogen sulfide react to produce odorless sulfate.
Nitrate addition: Calcium nitrate can be used to prevent hydrogen sulfide formation in wastewater streams.
Removal from fuel gases
Hydrogen sulfide is commonly found in raw natural gas and biogas. It is typically removed by amine gas treating technologies. In such processes, the hydrogen sulfide is first converted to an ammonium salt, whereas the natural gas is unaffected.
The bisulfide anion is subsequently regenerated by heating of the amine sulfide solution. Hydrogen sulfide generated in this process is typically converted to elemental sulfur using the Claus Process.
Safety
The underground mine gas term for foul-smelling hydrogen sulfide-rich gas mixtures is stinkdamp. Hydrogen sulfide is a highly toxic and flammable gas (flammable range: 4.3–46%). It can poison several systems in the body, although the nervous system is most affected. The toxicity of H2S is comparable with that of carbon monoxide. It binds with iron in the mitochondrial cytochrome enzymes, thus preventing cellular respiration. Its toxic properties were described in detail in 1843 by Justus von Liebig.
Even before hydrogen sulfide was discovered, Italian physician Bernardino Ramazzini hypothesized in his 1713 book De Morbis Artificum Diatriba that occupational diseases of sewer-workers and blackening of coins in their clothes may be caused by an unknown invisible volatile acid (moreover, in late 18th century toxic gas emanation from Paris sewers became a problem for the citizens and authorities).
Although very pungent at first (it smells like rotten eggs), it quickly deadens the sense of smell, creating temporary anosmia, so victims may be unaware of its presence until it is too late. Safe handling procedures are provided by its safety data sheet (SDS).
Low-level exposure
Since hydrogen sulfide occurs naturally in the body, the environment, and the gut, enzymes exist to metabolize it. At some threshold level, believed to average around 300–350 ppm, the oxidative enzymes become overwhelmed. Many personal safety gas detectors, such as those used by utility, sewage and petrochemical workers, are set to alarm at as low as 5 to 10 ppm and to go into high alarm at 15 ppm. Metabolism causes oxidation to sulfate, which is harmless. Hence, low levels of hydrogen sulfide may be tolerated indefinitely.
Exposure to lower concentrations can result in eye irritation, a sore throat and cough, nausea, shortness of breath, and fluid in the lungs. These effects are believed to be due to hydrogen sulfide combining with alkali present in moist surface tissues to form sodium sulfide, a caustic. These symptoms usually subside in a few weeks.
Long-term, low-level exposure may result in fatigue, loss of appetite, headaches, irritability, poor memory, and dizziness. Chronic exposure to low-level H2S (around 2 ppm) has been implicated in increased miscarriage and reproductive health issues among Russian and Finnish wood pulp workers, but the reports have not (as of 1995) been replicated.
High-level exposure
Short-term, high-level exposure can induce immediate collapse, with loss of breathing and a high probability of death. If death does not occur, high exposure to hydrogen sulfide can lead to cortical pseudolaminar necrosis, degeneration of the basal ganglia and cerebral edema. Although respiratory paralysis may be immediate, it can also be delayed up to 72 hours.
Inhalation of H2S resulted in about 7 workplace deaths per year in the U.S. (2011–2017 data), second only to carbon monoxide (17 deaths per year) for workplace chemical inhalation deaths.
Exposure thresholds
Exposure limits stipulated by the United States government:
10 ppm REL-Ceiling (NIOSH): recommended permissible exposure ceiling (the recommended level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs)
20 ppm PEL-Ceiling (OSHA): permissible exposure ceiling (the level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs)
50 ppm PEL-Peak (OSHA): peak permissible exposure (the level that must never be exceeded)
100 ppm IDLH (NIOSH): immediately dangerous to life and health (the level that interferes with the ability to escape)
0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it.
10–20 ppm is the borderline concentration for eye irritation.
50–100 ppm leads to eye damage.
At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger.
320–530 ppm leads to pulmonary edema with the possibility of death.
530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing.
800 ppm is the lethal concentration for 50% of humans for 5 minutes' exposure (LC50).
Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath.
Treatment
Treatment involves immediate inhalation of amyl nitrite, injections of sodium nitrite, or administration of 4-dimethylaminophenol in combination with inhalation of pure oxygen, administration of bronchodilators to overcome eventual bronchospasm, and in some cases hyperbaric oxygen therapy (HBOT). HBOT has clinical and anecdotal support.
Incidents
Hydrogen sulfide was used by the British Army as a chemical weapon during World War I. It was not considered to be an ideal war gas, partially due to its flammability and because the distinctive smell could be detected from even a small leak, alerting the enemy to the presence of the gas. It was nevertheless used on two occasions in 1916 when other gases were in short supply.
On September 2, 2005, a sewage line leak in the propeller room of a Royal Caribbean cruise liner docked in Los Angeles resulted in the deaths of 3 crewmen. As a result, all such compartments are now required to have a ventilation system.
A dump of toxic waste containing hydrogen sulfide is believed to have caused 17 deaths and thousands of illnesses in Abidjan, on the West African coast, in the 2006 Côte d'Ivoire toxic waste dump.
In September 2008, three workers were killed and two suffered serious injury, including long term brain damage, at a mushroom growing company in Langley, British Columbia. A valve to a pipe that carried chicken manure, straw and gypsum to the compost fuel for the mushroom growing operation became clogged, and as workers unclogged the valve in a confined space without proper ventilation the hydrogen sulfide that had built up due to anaerobic decomposition of the material was released, poisoning the workers in the surrounding area. An investigator said there could have been more fatalities if the pipe had been fully cleared and/or if the wind had changed directions.
In 2014, levels of hydrogen sulfide as high as 83 ppm were detected at a recently built mall in Thailand called Siam Square One at the Siam Square area. Shop tenants at the mall reported health complications such as sinus inflammation, breathing difficulties and eye irritation. After investigation it was determined that the large amount of gas originated from imperfect treatment and disposal of waste water in the building.
In 2014, hydrogen sulfide gas killed workers at the Promenade shopping center in North Scottsdale, Arizona, USA, after they climbed into a 15 ft deep chamber without wearing personal protective gear. "Arriving crews recorded high levels of hydrogen cyanide and hydrogen sulfide coming out of the sewer."
In November 2014, a substantial amount of hydrogen sulfide gas shrouded the central, eastern and southeastern parts of Moscow. Residents living in the area were urged to stay indoors by the emergencies ministry. Although the exact source of the gas was not known, blame had been placed on a Moscow oil refinery.
In June 2016, a mother and her daughter were found dead in their still-running 2006 Porsche Cayenne SUV against a guardrail on Florida's Turnpike, initially thought to be victims of carbon monoxide poisoning. Their deaths remained unexplained as the medical examiner waited for results of toxicology tests on the victims, until urine tests revealed that hydrogen sulfide was the cause of death. A report from the Orange-Osceola Medical Examiner's Office indicated that toxic fumes came from the Porsche's starter battery, located under the front passenger seat.
In January 2017, three utility workers in Key Largo, Florida, died one by one within seconds of descending into a narrow space beneath a manhole cover to check a section of paved street. In an attempt to save the men, a firefighter who entered the hole without his air tank (because he could not fit through the hole with it) collapsed within seconds and had to be rescued by a colleague. The firefighter was airlifted to Jackson Memorial Hospital and later recovered. A Monroe County Sheriff officer initially determined that the space contained hydrogen sulfide and methane gas produced by decomposing vegetation.
On May 24, 2018, two workers were killed, another seriously injured, and 14 others hospitalized by hydrogen sulfide inhalation at a Norske Skog paper mill in Albury, New South Wales. An investigation by SafeWork NSW found that the gas was released from a tank used to hold process water. The workers were exposed at the end of a 3-day maintenance period. Hydrogen sulfide had built up in an upstream tank, which had been left stagnant and untreated with biocide during the maintenance period. These conditions allowed sulfate-reducing bacteria to grow in the upstream tank, as the water contained small quantities of wood pulp and fiber. The high rate of pumping from this tank into the tank involved in the incident caused hydrogen sulfide gas to escape from various openings around its top when pumping was resumed at the end of the maintenance period. The area above it was sufficiently enclosed for the gas to pool there, despite not being identified as a confined space by Norske Skog. One of the workers who was killed was exposed while investigating an apparent fluid leak in the tank, while the other who was killed and the worker who was badly injured were attempting to rescue the first after he collapsed on top of it. In a resulting criminal case, Norske Skog was accused of failing to ensure the health and safety of its workforce at the plant to a reasonably practicable extent. It pleaded guilty, and was fined AU$1,012,500 and ordered to fund the production of an anonymized educational video about the incident.
In October 2019, an Odessa, Texas employee of Aghorn Operating Inc. and his wife were killed due to a water pump failure. Produced water with a high concentration of hydrogen sulfide was released by the pump. The worker died while responding to an automated phone call he had received alerting him to a mechanical failure in the pump, while his wife died after driving to the facility to check on him. A CSB investigation cited lax safety practices at the facility, such as an informal lockout-tagout procedure and a nonfunctioning hydrogen sulfide alert system.
Suicides
The gas, produced by mixing certain household ingredients, was used in a suicide wave in 2008 in Japan. The wave prompted staff at Tokyo's suicide prevention center to set up a special hotline during "Golden Week", as they received an increase in calls from people wanting to kill themselves during the annual May holiday.
As of 2010, this phenomenon has occurred in a number of US cities, prompting warnings to those arriving at the site of the suicide. These first responders, such as emergency services workers or family members are at risk of death or injury from inhaling the gas, or by fire. Local governments have also initiated campaigns to prevent such suicides.
In 2020, H2S ingestion was used as a suicide method by Japanese pro wrestler Hana Kimura.
In 2024, Lucy-Bleu Knight, stepdaughter of famed musician Slash, also used H2S ingestion to commit suicide.
Hydrogen sulfide in the natural environment
Microbial: The sulfur cycle
Hydrogen sulfide is a central participant in the sulfur cycle, the biogeochemical cycle of sulfur on Earth.
In the absence of oxygen, sulfur-reducing and sulfate-reducing bacteria derive energy from oxidizing hydrogen or organic molecules by reducing elemental sulfur or sulfate to hydrogen sulfide. Other bacteria liberate hydrogen sulfide from sulfur-containing amino acids; this gives rise to the odor of rotten eggs and contributes to the odor of flatulence.
As organic matter decays under low-oxygen (or hypoxic) conditions (such as in swamps, eutrophic lakes or dead zones of oceans), sulfate-reducing bacteria will use the sulfates present in the water to oxidize the organic matter, producing hydrogen sulfide as waste. Some of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides, which are not water-soluble. These metal sulfides, such as ferrous sulfide FeS, are often black or brown, leading to the dark color of sludge.
Several groups of bacteria can use hydrogen sulfide as fuel, oxidizing it to elemental sulfur or to sulfate by using dissolved oxygen, metal oxides (e.g., iron oxyhydroxides and manganese oxides), or nitrate as electron acceptors.
The purple sulfur bacteria and the green sulfur bacteria use hydrogen sulfide as an electron donor in photosynthesis, thereby producing elemental sulfur. This mode of photosynthesis is older than the mode of cyanobacteria, algae, and plants, which uses water as electron donor and liberates oxygen.
The biochemistry of hydrogen sulfide is a key part of the chemistry of the iron-sulfur world. In this model of the origin of life on Earth, geologically produced hydrogen sulfide is postulated as an electron donor driving the reduction of carbon dioxide.
Animals
Hydrogen sulfide is lethal to most animals, but a few highly specialized species (extremophiles) do thrive in habitats that are rich in this compound.
In the deep sea, hydrothermal vents and cold seeps with high levels of hydrogen sulfide are home to a number of extremely specialized lifeforms, ranging from bacteria to fish. Because of the absence of sunlight at these depths, these ecosystems rely on chemosynthesis rather than photosynthesis.
Freshwater springs rich in hydrogen sulfide are mainly home to invertebrates, but also include a small number of fish: Cyprinodon bobmilleri (a pupfish from Mexico), Limia sulphurophila (a poeciliid from the Dominican Republic), Gambusia eurystoma (a poeciliid from Mexico), and a few Poecilia (poeciliids from Mexico). Invertebrates and microorganisms in some cave systems, such as Movile Cave, are adapted to high levels of hydrogen sulfide.
Interstellar and planetary occurrence
Hydrogen sulfide has often been detected in the interstellar medium. It also occurs in the clouds of planets in our solar system.
Mass extinctions
Hydrogen sulfide has been implicated in several mass extinctions that have occurred in the Earth's past. In particular, a buildup of hydrogen sulfide in the atmosphere may have caused, or at least contributed to, the Permian-Triassic extinction event 252 million years ago.
Organic residues from these extinction boundaries indicate that the oceans were anoxic (oxygen-depleted) and had species of shallow plankton that metabolized H2S. The formation of H2S may have been initiated by massive volcanic eruptions, which emitted carbon dioxide and methane into the atmosphere, warming the oceans and lowering their capacity to absorb oxygen that would otherwise oxidize H2S. The increased levels of hydrogen sulfide could have killed oxygen-generating plants as well as depleted the ozone layer, causing further stress. Small H2S blooms have been detected in modern times in the Dead Sea and in the Atlantic Ocean off the coast of Namibia.
See also
Hydrogen sulfide chemosynthesis
Marsh gas
References
Additional resources
External links
International Chemical Safety Card 0165
Concise International Chemical Assessment Document 53
National Pollutant Inventory - Hydrogen sulfide fact sheet
NIOSH Pocket Guide to Chemical Hazards
NACE (National Association of Corrosion Engineers)
Acids
Foul-smelling chemicals
Hydrogen compounds
Triatomic molecules
Industrial gases
Airborne pollutants
Sulfides
Flatulence
Gaseous signaling molecules
Blood agents
PARSEC | PARSEC is a package designed to perform electronic structure calculations of solids and molecules using density functional theory (DFT). The acronym stands for Pseudopotential Algorithm for Real-Space Electronic Calculations. It solves the Kohn–Sham equations in real space, without the use of explicit basis sets.
One of the strengths of this code is that it handles non-periodic boundary conditions in a natural way, without the use of super-cells, but can equally well handle periodic and partially periodic boundary conditions. Another key strength is that it is readily amenable to efficient massive parallelization, making it highly effective for very large systems.
Its development started in the early 1990s with James Chelikowsky (now at the University of Texas), Yousef Saad and collaborators at the University of Minnesota. The code is freely available under the GNU GPLv2. Currently, its public version is 1.4.4. Some of the physical/chemical properties calculated by this code are: the Kohn–Sham band structure, atomic forces (including molecular dynamics capabilities), static susceptibility, magnetic dipole moment, and many additional molecular and solid-state properties.
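To give a flavour of the real-space approach described above, the following Python sketch diagonalizes a one-dimensional single-particle Hamiltonian discretized on a uniform grid with a second-order finite-difference Laplacian. It is a conceptual illustration only, not PARSEC's algorithm or input format: the toy harmonic potential, the grid parameters and the dense eigensolver are assumptions chosen for brevity, whereas the real code works with pseudopotentials, self-consistent Kohn–Sham iterations, three-dimensional grids and parallel eigensolvers.
# Toy real-space eigenproblem: -1/2 d^2/dx^2 + V(x) on a uniform 1-D grid.
import numpy as np
n, L = 200, 20.0                      # number of grid points and box length (a.u.)
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
# second-order finite-difference Laplacian with zero (non-periodic) boundaries
lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
V = 0.5 * x**2                        # toy harmonic potential (assumed)
H = -0.5 * lap + np.diag(V)           # grid Hamiltonian, no explicit basis set
print("lowest grid eigenvalues:", np.round(np.linalg.eigvalsh(H)[:3], 3))
The printed values approach the analytic harmonic-oscillator energies 0.5, 1.5 and 2.5, illustrating how a grid plus a difference stencil can stand in for an explicit basis set.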
See also
Density functional theory
Quantum chemistry computer programs
References
External links
Computational chemistry software
Density functional theory software
Physics software
Anaerobic exercise | Anaerobic exercise is a type of exercise that breaks down glucose in the body without using oxygen; anaerobic means "without oxygen". This type of exercise leads to a buildup of lactic acid.
In practical terms, this means that anaerobic exercise is more intense, but shorter in duration than aerobic exercise.
The biochemistry of anaerobic exercise involves a process called glycolysis, in which glucose is converted to adenosine triphosphate (ATP), the primary source of energy for cellular reactions.
Anaerobic exercise may be used to help build endurance, muscle strength, and power.
Metabolism
Anaerobic metabolism is a natural part of metabolic energy expenditure. Fast twitch muscles (as compared to slow twitch muscles) operate using anaerobic metabolic systems, such that any use of fast twitch muscle fibers leads to increased anaerobic energy expenditure. Intense exercise lasting upwards of four minutes (e.g. a mile race) may still have considerable anaerobic energy expenditure. An example is high-intensity interval training, an exercise strategy that is performed under anaerobic conditions at intensities that reach an excess of 90% of the maximum heart rate. Anaerobic energy expenditure is difficult to accurately quantify. Some methods estimate the anaerobic component of an exercise by determining the maximum accumulated oxygen deficit or measuring the lactic acid formation in muscle mass.
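As a rough illustration of the maximum accumulated oxygen deficit idea mentioned above, the Python sketch below fits a line to submaximal oxygen uptake versus power, extrapolates it to a supramaximal intensity to predict total oxygen demand, and subtracts the oxygen actually consumed. All workloads, uptake values and durations are invented example numbers, not measurements, and real protocols are considerably more involved.
# Hypothetical submaximal data: steady-state VO2 (L/min) at four workloads (W).
import numpy as np
power_w = np.array([100.0, 150.0, 200.0, 250.0])
vo2_l_min = np.array([1.5, 2.0, 2.5, 3.0])
slope, intercept = np.polyfit(power_w, vo2_l_min, 1)
supra_power_w = 400.0        # supramaximal test intensity (assumed)
test_minutes = 3.0           # duration of the exhaustive bout (assumed)
measured_vo2_l = 9.5         # total O2 actually consumed during the bout (assumed)
predicted_demand_l = (slope * supra_power_w + intercept) * test_minutes
deficit_l = predicted_demand_l - measured_vo2_l
print(f"estimated accumulated O2 deficit: {deficit_l:.1f} L O2 equivalents")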
In contrast, aerobic exercise includes lower intensity activities performed for longer periods of time. Activities such as walking, jogging, rowing, and cycling require oxygen to generate the energy needed for prolonged exercise (i.e., aerobic energy expenditure). For sports that require repeated short bursts of exercise, the aerobic system acts to replenish and store energy during recovery periods to fuel the next energy burst. Therefore, training strategies for many sports demand that both aerobic and anaerobic systems be developed. The benefits of adding anaerobic exercise include improving cardiovascular endurance as well as building and maintaining muscle strength and losing weight.
The anaerobic energy systems are:
The alactic anaerobic system, which consists of high energy phosphates, adenosine triphosphate, and creatine phosphate; and
The lactic anaerobic system, which features anaerobic glycolysis.
High energy phosphates are stored in limited quantities within muscle cells. Anaerobic glycolysis exclusively uses glucose (and glycogen) as a fuel in the absence of oxygen, or more specifically, when ATP is needed at rates that exceed those provided by aerobic metabolism. The consequence of such rapid glucose breakdown is the formation of lactic acid (or more appropriately, its conjugate base lactate at biological pH levels). Physical activities that last up to about thirty seconds rely primarily on the former ATP-CP phosphagen system. Beyond this time, both aerobic and anaerobic glycolysis-based metabolic systems are used.
The by-product of anaerobic glycolysis—lactate—has traditionally been thought to be detrimental to muscle function. However, this appears likely only when lactate levels are very high. Elevated lactate levels are only one of many changes that occur within and around muscle cells during intense exercise that can lead to fatigue. Fatigue, which is muscle failure, is a complex subject that depends on more than just changes to lactate concentration. Energy availability, oxygen delivery, perception of pain, and other psychological factors all contribute to muscular fatigue. Elevated muscle and blood lactate concentrations are a natural consequence of any physical exertion. The effectiveness of anaerobic activity can be improved through training.
Anaerobic exercise also increases an individual's basal metabolic rate (BMR).
Examples
Anaerobic exercises are high-intensity workouts completed over shorter durations, while aerobic exercises include variable-intensity workouts completed over longer durations. Some examples of anaerobic exercises include sprints, high-intensity interval training (HIIT), and strength training.
See also
Aerobic exercise
Bioenergetic systems
Margaria-Kalamen power test
Strength training
Weight training
Cori cycle
Citric acid cycle
References
Exercise biochemistry
Exercise physiology
Physical exercise
Bodybuilding
SYNTAX
In computer science, SYNTAX is a system used to generate lexical and syntactic analyzers (parsers) (both deterministic and non-deterministic) for all kinds of context-free grammars (CFGs) as well as some classes of contextual grammars. It has been developed at INRIA in France for several decades, mostly by Pierre Boullier, but only became free software in 2007. SYNTAX is distributed under the CeCILL license.
Context-free parsing
SYNTAX handles most classes of deterministic (unambiguous) grammars (LR, LALR, RLR) as well as general context-free grammars. The deterministic version has been used in operational contexts (e.g., Ada), and is currently used in the domain of compilation. The non-deterministic features include an Earley parser generator used for natural language processing. Parsers generated by SYNTAX include powerful error recovery mechanisms, and allow the execution of semantic actions and attribute evaluation on the abstract tree or on the shared parse forest.
Contextual parsing
The current version of SYNTAX (version 6.0 beta) also includes parser generators for other formalisms, used for natural language processing as well as bio-informatics. These are context-sensitive formalisms (TAG, RCG) or formalisms that rely on context-free grammars extended through attribute evaluation, in particular for natural language processing (LFG).
Error recovery
A nice feature of SYNTAX (compared to Lex/Yacc) is its built-in algorithm for automatically recovering from lexical and syntactic errors, by deleting extra characters or tokens, inserting missing characters or tokens, permuting characters or tokens, etc. This algorithm has a default behaviour that can be modified by providing a custom set of recovery rules adapted to the language for which the lexer and parser are built.
References
External links
SYNTAX web site
Paper on the construction of compilers using SYNTAX and TRAIAN (Compiler Construction'02 Conference)
Compiling tools
Parser generators
IUPAC numerical multiplier
The numerical multiplier (or multiplying affix) in IUPAC nomenclature indicates how many particular atoms or functional groups are attached at a particular point in a molecule. The affixes are derived from both Latin and Greek.
Compound affixes
The prefixes are given from the least significant decimal digit up: units, then tens, then hundreds, then thousands. For example:
548 → octa- (8) + tetraconta- (40) + pentacta- (500) = octatetracontapentacta-
9267 → hepta- (7) + hexaconta- (60) + dicta- (200) + nonalia- (9000) = heptahexacontadictanonalia-
The numeral one
While the use of the affix mono- is rarely necessary in organic chemistry, it is often essential in inorganic chemistry to avoid ambiguity: carbon oxide could refer to either carbon monoxide or carbon dioxide. In forming compound affixes, the numeral one is represented by the term hen- except when it forms part of the number eleven (undeca-): hence
241 → hen- (1) + tetraconta- (40) + dicta- (200) = hentetracontadicta-
411 → undeca- (11) + tetracta- (400) = undecatetracta-
The numeral two
In compound affixes, the numeral two is represented by do- except when it forms part of the numbers 20 (icosa-), 200 (dicta-) or 2000 (dilia-).
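As a rough illustration of the digit-by-digit construction described above, the following Python sketch assembles compound affixes from the least significant digit upward. It is an illustrative tool, not an official IUPAC utility: the prefix tables are the commonly cited IUPAC forms (only the entries exercised by the examples above are guaranteed here), and the function handles compound affixes only, not the simple affixes such as mono- and di-.

```python
# Minimal sketch: build a compound multiplying affix digit by digit,
# least significant first, per the rules described above.
# The prefix tables are assumed from commonly cited IUPAC forms.
UNITS     = ["", "hen", "do", "tri", "tetra", "penta", "hexa", "hepta", "octa", "nona"]
TENS      = ["", "deca", "icosa", "triaconta", "tetraconta", "pentaconta",
             "hexaconta", "heptaconta", "octaconta", "nonaconta"]
HUNDREDS  = ["", "hecta", "dicta", "tricta", "tetracta", "pentacta",
             "hexacta", "heptacta", "octacta", "nonacta"]
THOUSANDS = ["", "kilia", "dilia", "trilia", "tetralia", "pentalia",
             "hexalia", "heptalia", "octalia", "nonalia"]

def compound_affix(n: int) -> str:
    """Compound affix for a number up to 9999 (trailing hyphen omitted)."""
    th, h, t, u = n // 1000, (n // 100) % 10, (n // 10) % 10, n % 10
    if t == 1 and u == 1:
        parts = ["undeca"]            # 1 is written undeca- as part of eleven
    else:
        parts = [UNITS[u], TENS[t]]   # hen- for 1, do- for 2 in compounds
    parts += [HUNDREDS[h], THOUSANDS[th]]
    return "".join(p for p in parts if p)

print(compound_affix(548))    # octatetracontapentacta
print(compound_affix(9267))   # heptahexacontadictanonalia
print(compound_affix(241))    # hentetracontadicta
print(compound_affix(411))    # undecatetracta
```

The output reproduces the four worked examples above; because icosa-, dicta- and dilia- appear directly in the tables, the do- exceptions for 20, 200 and 2000 need no special handling.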
Icosa- v. eicosa-
IUPAC prefers the spelling icosa- for the affix corresponding to the number twenty on the grounds of etymology. However both the Chemical Abstracts Service and the Beilstein database use the alternative spelling eicosa-.
Other numerical prefix types
There are two more types of numerical prefixes in IUPAC organic chemistry nomenclature.
Numerical terms for compound or complex features
Numerical prefixes for multiplication of compound or complex (as in complicated) features are created by adding kis to the basic numerical prefix, with the exception of numbers 2 and 3, which are bis- and tris-, respectively.
An example is the IUPAC name for DDT.
Multiplicative prefixes for naming assemblies of identical units
Examples are biphenyl or terphenyl.
Etymology
"mono-" is from Greek monos = "alone". "un" = 1 and "nona-" = 9 are from Latin. The others are derived from Greek numbers.
The forms for 100 and upwards are not correct Greek. In Ancient Greek, hekaton = 100, diakosioi = 200, triakosioi = 300, etc. The numbers 200-900 would easily be confused with 22 to 29 if the correct Greek forms were used in chemistry.
khīlioi = 1000, diskhīlioi = 2000, triskhīlioi = 3000, etc.
13 to 19 are formed by starting with the Greek word for the number of ones, followed by και (the Greek word for 'and'), followed by δέκα (the Greek word for 'ten'). For instance treiskaideka, as in triskaidekaphobia.
Notes and references
Chemical nomenclature
Orotic acid
Orotic acid is a pyrimidinedione and a carboxylic acid. Historically, it was believed to be part of the vitamin B complex and was called vitamin B13, but it is now known that it is not a vitamin.
The compound is synthesized in the body via a mitochondrial enzyme, dihydroorotate dehydrogenase or a cytoplasmic enzyme of pyrimidine synthesis pathway. It is sometimes used as a mineral carrier in some dietary supplements (to increase their bioavailability), most commonly for lithium orotate.
Synthesis
Dihydroorotate is converted to orotic acid by the enzyme dihydroorotate dehydrogenase; the orotic acid then combines with phosphoribosyl pyrophosphate (PRPP) to form orotidine-5'-monophosphate (OMP). A distinguishing characteristic of pyrimidine synthesis is that the pyrimidine ring is fully synthesized before being attached to the ribose sugar, whereas purine synthesis builds the base directly on the sugar.
Chemistry
Orotic acid is a Bronsted acid and its conjugate base, the orotate anion, is able to bind to metals. Lithium orotate, for example, has been investigated for use in treating alcoholism, and complexes of cobalt, manganese, nickel, and zinc are known. The pentahydrate nickel orotate coordination complex converts into a polymeric trihydrate upon heating in water at 100 °C. Crystals of the trihydrate can be obtained by hydrothermal treatment of nickel(II) acetate and orotic acid. When the reactions are run with bidentate nitrogen ligands such as 2,2'-bipyridine present, other solids can be obtained.
Pathology
A buildup of orotic acid can lead to orotic aciduria and acidemia. It may be a symptom of an increased ammonia load due to a metabolic disorder, such as a urea cycle disorder.
In ornithine transcarbamylase deficiency, an X-linked inherited disorder and the most common urea cycle disorder, excess carbamoyl phosphate is converted into orotic acid. This leads to an increased serum ammonia level, increased serum and urinary orotic acid levels, and a decreased serum blood urea nitrogen level. Urinary orotic acid excretion rises because the orotic acid is not being properly utilized and must be eliminated. The hyperammonemia depletes alpha-ketoglutarate, leading to inhibition of the tricarboxylic acid (TCA) cycle and decreasing adenosine triphosphate (ATP) production.
Orotic aciduria is a cause of megaloblastic anaemia.
Biochemistry
Orotic acid is a precursor of the RNA base uracil. The breast milk of smokers has a higher concentration of orotic acid than that of non-smokers. It is thought that smoking alters the mother's pyrimidine biosynthesis, causing the orotic acid concentration to increase.
A modified orotic acid (5-fluoroorotic acid) is toxic to yeast. Mutant yeasts that are resistant to 5-fluoroorotic acid require a supply of uracil.
See also
Magnesium orotate
Pyrimidine biosynthesis
References
Further reading
External links
Pyrimidinediones
Enoic acids
Haworth projection
In chemistry, a Haworth projection is a common way of writing a structural formula to represent the cyclic structure of monosaccharides with a simple three-dimensional perspective. Haworth projections approximate the shapes of the actual molecules better for furanoses, which are in reality nearly planar, than for pyranoses, which exist in solution in the chair conformation. Organic chemistry and especially biochemistry are the areas of chemistry that use the Haworth projection the most.
The Haworth projection was named after the British chemist Sir Norman Haworth.
A Haworth projection has the following characteristics:
Carbon is the implicit type of atom. In the example on the right, the atoms numbered from 1 to 6 are all carbon atoms. Carbon 1 is known as the anomeric carbon.
Hydrogen atoms on carbon are implicit. In the example, atoms 1 to 6 have extra hydrogen atoms not depicted.
A thicker line indicates atoms that are closer to the observer. In the example on the right, atoms 2 and 3 (and their corresponding OH groups) are the closest to the observer. Atoms 1 and 4 are farther from the observer. Atom 5 and the other atoms are the farthest.
The groups below the plane of the ring in Haworth projections correspond to those on the right-hand side of a Fischer projection. This rule does not apply to the groups on the two ring carbons bonded to the endocyclic oxygen atom combined with hydrogen.
See also
Skeletal formula
Natta projection
Newman projection
References
Carbohydrate chemistry
Carbohydrates
Stereochemistry
Gillespie algorithm
In probability theory, the Gillespie algorithm (or the Doob–Gillespie algorithm or stochastic simulation algorithm, the SSA) generates a statistically correct trajectory (possible solution) of a stochastic equation system for which the reaction rates are known. It was created by Joseph L. Doob and others (circa 1945), presented by Dan Gillespie in 1976, and popularized in 1977 in a paper where he uses it to simulate chemical or biochemical systems of reactions efficiently and accurately using limited computational power (see stochastic simulation). As computers have become faster, the algorithm has been used to simulate increasingly complex systems. The algorithm is particularly useful for simulating reactions within cells, where the number of reagents is low and keeping track of every single reaction is computationally feasible. Mathematically, it is a variant of a dynamic Monte Carlo method and similar to the kinetic Monte Carlo methods. It is used heavily in computational systems biology.
History
The process that led to the algorithm recognizes several important steps. In 1931, Andrei Kolmogorov introduced the differential equations corresponding to the time-evolution of stochastic processes that proceed by jumps, today known as Kolmogorov equations (Markov jump process) (a simplified version is known as master equation in the natural sciences). It was William Feller, in 1940, who found the conditions under which the Kolmogorov equations admitted (proper) probabilities as solutions. In his Theorem I (1940 work) he establishes that the time-to-the-next-jump was exponentially distributed and the probability of the next event is proportional to the rate. As such, he established the relation of Kolmogorov's equations with stochastic processes.
Later, Doob (1942, 1945) extended Feller's solutions beyond the case of pure-jump processes. The method was implemented in computers by David George Kendall (1950) using the Manchester Mark 1 computer and later used by Maurice S. Bartlett (1953) in his studies of epidemics outbreaks. Gillespie (1977) obtains the algorithm in a different manner by making use of a physical argument.
Idea
Mathematics
In a reaction chamber, there are a finite number of molecules. At each infinitesimal slice of time, a single reaction might take place. The rate is determined by the number of molecules in each chemical species.
Naively, we can simulate the trajectory of the reaction chamber by discretizing time, then simulating each time-step. However, there might be long stretches of time where no reaction occurs. The Gillespie algorithm samples a random waiting time until some reaction occurs, then takes another random sample to decide which reaction has occurred.
The key assumptions are that
each reaction is Markovian in time
there are no correlations between reactions
Given the two assumptions, the random waiting time for some reaction is exponentially distributed, with the exponential rate being the sum of the individual reactions' rates.
Validity in biochemical simulations
Traditional continuous and deterministic biochemical rate equations do not accurately predict cellular reactions since they rely on bulk reactions that require the interactions of millions of molecules. They are typically modeled as a set of coupled ordinary differential equations. In contrast, the Gillespie algorithm allows a discrete and stochastic simulation of a system with few reactants because every reaction is explicitly simulated. A trajectory corresponding to a single Gillespie simulation represents an exact sample from the probability mass function that is the solution of the master equation.
The physical basis of the algorithm is the collision of molecules within a reaction vessel. It is assumed that collisions are frequent, but collisions with the proper orientation and energy are infrequent. It is assumed that the reaction environment is well mixed.
Algorithm
A review (Gillespie, 2007) outlines three different, but equivalent formulations; the direct, first-reaction, and first-family methods, whereby the former two are special cases of the latter. The formulation of the direct and first-reaction methods is centered on performing the usual Monte Carlo inversion steps on the so-called "fundamental premise of stochastic chemical kinetics", which mathematically is the function
p(τ, j | x, t) = a_j(x) exp(-a_0(x) τ),   with a_0(x) = Σ_k a_k(x),
where each of the a_j(x) are propensity functions of an elementary reaction, whose argument is x, the vector of species counts. The parameter τ is the time to the next reaction (or sojourn time), and t is the current time. To paraphrase Gillespie, this expression is read as "the probability, given x and t, that the system's next reaction will occur in the infinitesimal time interval [t + τ, t + τ + dτ), and will be of stoichiometry corresponding to the jth reaction". This formulation provides a window to the direct and first-reaction methods by implying that τ is an exponentially-distributed random variable, and that j is "a statistically independent integer random variable with point probabilities a_j(x)/a_0(x)".
Thus, the Monte Carlo generating method is simply to draw two pseudorandom numbers, r_1 and r_2, uniformly distributed on (0, 1), and compute
τ = (1/a_0(x)) ln(1/r_1)
and
j = the smallest integer satisfying Σ_{k=1}^{j} a_k(x) > r_2 a_0(x)
Utilizing this generating method for the sojourn time and next reaction, the direct method algorithm is stated by Gillespie as
1. Initialize the time t = t_0 and the system's state x = x_0
2. With the system in state x at time t, evaluate all the a_j(x) and their sum a_0(x)
3. Calculate the above values of τ and j
4. Effect the next reaction by replacing t ← t + τ and x ← x + ν_j
5. Record (x, t) as desired. Return to step 2, or else end the simulation.
where x ← x + ν_j means adding to the current state the state-change vector ν_j of the jth reaction. This family of algorithms is computationally expensive and thus many modifications and adaptations exist, including the next reaction method (Gibson & Bruck), tau-leaping, as well as hybrid techniques where abundant reactants are modeled with deterministic behavior. Adapted techniques generally compromise the exactitude of the theory behind the algorithm as it connects to the master equation, but offer reasonable realizations for greatly improved timescales. The computational cost of exact versions of the algorithm is determined by the coupling class of the reaction network. In weakly coupled networks, the number of reactions that is influenced by any other reaction is bounded by a small constant. In strongly coupled networks, a single reaction firing can in principle affect all other reactions. An exact version of the algorithm with constant-time scaling for weakly coupled networks has been developed, enabling efficient simulation of systems with very large numbers of reaction channels (Slepoy Thompson Plimpton 2008). The generalized Gillespie algorithm that accounts for the non-Markovian properties of random biochemical events with delay has been developed by Bratsun et al. 2005 and independently Barrio et al. 2006, as well as (Cai 2007). See the articles cited below for details.
Partial-propensity formulations, as developed independently by both Ramaswamy et al. (2009, 2010) and Indurkhya and Beal (2010), are available to construct a family of exact versions of the algorithm whose computational cost is proportional to the number of chemical species in the network, rather than the (larger) number of reactions. These formulations can reduce the computational cost to constant-time scaling for weakly coupled networks and to scale at most linearly with the number of species for strongly coupled networks. A partial-propensity variant of the generalized Gillespie algorithm for reactions with delays has also been proposed (Ramaswamy Sbalzarini 2011). The use of partial-propensity methods is limited to elementary chemical reactions, i.e., reactions with at most two different reactants. Every non-elementary chemical reaction can be equivalently decomposed into a set of elementary ones, at the expense of a linear (in the order of the reaction) increase in network size.
Examples
Reversible binding of A and B to form AB dimers
A simple example may help to explain how the Gillespie algorithm works. Consider a system of molecules of two types, A and B. In this system, A and B reversibly bind together to form AB dimers such that two reactions are possible: either A and B react to form an AB dimer, or an AB dimer dissociates into A and B. The reaction rate constant for a given single A molecule reacting with a given single B molecule is k_D, and the reaction rate for an AB dimer breaking up is k_B.
If at time t there is one molecule of each type then the rate of dimer formation is k_D, while if there are n_A molecules of type A and n_B molecules of type B, the rate of dimer formation is k_D n_A n_B. If there are n_AB dimers then the rate of dimer dissociation is k_B n_AB.
The total reaction rate, R_tot, at time t is then given by
R_tot = k_D n_A n_B + k_B n_AB
So, we have now described a simple model with two reactions. This definition is independent of the Gillespie algorithm. We will now describe how to apply the Gillespie algorithm to this system.
In the algorithm, we advance forward in time in two steps: calculating the time to the next reaction, and determining which of the possible reactions the next reaction is. Reactions are assumed to be completely random, so if the reaction rate at time t is R_tot, then the time, δt, until the next reaction occurs is a random number drawn from an exponential distribution with mean 1/R_tot. Thus, we advance time from t to t + δt.
The probability that this reaction is an A molecule binding to a B molecule is simply the fraction of the total rate due to this type of reaction, i.e.,
the probability that the reaction is A + B → AB is k_D n_A n_B / R_tot.
The probability that the next reaction is an AB dimer dissociating is just 1 minus that, i.e., k_B n_AB / R_tot. So with these two probabilities we either form a dimer by reducing n_A and n_B by one and increasing n_AB by one, or we dissociate a dimer and increase n_A and n_B by one and decrease n_AB by one.
Now we have both advanced time to t + δt, and performed a single reaction. The Gillespie algorithm just repeats these two steps as many times as needed to simulate the system for however long we want (i.e., for as many reactions). In a Gillespie simulation that starts with equal numbers of A and B molecules and no dimers at t = 0, a typical outcome for suitable values of k_D and k_B is that on average there are 8 AB dimers and 2 each of A and B, but due to the small numbers of molecules the fluctuations around these values are large. The Gillespie algorithm is often used to study systems where these fluctuations are important.
That was just a simple example, with two reactions. More complex systems with more reactions are handled in the same way. All reaction rates must be calculated at each time step, and one chosen with probability equal to its fractional contribution to the rate. Time is then advanced as in this example.
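As a concrete sketch of these two steps, the following Python snippet implements the direct method for the reversible dimerisation example. It is an illustrative implementation rather than code from any of the cited papers; the values chosen for the rate constants kD and kB, the initial counts, and the random seed are arbitrary assumptions made only to show the mechanics.

```python
# Minimal sketch of the direct-method SSA for A + B <-> AB.
import random

def gillespie_dimer(n_A, n_B, n_AB, kD, kB, t_end, seed=1):
    rng = random.Random(seed)
    t, history = 0.0, [(0.0, n_A, n_B, n_AB)]
    while t < t_end:
        a_form = kD * n_A * n_B          # propensity of A + B -> AB
        a_diss = kB * n_AB               # propensity of AB -> A + B
        a_tot = a_form + a_diss
        if a_tot == 0:
            break                        # no reaction can occur
        t += rng.expovariate(a_tot)      # exponential waiting time, mean 1/a_tot
        if rng.random() < a_form / a_tot:
            n_A, n_B, n_AB = n_A - 1, n_B - 1, n_AB + 1   # dimer formation
        else:
            n_A, n_B, n_AB = n_A + 1, n_B + 1, n_AB - 1   # dimer dissociation
        history.append((t, n_A, n_B, n_AB))
    return history

trajectory = gillespie_dimer(n_A=10, n_B=10, n_AB=0, kD=0.1, kB=0.1, t_end=50.0)
print(trajectory[-1])   # final (time, n_A, n_B, n_AB) of one stochastic run
```

Each pass through the loop draws the exponentially distributed waiting time and then picks the reaction in proportion to its share of the total propensity, exactly the two steps described above.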
Stochastic self-assembly
The Gard model describes self-assembly of lipids into aggregates. Using stochastic simulations it shows the emergence of multiple types of aggregates and their evolution.
References
Further reading
(Slepoy Thompson Plimpton 2008):
(Bratsun et al. 2005):
(Barrio et al. 2006):
(Cai 2007):
(Barnes Chu 2010):
(Ramaswamy González-Segredo Sbalzarini 2009):
(Ramaswamy Sbalzarini 2010):
(Indurkhya Beal 2010):
(Ramaswamy Sbalzarini 2011):
(Yates Klingbeil 2013):
Chemical kinetics
Computational chemistry
Monte Carlo methods
Stochastic simulation
RStudio
RStudio IDE (or RStudio) is an integrated development environment for R, a programming language for statistical computing and graphics. It is available in two formats: RStudio Desktop is a regular desktop application while RStudio Server runs on a remote server and allows accessing RStudio using a web browser. The RStudio IDE is a product of Posit PBC (formerly RStudio PBC, formerly RStudio Inc.).
Reproducible analyses with vignettes
A strength of RStudio is its support for reproducible analyses with R Markdown vignettes. These allow users to mix text with code in R, Python, Julia, shell scripts, SQL, Stan, JavaScript, C, C++, Fortran, and others, similar to Jupyter Notebooks. R Markdown can be used to create dynamic reports that are automatically updated when new data become available. These reports can also be exported in various formats, including HTML, PDF, Microsoft Word, and LaTeX, with templates specific to the requirements of many scientific journals.
R Markdown vignettes and Jupyter notebooks make the data analysis completely reproducible. R Markdown vignettes have been included as appendices with tutorials on Wikiversity.
In 2022, Posit announced an R Markdown-like publishing system called Quarto. In addition to R, code and results from Python, Julia, Observable JavaScript, and Jupyter notebooks can also be combined in Quarto documents. Whereas R Markdown files use the file extension .Rmd, Quarto documents use the extension .qmd.
One difference between R Markdown files and Quarto documents is how options are defined in code chunks. In R Markdown, they are placed inline within the curly brackets.
```{r chunk_name, echo=FALSE, warning=FALSE}
print(42)
```
In contrast, Quarto documents define the chunk options below the curly brackets, prefixed using a pound character and vertical pipe (or "hash-pipe").
```{r}
#| label: chunk_name
#| echo: false
#| warning: false
print(42)
```
Licensing model
The RStudio integrated development environment (IDE) is available with the GNU Affero General Public License version 3. The AGPL v3 is an open source license that guarantees the freedom to share the code.
RStudio Desktop and RStudio Server are both available in free and fee-based (commercial) editions. OS support depends on the format/edition of the IDE. Prepackaged distributions of RStudio Desktop are available for Windows, macOS, and Linux. RStudio Server and Server Pro run on Debian, Ubuntu, Red Hat Linux, CentOS, openSUSE and SLES.
Overview and history
The RStudio IDE is partly written in the C++ programming language and uses the Qt framework for its graphical user interface. The larger share of the code is written in Java. JavaScript is also used.
Work on the RStudio IDE started around December 2010, and the first public beta version (v0.92) was officially announced in February 2011. Version 1.0 was released on 1 November 2016. Version 1.1 was released on 9 October 2017.
Addins
The RStudio IDE provides a mechanism for executing R functions interactively from within the IDE through the Addins menu. This enables packages to include Graphical User Interfaces (GUIs) for increased accessibility. Popular R packages that use this feature include:
bookdown – a knitr extension to create books
colourpicker – a graphical tool to pick colours for plots
datasets.load – a graphical tool to search and load datasets
googleAuthR – Authenticate with Google APIs
Development
The RStudio IDE is developed by Posit, PBC, a public-benefit corporation founded by J. J. Allaire, creator of the programming language ColdFusion. Posit has no formal connection to the R Foundation, a not-for-profit organization located in Vienna, Austria, which is responsible for overseeing development of the R environment for statistical computing. Posit was formerly known as RStudio Inc. In July 2022, it announced that it changed its name to Posit, to signify its broadening exploration towards other programming languages such as Python.
See also
R interfaces
Comparison of integrated development environments
References
Notes
External links
Free R (programming language) software
R (programming language)
Science software for Linux
Software using the GNU AGPL license
Discrete mathematics
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect.
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Topics
Theoretical computer science
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory
Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption.
Logic
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications in automated theorem proving and formal verification of software.
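As a quick illustration of the truth-table check mentioned above, the following Python snippet enumerates all four assignments of P and Q and confirms Peirce's law under the classical definition of implication (an illustrative snippet, not tied to any particular source).

```python
# Truth-table check of Peirce's law (((P -> Q) -> P) -> P).
from itertools import product

def implies(a, b):
    # classical material implication: a -> b is false only when a is true and b is false
    return (not a) or b

for P, Q in product([False, True], repeat=2):
    peirce = implies(implies(implies(P, Q), P), P)
    print(f"P={P!s:5} Q={Q!s:5}  Peirce's law: {peirce}")
# Every row prints True, so the formula is a classical tautology.
```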
Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic.
Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics
Combinatorics studies the ways in which discrete structures can be combined or arranged.
Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions.
Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics.
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties.
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field.
Order theory is the study of partially ordered sets, both finite and infinite.
Graph theory
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
Number theory
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
Algebraic structures
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Discrete analogues of continuous mathematics
There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures.
Calculus of finite differences, discrete analysis, and discrete calculus
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces.
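As a small, self-contained illustration of the point above that difference equations can approximate differential equations, the following Python sketch applies the forward-difference (Euler) scheme to dy/dt = -y, whose exact solution is y(t) = exp(-t). The step size h = 0.1 and the initial condition are arbitrary choices for the example.

```python
# Forward-difference (Euler) scheme as a difference equation
# approximating the differential equation dy/dt = -y.
import math

h = 0.1                      # step size of the discretization
y = 1.0                      # initial condition y(0) = 1
for n in range(1, 11):
    y = y + h * (-y)         # difference equation: y[n+1] = y[n] + h * f(y[n])
    exact = math.exp(-n * h)
    print(f"t={n*h:.1f}  euler={y:.4f}  exact={exact:.4f}")
```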
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
Discrete geometry
Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form V(x - c) in Spec K[x] = A^1, for K a field, can be studied either as Spec K[x]/(x - c), a point, or as the spectrum of the local ring at (x - c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Discrete modelling
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relation. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
Challenges
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.
See also
Outline of discrete mathematics
Cyberchase, a show that teaches discrete mathematics to children
References
Further reading
External links
Discrete mathematics at the utk.edu Mathematics Archives, providing links to syllabi, tutorials, programs, etc.
Iowa Central: Electrical Technologies Program Discrete mathematics for Electrical engineering.
Heterodox economics
Heterodox economics refers to attempts at treating the subject of economics that reject the standard tools and methodologies of mainstream economics, which constitute the scientific method as applied to the field of economics. These tools include:
An emphasis on making deductively valid arguments by explicitly formalizing assumptions into mathematical models;
The application of decision and game theory or cognitive science (by behavioral economists) to predict human behavior; and
The practice of empirically testing economic theories using either experimental or econometric data.
Groups typically classed as heterodox include the Austrian, ecological, Marxist-historical, post-autistic, and modern monetary approaches.
Heterodox economics tends to be identified, both by heterodox and mainstream economists, as a branch of the humanities, rather than the behavioral sciences, with many heterodox economists rejecting the possibility of applying the scientific method to the study of society. Four frames of analysis have been highlighted for their importance to heterodox thought: history, natural systems, uncertainty, and power.
History
In the mid-19th century, such thinkers as Auguste Comte, Thomas Carlyle, John Ruskin and Karl Marx made early critiques of orthodox economics. A number of heterodox schools of economic thought challenged the dominance of neoclassical economics after the neoclassical revolution of the 1870s. In addition to socialist critics of capitalism, heterodox schools in this period included advocates of various forms of mercantilism, such as the American School; dissenters from neoclassical methodology, such as the historical school; and advocates of unorthodox monetary theories, such as social credit.
Physical scientists and biologists were the first individuals to use energy flows to explain social and economic development. Joseph Henry, an American physicist and first secretary of the Smithsonian Institution, remarked that the "fundamental principle of political economy is that the physical labor of man can only be ameliorated by… the transformation of matter from a crude state to an artificial condition...by expending what is called power or energy."
The rise, and absorption into the mainstream of Keynesian economics, which appeared to provide a more coherent policy response to unemployment than unorthodox monetary or trade policies, contributed to the decline of interest in these schools.
After 1945, the neoclassical synthesis of Keynesian and neoclassical economics resulted in a clearly defined mainstream position based on a division of the field into microeconomics (generally neoclassical but with a newly developed theory of market failure) and macroeconomics (divided between Keynesian and monetarist views on such issues as the role of monetary policy). Austrians and post-Keynesians who dissented from this synthesis emerged as clearly defined heterodox schools. In addition, the Marxist and institutionalist schools remained active but with limited acceptance or credibility.
Up to 1980 the most notable themes of heterodox economics in its various forms included:
rejection of the atomistic individual conception in favor of a socially embedded individual conception;
emphasis on time as an irreversible historical process;
reasoning in terms of mutual influences between individuals and social structures.
From approximately 1980 mainstream economics has been significantly influenced by a number of new research programs, including behavioral economics, complexity economics, evolutionary economics, experimental economics, and neuroeconomics. One key development has been an epistemic turn away from theory towards an empirically driven approach focused centrally on questions of causal inference. As a consequence, some heterodox economists, such as John B. Davis, proposed that the definition of heterodox economics has to be adapted to this new, more complex reality:
...heterodox economics post-1980 is a complex structure, being composed out of two broadly different kinds of heterodox work, each internally differentiated with a number of research programs having different historical origins and orientations: the traditional left heterodoxy familiar to most and the 'new heterodoxy' resulting from other science imports.
Rejection of neoclassical economics
There is no single "heterodox economic theory"; there are many different "heterodox theories" in existence. What they all share, however, is a rejection of the neoclassical orthodoxy as representing the appropriate tool for understanding the workings of economic and social life. The reasons for this rejection may vary. Some of the elements commonly found in heterodox critiques are listed below.
Criticism of the neoclassical model of individual behavior
One of the most broadly accepted principles of neoclassical economics is the assumption of the "rationality of economic agents". Indeed, for a number of economists, the notion of rational maximizing behavior is taken to be synonymous with economic behavior (Hirshleifer 1984). When some economists' studies do not embrace the rationality assumption, they are seen as placing the analyses outside the boundaries of the Neoclassical economics discipline (Landsberg 1989, 596). Neoclassical economics begins with the a priori assumptions that agents are rational and that they seek to maximize their individual utility (or profits) subject to environmental constraints. These assumptions provide the backbone for rational choice theory.
Many heterodox schools are critical of the homo economicus model of human behavior used in the standard neoclassical model. A typical version of the critique is that of Satya Gabriel:
Neoclassical economic theory is grounded in a particular conception of human psychology, agency or decision-making. It is assumed that all human beings make economic decisions so as to maximize pleasure or utility. Some heterodox theories reject this basic assumption of neoclassical theory, arguing for alternative understandings of how economic decisions are made and/or how human psychology works. It is possible to accept the notion that humans are pleasure seeking machines, yet reject the idea that economic decisions are governed by such pleasure seeking. Human beings may, for example, be unable to make choices consistent with pleasure maximization due to social constraints and/or coercion. Humans may also be unable to correctly assess the choice points that are most likely to lead to maximum pleasure, even if they are unconstrained (except in budgetary terms) in making such choices. And it is also possible that the notion of pleasure seeking is itself a meaningless assumption because it is either impossible to test or too general to refute. Economic theories that reject the basic assumption of economic decisions as the outcome of pleasure maximization are heterodox.
Shiozawa emphasizes that economic agents act in a complex world, and that it is therefore impossible for them to attain the maximal utility point. Instead, they behave as if they had a repertoire of many ready-made rules, one of which they choose according to the relevant situation.
Criticism of the neoclassical model of market equilibrium
In microeconomic theory, cost-minimization by consumers and by firms implies the existence of supply and demand correspondences for which market clearing equilibrium prices exist, if there are large numbers of consumers and producers. Under convexity assumptions or under some marginal-cost pricing rules, each equilibrium will be Pareto efficient: In large economies, non-convexity also leads to quasi-equilibria that are nearly efficient.
However, the concept of market equilibrium has been criticized by Austrians, post-Keynesians and others, who object to applications of microeconomic theory to real-world markets, when such markets are not usefully approximated by microeconomic models. Heterodox economists assert that micro-economic models rarely capture reality.
Mainstream microeconomics may be defined in terms of optimization and equilibrium, following the approaches of Paul Samuelson and Hal Varian. On the other hand, heterodox economics may be labeled as falling into the nexus of institutions, history, and social structure.
Most recent developments
Over the past two decades, the intellectual agendas of heterodox economists have taken a decidedly pluralist turn. Leading heterodox thinkers have moved beyond the established paradigms of Austrian, Feminist, Institutional-Evolutionary, Marxian, Post Keynesian, Radical, Social, and Sraffian economics—opening up new lines of analysis, criticism, and dialogue among dissenting schools of thought. This cross-fertilization of ideas is creating a new generation of scholarship in which novel combinations of heterodox ideas are being brought to bear on important contemporary and historical problems, such as socially grounded reconstructions of the individual in economic theory; the goals and tools of economic measurement and professional ethics; the complexities of policymaking in today's global political economy; and innovative connections among formerly separate theoretical traditions (Marxian, Austrian, feminist, ecological, Sraffian, institutionalist, and post-Keynesian) (for a review of post-Keynesian economics, see Lavoie (1992); Rochon (1999)).
David Colander, an advocate of complexity economics, argues that the ideas of heterodox economists are now being discussed in the mainstream without mention of the heterodox economists, because the tools to analyze institutions, uncertainty, and other factors have now been developed by the mainstream. He suggests that heterodox economists should embrace rigorous mathematics and attempt to work from within the mainstream, rather than treating it as an enemy.
Some schools of heterodox economic thought have also taken a transdisciplinary approach. Thermoeconomics is based on the claim that human economic processes are governed by the second law of thermodynamics. The posited relationship between economic theory, energy and entropy, has been extended further by systems scientists to explain the role of energy in biological evolution in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits of the various mechanisms for capturing and utilizing available energy to build biomass and do work.
Various student movements have emerged in response to the exclusion of heterodox economics in the curricula of most economics degrees. The International Student Initiative for Pluralist Economics was set up as an umbrella network for various smaller university groups such as Rethinking Economics to promote pluralism in economics, including more heterodox approaches.
Fields of heterodox economic thought
American Institutionalist School
Austrian economics #
Binary economics
Bioeconomics
Buddhist economics
Complexity economics
Co-operative economics
Distributism
Ecological economics §
Evolutionary economics # § (partly within mainstream economics)
Econophysics
Feminist economics # §
Georgism
Gift-based economics
Green Economics
Humanistic economics
Innovation Economics
Institutional economics # §
Islamic economics
Marxian economics #
Mutualism
Neuroeconomics
Participatory economics
Political Economy
Post-Keynesian economics § including Modern Monetary Theory and Circuitism
Post scarcity
Pluralism in economics
Resource-based economics – not to be confused with a resource-based economy
Real-world economics
Sharing economics
Socialist economics #
Social economics (partially heterodox usage)
Sraffian economics #
Technocracy (Energy Accounting)
Thermoeconomics
Mouvement Anti-Utilitariste dans les Sciences Sociales
# Listed in Journal of Economic Literature codes under JEL: B5 – Current Heterodox Approaches.
§ Listed in The New Palgrave Dictionary of Economics
Some schools in the social sciences aim to promote certain perspectives: classical and modern political economy; economic sociology and anthropology; gender and racial issues in economics; and so on.
Notable heterodox economists
Alfred Eichner
Alice Amsden
Aníbal Pinto
Anwar Shaikh
Bernard Lonergan
Bill Mitchell
Bryan Caplan
Carlota Perez
Carolina Alves
Celso Furtado
Dani Rodrik
David Harvey
Duncan Foley
E. F. Schumacher
Edward Nell
Esther Dweck
F.A. Hayek
Frank Stilwell
Franklin Serrano
Frederic S. Lee
Frederick Soddy
G.L.S. Shackle
Hans Singer
Ha-Joon Chang
Heinz Kurz
Henry George
Herman Daly
Hyman Minsky
Jack Amariglio
Jeremy Rifkin
Joan Robinson
John Bellamy Foster
John Komlos
Joseph Schumpeter
Karl Marx
Kate Raworth
Lance Taylor
Ludwig Lachmann
Ludwig von Mises
Lyndon Larouche
Maria da Conceição Tavares
Mariana Mazzucato
Mason Gaffney
Michael Albert
Michael Hudson
Michael Perelman
Michał Kalecki
Murray Rothbard
Mushtaq Khan
Nelson Barbosa
Nicholas Georgescu-Roegen
Nicolaus Tideman
Paul A. Baran
Paul Cockshott
Paul Sweezy
Peter Navarro
Piero Sraffa
Rania Antonopoulos
Raúl Prebisch
Richard D. Wolff
Robin Hahnel
Ruy Mauro Marini
Simon Zadek
Stephanie Kelton
Stephen Resnick
Theotônio dos Santos
Thorstein Veblen
Tony Lawson
Yanis Varoufakis
Yusif Sayigh
See also
Association for Evolutionary Economics
Chinese economic reform
Degrowth
EAEPE
Foundations of Real-World Economics
Happiness economics
Humanistic economics
Kinetic exchange models of markets
Pluralism in economics
Post-autistic economics
Post-growth
Real-world economics review
Real-world economics
Notes
References
Further reading
Articles
Books
Jo, Tae-Hee, Chester, Lynne, and D'Ippoliti. eds. 2017. The Routledge Handbook of Heterodox Economics. London and New York: Routledge. .
Gerber, Julien-Francois and Steppacher, Rolf, ed., 2012. Towards an Integrated Paradigm in Heterodox Economics: Alternative Approaches to the Current Eco-Social Crises. Palgrave Macmillan.
Lee, Frederic S. 2009. A History of Heterodox Economics Challenging the Mainstream in the Twentieth Century. London and New York: Routledge. 2009
Harvey, John T. and Garnett Jr., Robert F., ed., 2007. Future Directions for Heterodox Economics, Series Advances in Heterodox Economics, The University of Michigan Press.
What Every Economics Student Needs to Know. Routledge, 2014.
McDermott, John, 2003. Economics in Real Time: A Theoretical Reconstruction, Series Advances in Heterodox Economics, The University of Michigan Press.
Rochon, Louis-Philippe and Rossi, Sergio, editors, 2003. Modern Theories of Money: The Nature and Role of Money in Capitalist Economies. Edward Elgar Publishing.
Solow, Robert M. (20 March 1988). "The Wide, Wide World Of Wealth" (review of The New Palgrave: A Dictionary of Economics, edited by John Eatwell, Murray Milgate and Peter Newman; four volumes, 4,103 pp.; New York: Stockton Press). New York Times. https://www.nytimes.com/1988/03/20/books/the-wide-wide-world-of-wealth.html?scp=1
Stilwell, Frank., 2011. Political Economy: The Contest of Economic Ideas. Oxford University Press.
Foundations of Real-World Economics: What Every Economics Student Needs to Know, 2nd edition, Abingdon-on-Thames, UK: Routledge: 2019.
Articles, conferences, papers
Lavoie, Marc, 2006. Do Heterodox Theories Have Anything in Common? A Post-Keynesian Point of View.
Lawson, Tony, 2006. "The Nature of Heterodox Economics," Cambridge Journal of Economics, 30(4), pp. 483–505. Pre-publication copy.
Journals
Evolutionary and Institutional Economics Review
Journal of Institutional Economics''
Cambridge Journal of Economics
Real-world economics review
International Journal of Pluralism and Economics Education
Review of Radical Political Economy
External links
Association for Heterodox Economics
Heterodox Economics Newsletter
Heterodox Economics Directory (Graduate and Undergraduate Programs, Journals, Publishers and Book Series, Associations, Blogs, and Institutions and Other Web Sites)
Association for Evolutionary Economics (AFEE)
International Confederation of Associations for Pluralism in Economics (ICAPE)
Union for Radical Political Economics (URPE)
Association for Social Economics (ASE)
Post-Keynesian Economics Study Group (PKSG)
Political economy
Chaos theory
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future."
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
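As a concrete illustration of how quickly forecast uncertainty grows (a sketch added here, not taken from the cited literature), the following Python fragment iterates the logistic map x → 4x(1 − x) from two initial conditions that differ by 10⁻¹²; the separation roughly doubles each step until it saturates at the size of the attractor.

```python
# Illustrative sketch (not from the cited literature): exponential growth of a
# tiny initial error under the logistic map x -> 4x(1-x).

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12        # two nearby initial conditions
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
# The separation roughly doubles each step (the map's Lyapunov exponent is ln 2)
# until it saturates at the size of the attractor, after which the forecast
# carries no useful information.
```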
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation $\delta\mathbf{Z}_0$, the two trajectories end up diverging at a rate given by

$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$

where $t$ is the time and $\lambda$ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
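For a one-dimensional map the Lyapunov exponent can be estimated directly by averaging log |f′(x)| along a long orbit. The sketch below (an illustration, not from the article's sources) does this for the logistic map x → 4x(1 − x), whose exponent is known to be ln 2.

```python
import math

# Illustrative estimate of the maximal Lyapunov exponent of the logistic map
# x -> 4x(1-x), obtained by averaging log|f'(x)| = log|4 - 8x| along an orbit.

def f(x):
    return 4.0 * x * (1.0 - x)

def dfdx(x):
    return 4.0 - 8.0 * x

x = 0.2
for _ in range(1000):          # discard a transient
    x = f(x)

n = 10_000
total = 0.0
for _ in range(n):
    total += math.log(abs(dfdx(x)))
    x = f(x)

print(f"estimated lambda = {total / n:.4f}   (ln 2 = {math.log(2):.4f})")
```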
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
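To make the doubling example concrete, the short sketch below (added as an illustration) shows that two nearby starting values separate exponentially, yet both trajectories simply escape to infinity rather than mixing within a bounded region, so the system is not chaotic.

```python
# Repeated doubling: sensitive dependence on initial conditions without chaos.
x, y = 1.0, 1.0 + 1e-9          # two nearby initial values
for step in range(1, 31):
    x, y = 2.0 * x, 2.0 * y
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.3e}, separation = {abs(x - y):.3e}")
# Both the values and their separation double every step: trajectories diverge
# from each other, but they also run off to infinity instead of mixing within a
# bounded region, so the system is not chaotic.
```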
Topological transitivity
A map $f : X \to X$ is said to be topologically transitive if for any pair of non-empty open sets $U, V \subset X$, there exists $k > 0$ such that $f^{k}(U) \cap V \neq \emptyset$. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two disjoint open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, $\tfrac{5-\sqrt{5}}{8} \to \tfrac{5+\sqrt{5}}{8} \to \tfrac{5-\sqrt{5}}{8}$ (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
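The period-2 orbit quoted above can be verified directly. The sketch below (illustrative only) checks that the two values map onto each other under x → 4x(1 − x), and that a slightly perturbed point is pushed away from the cycle, confirming that the orbit is repelling.

```python
import math

def f(x):
    return 4.0 * x * (1.0 - x)

a = (5 - math.sqrt(5)) / 8      # ~0.3454915
b = (5 + math.sqrt(5)) / 8      # ~0.9045085

print(f"f(a) = {f(a):.7f}   (b = {b:.7f})")   # the two points map onto each other
print(f"f(b) = {f(b):.7f}   (a = {a:.7f})")

# The orbit is unstable: a slightly perturbed point is repelled from the cycle.
x = a + 1e-9
for _ in range(40):
    x = f(x)
print(f"after 40 steps from a + 1e-9: x = {x:.7f}")   # typically far from the 2-cycle
```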
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed orbits plotted in this way give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
Coexisting attractors
In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic solutions may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed above is generated by a system of three differential equations such as:

$$\begin{aligned}
\dot{x} &= \sigma (y - x), \\
\dot{y} &= x(\rho - z) - y, \\
\dot{z} &= xy - \beta z,
\end{aligned}$$

where $x$, $y$, and $z$ make up the system state, $t$ is time, and $\sigma$, $\rho$, $\beta$ are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, only one of them nonlinear, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
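To make the three-equation model concrete, here is a small self-contained sketch (an added illustration using the classical parameter values σ = 10, ρ = 28, β = 8/3, which are not stated in the text above) that integrates the Lorenz system with a fourth-order Runge–Kutta step; plotting the stored (x, z) pairs reproduces the butterfly-shaped attractor described earlier.

```python
# Illustrative integration of the Lorenz system with the classical parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    x, y, z = state
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, dt = (1.0, 1.0, 1.0), 0.01
trajectory = []
for _ in range(5000):
    state = rk4_step(state, dt)
    trajectory.append(state)

print("last point on the orbit:", tuple(round(v, 3) for v in state))
# Plotting the (x, z) pairs stored in `trajectory` traces out the two lobes of
# the butterfly-shaped Lorenz attractor.
```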
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps:

$$\psi_{n+1}(\vec{r}, t) = \int K(\vec{r} - \vec{r}^{\,\prime}, t)\, f\big[\psi_{n}(\vec{r}^{\,\prime}, t)\big]\, d\vec{r}^{\,\prime},$$

where the kernel $K(\vec{r} - \vec{r}^{\,\prime}, t)$ is a propagator derived as the Green function of a relevant physical system, and $f[\psi_{n}(\vec{r}, t)]$ might be a logistic-map-like nonlinearity or a complex map. For examples of complex maps the Julia set $f[\psi] = \psi^{2}$ or the Ikeda map $\psi_{n+1} = A + B\psi_{n} e^{i(|\psi_{n}|^{2} + C)}$ may serve. When wave propagation problems at distance $L = ct$ with wavelength $\lambda = 2\pi/k$ are considered, the kernel $K$ may take the form of the Green function for the Schrödinger equation:

$$K(\vec{r} - \vec{r}^{\,\prime}, L) = \frac{ik \exp(ikL)}{2\pi L} \exp\!\left(\frac{ik|\vec{r} - \vec{r}^{\,\prime}|^{2}}{2L}\right).$$
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

$$J\!\left(\dddot{x}, \ddot{x}, \dot{x}, x\right) = 0$$
are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are called accordingly hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of $x$ is:

$$\dddot{x} + A\ddot{x} + \dot{x} - |x| + 1 = 0.$$
Here, A is an adjustable parameter. This equation has a chaotic solution for A=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:
In the above circuit, all resistors are of equal value, except $R_A = R/A = 5R/3$, and all capacitors are of equal size. The dominant frequency is $1/(2\pi R C)$. The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x and the output of 2 corresponds to the second derivative.
Similar circuits only require one diode or no diodes at all.
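Rewriting the jerk equation above as three first-order equations makes it easy to explore numerically. The following sketch (illustrative; the initial condition is an arbitrary choice and may need to lie in the attractor's basin) integrates the system for A = 3/5.

```python
# Illustrative sketch: the jerk equation x''' + A x'' + x' - |x| + 1 = 0
# rewritten as a first-order system and integrated with a Runge-Kutta scheme.
A = 3.0 / 5.0

def deriv(s):
    x, v, a = s                     # position, velocity, acceleration
    return (v, a, -A * a - v + abs(x) - 1.0)

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for si, p, q, r, w in zip(s, k1, k2, k3, k4))

s, dt = (0.0, 0.0, 0.0), 0.01       # arbitrary initial condition
for i in range(20001):
    if i % 5000 == 0:
        print(f"t = {i * dt:6.1f}: x = {s[0]: .4f}")
    s = rk4(s, dt)
# If the initial condition lies in the attractor's basin, x(t) keeps wandering
# irregularly instead of settling onto a fixed point or a simple periodic cycle.
```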
See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
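Synchronization of this kind can be demonstrated with the Kuramoto model of coupled phase oscillators. The sketch below is an illustration with arbitrarily chosen frequencies and coupling strength, not a model of any of the specific systems listed above; the order parameter r climbs toward 1 as the oscillators lock.

```python
import cmath
import math
import random

# Illustrative Kuramoto model: N phase oscillators with natural frequencies
# omega_i and global coupling K:
#   d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)

random.seed(1)
N, K, dt = 100, 2.0, 0.05
omega = [random.gauss(0.0, 0.5) for _ in range(N)]
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

def order_parameter(angles):
    # r = 1 means perfect phase locking, r ~ 0 means incoherence.
    return abs(sum(cmath.exp(1j * a) for a in angles) / len(angles))

for step in range(401):
    if step % 100 == 0:
        print(f"t = {step * dt:5.1f}: r = {order_parameter(theta):.3f}")
    coupling = [sum(math.sin(tj - ti) for tj in theta) for ti in theta]
    theta = [ti + dt * (w + K / N * c) for ti, w, c in zip(theta, omega, coupling)]
```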
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
History
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measure imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
Lorenz's pioneering contributions to chaotic modeling
Throughout his career, Professor Edward Lorenz authored a total of 61 research papers, out of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on a journey of developing diverse models aimed at uncovering the SDIC and chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed Quasi-geostrophic systems, the Conservative Vorticity Equation, the Rayleigh-Bénard Convection Equations, and the Shallow Water Equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964).
In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. To commemorate this milestone, a reprint book containing invited papers that deepen our understanding of both butterfly effects was officially published to celebrate the 50th anniversary of the metaphorical butterfly effect.
A popular but inaccurate analogy for chaos
The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:
For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. In a recent study, the characteristic of the aforementioned verse was denoted as "finite-time sensitive dependence".
Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps and a large portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, without loss of generality, the similarities between the chaotic maps and the cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA-Chaos cryptographic algorithms are proven to be either not secure, or the technique applied is suggested to be not efficient.
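As a purely pedagogical sketch of the general idea (a toy construction invented for this article, not one of the published algorithms referred to above, and certainly not secure), the control parameter and initial condition of a chaotic map can play the role of a key: the map is iterated to produce a keystream that is XORed with the plaintext.

```python
# Toy illustration of the idea only -- NOT a secure cipher.  The chaotic map's
# control parameter and initial condition play the role of the key, and the
# iterated map supplies a keystream that is XORed with the plaintext.

def keystream(r, x0, nbytes):
    x, out = x0, bytearray()
    for _ in range(nbytes):
        for _ in range(16):          # several iterations per output byte
            x = r * x * (1.0 - x)    # logistic map with parameter r
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_bytes(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

key_r, key_x0 = 3.99, 0.314159       # the "key": control parameter + initial condition
plaintext = b"chaos-based toy cipher"
ciphertext = xor_bytes(plaintext, keystream(key_r, key_x0, len(plaintext)))
recovered = xor_bytes(ciphertext, keystream(key_r, key_x0, len(plaintext)))
print(ciphertext.hex())
print(recovered)                      # b'chaos-based toy cipher'
```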
Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence, constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry & Wall 1984. Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even for a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in pathogen population.
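As an illustrative example (not one of the specific studies cited above), a simple discrete population model such as the Ricker map shows how increasing the growth rate carries a population from a steady state through regular cycles into apparently chaotic fluctuations.

```python
import math

# Ricker population model: x_{n+1} = x_n * exp(r * (1 - x_n)).
# Low growth rates give a steady population, intermediate rates give cycles,
# and larger rates give irregular, chaos-like fluctuations.

def simulate(r, x0=0.5, transient=200, keep=8):
    x = x0
    for _ in range(transient):
        x = x * math.exp(r * (1.0 - x))
    out = []
    for _ in range(keep):
        x = x * math.exp(r * (1.0 - x))
        out.append(round(x, 3))
    return out

for r in (1.5, 2.3, 3.0):             # illustrative growth rates
    print(f"r = {r}: {simulate(r)}")
```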
Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos could be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables and highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.
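The core of a recurrence-based analysis can be sketched in a few lines: compute pairwise distances between points of a time series, threshold them, and summarize the resulting matrix with quantities such as the recurrence rate. The example below is a minimal illustration and not the recurrence quantification correlation index used by Orlando et al.

```python
import math

# Minimal recurrence-plot sketch: R[i][j] = 1 when points i and j of a series
# are closer than a threshold; the recurrence rate is the fraction of ones.

def recurrence_matrix(series, threshold):
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < threshold else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(matrix):
    n = len(matrix)
    return sum(sum(row) for row in matrix) / (n * n)

# Compare a regular (sine) series with a chaotic (logistic-map) series.
n = 200
regular = [math.sin(0.3 * i) for i in range(n)]
x, chaotic = 0.4, []
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)

for name, series in (("regular", regular), ("chaotic", chaotic)):
    rr = recurrence_rate(recurrence_matrix(series, threshold=0.1))
    print(f"{name:8s} recurrence rate = {rr:.3f}")
```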
Finite predictability in weather and climate
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
AI-extended modeling framework
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it occurs. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see, for example, the BML traffic model).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt A Whirl
Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos as topological supersymmetry breaking
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Shadowing lemma
Synchronization of chaos
Unintended consequence
People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
References
Further reading
Articles
Textbooks
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis : Chaos and Neurodynamics Approach, Lambert, 2012.
External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in Newscientist featuring similarities of evolution and non-linear systems including fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller
Complex systems theory
Computational fields of study
Cliodynamics
Cliodynamics is a transdisciplinary area of research that integrates cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée, and the construction and analysis of historical databases.
Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, and the spread and disappearance of religions. These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics.
Etymology
The word cliodynamics is composed of clio- and -dynamics. In Greek mythology, Clio is the muse of history. Dynamics, most broadly, is the study of how and why phenomena change with time.
The term was originally coined by Peter Turchin in 2003, and can be traced to the work of such figures as Ibn Khaldun, Alexandre Deulofeu, Jack Goldstone, Sergey Kapitsa, Randall Collins, John Komlos, and Andrey Korotayev.
Mathematical modeling of historical dynamics
Many historical processes are dynamic, in that they change with time: populations increase and decline, economies expand and contract, states grow and collapse, and so on. As such, practitioners of cliodynamics apply mathematical models to explain macrohistorical patterns—things like the rise of empires, social discontent, civil wars, and state collapse.
Cliodynamics is the application of a dynamical systems approach to the social sciences in general and to the study of historical dynamics in particular. More broadly, this approach is quite common and has proved its worth in innumerable applications (particularly in the natural sciences).
The dynamical systems approach is so called because the whole phenomenon is represented as a system consisting of several elements (or subsystems) that interact and change dynamically (i.e., over time). More simply, it consists of taking a holistic phenomenon and splitting it up into separate parts that are assumed to interact with each other. In the dynamical systems approach, one sets out explicitly with mathematical formulae how different subsystems interact with each other. This mathematical description is the model of the system, and one can use a variety of methods to study the dynamics predicted by the model, as well as attempt to test the model by comparing its predictions with observed empirical, dynamic evidence.
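As a schematic illustration of this approach (a generic toy model invented for this article, not a specific cliodynamic theory), two state variables, say population and an index of internal strife, can be coupled through simple feedback terms and integrated forward in time; testing such a model would then mean comparing its trajectories with historical data.

```python
# Schematic toy model in the spirit of the dynamical-systems approach: a
# population N grows logistically but is damped by an internal-strife index S,
# while S builds up as the population presses against the carrying capacity K
# and decays otherwise.  Equations and parameters are illustrative only.

def step(N, S, dt=0.1, r=0.03, K=100.0, a=0.02, b=0.01, c=0.05):
    dN = r * N * (1.0 - N / K) - a * N * S
    dS = b * (N / K) - c * S
    return N + dt * dN, S + dt * dS

N, S = 10.0, 0.0
for century in range(7):
    print(f"t = {century * 100:3d}: population = {N:6.1f}, strife index = {S:.3f}")
    for _ in range(1000):             # advance 100 time units (dt = 0.1)
        N, S = step(N, S)
```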
Although the focus is usually on the dynamics of large conglomerates of people, the approach of cliodynamics does not preclude the inclusion of human agency in its explanatory theories. Such questions can be explored with agent-based computer simulations.
Databases and data sources
Cliodynamics relies on large bodies of evidence to test competing theories on a wide range of historical processes. This typically involves building massive stores of evidence. The rise of digital history and various research technologies have allowed huge databases to be constructed in recent years.
Some prominent databases utilized by cliodynamics practitioners include:
The Seshat: Global History Databank, which systematically collects state-of-the-art accounts of the political and social organization of human groups and how societies have evolved through time into an authoritative databank. Seshat is also affiliated with the Evolution Institute, a non-profit think-tank that "uses evolutionary science to solve real-world problems."
D-PLACE (Database of Places, Languages, Culture and Environment), which provides data on over 1,400 human social formations.
The Atlas of Cultural Evolution, an archaeological database created by Peter N. Peregrine.
CHIA (Collaborative for Historical Information and Analysis), a multidisciplinary collaborative endeavor hosted by the University of Pittsburgh with the goal of archiving historical information and linking data as well as academic/research institutions around the globe.
International Institute of Social History, which collects data on the global social history of labour relations, workers, and labour.
Human Relations Area Files (eHRAF), including the eHRAF Archaeology and eHRAF World Cultures collections
Clio-Infra, a database of measures of economic performance and other aspects of societal well-being on a global sample of societies from 1800 CE to the present.
The Google Ngram Viewer, an online search engine that charts frequencies of sets of comma-delimited search strings using a yearly count of n-grams as found in the largest online body of human knowledge, the Google Books corpus.
Research
Areas of study
As of 2016, the main directions of academic study in cliodynamics are:
The coevolutionary model of social complexity and warfare, based on the theoretical framework of cultural multilevel selection
The study of revolutions and rebellions
Structural-demographic theory and secular cycles
Explanations of the global distribution of languages benefitted from the empirical finding that the geographic area in which a language is spoken is more closely associated with the political complexity of the speakers than with all other variables under analysis.
Mathematical modeling of the long-term ("millennial") trends of world-systems analysis,
Structural-demographic models of the Modern Age revolutions, including the Arab revolutions of 2011.
The analysis of vast quantities of historical newspaper content, which shows how periodic structures can be automatically discovered in historical newspapers. A similar analysis was performed on social media, again revealing strongly periodic structures.
Organizations
There are several established venues of peer-reviewed cliodynamics research:
Cliodynamics: The Journal of Quantitative History and Cultural Evolution is a peer-reviewed web-based (open-access) journal that publishes on the transdisciplinary area of cliodynamics. It seeks to integrate historical models with data to facilitate theoretical progress. The first issue was published in December 2010. Cliodynamics is a member of Scopus and the Directory of Open Access Journals (DOAJ).
The University of Hertfordshire's Cliodynamics Lab is the first lab in the world dedicated explicitly to the new research area of cliodynamics. It is directed by Pieter François, who founded the Lab in 2015.
The Santa Fe Institute is a private, not-for-profit research and education center where leading scientists grapple with compelling and complex problems. The institute supports work in complex modeling of networks and dynamical systems. One of the areas of SFI research is cliodynamics. In the past the institute has sponsored a series of conversations and meetings on theoretical history.
Criticism
Critics of cliodynamics often argue that the complex social formations of the past cannot and should not be reduced to quantifiable, analyzable 'data points', for doing so overlooks each historical society's particular circumstances and dynamics. Many historians and social scientists contend that there are no generalisable causal factors that can explain large numbers of cases, but that historical investigation should focus on the unique trajectories of each case, highlighting commonalities in outcomes where they exist. As Zhao notes, "most historians believe that the importance of any mechanism in history changes, and more importantly, that there is no time-invariant structure that can organise all historical mechanisms into a system."
Fiction
Starting in the 1940s, Isaac Asimov invented the fictional precursor to this discipline, in what he called psychohistory, as a major plot device in his Foundation series of science fiction novels. Robert Heinlein wrote a 1952 short story, The Year of the Jackpot, with a similar plot device about tracking the cycles of history and using them to predict the future.
See also
Critical juncture theory
Generations (book)
Historical geographic information system
Sociocultural evolution
Historical dynamics
References
Bibliography
Finley, Klint. 2013. "Mathematicians Predict The Future With Data from the Past." Wired.
Komlos J., Nefedov S. 2002. Compact Macromodel of Pre-Industrial Population Growth. Historical Methods. (35): 92–94.
Korotayev A. et al., A Trap At The Escape From The Trap? Demographic-Structural Factors of Political Instability in Modern Africa and West Asia. Cliodynamics 2/2 (2011): 1-28.
Tsirel, S. V. 2004. On the Possible Reasons for the Hyperexponential Growth of the Earth Population. Mathematical Modeling of Social and Economic Dynamics / Ed. by M. G. Dmitriev and A. P. Petrov, pp. 367–9. Moscow: Russian State Social University, 2004.
Turchin P. 2006. Population Dynamics and Internal Warfare: A Reconsideration . Social Evolution & History 5(2): 112–147 (with Andrey Korotayev).
Further reading
External links
Cliodynamics: The Journal of Quantitative History and Cultural Evolution
Seshat: Global History Databank
Peter Turchin's Cliodynamics Page
Historical Dynamics in a time of Crisis: Late Byzantium, 1204-1453 (a discussion of some concepts of cliodynamics from the point of view of medieval studies)
"Nature" article (August 2012): Human cycles: History as science
Evolution Institute
Cyclical theories
Dynamical systems
Econometric modeling
Economic history studies
Social history | 0.773532 | 0.987628 | 0.763962 |
Darwin Information Typing Architecture | The Darwin Information Typing Architecture (DITA) specification defines a set of document types for authoring and organizing topic-oriented information, as well as a set of mechanisms for combining, extending, and constraining document types. It is an open standard that is defined and maintained by the OASIS DITA Technical Committee.
The name derives from the following components:
Darwin: it uses the principles of specialization and inheritance, which is in some ways analogous to the naturalist Charles Darwin's concept of evolutionary adaptation,
Information Typing: which means each topic has a defined primary objective (procedure, glossary entry, troubleshooting information) and structure,
Architecture: DITA is an extensible set of structures.
Features and limitations
Content reuse
Topics are the foundation for content reuse, and can be reused across multiple publications. Fragments of content within topics can be reused through the use of content references (conref or conkeyref), a transclusion mechanism.
Information typing
The latest version of DITA (DITA 1.3) includes five specialized topic types: Task, Concept, Reference, Glossary Entry, and Troubleshooting. Each of these five topic types is a specialization of a generic Topic type, which contains a title element, a prolog element for metadata, and a body element. The body element contains paragraph, table, and list elements, similar to HTML.
A Task topic is intended for a procedure that describes how to accomplish a task. It lists a series of steps that users follow to produce an intended outcome. The steps are contained in a taskbody element, which is a specialization of the generic body element. The steps element is a specialization of an ordered list element.
Concept information is more objective, containing definitions, rules, and guidelines.
A Reference topic is for topics that describe command syntax, programming instructions, and other reference material, and usually contains detailed, factual material.
A Glossary Entry topic is used for defining a single sense of a given term. In addition to identifying the term and providing a definition, this topic type might also have basic terminology information, along with any acronyms or acronym expansions that may apply to the term.
The Troubleshooting topic describes a condition that the reader may want to correct, followed by one or more descriptions of its cause and suggested remedies.
Maps
A DITA map is a container for topics used to transform a collection of content into a publication. It gives the topics sequence and structure. A map can include relationship tables (reltables) that define hyperlinks between topics. Maps can be nested: they can reference topics or other maps, and can contain a variety of content types and metadata.
Metadata
DITA includes extensive metadata elements and attributes, both at topic level and within elements. Conditional text allows filtering or styling content based on attributes for audience, platform, product, and other properties. The conditional processing profile (a .ditaval file) is used to identify which values are to be used for conditional processing.
Specialization
DITA allows adding new elements and attributes through specialization of base DITA elements and attributes. Through specialization, DITA can accommodate new topic types, element types, and attributes as needed for specific industries or companies. Specializations of DITA for specific industries, such as the semiconductor industry, are standardized through OASIS technical committees or subcommittees. Many organizations using DITA also develop their own specializations.
The extensibility of DITA permits organizations to specialize DITA by defining specific information structures and still use standard tools to work with them. The ability to define company-specific information architectures enables companies to use DITA to enrich content with metadata that is meaningful to them, and to enforce company-specific rules on document structure.
Topic orientation
DITA content is created as topics, each an individual XML file. Typically, each topic covers a specific subject with a singular purpose, for example, a conceptual topic that provides an overview, or a procedural topic that explains how to accomplish a task. Content should be structured to resemble the file structure in which it is contained.
Creating content in DITA
DITA map and topic documents are XML files. As with HTML, any images, video files, or other files that must appear in the output are inserted via reference. Any XML editor or even text editor can be used to write DITA content, depending on the level of support required while authoring. Aids to authoring featured in specialized editors include WYSIWYG preview rendering, validation, and integration with a DITA processor, like DITA-OT or ditac.
Publishing content written in DITA
DITA is designed as an end-to-end architecture. In addition to indicating what elements, attributes, and rules are part of the DITA language, the DITA specification includes rules for publishing DITA content in HTML, online Help, print, Content Delivery Platform and other formats.
For example, the DITA specification indicates that if the conref attribute of element A contains a path to element B, the contents of element B will display in the location of element A. DITA-compliant publishing solutions, known as DITA processors, must handle the conref attribute according to the specified behaviour. Rules also exist for processing other rich features such as conditional text, index markers, and topic-to-topic links. Applications that transform DITA content into other formats, and meet the DITA specification's requirements for interpreting DITA markup, are known as DITA processors.
Localization
DITA provides support for translation via the localisation attribute group. Element attributes can be set to indicate whether the content of the element should be translated. The language of the element content can be specified, as can the writing direction, the index filtering and some terms that are injected when publishing to the final format. A DITA project can be converted to an XLIFF file and back into its original maps and topics, using the DITA-XLIFF Roundtrip Tool for DITA-OT and computer-assisted translation (CAT) tools, like Swordfish Translation Editor or Fluenta DITA Translation Manager, a tool designed to implement the translation workflow suggested by the article "Using XLIFF to Translate DITA Projects" published by the DITA Adoption TC at OASIS.
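For example, the localisation attributes can be set directly on individual elements. A minimal sketch, in which the element content and attribute values are purely illustrative:
<p xml:lang="en-GB" dir="ltr">This paragraph is marked as British English, written left to right.</p>
<ph translate="no">ProductName</ph> <!-- this phrase is excluded from translation -->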
History
The DITA standard is maintained by OASIS. The latest (current) version is 1.3, approved December 2015. An errata document for DITA 1.3 was approved in June 2018.
March 2001 Introduction by IBM of the core DTD and XML Schema grammar files and introductory material
April 2004 OASIS DITA Technical Committee formed
February 2005 IBM contributes the original DITA Open Toolkit project to SourceForge; though regularly confused with the DITA standard, DITA-OT is not affiliated with the OASIS DITA Technical Committee
June 2005 DITA v1.0 approved as an OASIS standard
August 2007 DITA V1.1 is approved by OASIS; major features include:
Bookmap specialization
Formal definition of DITAVAL syntax for content filtering
December 2010 DITA V1.2 is approved by OASIS; major features include:
Indirect linking with keys
New content reuse features
Enhanced glossary support, including acronyms
New industry specializations (Training, Machinery)
New support for controlled values / taxonomies (Subject Scheme specialization)
17 December 2015, DITA V1.3 is approved by OASIS; major features include:
Specification now delivered in three packages: Base, Technical content, and All Inclusive (with Learning and Training)
New troubleshooting topic type
Ability to use scoped keys
New domains to support MathML, equations, and SVG
Adds Relax NG XML syntax as the normative grammar for DITA
25 October 2016, DITA V1.3 Errata 01 is approved by OASIS
19 June 2018, DITA V1.3 Errata 02 is approved by OASIS
Code samples
Ditamap file (table of contents) sample
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map id="map" xml:lang="en">
<topicref format="dita" href="sample.dita" navtitle="Sample" type="topic"/>
</map>
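Relationship table (reltable) sample
A minimal sketch of a relationship table, as described under "Maps" above; it would be placed inside the map element, and the topic file names are illustrative:
<reltable>
  <relrow>
    <relcell><topicref href="widget-concept.dita"/></relcell>
    <relcell><topicref href="widget-task.dita"/></relcell>
  </relrow>
</reltable>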
Hello World (topic DTD)
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic xml:lang="en" id="sample">
<title>Sample</title>
<body>
<p>Hello World!</p>
</body>
</topic>
.ditaval file sample (for conditionalizing text)
<?xml version="1.0" encoding="utf-8"?>
<val>
<prop att="audience" val="novice" action="include" />
<prop att="audience" val="expert" action="exclude" />
</val>
Example of conditionalized text:
<p>
This is information useful for all audiences.
</p>
<p audience="novice">
This is information useful for a novice audience.
</p>
<p audience="expert">
This is information useful for an expert audience.
</p>
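Content reference (conref) sample
A minimal sketch of the conref mechanism described under "Publishing content written in DITA"; the file name library.dita, the topic id library, and the element id disclaimer are illustrative:
<!-- In library.dita (topic id="library"), the element to be reused: -->
<p id="disclaimer">This statement is maintained in a single source topic.</p>
<!-- In any other topic, the same element is pulled in by reference: -->
<p conref="library.dita#library/disclaimer"/>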
Implementations
See also
Comparison of document markup languages
List of document markup languages
References
External links
DITA 1.3 specifications
Document-centric XML-based standards
Markup languages
Technical communication
XML
XML-based standards
Open formats | 0.775144 | 0.985561 | 0.763952 |
Macromolecule | A macromolecule is a very large molecule important to biological processes, such as a protein or nucleic acid. It is composed of thousands of covalently bonded atoms. Many macromolecules are polymers of smaller molecules called monomers. The most common macromolecules in biochemistry are biopolymers (nucleic acids, proteins, and carbohydrates) and large non-polymeric molecules such as lipids, nanogels and macrocycles. Synthetic fibers and experimental materials such as carbon nanotubes are also examples of macromolecules.
Definition
The term macromolecule (macro- + molecule) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). At that time the term polymer, as introduced by Berzelius in 1832, had a different meaning from that of today: it was simply another form of isomerism, for example with benzene and acetylene, and had little to do with size.
Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate.
According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules.
Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass. Complicated biomacromolecules, on the other hand, require multi-faceted structural description such as the hierarchy of structures used to describe proteins. In British English, such molecules tend to be called "high polymers".
Properties
Macromolecules often have unusual physical properties that do not occur for smaller molecules.
Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents, instead forming colloids. Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low.
High concentrations of macromolecules in a solution can alter the rates and equilibrium constants of the reactions of other macromolecules, through an effect known as macromolecular crowding. This comes from macromolecules excluding other molecules from a large part of the volume of the solution, thereby increasing the effective concentrations of these molecules.
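As a minimal numerical sketch of this excluded-volume effect (ignoring the shape and packing details that make the real effect considerably larger): if crowding macromolecules occupy 30% of the solution volume, the remaining molecules are confined to the other 70%, so their effective concentration rises by roughly
$\frac{c_\mathrm{eff}}{c} \approx \frac{1}{1-0.3} \approx 1.4$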
Linear biopolymers
All living organisms are dependent on three essential biopolymers for their biological functions: DNA, RNA and proteins. Each of these molecules is required for life since each plays a distinct, indispensable role in the cell. The simple summary is that DNA makes RNA, and then RNA makes proteins.
DNA, RNA, and proteins all consist of a repeating structure of related building blocks (nucleotides in the case of DNA and RNA, amino acids in the case of proteins). In general, they are all unbranched polymers, and so can be represented in the form of a string. Indeed, they can be viewed as a string of beads, with each bead representing a single nucleotide or amino acid monomer linked together through covalent chemical bonds into a very long chain.
In most cases, the monomers within the chain have a strong propensity to interact with other amino acids or nucleotides. In DNA and RNA, this can take the form of Watson–Crick base pairs (G–C and A–T or A–U), although many more complicated interactions can and do occur.
Structural features
Because of the double-stranded nature of DNA, essentially all of the nucleotides take the form of Watson–Crick base pairs between nucleotides on the two complementary strands of the double helix.
In contrast, both RNA and proteins are normally single-stranded. Therefore, they are not constrained by the regular geometry of the DNA double helix, and so fold into complex three-dimensional shapes dependent on their sequence. These different shapes are responsible for many of the common properties of RNA and proteins, including the formation of specific binding pockets, and the ability to catalyse biochemical reactions.
DNA is optimised for encoding information
DNA is an information storage macromolecule that encodes the complete set of instructions (the genome) that are required to assemble, maintain, and reproduce every living organism.
DNA and RNA are both capable of encoding genetic information, because there are biochemical mechanisms which read the information coded within a DNA or RNA sequence and use it to generate a specified protein. On the other hand, the sequence information of a protein molecule is not used by cells to functionally encode genetic information.
DNA has three primary attributes that allow it to be far better than RNA at encoding genetic information. First, it is normally double-stranded, so that there are a minimum of two copies of the information encoding each gene in every cell. Second, DNA has a much greater stability against breakdown than does RNA, an attribute primarily associated with the absence of the 2'-hydroxyl group within every nucleotide of DNA. Third, highly sophisticated DNA surveillance and repair systems are present which monitor damage to the DNA and repair the sequence when necessary. Analogous systems have not evolved for repairing damaged RNA molecules. Consequently, chromosomes can contain many billions of atoms, arranged in a specific chemical structure.
Proteins are optimised for catalysis
Proteins are functional macromolecules responsible for catalysing the biochemical reactions that sustain life. Proteins carry out all functions of an organism, for example photosynthesis, neural function, vision, and movement.
The single-stranded nature of protein molecules, together with their composition of 20 or more different amino acid building blocks, allows them to fold into a vast number of different three-dimensional shapes, while providing binding pockets through which they can specifically interact with all manner of molecules. In addition, the chemical diversity of the different amino acids, together with different chemical environments afforded by local 3D structure, enables many proteins to act as enzymes, catalyzing a wide range of specific biochemical transformations within cells. Furthermore, proteins have evolved the ability to bind a wide range of cofactors and coenzymes, smaller molecules that can endow the protein with specific activities beyond those associated with the polypeptide chain alone.
RNA is multifunctional
RNA is multifunctional; its primary function is to encode proteins according to the instructions within a cell's DNA. RNA molecules also control and regulate many aspects of protein synthesis in eukaryotes.
RNA encodes genetic information that can be translated into the amino acid sequence of proteins, as evidenced by the messenger RNA molecules present within every cell, and the RNA genomes of a large number of viruses. The single-stranded nature of RNA, together with tendency for rapid breakdown and a lack of repair systems means that RNA is not so well suited for the long-term storage of genetic information as is DNA.
In addition, RNA is a single-stranded polymer that can, like proteins, fold into a very large number of three-dimensional structures. Some of these structures provide binding sites for other molecules and chemically active centers that can catalyze specific chemical reactions on those bound molecules. The limited number of different building blocks of RNA (4 nucleotides vs >20 amino acids in proteins), together with their lack of chemical diversity, results in catalytic RNA (ribozymes) being generally less-effective catalysts than proteins for most biological reactions.
Branched biopolymers
Carbohydrate macromolecules (polysaccharides) are formed from polymers of monosaccharides. Because monosaccharides have multiple functional groups, polysaccharides can form linear polymers (e.g. cellulose) or complex branched structures (e.g. glycogen). Polysaccharides perform numerous roles in living organisms, acting as energy stores (e.g. starch) and as structural components (e.g. chitin in arthropods and fungi). Many carbohydrates contain modified monosaccharide units that have had functional groups replaced or removed.
Polyphenols consist of a branched structure of multiple phenolic subunits. They can perform structural roles (e.g. lignin) as well as roles as secondary metabolites involved in signalling, pigmentation and defense.
Synthetic macromolecules
Some examples of macromolecules are synthetic polymers (plastics, synthetic fibers, and synthetic rubber), graphene, and carbon nanotubes. Polymers may also be prepared from inorganic matter, as in inorganic polymers and geopolymers. The incorporation of inorganic elements enables the tunability of properties and/or responsive behavior, as for instance in smart inorganic polymers.
See also
List of biophysically important macromolecular crystal structures
Small molecule
Soft matter
References
External links
Synopsis of Chapter 5, Campbell & Reece, 2002
Lecture notes on the structure and function of macromolecules
Several (free) introductory macromolecule related internet-based courses
Giant Molecules! by Ulysses Magee, ISSA Review Winter 2002–2003. Cached HTML version of a missing PDF file. Retrieved March 10, 2010. The article is based on the book Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry by Yasu Furukawa.
Molecular physics
Biochemistry
Polymer chemistry
Polymers | 0.766694 | 0.996418 | 0.763947 |
Composition of the human body | Body composition may be analyzed in various ways. This can be done in terms of the chemical elements present, or by molecular structure e.g., water, protein, fats (or lipids), hydroxylapatite (in bones), carbohydrates (such as glycogen and glucose) and DNA. In terms of tissue type, the body may be analyzed into water, fat, connective tissue, muscle, bone, etc. In terms of cell type, the body contains hundreds of different types of cells, but notably, the largest number of cells contained in a human body (though not the largest mass of cells) are not human cells, but bacteria residing in the normal human gastrointestinal tract.
Elements
About 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life. The remaining elements are trace elements, of which more than a dozen are thought on the basis of good evidence to be necessary for life. All of the mass of the trace elements put together (less than 10 grams for a human body) do not add up to the body mass of magnesium, the least common of the 11 non-trace elements.
Other elements
Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple common contaminants without function (examples: caesium, titanium), while many others are thought to be active toxins, depending on amount (cadmium, mercury, lead, radioactives). In humans, arsenic is toxic, and its levels in foods and dietary supplements are closely monitored to reduce or eliminate its intake.
Some elements (silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. Bromine is used by some (though not all) bacteria, fungi, diatoms, and seaweeds, and opportunistically in eosinophils in humans. One study has indicated bromine to be necessary to collagen IV synthesis in humans. Fluorine is used by a number of plants to manufacture toxins but in humans its only known function is as a local topical hardening agent in tooth enamel.
Elemental composition list
The average adult human body contains approximately 7×10^27 atoms and at least detectable traces of 60 chemical elements. About 29 of these elements are thought to play an active positive role in life and health in humans.
The relative amounts of each element vary by individual, mainly due to differences in the proportion of fat, muscle and bone in their body. Persons with more fat will have a higher proportion of carbon and a lower proportion of most other elements (the proportion of hydrogen will be about the same).
The numbers in the table are averages of different numbers reported by different references.
The adult human body averages ~53% water. This varies substantially by age, sex, and adiposity. In a large sample of adults of all ages and both sexes, the figure for water fraction by weight was found to be 48 ±6% for females and 58 ±8% water for males. Water is ~11% hydrogen by mass but ~67% hydrogen by atomic percent, and these numbers along with the complementary % numbers for oxygen in water, are the largest contributors to overall mass and atomic composition figures. Because of water content, the human body contains more oxygen by mass than any other element, but more hydrogen by atom-fraction than any element.
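These two percentages follow directly from the formula of water, H2O (molar mass about 18 g/mol):
$\text{hydrogen by mass} \approx \frac{2 \times 1}{18} \approx 11\%, \qquad \text{hydrogen by atom count} = \frac{2}{3} \approx 67\%$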
The elements listed below as "Essential in humans" are those listed by the US Food and Drug Administration as essential nutrients, as well as six additional elements: oxygen, carbon, hydrogen, and nitrogen (the fundamental building blocks of life on Earth), sulfur (essential to all cells) and cobalt (a necessary component of vitamin B12). Elements listed as "Possibly" or "Probably" essential are those cited by the US National Research Council as beneficial to human health and possibly or probably essential.
The body's iron content averages about 3 g in males and about 2.3 g in females.
Of the 94 naturally occurring chemical elements, 61 are listed in the table above. Of the remaining 33, it is not known how many occur in the human body.
Most of the elements needed for life are relatively common in the Earth's crust. Aluminium, the third most common element in the Earth's crust (after oxygen and silicon), serves no function in living cells, but is toxic in large amounts, depending on its physical and chemical forms and magnitude, duration, frequency of exposure, and how it was absorbed by the human body. Transferrins can bind aluminium.
Periodic table
Composition
The composition of the human body can be classified as follows:
Water
Proteins
Fats (or lipids)
Hydroxyapatite in bones
Carbohydrates such as glycogen and glucose
DNA and RNA
Inorganic ions such as sodium, potassium, chloride, bicarbonate, phosphate
Gases mainly being oxygen, carbon dioxide
Many cofactors.
Tissues
Body composition can also be expressed in terms of various types of material, such as:
Muscle
Fat
Bone and teeth
Nervous tissue (brain and nerves)
Hormones
Connective tissue
Body fluids (blood, lymph, urine)
Contents of digestive tract, including intestinal gas
Air in lungs
Epithelium
Composition by cell type
There are many species of bacteria and other microorganisms that live on or inside the healthy human body. In fact, there are roughly as many microbial cells as human cells in the human body by number (though much less by mass or volume). Some of these symbionts are necessary for our health. Those that neither help nor harm humans are called commensal organisms.
See also
List of organs of the human body
Hydrostatic weighing
Dietary element
Composition of blood
List of human blood components
Body composition
Abundance of elements in Earth's crust
Abundance of the chemical elements
References
Biochemistry
Human anatomy
Human physiology | 0.765261 | 0.998279 | 0.763944 |
Paramagnetism | Paramagnetism is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. Paramagnetic materials include most chemical elements and some compounds; they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer.
Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist. Due to their spin, unpaired electrons have a magnetic dipole moment and act like tiny magnets. An external magnetic field causes the electrons' spins to align parallel to the field, causing a net attraction. Paramagnetic materials include aluminium, oxygen, titanium, and iron oxide (FeO). Therefore, a simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: if all electrons in the particle are paired, then the substance made of this particle is diamagnetic; if it has unpaired electrons, then the substance is paramagnetic.
Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field because thermal motion randomizes the spin orientations. (Some paramagnetic materials retain spin disorder even at absolute zero, meaning they are paramagnetic in the ground state, i.e. in the absence of thermal motion.) Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnetic materials is non-linear and much stronger, so that it is easily observed, for instance, in the attraction between a refrigerator magnet and the iron of the refrigerator itself.
Relation to electron spins
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum.
If there is sufficient energy exchange between neighbouring dipoles, they will interact, and may spontaneously align or anti-align and form magnetic domains, resulting in ferromagnetism (permanent magnets) or antiferromagnetism, respectively. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature, and in antiferromagnets above their Néel temperature. At these temperatures, the available thermal energy simply overcomes the interaction energy between the spins.
In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of $10^{-3}$ to $10^{-5}$ for most paramagnets, but may be as high as $10^{-1}$ for synthetic paramagnets such as ferrofluids.
Delocalization
In conductive materials, the electrons are delocalized, that is, they travel through the solid more or less as free electrons. Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands.
In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons. When a magnetic field is applied, the conduction band splits apart into a spin-up and a spin-down band due to the difference in magnetic potential energy for spin-up and spin-down electrons.
Since the Fermi level must be identical for both bands, this means that there will be a small surplus of the type of spin in the band that moved downwards. This effect is a weak form of paramagnetism known as Pauli paramagnetism.
The effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms. Stronger forms of magnetism usually require localized rather than itinerant electrons. However, in some cases a band structure can result in which there are two delocalized sub-bands with states of opposite spins that have different energies. If one subband is preferentially filled over the other, one can have itinerant ferromagnetic order. This situation usually only occurs in relatively narrow (d-)bands, which are poorly delocalized.
s and p electrons
Generally, strong delocalization in a solid due to large overlap with neighboring wave functions means that there will be a large Fermi velocity; this means that the number of electrons in a band is less sensitive to shifts in that band's energy, implying a weak magnetism. This is why s- and p-type metals are typically either Pauli-paramagnetic or, as in the case of gold, even diamagnetic. In the latter case the diamagnetic contribution from the closed-shell inner electrons simply wins over the weak paramagnetic term of the almost free electrons.
d and f electrons
Stronger magnetic effects are typically only observed when d or f electrons are involved. Particularly the latter are usually strongly localized. Moreover, the size of the magnetic moment on a lanthanide atom can be quite large as it can carry up to 7 unpaired electrons in the case of gadolinium(III) (hence its use in MRI). The high magnetic moments associated with lanthanides is one reason why superstrong magnets are typically based on elements like neodymium or samarium.
Molecular localization
The above picture is a generalization as it pertains to materials with an extended lattice rather than a molecular structure. Molecular structure can also lead to localization of electrons. Although there are usually energetic reasons why a molecular structure results such that it does not exhibit partly filled orbitals (i.e. unpaired spins), some non-closed shell moieties do occur in nature. Molecular oxygen is a good example. Even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior. The unpaired spins reside in orbitals derived from oxygen p wave functions, but the overlap is limited to the one neighbor in the O2 molecules. The distances to other oxygen atoms in the lattice remain too large to lead to delocalization and the magnetic moments remain unpaired.
Theory
The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. The paramagnetic response has then two possible quantum origins, either coming from permanent magnetic moments of the ions or from the spatial motion of the conduction electrons inside the material. Both descriptions are given below.
Curie's law
For low levels of magnetization, the magnetization of paramagnets follows what is known as Curie's law, at least approximately. This law indicates that the susceptibility, $\chi$, of paramagnetic materials is inversely proportional to their temperature, i.e. that materials become more magnetic at lower temperatures. The mathematical expression is:
$\mathbf{M} = \chi\mathbf{H} = \frac{C}{T}\,\mathbf{H}$
where:
$\mathbf{M}$ is the resulting magnetization, measured in amperes/meter (A/m),
$\chi$ is the volume magnetic susceptibility (dimensionless),
$\mathbf{H}$ is the auxiliary magnetic field (A/m),
$T$ is absolute temperature, measured in kelvins (K),
$C$ is a material-specific Curie constant (K).
Curie's law is valid under the commonly encountered conditions of low magnetization ($\mu_\mathrm{B}H \lesssim k_\mathrm{B}T$), but does not apply in the high-field/low-temperature regime where saturation of magnetization occurs ($\mu_\mathrm{B}H \gtrsim k_\mathrm{B}T$) and magnetic dipoles are all aligned with the applied field. When the dipoles are aligned, increasing the external field will not increase the total magnetization since there can be no further alignment.
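As a rough numerical illustration of this condition: in an applied field of 1 tesla the magnetic energy of a single Bohr magneton is
$\mu_\mathrm{B} B \approx 9.3\times10^{-24}\ \mathrm{J} \approx 5.8\times10^{-5}\ \mathrm{eV},$
while the thermal energy at room temperature is
$k_\mathrm{B} T \approx 4.1\times10^{-21}\ \mathrm{J} \approx 2.6\times10^{-2}\ \mathrm{eV},$
so $\mu_\mathrm{B}B/k_\mathrm{B}T \sim 10^{-3}$ and Curie's law applies comfortably; saturation only becomes relevant at very low temperatures or very high fields.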
For a paramagnetic ion with noninteracting magnetic moments with angular momentum J, the Curie constant is related to the individual ions' magnetic moments,
$C = \frac{n\,\mu_0\,\mu_\mathrm{eff}^2}{3 k_\mathrm{B}}, \qquad \mu_\mathrm{eff} = g_J\,\mu_\mathrm{B}\sqrt{J(J+1)},$
where $n$ is the number of atoms per unit volume. The parameter $\mu_\mathrm{eff}$ is interpreted as the effective magnetic moment per paramagnetic ion. If one uses a classical treatment with molecular magnetic moments represented as discrete magnetic dipoles, $\mu$, a Curie law expression of the same form will emerge with $\mu$ appearing in place of $\mu_\mathrm{eff}$.
When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with d3 or high-spin d5 configurations, the effective magnetic moment takes the spin-only form (with g-factor ge = 2.0023... ≈ 2)
$\mu_\mathrm{eff} \simeq g_e\,\mu_\mathrm{B}\sqrt{S(S+1)} = \mu_\mathrm{B}\sqrt{N_u(N_u+2)},$
where $N_u$ is the number of unpaired electrons ($S = N_u/2$). In other transition metal complexes this yields a useful, if somewhat cruder, estimate.
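As a worked example of the spin-only formula: a high-spin d5 ion has $N_u = 5$ unpaired electrons, giving
$\mu_\mathrm{eff} \approx \sqrt{5\times 7}\,\mu_\mathrm{B} \approx 5.9\,\mu_\mathrm{B},$
close to the moments measured for ions such as Mn2+ and Fe3+.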
When the Curie constant is null, second-order effects that couple the ground state with the excited states can also lead to a paramagnetic susceptibility independent of the temperature, known as Van Vleck susceptibility.
Pauli paramagnetism
For some alkali metals and noble metals, conduction electrons are weakly interacting and delocalized in space forming a Fermi gas. For these materials one contribution to the magnetic response comes from the interaction between the electron spins and the magnetic field known as Pauli paramagnetism. For a small magnetic field $H$, the additional energy per electron from the interaction between an electron spin and the magnetic field is given by:
$\Delta E = \pm\mu_0\,\mu_e H = \pm\mu_0\,\frac{g_e\mu_\mathrm{B}}{\hbar}\left(\frac{\hbar}{2}\right)H = \pm\mu_0\,\mu_\mathrm{B} H,$
where $\mu_0$ is the vacuum permeability, $\mu_e$ is the electron magnetic moment, $\mu_\mathrm{B}$ is the Bohr magneton, $\hbar$ is the reduced Planck constant, and the g-factor cancels with the spin $s = \pm\tfrac{1}{2}$. The $\pm$ indicates that the sign is positive (negative) when the electron spin component in the direction of $H$ is parallel (antiparallel) to the magnetic field.
For low temperatures with respect to the Fermi temperature $T_\mathrm{F}$ (around $10^4$ kelvins for metals), the number density of electrons pointing parallel (antiparallel) to the magnetic field can be written as:
$n_\pm = \frac{n}{2} \pm \frac{\mu_0\,\mu_\mathrm{B} H}{2}\, g(E_\mathrm{F}),$
with $n$ the total free-electron density and $g(E_\mathrm{F})$ the electronic density of states (number of states per energy per volume) at the Fermi energy $E_\mathrm{F}$.
In this approximation the magnetization is given as the magnetic moment of one electron times the difference in densities:
$M = \mu_\mathrm{B}(n_+ - n_-) = \mu_0\,\mu_\mathrm{B}^2\, g(E_\mathrm{F})\, H,$
which yields a positive paramagnetic susceptibility independent of temperature:
$\chi_\mathrm{P} = \mu_0\,\mu_\mathrm{B}^2\, g(E_\mathrm{F}).$
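For a free-electron gas the density of states at the Fermi level is $g(E_\mathrm{F}) = 3n/(2E_\mathrm{F})$, so the Pauli susceptibility can equivalently be written
$\chi_\mathrm{P} = \frac{3\,n\,\mu_0\,\mu_\mathrm{B}^2}{2\,E_\mathrm{F}} = \frac{3\,n\,\mu_0\,\mu_\mathrm{B}^2}{2\,k_\mathrm{B}T_\mathrm{F}},$
which makes explicit that the Fermi temperature $T_\mathrm{F}$ plays the role here that the temperature $T$ plays in Curie's law.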
The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with the Landau diamagnetic susceptibility, which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field, while the Landau susceptibility comes from the spatial motion of the electrons and is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes as the effective mass of the charge carriers $m^*$ can differ from the electron mass $m_\mathrm{e}$.
The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
Pauli paramagnetism is named after the physicist Wolfgang Pauli. Before Pauli's theory, the lack of a strong Curie paramagnetism in metals was an open problem as the leading Drude model could not account for this contribution without the use of quantum statistics.
Pauli paramagnetism and Landau diamagnetism are essentially applications of the spin and the free electron model, the first is due to intrinsic spin of electrons; the second is due to their orbital motion.
Examples of paramagnets
Materials that are called "paramagnets" are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. In principle any system that contains atoms, ions, or molecules with unpaired spins can be called a paramagnet, but the interactions between them need to be carefully considered.
Systems with minimal interactions
The narrowest definition would be: a system with unpaired spins that do not interact with each other. In this narrowest sense, the only pure paramagnet is a dilute gas of monatomic hydrogen atoms. Each atom has one non-interacting unpaired electron.
A gas of lithium atoms, by contrast, already possesses two paired core electrons per atom that produce a diamagnetic response of opposite sign. Strictly speaking, Li is therefore a mixed system, although admittedly the diamagnetic component is weak and often neglected. In the case of heavier elements the diamagnetic contribution becomes more important and in the case of metallic gold it dominates the properties. The element hydrogen is virtually never called 'paramagnetic' because the monatomic gas is stable only at extremely high temperature; H atoms combine to form molecular H2 and in so doing, the magnetic moments are lost (quenched) because the spins pair. Hydrogen is therefore diamagnetic and the same holds true for many other elements. Although the electronic configurations of the individual atoms (and ions) of most elements contain unpaired spins, they are not necessarily paramagnetic, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons because f (especially 4f) orbitals are radially contracted and they overlap only weakly with orbitals on adjacent atoms. Consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered.
Thus, condensed phase paramagnets are only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers. There are two classes of materials for which this holds:
Molecular materials with a (isolated) paramagnetic center.
Good examples are coordination complexes of d- or f-metals or proteins with such centers, e.g. myoglobin. In such materials the organic part of the molecule acts as an envelope shielding the spins from their neighbors.
Small molecules can be stable in radical form, oxygen O2 is a good example. Such systems are quite rare because they tend to be rather reactive.
Dilute systems.
Dissolving a paramagnetic species in a diamagnetic lattice at small concentrations, e.g. Nd3+ in CaCl2 will separate the neodymium ions at large enough distances that they do not interact. Such systems are of prime importance for what can be considered the most sensitive method to study paramagnetic systems: EPR.
Systems with interactions
As stated above, many materials that contain d- or f-elements do retain unquenched spins. Salts of such elements often show paramagnetic behavior but at low enough temperatures the magnetic moments may order. It is not uncommon to call such materials 'paramagnets', when referring to their paramagnetic behavior above their Curie or Néel-points, particularly if such temperatures are very low or have never been properly measured. Even for iron it is not uncommon to say that iron becomes a paramagnet above its relatively high Curie-point. In that case the Curie-point is seen as a phase transition between a ferromagnet and a 'paramagnet'. The word paramagnet now merely refers to the linear response of the system to an applied field, the temperature dependence of which requires an amended version of Curie's law, known as the Curie–Weiss law:
$\chi = \frac{C}{T - \theta}$
This amended law includes a term θ that describes the exchange interaction that is present albeit overcome by thermal motion. The sign of θ depends on whether ferro- or antiferromagnetic interactions dominate and it is seldom exactly zero, except in the dilute, isolated cases mentioned above.
Obviously, the paramagnetic Curie–Weiss description above TN or TC is a rather different interpretation of the word "paramagnet" as it does not imply the absence of interactions, but rather that the magnetic structure is random in the absence of an external field at these sufficiently high temperatures. Even if θ is close to zero this does not mean that there are no interactions, just that the aligning ferro- and the anti-aligning antiferromagnetic ones cancel. An additional complication is that the interactions are often different in different directions of the crystalline lattice (anisotropy), leading to complicated magnetic structures once ordered.
Randomness of the structure also applies to the many metals that show a net paramagnetic response over a broad temperature range. They do not follow a Curie type law as function of temperature however; often they are more or less temperature independent. This type of behavior is of an itinerant nature and better called Pauli-paramagnetism, but it is not unusual to see, for example, the metal aluminium called a "paramagnet", even though interactions are strong enough to give this element very good electrical conductivity.
Superparamagnets
Some materials show induced magnetic behavior that follows a Curie type law but with exceptionally large values for the Curie constants. These materials are known as superparamagnets. They are characterized by a strong ferromagnetic or ferrimagnetic type of coupling into domains of a limited size that behave independently from one another. The bulk properties of such a system resemble those of a paramagnet, but on a microscopic level they are ordered. The materials do show an ordering temperature above which the behavior reverts to ordinary paramagnetism (with interaction). Ferrofluids are a good example, but the phenomenon can also occur inside solids, e.g., when dilute paramagnetic centers are introduced in a strong itinerant medium of ferromagnetic coupling such as when Fe is substituted in TlCu2Se2 or the alloy AuFe. Such systems contain ferromagnetically coupled clusters that freeze out at lower temperatures. They are also called mictomagnets.
See also
Magnetochemistry
References
Further reading
The Feynman Lectures on Physics Vol. II, Ch. 35: "Paramagnetism and Magnetic Resonance" (https://feynmanlectures.caltech.edu/II_35.html)
Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 1996).
John David Jackson, Classical Electrodynamics (Wiley: New York, 1999).
External links
"Magnetism: Models and Mechanisms" in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter'', Jülich, 2013,
Electric and magnetic fields in matter
Magnetism
Physical phenomena
Quantum phases | 0.766628 | 0.996473 | 0.763924 |
Dialectic | Dialectic (, dialektikḗ; ), also known as the dialectical method, refers originally to dialogue between people holding different points of view about a subject but wishing to arrive at the truth through reasoned argumentation. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and rhetoric. It has its origins in ancient philosophy and continued to be developed in the Middle Ages.
Hegelianism refigured "dialectic" to no longer refer to a literal dialogue. Instead, the term takes on the specialized meaning of development by way of overcoming internal contradictions. Dialectical materialism, a theory advanced by Karl Marx and Friedrich Engels, adapted the Hegelian dialectic into a materialist theory of history. The legacy of Hegelian and Marxian dialectics has been criticized by philosophers such as Karl Popper and Mario Bunge, who considered it unscientific.
Dialectic implies a developmental process and so does not naturally fit within classical logic. Nevertheless, some twentieth-century logicians have attempted to formalize it.
History
There are a variety of meanings of dialectic or dialectics within Western philosophy.
Classical philosophy
In classical philosophy, dialectic is a form of reasoning based upon dialogue of arguments and counter-arguments, advocating propositions (theses) and counter-propositions (antitheses). The outcome of such a dialectic might be the refutation of a relevant proposition, or a synthesis, a combination of the opposing assertions, or a qualitative improvement of the dialogue.
The term "dialectic" owes much of its prestige to its role in the philosophies of Socrates and Plato, in the Greek Classical period (5th to 4th centuries BC). Aristotle said that it was the pre-Socratic philosopher Zeno of Elea who invented dialectic, of which the dialogues of Plato are examples of the Socratic dialectical method.
Socratic method
The Socratic dialogues are a particular form of dialectic known as the method of elenchus (literally, "refutation, scrutiny") whereby a series of questions clarifies a more precise statement of a vague belief, logical consequences of that statement are explored, and a contradiction is discovered. The method is largely destructive, in that false belief is exposed and only constructive in that this exposure may lead to further search for truth. The detection of error does not amount to a proof of the antithesis. For example, a contradiction in the consequences of a definition of piety does not provide a correct definition. The principal aim of Socratic activity may be to improve the soul of the interlocutors, by freeing them from unrecognized errors, or indeed, by teaching them the spirit of inquiry.
In common cases, Socrates uses enthymemes as the foundation of his argument.
For example, in the Euthyphro, Socrates asks Euthyphro to provide a definition of piety. Euthyphro replies that the pious is that which is loved by the gods. But, Socrates also has Euthyphro agreeing that the gods are quarrelsome and their quarrels, like human quarrels, concern objects of love or hatred. Therefore, Socrates reasons, at least one thing exists that certain gods love but other gods hate. Again, Euthyphro agrees. Socrates concludes that if Euthyphro's definition of piety is acceptable, then there must exist at least one thing that is both pious and impious (as it is both loved and hated by the gods)—which Euthyphro admits is absurd. Thus, Euthyphro is brought to a realization by this dialectical method that his definition of piety is not sufficiently meaningful.
In another example, in Plato's Gorgias, dialectic occurs between Socrates, the Sophist Gorgias, and two men, Polus and Callicles. Because Socrates' ultimate goal was to reach true knowledge, he was even willing to change his own views in order to arrive at the truth. The fundamental goal of dialectic, in this instance, was to establish a precise definition of the subject (in this case, rhetoric) and with the use of argumentation and questioning, make the subject even more precise. In the Gorgias, Socrates reaches the truth by asking a series of questions and in return, receiving short, clear answers.
Plato
In Platonism and Neoplatonism, dialectic assumed an ontological and metaphysical role in that it became the process whereby the intellect passes from sensibles to intelligibles, rising from idea to idea until it finally grasps the supreme idea, the first principle which is the origin of all. The philosopher is consequently a "dialectician". In this sense, dialectic is a process of inquiry that does away with hypotheses up to the first principle. It slowly embraces multiplicity in unity. The philosopher Simon Blackburn wrote that the dialectic in this sense is used to understand "the total process of enlightenment, whereby the philosopher is educated so as to achieve knowledge of the supreme good, the Form of the Good".
Medieval philosophy
Logic, which could be considered to include dialectic, was one of the three liberal arts taught in medieval universities as part of the trivium; the other elements were rhetoric and grammar.
Based mainly on Aristotle, the first medieval philosopher to work on dialectics was Boethius (480–524). After him, many scholastic philosophers also made use of dialectics in their works, such as Abelard, William of Sherwood, Garlandus Compotista, Walter Burley, Roger Swyneshed, William of Ockham, and Thomas Aquinas.
This dialectic (a quaestio disputata) was formed as follows:
The question to be determined ("It is asked whether...");
A provisory answer to the question ("And it seems that...");
The principal arguments in favor of the provisory answer;
An argument against the provisory answer, traditionally a single argument from authority ("On the contrary...");
The determination of the question after weighing the evidence ("I answer that...");
The replies to each of the initial objections. ("To the first, to the second etc., I answer that...")
Modern philosophy
The concept of dialectics was given new life at the start of the 19th century by Georg Wilhelm Friedrich Hegel, whose dialectical model of nature and of history made dialectics a fundamental aspect of reality, instead of regarding the contradictions into which dialectics leads as evidence of the limits of pure reason, as Immanuel Kant had argued. Hegel was influenced by Johann Gottlieb Fichte's conception of synthesis, although Hegel didn't adopt Fichte's "thesis–antithesis–synthesis" language except to describe Kant's philosophy: rather, Hegel argued that such language was "a lifeless schema" imposed on various contents, whereas he saw his own dialectic as flowing out of "the inner life and self-movement" of the content itself.
In the mid-19th century, Hegelian dialectic was appropriated by Karl Marx and Friedrich Engels and retooled in what they considered to be a nonidealistic manner. It would also become a crucial part of later representations of Marxism as a philosophy of dialectical materialism. These representations often contrasted dramatically and led to vigorous debate among different Marxist groups.
Hegelian dialectic
The Hegelian dialectic describes changes in the forms of thought through their own internal contradictions into concrete forms that overcome previous oppositions.
This dialectic is sometimes presented in a threefold manner, as first stated by Heinrich Moritz Chalybäus, as comprising three dialectical stages of development: a thesis, giving rise to its reaction; an antithesis, which contradicts or negates the thesis; and the tension between the two being resolved by means of a synthesis. Hegel himself, however, opposed the use of these terms.
By contrast, the terms abstract, negative, and concrete suggest a flaw or an incompleteness in any initial thesis. For Hegel, the concrete must always pass through the phase of the negative, that is, mediation. This is the essence of what is popularly called Hegelian dialectics.
To describe the activity of overcoming the negative, Hegel often used the term Aufhebung, variously translated into English as "sublation" or "overcoming", to conceive of the working of the dialectic. Roughly, the term indicates preserving the true portion of an idea, thing, society, and so forth, while moving beyond its limitations. What is sublated, on the one hand, is overcome, but, on the other hand, is preserved and maintained.
As in the Socratic dialectic, Hegel claimed to proceed by making implicit contradictions explicit: each stage of the process is the product of contradictions inherent or implicit in the preceding stage. On his view, the purpose of dialectics is "to study things in their own being and movement and thus to demonstrate the finitude of the partial categories of understanding".
For Hegel, even history can be reconstructed as a unified dialectic, the major stages of which chart a progression from self-alienation as servitude to self-unification and realization as the rational constitutional state of free and equal citizens.
Marxist dialectic
Marxist dialectic is a form of Hegelian dialectic which applies to the study of historical materialism. Marxist dialectic is thus a method by which one can examine social and economic behaviors. It is the foundation of the philosophy of dialectical materialism, which forms the basis of historical materialism.
In the Marxist tradition, "dialectic" refers to regular and mutual relationships, interactions, and processes in nature, society, and human thought.
A dialectical relationship is a relationship in which two phenomena or ideas mutually impact each other, leading to development and negation. Development refers to the change and motion of phenomena and ideas from less advanced to more advanced or from less complete to more complete. Dialectical negation refers to a stage of development in which a contradiction between two previous subjects gives rise to a new subject. In the Marxist view, dialectical negation is never an endpoint, but instead creates new conditions for further development and negation.
Karl Marx and Friedrich Engels, writing several decades after Hegel's death, proposed that Hegel's dialectic is too abstract. Against this, Marx presented his own dialectical method, which he claimed to be the "direct opposite" of Hegel's.
Marxist dialectics is exemplified in Das Kapital, in which Marx set out his dialectical method of analysis.
Class struggle is the primary contradiction to be resolved by Marxist dialectics because of its central role in the social and political lives of a society. Nonetheless, Marx and Marxists developed the concept of class struggle to comprehend the dialectical contradictions between mental and manual labor and between town and country. Hence, philosophic contradiction is central to the development of dialectics: the progress from quantity to quality, the acceleration of gradual social change; the negation of the initial development of the status quo; the negation of that negation; and the high-level recurrence of features of the original status quo.
Friedrich Engels further proposed that nature itself is dialectical, and that this is "a very simple process, which is taking place everywhere and every day". His dialectical "law of the transformation of quantity into quality and vice versa" corresponds, according to Christian Fuchs, to the concept of phase transition and anticipated the concept of emergence "a hundred years ahead of his time".
For Vladimir Lenin, the primary feature of Marx's "dialectical materialism" (Lenin's term) is its application of materialist philosophy to history and social sciences. Lenin's main contribution to the philosophy of dialectical materialism is his theory of reflection, which presents human consciousness as a dynamic reflection of the objective material world that fully shapes its contents and structure.
Later, Stalin's works on the subject established a rigid and formalistic division of Marxist–Leninist theory into dialectical materialism and historical materialism. While the first was supposed to be the key method and theory of the philosophy of nature, the second was the Soviet version of the philosophy of history.
Soviet systems theory pioneer Alexander Bogdanov viewed Hegelian and materialist dialectic as progressive, albeit inexact and diffuse, attempts at achieving what he called tektology, or a universal science of organization.
Dialectical naturalism
Dialectical naturalism is a term coined by American philosopher Murray Bookchin to describe the philosophical underpinnings of the political program of social ecology. Dialectical naturalism explores the complex interrelationship between social problems, and the direct consequences they have on the ecological impact of human society. Bookchin offered dialectical naturalism as a contrast to what he saw as the "empyrean, basically antinaturalistic dialectical idealism" of Hegel, and "the wooden, often scientistic dialectical materialism of orthodox Marxists".
Theological dialectics
Neo-orthodoxy, in Europe also known as theology of crisis and dialectical theology, is an approach to theology in Protestantism that was developed in the aftermath of the First World War (1914–1918). It is characterized as a reaction against doctrines of 19th-century liberal theology and a more positive reevaluation of the teachings of the Reformation, much of which had been in decline (especially in western Europe) since the late 18th century. It is primarily associated with two Swiss professors and pastors, Karl Barth (1886–1968) and Emil Brunner (1899–1966), even though Barth himself expressed his unease in the use of the term.
In dialectical theology the difference and opposition between God and human beings is stressed in such a way that all human attempts at overcoming this opposition through moral, religious or philosophical idealism must be characterized as 'sin'. In the death of Christ humanity is negated and overcome, but this judgment also points forwards to the resurrection in which humanity is reestablished in Christ. For Barth this meant that only through God's 'no' to everything human can his 'yes' be perceived. Applied to traditional themes of Protestant theology, such as double predestination, this means that election and reprobation cannot be viewed as a quantitative limitation of God's action. Rather it must be seen as its "qualitative definition". As Christ bore the rejection as well as the election of God for all humanity, every person is subject to both aspects of God's double predestination.
Dialectic prominently figured in Bernard Lonergan's philosophy, in his books Insight and Method in Theology. Michael Shute wrote about Lonergan's use of dialectic in The Origins of Lonergan's Notion of the Dialectic of History. For Lonergan, dialectic is both individual and operative in community. Simply described, it is a dynamic process that results in something new.
Dialectic is one of the eight functional specialties Lonergan envisaged for theology to bring this discipline into the modern world. Lonergan believed that the lack of an agreed method among scholars had inhibited substantive agreement from being reached and progress from being made compared to the natural sciences. Karl Rahner, S.J., however, criticized Lonergan's theological method in a short article entitled "Some Critical Thoughts on 'Functional Specialties in Theology'" where he stated: "Lonergan's theological methodology seems to me to be so generic that it really fits every science, and hence is not the methodology of theology as such, but only a very general methodology of science."
Criticisms
Friedrich Nietzsche viewed dialectic as a method that imposes artificial boundaries and suppresses the richness and diversity of reality. He rejected the notion that truth can be fully grasped through dialectical reasoning and offered a critique of dialectic, challenging its traditional framework and emphasizing the limitations of its approach to understanding reality. He expressed skepticism towards its methodology and implications in his work Twilight of the Idols: "I mistrust all systematizers and I avoid them. The will to a system is a lack of integrity". In the same book, Nietzsche criticized Socrates' dialectics because he believed it prioritized reason over instinct, resulting in the suppression of individual passions and the imposition of an artificial morality.
Karl Popper attacked the dialectic repeatedly. In 1937, he wrote and delivered a paper entitled "What Is Dialectic?" in which he criticized the dialectics of Hegel, Marx, and Engels for their willingness "to put up with contradictions". He argued that accepting contradiction as a valid form of logic would lead to the principle of explosion and thus trivialism. Popper concluded the essay with these words: "The whole development of dialectic should be a warning against the dangers inherent in philosophical system-building. It should remind us that philosophy should not be made a basis for any sort of scientific system and that philosophers should be much more modest in their claims. One task which they can fulfill quite usefully is the study of the critical methods of science". Seventy years later, Nicholas Rescher responded that "Popper's critique touches only a hyperbolic version of dialectic", and he quipped: "Ironically, there is something decidedly dialectical about Popper's critique of dialectics." Around the same time as Popper's critique was published, philosopher Sidney Hook discussed the "sense and nonsense in dialectic" and rejected two conceptions of dialectic as unscientific but accepted one conception as a "convenient organizing category".
The philosopher of science and physicist Mario Bunge repeatedly criticized Hegelian and Marxian dialectics, calling them "fuzzy and remote from science" and a "disastrous legacy". He concluded: "The so-called laws of dialectics, such as formulated by Engels (1940, 1954) and Lenin (1947, 1981), are false insofar as they are intelligible." Poe Yu-ze Wan, reviewing Bunge's criticisms of dialectics, found Bunge's arguments to be important and sensible, but he thought that dialectics could still serve some heuristic purposes for scientists. Wan pointed out that scientists such as the American Marxist biologists Richard Levins and Richard Lewontin (authors of The Dialectical Biologist) and the German-American evolutionary biologist Ernst Mayr, not a Marxist himself, have found agreement between dialectical principles and their own scientific outlooks, although Wan opined that Engels's "laws" of dialectics "in fact 'explain' nothing".
Even some Marxists are critical of the term "dialectics". For instance, Michael Heinrich wrote, "More often than not, the grandiose rhetoric about dialectics is reducible to the simple fact that everything is dependent upon everything else and is in a state of interaction and that it's all rather complicated—which is true in most cases, but doesn't really say anything."
Formalization
Defeasibility
Dialog games
Mathematics
Mathematician William Lawvere interpreted dialectics in the setting of categorical logic in terms of adjunctions between idempotent monads. This perspective may be useful in the context of theoretical computer science where the duality between syntax and semantics can be interpreted as a dialectic in this sense. For example, the Curry-Howard equivalence is such an adjunction or more generally the duality between closed monoidal categories and their internal logic.
See also
Conversation
Dialogue
Dialectical behavior therapy
Dialectical research
Dialogic
Discourse
Doublethink
False dilemma
Reflective equilibrium
Relational dialectics
Tarka sastra
Unity of opposites
Universal dialectic
References
External links
v:Dialectic algorithm – An algorithm based on the principles of classical dialectics
Studies in the Hegelian Dialectic by J. M. E. McTaggart (1896) at marxists.org
Rhetoric
Philosophical methodology
Concepts in ancient Greek metaphysics
Ancient Greek logic
Statistical classification
In statistics and machine learning, classification is the task of assigning an observation to one of a set of categories (classes). When classification is performed by a computer, statistical methods are normally used to develop the algorithm.
Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or "O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other classifiers work by comparing observations to previous observations by means of a similarity or distance function.
An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.
Terminology across fields is quite varied. In statistics, where classification is often done with logistic regression or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the dependent variable. In machine learning, the observations are often known as instances, the explanatory variables are termed features (grouped into a feature vector), and the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in community ecology, the term "classification" normally refers to cluster analysis.
Relation to other problems
Classification and clustering are examples of the more general problem of pattern recognition, which is the assignment of some sort of output value to a given input value. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence; etc.
A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. However, such an algorithm has numerous advantages over non-probabilistic classifiers:
It can output a confidence value associated with its choice (in general, a classifier that can do this is known as a confidence-weighted classifier).
Correspondingly, it can abstain when its confidence of choosing any particular output is too low.
Because of the probabilities which are generated, probabilistic classifiers can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
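The advantages above can be illustrated with a minimal Python sketch. The three class labels, the hard-coded probabilities, and the 0.7 confidence threshold are illustrative assumptions rather than values from the original text; in practice the probabilities would come from a trained probabilistic classifier.

```python
# Minimal sketch of a probabilistic classifier that can abstain.
# The class probabilities would normally come from a trained model;
# here they are hard-coded purely for illustration.

def classify_with_abstention(class_probs, threshold=0.7):
    """Return (best_class, confidence), or (None, confidence) when abstaining."""
    best_class = max(class_probs, key=class_probs.get)
    confidence = class_probs[best_class]
    if confidence < threshold:
        return None, confidence      # abstain: no class is confident enough
    return best_class, confidence

# Example: posterior probabilities for one instance over three classes.
probs = {"A": 0.55, "B": 0.30, "C": 0.15}
label, conf = classify_with_abstention(probs)
print(label, conf)   # prints: None 0.55  (the classifier abstains)
```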
Frequentist procedures
Early work on statistical classification was undertaken by Fisher, in the context of two-group problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new observation. This early work assumed that data-values within each of the two groups had a multivariate normal distribution. The extension of this same context to more than two groups has also been considered with a restriction imposed that the classification rule should be linear. Later work for the multivariate normal distribution allowed the classifier to be nonlinear: several classification rules can be derived based on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
Bayesian procedures
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the different groups within the overall population. Bayesian procedures tend to be computationally expensive and, in the days before Markov chain Monte Carlo computations were developed, approximations for Bayesian clustering rules were devised.
Some Bayesian procedures involve the calculation of group-membership probabilities: these provide a more informative outcome than a simple attribution of a single group-label to each new observation.
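As a sketch of how group-membership probabilities can take the relative sizes of the groups into account, the following Python example applies Bayes' rule with univariate normal likelihoods. The priors, means, and standard deviations are invented for illustration only.

```python
import math

def normal_pdf(x, mean, std):
    """Density of a univariate normal distribution."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def group_membership_probabilities(x, priors, means, stds):
    """Posterior probability of each group for observation x (Bayes' rule)."""
    unnormalised = {g: priors[g] * normal_pdf(x, means[g], stds[g]) for g in priors}
    total = sum(unnormalised.values())
    return {g: p / total for g, p in unnormalised.items()}

# Group 1 is assumed to be four times as common as group 2 in the population.
priors = {"group 1": 0.8, "group 2": 0.2}
means  = {"group 1": 0.0, "group 2": 2.0}
stds   = {"group 1": 1.0, "group 2": 1.0}

print(group_membership_probabilities(1.5, priors, means, stds))
```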
Binary and multiclass classification
Classification can be thought of as two separate problems – binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers.
Feature vectors
Most algorithms describe an individual instance whose category is to be predicted using a feature vector of individual, measurable properties of the instance. Each property is termed a feature, also known in statistics as an explanatory variable (or independent variable, although features may or may not be statistically independent). Features may variously be binary (e.g. "on" or "off"); categorical (e.g. "A", "B", "AB" or "O", for blood type); ordinal (e.g. "large", "medium" or "small"); integer-valued (e.g. the number of occurrences of a particular word in an email); or real-valued (e.g. a measurement of blood pressure). If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words. Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data be discretized into groups (e.g. less than 5, between 5 and 10, or greater than 10).
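A short sketch of how the mixed feature types described above might be encoded into a single numeric feature vector. The feature names, categories, and encoding choices are illustrative assumptions, not part of the original text.

```python
# Encode one instance with mixed feature types into a numeric feature vector.
# The categorical blood type is one-hot encoded, the ordinal size is mapped
# to an integer scale, and the integer and real-valued features pass through.

BLOOD_TYPES = ["A", "B", "AB", "O"]
SIZE_SCALE = {"small": 0, "medium": 1, "large": 2}

def encode(blood_type, size, word_count, blood_pressure):
    one_hot = [1.0 if blood_type == t else 0.0 for t in BLOOD_TYPES]
    return one_hot + [float(SIZE_SCALE[size]), float(word_count), float(blood_pressure)]

print(encode("AB", "large", 3, 120.0))
# -> [0.0, 0.0, 1.0, 0.0, 2.0, 3.0, 120.0]
```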
Linear classifiers
A large number of algorithms for classification can be phrased in terms of a linear function that assigns a score to each possible category k by combining the feature vector of an instance with a vector of weights, using a dot product. The predicted category is the one with the highest score. This type of score function is known as a linear predictor function and has the following general form:
score(Xi, k) = βk · Xi,
where Xi is the feature vector for instance i, βk is the vector of weights corresponding to category k, and score(Xi, k) is the score associated with assigning instance i to category k. In discrete choice theory, where instances represent people and categories represent choices, the score is considered the utility associated with person i choosing category k.
Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted.
Examples of such algorithms include
The perceptron algorithm
Logistic regression
Support vector machines (with a linear kernel)
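A minimal Python sketch of the linear predictor function described above: each category's score is the dot product of its weight vector with the instance's feature vector, and the predicted category is the one with the highest score. The weight values here are arbitrary illustrative numbers rather than trained coefficients.

```python
import numpy as np

# Feature vector X_i for one instance and a weight vector beta_k per category.
# In practice the weights are learned from training data (e.g. by the
# perceptron algorithm or logistic regression).
X_i = np.array([1.0, 0.5, -1.2])

beta = {
    "class 1": np.array([0.4, -0.1, 0.3]),
    "class 2": np.array([-0.2, 0.8, 0.1]),
    "class 3": np.array([0.1, 0.1, -0.6]),
}

# score(X_i, k) = beta_k . X_i  -- the linear predictor function
scores = {k: float(np.dot(w, X_i)) for k, w in beta.items()}
predicted = max(scores, key=scores.get)   # category with the highest score
print(scores, predicted)
```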
Algorithms
Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed. The most commonly used include logistic regression, naive Bayes classifiers, k-nearest neighbours, decision trees and random forests, support vector machines, and artificial neural networks.
Choices between different possible algorithms are frequently made on the basis of quantitative evaluation of accuracy.
Application domains
Classification has many applications. In some of these, it is employed as a data mining procedure, while in others more detailed statistical modeling is undertaken.
Biometric identification
Medical image analysis and medical imaging
Drug discovery and development
Internet search engines
Micro-array classification
See also
References
Formula unit
In chemistry, a formula unit is the smallest unit of a non-molecular substance, such as an ionic compound, covalent network solid, or metal. It can also refer to the chemical formula for that unit. Those structures do not consist of discrete molecules, and so for them, the term formula unit is used. In contrast, the terms molecule or molecular formula are applied to molecules. The formula unit is used as an independent entity for stoichiometric calculations. Examples of formula units include ionic compounds such as NaCl and covalent networks such as SiO2 and C (as diamond or graphite).
In most cases the formula representing a formula unit will also be an empirical formula, such as calcium carbonate (CaCO3) or sodium chloride (NaCl), but this is not always the case. For example, the ionic compounds potassium persulfate (K2S2O8), mercury(I) nitrate (Hg2(NO3)2), and sodium peroxide (Na2O2) have empirical formulas of KSO4, HgNO3, and NaO, respectively, these being the simplest whole-number ratios.
In mineralogy, as minerals are almost exclusively either ionic or network solids, the formula unit is used. The number of formula units (Z) and the dimensions of the crystallographic axes are used in defining the unit cell.
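As an illustration of how Z relates the unit cell to the formula unit, the short calculation below recovers Z = 4 for rock salt (NaCl) from its density and cubic lattice parameter. The numerical inputs are rounded, approximate literature values used only for illustration.

```python
# Estimate the number of formula units Z in the cubic unit cell of NaCl
# from its measured density and lattice parameter.

N_A = 6.022e23          # Avogadro constant, 1/mol
M   = 58.44             # molar mass of the NaCl formula unit, g/mol
rho = 2.165             # density of rock salt, g/cm^3 (approximate)
a   = 5.64e-8           # cubic lattice parameter, cm (about 5.64 angstrom)

V = a ** 3                              # unit-cell volume in cm^3
Z = rho * N_A * V / M                   # formula units per unit cell
print(round(Z))                         # prints 4, as expected for rock salt
```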
References
Chemical formulas
Mode of action
In pharmacology and biochemistry, mode of action (MoA) describes a functional or anatomical change resulting from the exposure of a living organism to a substance. In comparison, a mechanism of action (MOA) describes such changes at the molecular level.
A mode of action is important in classifying chemicals, as it represents an intermediate level of complexity in between molecular mechanisms and physiological outcomes, especially when the exact molecular target has not yet been elucidated or is subject to debate. A mechanism of action of a chemical could be "binding to DNA" while its broader mode of action would be "transcriptional regulation". However, there is no clear consensus and the term mode of action is also often used, especially in the study of pesticides, to describe molecular mechanisms such as action on specific nuclear receptors or enzymes. Despite this, there are classification attempts, such as the HRAC's classification to manage pesticide resistance.
See also
Mechanism of action in pharmaceuticals
Adverse outcome pathway
References
Pharmacodynamics
Medicinal chemistry
Deamination
Deamination is the removal of an amino group from a molecule. Enzymes that catalyse this reaction are called deaminases.
In the human body, deamination takes place primarily in the liver; however, it can also occur in the kidney. In situations of excess protein intake, deamination is used to break down amino acids for energy. The amino group is removed from the amino acid and converted to ammonia. The rest of the amino acid is made up of mostly carbon and hydrogen, and is recycled or oxidized for energy. Ammonia is toxic to the human system, and enzymes convert it to urea or uric acid by addition of carbon dioxide molecules (which is not considered a deamination process) in the urea cycle, which also takes place in the liver. Urea and uric acid can safely diffuse into the blood and then be excreted in urine.
Deamination reactions in DNA
Cytosine
Spontaneous deamination is the hydrolysis reaction of cytosine into uracil, releasing ammonia in the process. This can occur in vitro through the use of bisulfite, which deaminates cytosine, but not 5-methylcytosine. This property has allowed researchers to sequence methylated DNA to distinguish non-methylated cytosine (shown up as uracil) and methylated cytosine (unaltered).
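A toy sketch of the read-out logic behind bisulfite-based methylation analysis: unmethylated cytosines are deaminated and read out as T after sequencing, while 5-methylcytosines remain C. The sequence and the methylated position below are invented purely for illustration.

```python
# Toy model of bisulfite conversion.  Unmethylated cytosines are converted
# (and later read as T), whereas methylated cytosines (5mC) are protected.

def bisulfite_convert(seq, methylated_positions):
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")          # deaminated cytosine is read as uracil/T
        else:
            out.append(base)         # 5-methylcytosine stays as C
    return "".join(out)

reference = "ACGTCCGTAC"
methylated = {4}                     # pretend the C at index 4 is 5mC
print(bisulfite_convert(reference, methylated))   # -> "ATGTCTGTAT"
```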
In DNA, this spontaneous deamination is corrected for by the removal of uracil (product of cytosine deamination and not part of DNA) by uracil-DNA glycosylase, generating an abasic (AP) site. The resulting abasic site is then recognised by enzymes (AP endonucleases) that break a phosphodiester bond in the DNA, permitting the repair of the resulting lesion by replacement with another cytosine. A DNA polymerase may perform this replacement via nick translation, a terminal excision reaction by its 5'⟶3' exonuclease activity, followed by a fill-in reaction by its polymerase activity. DNA ligase then forms a phosphodiester bond to seal the resulting nicked duplex product, which now includes a new, correct cytosine (Base excision repair).
5-methylcytosine
Spontaneous deamination of 5-methylcytosine results in thymine and ammonia. This is the most common single nucleotide mutation. In DNA, this reaction, if detected prior to passage of the replication fork, can be corrected by the enzyme thymine-DNA glycosylase, which removes the thymine base in a G/T mismatch. This leaves an abasic site that is repaired by AP endonucleases and polymerase, as with uracil-DNA glycosylase.
Cytosine deamination increases C-to-T mutations
A known result of cytosine methylation is the increase of C-to-T transition mutations through the process of deamination. Cytosine deamination can alter many of the genome's regulatory functions; previously silenced transposable elements (TEs) may become transcriptionally active due to the loss of CpG sites. TEs have been proposed to accelerate the mechanism of enhancer creation by providing extra DNA that is compatible with the host transcription factors that eventually have an impact on C-to-T mutations.
Guanine
Deamination of guanine results in the formation of xanthine. Xanthine, however, still pairs with cytosine.
Adenine
Deamination of adenine results in the formation of hypoxanthine. Hypoxanthine, in a manner analogous to the imine tautomer of adenine, selectively base pairs with cytosine instead of thymine. This results in a post-replicative transition mutation, where the original A-T base pair transforms into a G-C base pair.
Additional proteins performing this function
APOBEC1
APOBEC3A-H, APOBEC3G - affects HIV
Activation-induced cytidine deaminase (AICDA)
Cytidine deaminase (CDA)
dCMP deaminase (DCTD)
AMP deaminase (AMPD1)
Adenosine Deaminase acting on tRNA (ADAT)
Adenosine Deaminase acting on dsRNA (ADAR)
Double-stranded RNA-specific editase 1 (ADARB1)
Adenosine Deaminase acting on mononucleotides (ADA)
Guanine Deaminase (GDA)
See also
Adenosine monophosphate deaminase deficiency type 1
Hofmann elimination
References
Biochemical reactions
Metabolism
Substitution reactions
Law of three stages
The law of three stages is an idea developed by Auguste Comte in his work The Course in Positive Philosophy. It states that society as a whole, and each particular science, develops through three mentally conceived stages: (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage.
The progression of the three stages of sociology
(1) The Theological stage refers to the appeal to personified deities. During the earlier stages, people believed that all the phenomena of nature were the creation of the divine or supernatural. Adults and children failed to discover the natural causes of various phenomena and hence attributed them to a supernatural or divine power. Comte broke this stage into 3 sub-stages:
1A. Fetishism – Fetishism was the primary stage of the theological stage of thinking. Throughout this stage, primitive people believe that inanimate objects have living spirits in them, also known as animism. People worship inanimate objects like trees, stones, pieces of wood, volcanic eruptions, etc. Through this practice, people believe that all things stem from a supernatural source.
1B. Polytheism – At one point, Fetishism began to bring about doubt in the minds of its believers. As a result, people turned towards polytheism: the explanation of things through the use of many Gods. Primitive people believe that all natural forces are controlled by different Gods; a few examples would be the God of water, God of rain, God of fire, God of air, God of earth, etc.
1C. Monotheism – Monotheism means believing in one God or God in one; attributing all to a single, supreme deity. Primitive people believe a single theistic entity is responsible for the existence of the universe.
(2) The Metaphysical stage is an extension of the theological stage. It refers to explanation by impersonal abstract concepts. People often try to characterize God as an abstract being. They believe that an abstract power or force guides and determines events in the world. Metaphysical thinking discards belief in a concrete God. For example, in classical Hindu society, the principle of the transmigration of the soul and the conception of rebirth were largely governed by this kind of metaphysical thinking.
(3) The Positive stage, also known as the scientific stage, refers to scientific explanation based on observation, experiment, and comparison. Positive explanations rely upon a distinct method, the scientific method, for their justification. Today people attempt to establish cause-and-effect relationships. Positivism is a purely intellectual way of looking at the world; it also emphasizes observation and classification of data and facts. This is the highest, most evolved behavior according to Comte.
Comte, however, was conscious of the fact that the three stages of thinking may or do coexist in the same society or the same mind and may not always be successive.
Comte proposed a hierarchy of the sciences based on historical sequence, with areas of knowledge passing through these stages in order of complexity. The simplest and most remote areas of knowledge—mechanical or physical—are the first to become scientific. These are followed by the more complex sciences, those considered closest to us.
The sciences, then, according to Comte's "law", developed in this order: Mathematics; Astronomy; Physics; Chemistry; Biology; Sociology. A science of society is thus the "Queen science" in Comte's hierarchy as it would be the most fundamentally complex.
Since Comte saw social science as an observation of human behavior and knowledge, his definition of sociology included observing humanity’s development of science itself. Because of this, Comte presented this introspective field of study as the science above all others. Sociology would both complete the body of positive sciences by discussing humanity as the last unstudied scientific field and would link the fields of science together in human history, showing the "intimate interrelation of scientific and social development".
To Comte, the law of three stages made the development of sociology inevitable and necessary. Comte saw the formation of his law as an active use of sociology, but this formation was dependent on other sciences reaching the positive stage; Comte’s three-stage law would not have evidence for a positive stage without the observed progression of other sciences through these three stages. Thus, sociology and its first law of three stages would be developed after other sciences were developed out of the metaphysical stage, with the observation of these developed sciences becoming the scientific evidence used in a positive stage of sociology. This special dependence on other sciences contributed to Comte’s view of sociology being the most complex. It also explains sociology being the last science to be developed.
Comte saw the results of his three-stage law and sociology as not only inevitable but good. In Comte’s eyes, the positive stage was not only the most evolved but also the stage best for mankind. Through the continuous development of positive sciences, Comte hoped that humans would perfect their knowledge of the world and make real progress to improve the welfare of humanity. He acclaimed the positive stage as the "highest accomplishment of the human mind" and as having "natural superiority" over the other, more primitive stages.
Overall, Comte saw his law of three stages as the start of the scientific field of sociology as a positive science. He believed this development was the key to completing positive philosophy and would finally allow humans to study every observable aspect of the universe. For Comte, sociology’s human-centered studies would relate the fields of science to each other as progressions in human history and make positive philosophy one coherent body of knowledge. Comte presented the positive stage as the final state of all sciences, which would allow human knowledge to be perfected, leading to human progress.
Critiques of the law
Historian William Whewell wrote "Mr. Comte's arrangement of the progress of science as successively metaphysical and positive, is contrary to history in fact, and contrary to sound philosophy in principle." The historian of science H. Floris Cohen has made a significant effort to draw the modern eye towards this first debate on the foundations of positivism.
In contrast, within an entry dated early October 1838 Charles Darwin wrote in one of his then private notebooks that "M. Comte's idea of a theological state of science [is a] grand idea."
See also
Antipositivism
Religion of Humanity
Sociological positivism
References
External links
History Guide
Sociocultural evolution theory
Religion and science
Auguste Comte
History of sociology
Reification (Marxism)
In Marxist philosophy, reification (Verdinglichung, "making into a thing") is the process by which human social relations are perceived as inherent attributes of the people involved in them, or attributes of some product of the relation, such as a traded commodity.
As a practice of economics, reification transforms objects into subjects and subjects into objects, with the result that subjects (people) are rendered passive (of determined identity), whilst objects (commodities) are rendered as the active factor that determines the nature of a social relation. Analogously, the term hypostatization describes an effect of reification that results from presuming the existence of any object that can be named and presuming the existence of an abstractly conceived object, which is a fallacy of reification of ontological and epistemological interpretation.
Reification is conceptually related to, but different from Marx's theory of alienation and theory of commodity fetishism; alienation is the general condition of human estrangement; reification is a specific form of alienation; and commodity fetishism is a specific form of reification.
Georg Lukács
The concept of reification arose through the work of Lukács (1923), in the essay "Reification and the Consciousness of the Proletariat" in his book History and Class Consciousness, which defines the term. Lukács treats reification as a problem of capitalist society that is related to the prevalence of the commodity form, through a close reading of "The Fetishism of the Commodity and its Secret" in the first volume of Capital.
Those who have written about this concept include Max Stirner, Guy Debord, Raya Dunayevskaya, Raymond Williams, Timothy Bewes, and Slavoj Žižek.
Marxist humanist Gajo Petrović (1965), drawing from Lukács, defines reification as the act (or result of the act) of transforming human properties, relations, and actions into properties, relations, and actions of things produced by humans, which then appear independent of people and come to govern their lives.
Andrew Feenberg (1981) reinterprets Lukács's central category of "consciousness" as similar to anthropological notions of culture as a set of practices. The reification of consciousness in particular, therefore, is more than just an act of misrecognition; it affects the everyday social practice at a fundamental level beyond the individual subject.
Frankfurt School
Lukács's account was influential for the philosophers of the Frankfurt School, for example in Horkheimer's and Adorno's Dialectic of Enlightenment, and in the works of Herbert Marcuse, and Axel Honneth.
Frankfurt School philosopher Axel Honneth (2008) reformulates this "Western Marxist" concept in terms of intersubjective relations of recognition and power. Instead of being an effect of the structural character of social systems such as capitalism, as Karl Marx and György Lukács argued, Honneth contends that all forms of reification are due to pathologies of intersubjectively based struggles for recognition.
Social construction
Reification occurs when specifically human creations are misconceived as "facts of nature, results of cosmic laws, or manifestations of divine will." However, some scholarship on Lukács's (1923) use of the term "reification" in History and Class Consciousness has challenged this interpretation of the concept, according to which reification implies that a pre-existing subject creates an objective social world from which it is then alienated.
Phenomenology
Other scholarship has suggested that Lukács's use of the term may have been strongly influenced by Edmund Husserl's phenomenology to understand his preoccupation with the reification of consciousness in particular. On this reading, reification entails a stance that separates the subject from the objective world, creating a mistaken relation between subject and object that is reduced to disengaged knowing. Applied to the social world, this leaves individual subjects feeling that society is something they can only know as an alien power, rather than interact with. In this respect, Lukács's use of the term could be seen as prefiguring some of the themes Martin Heidegger (1927) touches on in Being and Time, supporting the suggestion of Lucien Goldmann (2009) that Lukács and Heidegger were much closer in their philosophical concerns than typically thought.
Louis Althusser
French philosopher Louis Althusser criticized what he called the "ideology of reification" that sees "'things' everywhere in human relations." Althusser's critique derives from his understanding that Marx underwent significant theoretical and methodological change or an "epistemological break" between his early and his mature work.
Though the concept of reification is used in Das Kapital by Marx, Althusser finds in it an important influence from the similar concept of alienation developed in the early The German Ideology and in the Economic and Philosophical Manuscripts of 1844.
See also
The Secret of Hegel
Character mask
Objectification
Caste
Reification (fallacy)
References
Further reading
Arato, Andrew. 1972. "Lukács’s Theory of Reification" Telos.
Bewes, Timothy. 2002. "Reification, or The Anxiety of Late Capitalism" (illustrated ed.). Verso. Retrieved via Google Books.
Burris, Val. 1988. "Reification: A marxist perspective." California Sociologist 10(1). Pp. 22–43.
Dabrowski, Tomash. 2014. "Reification." Blackwell Encyclopedia of Political Thought. Blackwell.
Dahms, Harry. 1998. "Beyond the Carousel of Reification: Critical Social Theory after Lukács, Adorno, and Habermas." Current Perspectives in Social Theory 18(1):3–62.
Duarte, German A. 2011. Reificación Mediática (Sic Editorial)
Dunayevskaya, Raya. "Reification of People and the Fetishism of Commodities." Pp. 167–91 in The Raya Dunayevskaya Collection.
Floyd, Kevin: "Introduction: On Capital, Sexuality, and the Situations of Knowledge," in The Reification of Desire: Toward a Queer Marxism. Minneapolis, MN.: University of Minnesota Press, 2009.
Gabel, Joseph. 1975. False Consciousness: An Essay On Reification. New York: Harper & Row.
Goldmann, Lucien. 1959 "Réification." Recherches Dialectiques. Paris: Gallimard.
Honneth, Axel. 2005 March 14–16. "Reification: A Recognition-Theoretical View." The Tanner Lectures on Human Values, delivered at University of California-Berkeley.
Kangrga, Milan. 1968. Was ist Verdinglichung?
Larsen, Neil. 2011. "Lukács sans Proletariat, or Can History and Class Consciousness be Rehistoricized?." Pp. 81–100 in Georg Lukács: The Fundamental Dissonance of Existence, edited by T. Bewes and T. Hall. London: Continuum.
Löwith, Karl. 1982 [1932]. Max Weber and Karl Marx.
Lukács, Georg. 1967 [1923]. History & Class Consciousness. Merlin Press. "Reification and the Consciousness of the Proletariat."
Rubin, I. I. 1972 [1928]. "Essays on Marx’s Theory of Value."
Schaff, Adam. 1980. Alienation as a Social Phenomenon.
Tadić, Ljubomir. 1969. "Bureaucracy—Reified Organization," edited by M. Marković and G. Petrović. Praxis.
Vandenberghe, Frederic. 2009. A Philosophical History of German Sociology. London: Routledge.
Westerman, Richard. 2018. Lukács' Phenomenology of Capitalism: Reification Revalued. New York: Palgrave Macmillan.
Marxist theory
György Lukács
Chemical substance
A chemical substance is a unique form of matter with constant chemical composition and characteristic properties. Chemical substances may take the form of a single element or chemical compounds. If two or more chemical substances can be combined without reacting, they may form a chemical mixture. If a mixture is separated to isolate one chemical substance to a desired degree, the resulting substance is said to be chemically pure.
Chemical substances can exist in several different physical states or phases (e.g. solids, liquids, gases, or plasma) without changing their chemical composition. Substances transition between these phases of matter in response to changes in temperature or pressure. Some chemical substances can be combined or converted into new substances by means of chemical reactions. Chemicals that do not possess this ability are said to be inert.
Pure water is an example of a chemical substance, with a constant composition of two hydrogen atoms bonded to a single oxygen atom (i.e. H2O). The atomic ratio of hydrogen to oxygen is always 2:1 in every molecule of water. Pure water will tend to boil near 100 °C at standard atmospheric pressure, an example of one of the characteristic properties that define it. Other notable chemical substances include diamond (a form of the element carbon), table salt (NaCl; an ionic compound), and refined sugar (C12H22O11; an organic compound).
Definitions
In addition to the generic definition offered above, there are several niche fields where the term "chemical substance" may take alternate usages that are widely accepted, some of which are outlined in the sections below.
Inorganic chemistry
Chemical Abstracts Service (CAS) lists several alloys of uncertain composition within their chemical substance index. While an alloy could be more closely defined as a mixture, referencing them in the chemical substances index allows CAS to offer specific guidance on standard naming of alloy compositions. Non-stoichiometric compounds are another special case from inorganic chemistry, which violate the requirement for constant composition. For these substances, it may be difficult to draw the line between a mixture and a compound, as in the case of palladium hydride. Broader definitions of chemicals or chemical substances can be found, for example: "the term 'chemical substance' means any organic or inorganic substance of a particular molecular identity, including – (i) any combination of such substances occurring in whole or in part as a result of a chemical reaction or occurring in nature".
Geology
In the field of geology, inorganic solid substances of uniform composition are known as minerals. When two or more minerals are combined to form mixtures (or aggregates), they are defined as rocks. Many minerals, however, mutually dissolve into solid solutions, such that a single rock is a uniform substance despite being a mixture in stoichiometric terms. Feldspars are a common example: anorthoclase is an alkali aluminum silicate, where the alkali metal is interchangeably either sodium or potassium.
Law
In law, "chemical substances" may include both pure substances and mixtures with a defined composition or manufacturing process. For example, the EU regulation REACH defines "monoconstituent substances", "multiconstituent substances" and "substances of unknown or variable composition". The latter two consist of multiple chemical substances; however, their identity can be established either by direct chemical analysis or reference to a single manufacturing process. For example, charcoal is an extremely complex, partially polymeric mixture that can be defined by its manufacturing process. Therefore, although the exact chemical identity is unknown, identification can be made with a sufficient accuracy. The CAS index also includes mixtures.
Polymer chemistry
Polymers almost always appear as mixtures of molecules of multiple molar masses, each of which could be considered a separate chemical substance. However, the polymer may be defined by a known precursor or reaction(s) and the molar mass distribution. For example, polyethylene is a mixture of very long chains of -CH2- repeating units, and is generally sold in several molar mass distributions, LDPE, MDPE, HDPE and UHMWPE.
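To make the idea of a molar mass distribution concrete, the sketch below computes the number-average (Mn) and weight-average (Mw) molar masses for a small hypothetical sample of chains; the chain counts and chain masses are invented for illustration.

```python
# Number-average and weight-average molar mass of a toy polymer sample.
# Each entry is (number of chains, molar mass of that chain in g/mol).
sample = [(100, 10_000.0), (200, 50_000.0), (50, 200_000.0)]

total_chains = sum(n for n, M in sample)
total_mass   = sum(n * M for n, M in sample)

Mn = total_mass / total_chains                       # number average
Mw = sum(n * M * M for n, M in sample) / total_mass  # weight average

print(round(Mn), round(Mw), round(Mw / Mn, 2))       # Mw/Mn is the dispersity
```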
History
The concept of a "chemical substance" became firmly established in the late eighteenth century after work by the chemist Joseph Proust on the composition of some pure chemical compounds such as basic copper carbonate. He deduced that, "All samples of a compound have the same composition; that is, all samples have the same proportions, by mass, of the elements present in the compound." This is now known as the law of constant composition. Later with the advancement of methods for chemical synthesis particularly in the realm of organic chemistry; the discovery of many more chemical elements and new techniques in the realm of analytical chemistry used for isolation and purification of elements and compounds from chemicals that led to the establishment of modern chemistry, the concept was defined as is found in most chemistry textbooks. However, there are some controversies regarding this definition mainly because the large number of chemical substances reported in chemistry literature need to be indexed.
Isomerism caused much consternation to early researchers, since isomers have exactly the same composition, but differ in configuration (arrangement) of the atoms. For example, there was much speculation about the chemical identity of benzene, until the correct structure was described by Friedrich August Kekulé. Likewise, the idea of stereoisomerism – that atoms have rigid three-dimensional structure and can thus form isomers that differ only in their three-dimensional arrangement – was another crucial step in understanding the concept of distinct chemical substances. For example, tartaric acid has three distinct isomers, a pair of diastereomers with one diastereomer forming two enantiomers.
Chemical elements
An element is a chemical substance made up of a particular kind of atom and hence cannot be broken down or transformed by a chemical reaction into a different element, though it can be transmuted into another element through a nuclear reaction. This is because all of the atoms in a sample of an element have the same number of protons, though they may be different isotopes, with differing numbers of neutrons.
As of 2019, there are 118 known elements, about 80 of which are stable – that is, they do not change by radioactive decay into other elements. Some elements can occur as more than a single chemical substance (allotropes). For instance, oxygen exists as both diatomic oxygen (O2) and ozone (O3). The majority of elements are classified as metals. These are elements with a characteristic lustre such as iron, copper, and gold. Metals typically conduct electricity and heat well, and they are malleable and ductile. Around 14 to 21 elements, such as carbon, nitrogen, and oxygen, are classified as non-metals. Non-metals lack the metallic properties described above, they also have a high electronegativity and a tendency to form negative ions. Certain elements such as silicon sometimes resemble metals and sometimes resemble non-metals, and are known as metalloids.
Chemical compounds
A chemical compound is a chemical substance that is composed of a particular set of atoms or ions. Two or more elements combined into one substance through a chemical reaction form a chemical compound. All compounds are substances, but not all substances are compounds.
A chemical compound can be either atoms bonded together in molecules or crystals in which atoms, molecules or ions form a crystalline lattice. Compounds based primarily on carbon and hydrogen atoms are called organic compounds, and all others are called inorganic compounds. Compounds containing bonds between carbon and a metal are called organometallic compounds.
Compounds in which components share electrons are known as covalent compounds. Compounds consisting of oppositely charged ions are known as ionic compounds, or salts.
Coordination complexes are compounds where a dative bond keeps the substance together without a covalent or ionic bond. Coordination complexes are distinct substances with distinct properties different from a simple mixture. Typically these have a metal, such as a copper ion, in the center, and a nonmetal atom, such as the nitrogen in an ammonia molecule or the oxygen in a water molecule, forms a dative bond to the metal center, e.g. tetraamminecopper(II) sulfate [Cu(NH3)4]SO4·H2O. The metal is known as a "metal center" and the substance that coordinates to the center is called a "ligand". However, the center does not need to be a metal, as exemplified by boron trifluoride etherate BF3OEt2, where the highly Lewis acidic, but non-metallic boron center takes the role of the "metal". If the ligand bonds to the metal center with multiple atoms, the complex is called a chelate.
In organic chemistry, there can be more than one chemical compound with the same composition and molecular weight. Generally, these are called isomers. Isomers usually have substantially different chemical properties, and often may be isolated without spontaneously interconverting. A common example is glucose vs. fructose. The former is an aldehyde, the latter is a ketone. Their interconversion requires either enzymatic or acid-base catalysis.
However, tautomers are an exception: the isomerization occurs spontaneously in ordinary conditions, such that a pure substance cannot be isolated into its tautomers, even if these can be identified spectroscopically or even isolated in special conditions. A common example is glucose, which has open-chain and ring forms. One cannot manufacture pure open-chain glucose because glucose spontaneously cyclizes to the hemiacetal form.
Substances versus mixtures
All matter consists of various elements and chemical compounds, but these are often intimately mixed together. Mixtures contain more than one chemical substance, and they do not have a fixed composition. Butter, soil and wood are common examples of mixtures. Sometimes, mixtures can be separated into their component substances by mechanical processes, such as chromatography, distillation, or evaporation.
Grey iron metal and yellow sulfur are both chemical elements, and they can be mixed together in any ratio to form a yellow-grey mixture. No chemical process occurs, and the material can be identified as a mixture by the fact that the sulfur and the iron can be separated by a mechanical process, such as using a magnet to attract the iron away from the sulfur.
In contrast, if iron and sulfur are heated together in a certain ratio (1 atom of iron for each atom of sulfur, or by weight, 56 grams (1 mol) of iron to 32 grams (1 mol) of sulfur), a chemical reaction takes place and a new substance is formed, the compound iron(II) sulfide, with chemical formula FeS. The resulting compound has all the properties of a chemical substance and is not a mixture. Iron(II) sulfide has its own distinct properties such as melting point and solubility, and the two elements cannot be separated using normal mechanical processes; a magnet will be unable to recover the iron, since there is no metallic iron present in the compound.
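The 1:1 reacting ratio in this example can be checked with a short calculation; the atomic masses below are approximate standard values used only to illustrate the arithmetic.

```python
# Reacting masses for Fe + S -> FeS, using approximate atomic masses.
M_Fe, M_S = 55.85, 32.06          # g/mol (approximate standard atomic weights)

mass_Fe = 56.0                    # grams of iron, as in the example above
moles_Fe = mass_Fe / M_Fe         # about 1 mol of iron atoms
mass_S_needed = moles_Fe * M_S    # 1:1 ratio, so about 1 mol (roughly 32 g) of sulfur

print(round(moles_Fe, 3), round(mass_S_needed, 1))   # ~1.003 mol, ~32.1 g
```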
Chemicals versus chemical substances
While the term chemical substance is a precise technical term that is synonymous with chemical for chemists, the word chemical is used in general usage to refer to both (pure) chemical substances and mixtures (often called compounds), and especially when produced or purified in a laboratory or an industrial process. In other words, the chemical substances of which fruits and vegetables, for example, are naturally composed even when growing wild are not called "chemicals" in general usage. In countries that require a list of ingredients in products, the "chemicals" listed are industrially produced "chemical substances". The word "chemical" is also often used to refer to addictive, narcotic, or mind-altering drugs.
Within the chemical industry, manufactured "chemicals" are chemical substances, which can be classified by production volume into bulk chemicals, fine chemicals and chemicals found in research only:
Bulk chemicals are produced in very large quantities, usually with highly optimized continuous processes and to a relatively low price.
Fine chemicals are produced at a high cost in small quantities for special low-volume applications such as biocides, pharmaceuticals and speciality chemicals for technical applications.
Research chemicals are produced individually for research, such as when searching for synthetic routes or screening substances for pharmaceutical activity. In effect, their price per gram is very high, although they are not sold.
The cause of the difference in production volume is the complexity of the molecular structure of the chemical. Bulk chemicals are usually much less complex. While fine chemicals may be more complex, many of them are simple enough to be sold as "building blocks" in the synthesis of more complex molecules targeted for single use, as named above. The production of a chemical includes not only its synthesis but also its purification to eliminate by-products and impurities involved in the synthesis. The last step in production should be the analysis of batch lots of chemicals in order to identify and quantify the percentages of impurities for the buyer of the chemicals. The required purity and analysis depends on the application, but higher tolerance of impurities is usually expected in the production of bulk chemicals. Thus, the user of the chemical in the US might choose between the bulk or "technical grade" with higher amounts of impurities or a much purer "pharmaceutical grade" (labeled "USP", United States Pharmacopeia). "Chemicals" in the commercial and legal sense may also include mixtures of highly variable composition, as they are products made to a technical specification instead of particular chemical substances. For example, gasoline is not a single chemical compound or even a particular mixture: different gasolines can have very different chemical compositions, as "gasoline" is primarily defined through source, properties and octane rating.
Naming and indexing
Every chemical substance has one or more systematic names, usually named according to the IUPAC rules for naming. An alternative system is used by the Chemical Abstracts Service (CAS).
Many compounds are also known by their more common, simpler names, many of which predate the systematic name. For example, the long-known sugar glucose is now systematically named 6-(hydroxymethyl)oxane-2,3,4,5-tetrol. Natural products and pharmaceuticals are also given simpler names, for example the mild pain-killer Naproxen is the more common name for the chemical compound (S)-6-methoxy-α-methyl-2-naphthaleneacetic acid.
Chemists frequently refer to chemical compounds using chemical formulae or molecular structure of the compound. There has been a phenomenal growth in the number of chemical compounds being synthesized (or isolated), and then reported in the scientific literature by professional chemists around the world. An enormous number of chemical compounds are possible through the chemical combination of the known chemical elements. As of Feb 2021, about "177 million organic and inorganic substances" (including 68 million defined-sequence biopolymers) are in the scientific literature and registered in public databases. The names of many of these compounds are often nontrivial and hence not very easy to remember or cite accurately. Also, it is difficult to keep track of them in the literature. Several international organizations like IUPAC and CAS have initiated steps to make such tasks easier. CAS provides the abstracting services of the chemical literature, and provides a numerical identifier, known as CAS registry number to each chemical substance that has been reported in the chemical literature (such as chemistry journals and patents). This information is compiled as a database and is popularly known as the Chemical substances index. Other computer-friendly systems that have been developed for substance information are: SMILES and the International Chemical Identifier or InChI.
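The published check-digit rule for CAS registry numbers can be verified in a few lines of code; the sketch below uses the well-known CAS number of water (7732-18-5) as a test case.

```python
# Validate the check digit of a CAS Registry Number.  The check digit is the
# weighted sum of the other digits (weights 1, 2, 3, ... counted from the
# right) taken modulo 10.

def cas_is_valid(cas: str) -> bool:
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * w for w, d in enumerate(reversed(body), start=1))
    return total % 10 == check

print(cas_is_valid("7732-18-5"))   # water -> True
print(cas_is_valid("7732-18-4"))   # wrong check digit -> False
```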
Isolation, purification, characterization, and identification
Often a pure substance needs to be isolated from a mixture, for example from a natural source (where a sample often contains numerous chemical substances) or after a chemical reaction (which often gives mixtures of chemical substances).
Measurement
See also
Hazard symbol
Homogeneous and heterogeneous mixtures
Prices of chemical elements
Dedicated bio-based chemical
Fire diamond
Research chemical
References
External links
General chemistry
Artificial materials
Stochastic thermodynamics
Stochastic thermodynamics is a branch of statistical mechanics that extends concepts such as work, heat, and entropy production from macroscopic thermodynamics to the individual fluctuating trajectories of microscopic systems, such as colloidal particles, biopolymers, and molecular machines.
Overview
When a microscopic machine (e.g. a microelectromechanical device) performs useful work it generates heat and entropy as a byproduct of the process; however, it is also predicted that, over sufficiently short periods, such a machine will sometimes operate in "reverse" or "backwards". That is, heat energy from the surroundings will be converted into useful work. For larger engines, this would be described as a violation of the second law of thermodynamics, as entropy is consumed rather than generated. Loschmidt's paradox states that in a time-reversible system, for every trajectory there exists a time-reversed anti-trajectory. As the entropy production of a trajectory and its anti-trajectory are of identical magnitude but opposite sign, then, so the argument goes, one cannot prove that entropy production is positive.
For a long time, exact results in thermodynamics were only possible in linear systems capable of reaching equilibrium, leaving other questions like the Loschmidt paradox unsolved. During the last few decades, fresh approaches have revealed general laws applicable to non-equilibrium systems which are described by nonlinear equations, pushing the range of exact thermodynamic statements beyond the realm of traditional linear solutions. These exact results are particularly relevant for small systems where appreciable (typically non-Gaussian) fluctuations occur. Thanks to stochastic thermodynamics it is now possible to accurately predict distribution functions of thermodynamic quantities relating to exchanged heat, applied work or entropy production for these systems.
Fluctuation theorem
The mathematical resolution to Loschmidt's paradox is called the (steady state) fluctuation theorem (FT), which is a generalisation of the second law of thermodynamics. The FT shows that as a system gets larger or the trajectory duration becomes longer, entropy-consuming trajectories become more unlikely, and the expected second law behaviour is recovered.
The FT was first put forward by Evans, Cohen and Morriss in 1993, and much of the work done in developing and extending the theorem was accomplished by theoreticians and mathematicians interested in nonequilibrium statistical mechanics.
The first observation and experimental verification of Evans's fluctuation theorem (FT) was performed by Wang et al. in 2002, who tracked a colloidal particle dragged through water by an optical trap.
Jarzynski equality
Seifert writes:
Jarzynski proved a remarkable relation which allows to express the free energy difference between two equilibrium systems by a nonlinear average over the work required to drive the system in a non-equilibrium process from one state to the other. By comparing probability distributions for the work spent in the original process with the time-reversed one, Crooks found a “refinement” of the Jarzynski relation (JR), now called the Crooks fluctuation theorem. Both, this relation and another refinement of the JR, the Hummer-Szabo relation became particularly useful for determining free energy differences and landscapes of biomolecules. These relations are the most prominent ones within a class of exact results (some of which found even earlier and then rediscovered) valid for non-equilibrium systems driven by time-dependent forces. A close analogy to the JR, which relates different equilibrium states, is the Hatano-Sasa relation that applies to transitions between two different non-equilibrium steady states.
This is shown to be a special case of a more general relation.
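For reference, and in notation not fixed by the quotation above (W is the work performed on the system, ΔF the equilibrium free energy difference between the initial and final states, k_B T the thermal energy, and P_F and P_R the work distributions of the forward and time-reversed processes), the Jarzynski relation and the Crooks fluctuation theorem are commonly written as

  \left\langle e^{-W/(k_\mathrm{B} T)} \right\rangle = e^{-\Delta F/(k_\mathrm{B} T)}

  \frac{P_F(+W)}{P_R(-W)} = e^{(W-\Delta F)/(k_\mathrm{B} T)}

Averaging the Crooks relation over the forward work distribution recovers the Jarzynski equality. As a minimal numerical sanity check (a toy sketch, not an analysis of any experiment: the Gaussian work distribution and all parameter values below are arbitrary illustrative assumptions), one can verify that the exponential work average recovers ΔF while the plain average work overestimates it:

import numpy as np

# Toy check of the Jarzynski relation for Gaussian work values.
# For Gaussian work with variance sigma^2 and mean delta_F + beta*sigma^2/2,
# the relation <exp(-beta*W)> = exp(-beta*delta_F) holds exactly.
rng = np.random.default_rng(0)
beta, delta_F, sigma = 1.0, 2.0, 1.5
work = rng.normal(delta_F + 0.5 * beta * sigma**2, sigma, size=2_000_000)

estimate = -np.log(np.mean(np.exp(-beta * work))) / beta
print("true free energy difference:", delta_F)
print("Jarzynski estimate:         ", round(float(estimate), 3))
print("naive average work:         ", round(float(work.mean()), 3))  # larger than delta_F (dissipation)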
Stochastic energetics
History
Seifert writes:
Classical thermodynamics, at its heart, deals with general laws governing the transformations of a system, in particular those involving the exchange of heat, work and matter with an environment. As a central result, a total entropy production is identified that can never decrease in any such process, leading, inter alia, to fundamental limits on the efficiency of heat engines and refrigerators.
The thermodynamic characterisation of systems in equilibrium got its microscopic justification from equilibrium statistical mechanics, which states that for a system in contact with a heat bath the probability to find it in any specific microstate is given by the Boltzmann factor. For small deviations from equilibrium, linear response theory allows one to express transport properties caused by small external fields through equilibrium correlation functions. On a more phenomenological level, linear irreversible thermodynamics provides a relation between such transport coefficients and entropy production in terms of forces and fluxes. Beyond this linear response regime, for a long time, no universal exact results were available.
During the last 20 years fresh approaches have revealed general laws applicable to non-equilibrium systems, thus pushing the range of validity of exact thermodynamic statements beyond the realm of linear response deep into the genuine non-equilibrium region. These exact results, which become particularly relevant for small systems with appreciable (typically non-Gaussian) fluctuations, generically refer to distribution functions of thermodynamic quantities like exchanged heat, applied work or entropy production.
Stochastic thermodynamics combines the stochastic energetics introduced by Sekimoto with the idea that entropy can consistently be assigned to a single fluctuating trajectory.
Open research
Quantum stochastic thermodynamics
Stochastic thermodynamics can be applied to driven (i.e. open) quantum systems whenever the effects of quantum coherence can be ignored. The dynamics of an open quantum system is then equivalent to a classical stochastic one. However, this is sometimes at the cost of requiring unrealistic measurements at the beginning and end of a process.
Understanding non-equilibrium quantum thermodynamics more broadly is an important and active area of research. The efficiency of some computing and information theory tasks can be greatly enhanced when using quantum correlated states; quantum correlations can be used not only as a valuable resource in quantum computation, but also in the realm of quantum thermodynamics. New types of quantum devices in non-equilibrium states function very differently from their classical counterparts. For example, it has been theoretically shown that non-equilibrium quantum ratchet systems function far more efficiently than predicted by classical thermodynamics. It has also been shown that quantum coherence can be used to enhance the efficiency of systems beyond the classical Carnot limit, because it could be possible to extract work, in the form of photons, from a single heat bath. Quantum coherence can in effect be used to play the role of Maxwell's demon, though the broader information-theoretic interpretation of the second law of thermodynamics is not violated.
Quantum versions of stochastic thermodynamics have been studied for some time, and the past few years have seen a surge of interest in this topic. Quantum mechanics involves profound issues around the interpretation of reality (the Copenhagen interpretation, many-worlds and de Broglie–Bohm theory, for example, are all competing interpretations that try to explain the unintuitive results of quantum theory). It is hoped that by trying to specify the quantum-mechanical definition of work, dealing with open quantum systems, analyzing exactly solvable models, or proposing and performing experiments to test non-equilibrium predictions, important insights into the interpretation of quantum mechanics and the true nature of reality will be gained.
Applications of non-equilibrium work relations, like the Jarzynski equality, have recently been proposed for detecting quantum entanglement and for improving optimization problems (minimizing or maximizing a multivariable cost function) via quantum annealing.
Active baths
Until recently, thermodynamics has only considered systems coupled to a thermal bath and therefore satisfying Boltzmann statistics. However, some systems do not satisfy these conditions and are far from equilibrium, such as living matter, for which fluctuations are expected to be non-Gaussian.
Active particle systems are able to take energy from their environment and drive themselves far from equilibrium. An important example of active matter is constituted by objects capable of self-propulsion. Thanks to this property, they feature a series of novel behaviours that are not attainable by matter at thermal equilibrium, including, for example, swarming and the emergence of other collective properties. A passive particle is considered to be in an active bath when it is in an environment where a wealth of active particles are present. These particles will exert nonthermal forces on the passive object, so that it will experience non-thermal fluctuations and will behave very differently from a passive Brownian particle in a thermal bath. The presence of an active bath can significantly influence the microscopic thermodynamics of a particle. Experiments have suggested that the Jarzynski equality does not hold in some cases, due to the presence of non-Boltzmann statistics in active baths. This observation points towards a new direction in the study of non-equilibrium statistical physics and stochastic thermodynamics, where the environment itself is also far from equilibrium.
Active baths are a question of particular importance in biochemistry. For example, biomolecules within cells are coupled with an active bath due to the presence of molecular motors within the cytoplasm, which leads to striking and largely not yet understood phenomena such as the emergence of anomalous diffusion (Barkai et al., 2012). Also, protein folding might be facilitated by the presence of active fluctuations (Harder et al., 2014b) and active matter dynamics could play a central role in several biological functions (Mallory et al., 2015; Shin et al., 2015; Suzuki et al., 2015). It is an open question to what degree stochastic thermodynamics can be applied to systems coupled to active baths.
References
Notes
Citations
Academic references
Press
Statistical mechanics
Thermodynamics
Non-equilibrium thermodynamics
Branches of thermodynamics
Stochastic models
Side chain
In organic chemistry and biochemistry, a side chain is a chemical group that is attached to a core part of the molecule called the "main chain" or backbone. The side chain is a hydrocarbon branching element of a molecule that is attached to a larger hydrocarbon backbone. It is one factor in determining a molecule's properties and reactivity. A side chain is also known as a pendant chain, but a pendant group (side group) has a different definition.
Conventions
The symbol R is often used as a generic placeholder for alkyl (saturated hydrocarbon) group side chains in chemical structure diagrams. To indicate other non-carbon groups in structure diagrams, X, Y, or Z are often used.
History
The R symbol was introduced by the 19th-century French chemist Charles Frédéric Gerhardt, who advocated its adoption on the grounds that it would be widely recognizable and intelligible given its correspondence in multiple European languages to the initial letter of "root" or "residue": French racine ("root") and résidu ("residue"), the English root, residue and radical (the last derived from the Latin radix), Latin radix ("root") and residuum ("residue"), and German Rest ("remnant" and, in the context of chemistry, both "residue" and "radical").
Usage
Organic chemistry
In polymer science, the side chain of an oligomeric or polymeric offshoot extends from the backbone chain of a polymer. Side chains have noteworthy influence on a polymer's properties, mainly its crystallinity and density. An oligomeric branch may be termed a short-chain branch, and a polymeric branch may be termed a long-chain branch. Side groups are different from side chains; they are neither oligomeric nor polymeric.
Biochemistry
In proteins, which are composed of amino acid residues, the side chains are attached to the alpha-carbon atoms of the amide backbone. The side chain connected to the alpha-carbon is specific for each amino acid and is responsible for determining the charge and polarity of the amino acid. The amino acid side chains are also responsible for many of the interactions that lead to proper protein folding and function. Amino acids with similar polarity are usually attracted to each other, while nonpolar and polar side chains usually repel each other. Nonpolar/polar interactions can still play an important part in stabilizing the secondary structure, due to the relatively large number of them occurring throughout the protein. Spatial positions of side-chain atoms can be predicted based on protein backbone geometry using computational tools for side-chain reconstruction.
See also
Alkyl
Backbone chain
Branching (polymer chemistry)
Functional group
Pendant group
Residue (chemistry)
Substituent
Backbone-dependent rotamer library
References
Organic chemistry
Car–Parrinello molecular dynamics
Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method.
The CPMD method is one of the major methods for calculating ab-initio molecular dynamics (ab-initio MD or AIMD).
Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics.
In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms.
AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effects. Therefore, ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources.
The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of the nuclei. CPMD and BOMD are different types of AIMD. However, whereas BOMD treats the electronic structure problem within the time-independent Schrödinger equation, CPMD explicitly includes the electrons as active degrees of freedom, via (fictitious) dynamical variables.
The software is a parallelized plane wave / pseudopotential implementation of density functional theory, particularly designed for ab initio molecular dynamics.
Car–Parrinello method
The Car–Parrinello method is a type of molecular dynamics, usually employing periodic boundary conditions, planewave basis sets, and density functional theory, proposed by Roberto Car and Michele Parrinello in 1985, who were subsequently awarded the Dirac Medal by ICTP in 2009.
In contrast to Born–Oppenheimer molecular dynamics, wherein the nuclear (ionic) degrees of freedom are propagated using ionic forces which are calculated at each iteration by approximately solving the electronic problem with conventional matrix diagonalization methods, the Car–Parrinello method explicitly introduces the electronic degrees of freedom as (fictitious) dynamical variables, writing an extended Lagrangian for the system which leads to a system of coupled equations of motion for both ions and electrons. In this way, an explicit electronic minimization at each time step, as done in Born–Oppenheimer MD, is not needed: after an initial standard electronic minimization, the fictitious dynamics of the electrons keeps them on the electronic ground state corresponding to each new ionic configuration visited along the dynamics, thus yielding accurate ionic forces. In order to maintain this adiabaticity condition, it is necessary that the fictitious mass of the electrons is chosen small enough to avoid a significant energy transfer from the ionic to the electronic degrees of freedom. This small fictitious mass in turn requires that the equations of motion are integrated using a smaller time step than the one (1–10 fs) commonly used in Born–Oppenheimer molecular dynamics.
Currently, the CPMD method can be applied to systems that consist of a few tens or hundreds of atoms and access timescales on the order of tens of picoseconds.
General approach
In CPMD the core electrons are usually described by a pseudopotential and the wavefunction of the valence electrons are approximated by a plane wave basis set.
The ground state electronic density (for fixed nuclei) is calculated self-consistently, usually using the density functional theory method. Kohn-Sham equations are often used to calculate the electronic structure, where electronic orbitals are expanded in a plane-wave basis set. Then, using that density, forces on the nuclei can be computed, to update the trajectories (using, e.g. the Verlet integration algorithm). In addition, however, the coefficients used to obtain the electronic orbital functions can be treated as a set of extra spatial dimensions, and trajectories for the orbitals can be calculated in this context.
Fictitious dynamics
CPMD is an approximation of the Born–Oppenheimer MD (BOMD) method. In BOMD, the electrons' wave function must be minimized via matrix diagonalization at every step in the trajectory. CPMD uses fictitious dynamics to keep the electrons close to the ground state, preventing the need for a costly self-consistent iterative minimization at each time step. The fictitious dynamics relies on the use of a fictitious electron mass (usually in the range of 400 – 800 a.u.) to ensure that there is very little energy transfer from nuclei to electrons, i.e. to ensure adiabaticity. Any increase in the fictitious electron mass resulting in energy transfer would cause the system to leave the ground-state BOMD surface.
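A commonly quoted estimate (a rule of thumb rather than a statement taken from the text above) is that the fictitious electronic degrees of freedom oscillate with frequencies

  \omega_{ij} = \left( \frac{2\,(\epsilon_j - \epsilon_i)}{\mu} \right)^{1/2}

where \epsilon_i and \epsilon_j are occupied and unoccupied Kohn–Sham eigenvalues. The lowest such frequency scales as (E_{\mathrm{gap}}/\mu)^{1/2} and must remain well above the highest ionic vibrational frequency for adiabaticity to hold, which is why the fictitious mass cannot be made arbitrarily large; the highest electronic frequency, which grows with the plane-wave cutoff and shrinks with increasing \mu, in turn sets the upper bound on the usable time step.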
Lagrangian
The Car–Parrinello extended Lagrangian can be written as

  \mathcal{L} = \frac{\mu}{2} \sum_i \int d\mathbf{r}\, \left|\dot{\psi}_i(\mathbf{r},t)\right|^2 + \frac{1}{2} \sum_I M_I \dot{\mathbf{R}}_I^2 - E\left[\{\psi_i\},\{\mathbf{R}_I\}\right]

where μ is the fictitious mass parameter; E[{ψi},{RI}] is the Kohn–Sham energy density functional, which outputs energy values when given Kohn–Sham orbitals and nuclear positions.
Orthogonality constraint
The orbitals are kept orthonormal,

  \int d\mathbf{r}\, \psi_i^*(\mathbf{r},t)\, \psi_j(\mathbf{r},t) = \delta_{ij}

where δij is the Kronecker delta.
Equations of motion
The equations of motion are obtained by finding the stationary point of the Lagrangian under variations of ψi and RI, with the orthogonality constraint.
  \mu \ddot{\psi}_i(\mathbf{r},t) = -\frac{\delta E}{\delta \psi_i^*(\mathbf{r},t)} + \sum_j \Lambda_{ij}\, \psi_j(\mathbf{r},t)

  M_I \ddot{\mathbf{R}}_I = -\nabla_I\, E\left[\{\psi_i\},\{\mathbf{R}_I\}\right]

where Λij is a Lagrangian multiplier matrix introduced to comply with the orthonormality constraint.
Born–Oppenheimer limit
In the formal limit where μ → 0, the equations of motion approach Born–Oppenheimer molecular dynamics.
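To make the structure of these coupled equations of motion concrete, the following is a minimal, self-contained Python sketch. It is not the CPMD package and does not perform density functional theory: the two-component coefficient vector, the model 2×2 "Hamiltonian", the parameter values and the simple renormalization used in place of a proper SHAKE/RATTLE constraint are all illustrative assumptions standing in for real Kohn–Sham orbitals, forces and constraints.

import numpy as np

mu = 400.0    # fictitious electronic "mass" (a.u.), in the typical CPMD range
M = 1836.0    # nuclear mass (a.u.), roughly a proton
dt = 0.5      # time step (a.u.); CPMD requires smaller steps than Born-Oppenheimer MD

def energy(c, R):
    # Toy stand-in for E[{psi_i},{R_I}]: a 2x2 model "Hamiltonian" whose
    # off-diagonal coupling depends on the nuclear coordinate R, plus a
    # harmonic nuclear potential.
    H = np.array([[0.0, -0.3 * R], [-0.3 * R, 1.0]])
    return float(c @ H @ c) + 0.5 * (R - 1.0) ** 2

def forces(c, R, eps=1e-5):
    # Central-difference gradients of E with respect to c and R;
    # the forces are the negative gradients.
    gc = np.zeros_like(c)
    for i in range(c.size):
        d = np.zeros_like(c)
        d[i] = eps
        gc[i] = (energy(c + d, R) - energy(c - d, R)) / (2 * eps)
    gR = (energy(c, R + eps) - energy(c, R - eps)) / (2 * eps)
    return -gc, -gR

# Initial "ground-state" guess for the orbital coefficients and the nucleus.
c, c_dot = np.array([1.0, 0.0]), np.zeros(2)
R, R_dot = 1.2, 0.0
Fc, FR = forces(c, R)

for step in range(2000):
    # Velocity-Verlet half kick for both fictitious and nuclear variables.
    c_dot += 0.5 * dt * Fc / mu
    R_dot += 0.5 * dt * FR / M
    # Drift.
    c = c + dt * c_dot
    R = R + dt * R_dot
    # Crude stand-in for the orthonormality constraint: renormalize the
    # coefficients and remove the radial velocity component (real CPMD
    # enforces this with Lagrange multipliers via SHAKE/RATTLE).
    c = c / np.linalg.norm(c)
    c_dot = c_dot - (c_dot @ c) * c
    # Recompute forces and finish the kick.
    Fc, FR = forces(c, R)
    c_dot += 0.5 * dt * Fc / mu
    R_dot += 0.5 * dt * FR / M

print("final nuclear coordinate R:", round(R, 3))
print("final orbital coefficients:", np.round(c, 3))

In a production code the orbital forces come from the analytic functional derivative of the Kohn–Sham energy in a plane-wave basis, but the overall pattern — propagating fictitious electronic variables and nuclei side by side with a velocity-Verlet integrator — is the same.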
Software packages
There are a number of software packages available for performing AIMD simulations. Some of the most widely used packages include:
CP2K: an open-source software package for AIMD.
Quantum Espresso: an open-source package for performing DFT calculations. It includes a module for AIMD.
VASP: a commercial software package for performing DFT calculations. It includes a module for AIMD.
Gaussian: a commercial software package that can perform AIMD.
NWChem: an open-source software package for AIMD.
LAMMPS: an open-source software package for performing classical and ab initio MD simulations.
SIESTA: an open-source software package for AIMD.
Application
Studying the behavior of water near a hydrophobic graphene sheet.
Investigating the structure and dynamics of liquid water at ambient temperature.
Solving the heat transfer problems (heat conduction and thermal radiation) between Si/Ge superlattices.
Probing the proton transfer along 1D water chains inside carbon nanotubes.
Evaluating the critical point of aluminum.
Predicting the amorphous phase of the phase-change memory material GeSbTe.
Studying the combustion process of lignite-water systems.
Computing and analyzing the IR spectra in terms of H-bond interactions.
See also
Computational physics
Density functional theory
Computational chemistry
Molecular dynamics
Quantum chemistry
Ab initio quantum chemistry methods
Quantum chemistry computer programs
List of software for molecular mechanics modeling
List of quantum chemistry and solid-state physics software
CP2K
References
External links
http://www.cpmd.org/
http://www.cp2k.org/
Density functional theory
Density functional theory software
Computational chemistry
Computational chemistry software
Molecular dynamics
Molecular dynamics software
Quantum chemistry
Theoretical chemistry
Mathematical chemistry
Simulation software
Scientific simulation software
Physics software
Science software
Algorithms
Computational physics
Electronic structure methods
Boron group
Group 13 of the periodic table by period: period 2 – boron (B); period 3 – aluminium (Al); period 4 – gallium (Ga); period 5 – indium (In); period 6 – thallium (Tl); period 7 – nihonium (Nh).
The boron group are the chemical elements in group 13 of the periodic table, consisting of boron (B), aluminium (Al), gallium (Ga), indium (In), thallium (Tl) and nihonium (Nh). This group lies in the p-block of the periodic table. The elements in the boron group are characterized by having three valence electrons. These elements have also been referred to as the triels.
Several group 13 elements have biological roles in the ecosystem. Boron is a trace element in humans and is essential for some plants. Lack of boron can lead to stunted plant growth, while an excess can also cause harm by inhibiting growth. Aluminium has neither a biological role nor significant toxicity and is considered safe. Indium and gallium can stimulate metabolism; gallium is credited with the ability to bind itself to iron proteins. Thallium is highly toxic, interfering with the function of numerous vital enzymes, and has seen use as a pesticide.
Characteristics
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior:
The boron group is notable for trends in the electron configuration, as shown above, and in some of its elements' characteristics. Boron differs from the other group members in its hardness, refractivity and reluctance to participate in metallic bonding. An example of a trend in reactivity is boron's tendency to form reactive compounds with hydrogen.
Although situated in the p-block, the group is notorious for violation of the octet rule by its members boron and (to a lesser extent) aluminium. All members of the group are characterized as trivalent.
Chemical reactivity
Hydrides
Most of the elements in the boron group show increasing reactivity as the elements get heavier in atomic mass and higher in atomic number. Boron, the first element in the group, is generally unreactive with many elements except at high temperatures, although it is capable of forming many compounds with hydrogen, sometimes called boranes. The simplest borane is diborane, or B2H6. Another example is B10H14.
The next group-13 elements, aluminium and gallium, form fewer stable hydrides, although both AlH3 and GaH3 exist. Indium, the next element in the group, is not known to form many hydrides, except in complex compounds such as the phosphine complex (Cy=cyclohexyl). No stable compound of thallium and hydrogen has been synthesized in any laboratory.
Oxides
All of the boron-group elements are known to form a trivalent oxide, with two atoms of the element bonded covalently with three atoms of oxygen. These oxides show a trend of increasing basicity down the group (from acidic to basic). Boron oxide (B2O3) is slightly acidic, aluminium and gallium oxide (Al2O3 and Ga2O3 respectively) are amphoteric, indium(III) oxide (In2O3) is nearly amphoteric, and thallium(III) oxide (Tl2O3) is a basic oxide, dissolving in acids to form salts. Each of these compounds is stable, but thallium oxide decomposes at temperatures higher than 875 °C.
Halides
The elements in group 13 are also capable of forming stable compounds with the halogens, usually with the formula MX3 (where M is a boron-group element and X is a halogen.) Fluorine, the first halogen, is able to form stable compounds with every element that has been tested (except neon and helium), and the boron group is no exception. It is even hypothesized that nihonium could form a compound with fluorine, NhF3, before spontaneously decaying due to nihonium's radioactivity. Chlorine also forms stable compounds with all of the elements in the boron group, including thallium, and is hypothesized to react with nihonium. All of the elements will react with bromine under the right conditions, as with the other halogens but less vigorously than either chlorine or fluorine. Iodine will react with all natural elements in the periodic table except for the noble gases, and is notable for its explosive reaction with aluminium to form AlI3. Astatine, the fifth halogen, has only formed a few compounds, due to its radioactivity and short half-life, and no reports of a compound with an At–Al, –Ga, –In, –Tl, or –Nh bond have been seen, although scientists think that it should form salts with metals. Tennessine, the sixth and final member of group 17, may also form compounds with the elements in the boron group; however, because Tennessine is purely synthetic and thus must be created artificially, its chemistry has not been investigated, and any compounds would likely decay nearly instantly after formation due to its extreme radioactivity.
Physical properties
It has been noticed that the elements in the boron group have similar physical properties, although most of boron's are exceptional. For example, all of the elements in the boron group, except for boron itself, are soft. Moreover, all of the other elements in group 13 are relatively reactive at moderate temperatures, while boron's reactivity only becomes comparable at very high temperatures. One characteristic that all do have in common is having three electrons in their valence shells. Boron, being a metalloid, is a thermal and electrical insulator at room temperature, but a good conductor of heat and electricity at high temperatures. Unlike boron, the metals in the group are good conductors under normal conditions. This is in accordance with the long-standing generalization that all metals conduct heat and electricity better than most non-metals.
Oxidation states
The inert s-pair effect is significant in the group-13 elements, especially the heavier ones like thallium. This results in a variety of oxidation states. In the lighter elements, the +3 state is the most stable, but the +1 state becomes more prevalent with increasing atomic number, and is the most stable for thallium. Boron is capable of forming compounds with lower oxidation states, of +1 or +2, and aluminium can do the same. Gallium can form compounds with the oxidation states +1, +2 and +3. Indium is like gallium, but its +1 compounds are more stable than those of the lighter elements. The strength of the inert-pair effect is maximal in thallium, which is generally only stable in the oxidation state of +1, although the +3 state is seen in some compounds. Stable and monomeric gallium, indium and thallium radicals with a formal oxidation state of +2 have since been reported. Nihonium may have a +5 oxidation state.
Periodic trends
There are several trends that can be observed in the properties of the boron group members. The boiling points of these elements drop from period to period, while densities tend to rise.
Nuclear
With the exception of the synthetic nihonium, all of the elements of the boron group have stable isotopes. Because all their atomic numbers are odd, boron, gallium and thallium have only two stable isotopes each, while aluminium and indium have only one, although most indium found in nature is the weakly radioactive 115In. 10B and 11B are both stable, as are 27Al, 69Ga and 71Ga, 113In, and 203Tl and 205Tl. All of these isotopes are readily found in macroscopic quantities in nature. In theory, though, all isotopes with an atomic number greater than 66 are supposed to be unstable to alpha decay. Conversely, all elements with atomic numbers less than or equal to 66 (except Tc, Pm, Sm and Eu) have at least one isotope that is theoretically energetically stable to all forms of decay (with the exception of proton decay, which has never been observed, and spontaneous fission, which is theoretically possible for elements with atomic numbers greater than 40).
Like all other elements, the elements of the boron group have radioactive isotopes, either found in trace quantities in nature or produced synthetically. The longest-lived of these unstable isotopes is the indium isotope 115In, with its extremely long half-life. This isotope makes up the vast majority of all naturally occurring indium despite its slight radioactivity. The shortest-lived is 7B, the boron isotope with the fewest neutrons and a half-life long enough to measure. Some radioisotopes have important roles in scientific research; a few are used in the production of goods for commercial use or, more rarely, as a component of finished products.
History
The boron group has had many names over the years. According to former conventions it was Group IIIB in the European naming system and Group IIIA in the American. The group has also gained two collective names, "earth metals" and "triels". The latter name is derived from the Latin prefix tri- ("three") and refers to the three valence electrons that all of these elements, without exception, have in their valence shells. The name "triels" was first suggested by the International Union of Pure and Applied Chemistry (IUPAC) in 1970.
Boron was known to the ancient Egyptians, but only in the mineral borax. The metalloid element was not known in its pure form until 1808, when Humphry Davy was able to extract it by the method of electrolysis. Davy devised an experiment in which he dissolved a boron-containing compound in water and sent an electric current through it, causing the elements of the compound to separate into their pure states. To produce larger quantities he shifted from electrolysis to reduction with sodium. Davy named the element boracium. At the same time two French chemists, Joseph Louis Gay-Lussac and Louis Jacques Thénard, used iron to reduce boric acid. The boron they produced was oxidized to boron oxide.
Aluminium, like boron, was first known in minerals before it was finally extracted from alum, a common mineral in some areas of the world. Antoine Lavoisier and Humphry Davy had each separately tried to extract it. Although neither succeeded, Davy had given the metal its current name. It was only in 1825 that the Danish scientist Hans Christian Ørsted successfully prepared a rather impure form of the element. Many improvements followed, a significant advance being made just two years later by Friedrich Wöhler, whose slightly modified procedure still yielded an impure product. The first pure sample of aluminium is credited to Henri Etienne Sainte-Claire Deville, who substituted sodium for potassium in the procedure. At that time aluminium was considered precious, and it was displayed next to such metals as gold and silver. The method used today, electrolysis of aluminium oxide dissolved in cryolite, was developed by Charles Martin Hall and Paul Héroult in the late 1880s.
Thallium, the heaviest stable element in the boron group, was discovered by William Crookes and Claude-Auguste Lamy in 1861. Unlike gallium and indium, thallium had not been predicted by Dmitri Mendeleev, having been discovered before Mendeleev invented the periodic table. As a result, no one was really looking for it until the 1850s when Crookes and Lamy were examining residues from sulfuric acid production. In the spectra they saw a completely new line, a streak of deep green, which Crookes named after the Greek word θαλλός, referring to a green shoot or twig. Lamy was able to produce larger amounts of the new metal and determined most of its chemical and physical properties.
Indium is the fourth element of the boron group but was discovered before the third, gallium, and after the fifth, thallium. In 1863 Ferdinand Reich and his assistant, Hieronymous Theodor Richter, were looking in a sample of the mineral zinc blende, also known as sphalerite (ZnS), for the spectroscopic lines of the newly discovered element thallium. Reich heated the ore in a coil of platinum metal and observed the lines that appeared in a spectroscope. Instead of the green thallium lines that he expected, he saw a new line of deep indigo-blue. Concluding that it must come from a new element, they named it after the characteristic indigo color it had produced.
Gallium minerals were not known before August 1875, when the element itself was discovered. It was one of the elements that the inventor of the periodic table, Dmitri Mendeleev, had predicted to exist six years earlier. While examining the spectroscopic lines in zinc blende the French chemist Paul Emile Lecoq de Boisbaudran found indications of a new element in the ore. In just three months he was able to produce a sample, which he purified by dissolving it in a potassium hydroxide (KOH) solution and sending an electric current through it. The next month he presented his findings to the French Academy of Sciences, naming the new element after the Greek name for Gaul, modern France.
The last confirmed element in the boron group, nihonium, was not discovered but rather created or synthesized. The element's synthesis was first reported by the Dubna Joint Institute for Nuclear Research team in Russia and the Lawrence Livermore National Laboratory in the United States, though it was the Dubna team who successfully conducted the experiment in August 2003. Nihonium was discovered in the decay chain of moscovium, which produced a few precious atoms of nihonium. The results were published in January of the following year. Since then around 13 atoms have been synthesized and various isotopes characterized. However, their results did not meet the stringent criteria for being counted as a discovery, and it was the later RIKEN experiments of 2004 aimed at directly synthesizing nihonium that were acknowledged by IUPAC as the discovery.
Etymology
The name "boron" comes from the Arabic word for the mineral borax, (بورق, boraq) which was known before boron was ever extracted. The "-on" suffix is thought to have been taken from "carbon". Aluminium was named by Humphry Davy in the early 1800s. It is derived from the Greek word alumen, meaning bitter salt, or the Latin alum, the mineral. Gallium is derived from the Latin Gallia, referring to France, the place of its discovery. Indium comes from the Latin word indicum, meaning indigo dye, and refers to the element's prominent indigo spectroscopic line. Thallium, like indium, is named after the Greek word for the color of its spectroscopic line: , meaning a green twig or shoot. "Nihonium" is named after Japan (Nihon in Japanese), where it was discovered.
Occurrence and abundance
Boron
Boron, with its atomic number of 5, is a very light element. Almost never found free in nature, it is very low in abundance, composing only 0.001% (10 ppm) of the Earth's crust. It is known to occur in over a hundred different minerals and ores, however: the main source is borax, but it is also found in colemanite, boracite, kernite, tusionite, berborite and fluoborite. Major world miners and extractors of boron include Turkey, the United States, Argentina, China, Bolivia and Peru. Turkey is by far the most prominent of these, accounting for around 70% of all boron extraction in the world. The United States is second, most of its yield coming from the state of California.
Aluminium
Aluminium, in contrast to boron, is the most abundant metal in the Earth's crust, and the third most abundant element. It composes about 8.2% (82,000 ppm) of the Earth's crust, surpassed only by oxygen and silicon. It is like boron, however, in that it is uncommon in nature as a free element. This is due to aluminium's tendency to attract oxygen atoms, forming several aluminium oxides. Aluminium is now known to occur in nearly as many minerals as boron, including garnets, turquoises and beryls, but the main source is the ore bauxite. The world's leading countries in the extraction of aluminium are Ghana, Suriname, Russia and Indonesia, followed by Australia, Guinea and Brazil.
Gallium
Gallium is a relatively rare element in the Earth's crust and is not found in as many minerals as its lighter homologues. Its abundance on the Earth is a mere 0.0018% (18 ppm). Its production is very low compared to other elements, but has increased greatly over the years as extraction methods have improved. Gallium can be found as a trace in a variety of ores, including bauxite and sphalerite, and in such minerals as diaspore and germanite. Trace amounts have been found in coal as well.
The gallium content is greater in a few minerals, including gallite (CuGaS2), but these are too rare to be counted as major sources and make negligible contributions to the world's supply.
Indium
Indium is another rare element in the boron group, at only 0.000005% (0.05 ppm) of the Earth's crust. Very few indium-containing minerals are known, all of them scarce: an example is indite. Indium is found in several zinc ores, but only in minute quantities; likewise some copper and lead ores contain traces. As is the case for most other elements found in ores and minerals, the indium extraction process has become more efficient in recent years, ultimately leading to larger yields. Canada is the world's leader in indium reserves, but both the United States and China have comparable amounts.
Thallium
Thallium is of intermediate abundance in the Earth's crust, estimated to be 0.00006% (0.6 ppm). It is found on the ground in some rocks, in the soil and in clay. Many sulfide ores of iron, zinc and cobalt contain thallium. In minerals it is found in moderate quantities: some examples are crookesite (in which it was first discovered), lorandite, routhierite, bukovite, hutchinsonite and sabatierite. There are other minerals that contain small amounts of thallium, but they are very rare and do not serve as primary sources.
Nihonium
Nihonium is an element that is never found in nature but has been created in a laboratory. It is therefore classified as a synthetic element with no stable isotopes.
Applications
With the exception of synthetic nihonium, all the elements in the boron group have numerous uses and applications in the production and content of many items.
Boron
Boron has found many industrial applications in recent decades, and new ones are still being found. A common application is in fiberglass. There has been rapid expansion in the market for borosilicate glass; most notable among its special qualities is a much greater resistance to thermal expansion than regular glass. Another commercially expanding use of boron and its derivatives is in ceramics. Several boron compounds, especially the oxides, have unique and valuable properties that have led to their substitution for other materials that are less useful. Boron may be found in pots, vases, plates, and ceramic pan-handles for its insulating properties.
The compound borax is used in bleaches, for both clothes and teeth. The hardness of boron and some of its compounds give it a wide array of additional uses. A small part (5%) of the boron produced finds use in agriculture.
Aluminium
Aluminium is a metal with numerous familiar uses in everyday life. It is most often encountered in construction materials, in electrical devices, especially as the conductor in cables, and in tools and vessels for cooking and preserving food. Aluminium's lack of reactivity with food products makes it particularly useful for canning. Its high affinity for oxygen makes it a powerful reducing agent. Finely powdered pure aluminium oxidizes rapidly in air, generating a huge amount of heat in the process, leading to applications in welding and elsewhere that a large amount of heat is needed. Aluminium is a component of alloys used for making lightweight bodies for aircraft. Cars also sometimes incorporate aluminium in their framework and body, and there are similar applications in military equipment. Less common uses include components of decorations and some guitars. The element also sees use in a diverse range of electronics.
Gallium
Gallium and its derivatives have only found applications in recent decades. Gallium arsenide has been used in semiconductors, in amplifiers, in solar cells (for example in satellites) and in tunnel diodes for FM transmitter circuits. Gallium alloys are used mostly for dental purposes. Gallium ammonium chloride is used for the leads in transistors. A major application of gallium is in LED lighting. The pure element has been used as a dopant in semiconductors, and has additional uses in electronic devices with other elements. Gallium has the property of being able to 'wet' glass and porcelain, and thus can be used to make mirrors and other highly reflective objects. Gallium can be added to alloys of other metals to lower their melting points.
Indium
Indium's uses can be divided into four categories: the largest part (70%) of the production is used for coatings, usually combined as indium tin oxide (ITO); a smaller portion (12%) goes into alloys and solders; a similar amount is used in electrical components and in semiconductors; and the final 6% goes to minor applications. Among the items in which indium may be found are platings, bearings, display devices, heat reflectors, phosphors, and nuclear control rods. Indium tin oxide has found a wide range of applications, including glass coatings, solar panels, streetlights, electrophoretic displays (EPDs), electroluminescent displays (ELDs), plasma display panels (PDPs), electrochromic displays (ECDs), field emission displays (FEDs), sodium lamps, windshield glass and cathode-ray tubes, making it the single most important indium compound.
Thallium
Thallium is used in its elemental form more often than the other boron-group elements. Uncompounded thallium is used in low-melting glasses, photoelectric cells, switches, mercury alloys for low-range glass thermometers, and thallium salts. It can be found in lamps and electronics, and is also used in myocardial imaging. The possibility of using thallium in semiconductors has been researched, and it is a known catalyst in organic synthesis. Thallium hydroxide (TlOH) is used mainly in the production of other thallium compounds. Thallium sulfate (Tl2SO4) is an outstanding vermin-killer, and it is a principal component in some rat and mouse poisons. However, the United States and some European countries have banned the substance because of its high toxicity to humans. In other countries, though, the market for the substance is growing. Tl2SO4 is also used in optical systems.
Biological role
None of the group-13 elements has a major biological role in complex animals, but some are at least associated with a living being. As in other groups, the lighter elements usually have more biological roles than the heavier. The heaviest ones are toxic, as are the other elements in the same periods. Boron is essential in most plants, whose cells use it for such purposes as strengthening cell walls. It is found in humans, certainly as a trace element, but there is ongoing debate over its significance in human nutrition. Boron's chemistry does allow it to form complexes with such important molecules as carbohydrates, so it is plausible that it could be of greater use in the human body than previously thought. Boron has also been shown to be able to replace iron in some of its functions, particularly in the healing of wounds. Aluminium has no known biological role in plants or animals, despite its widespread occurrence in nature. Gallium is not essential for the human body, but its relation to iron(III) allows it to become bound to proteins that transport and store iron. Gallium can also stimulate metabolism. Indium and its heavier homologues have no biological role, although indium salts in small doses, like gallium, can stimulate metabolism.
Toxicity
Each element of the boron group has a unique toxicity profile to plants and animals.
As an example of boron toxicity, it has been observed to harm barley in concentrations exceeding 20 mM. The symptoms of boron toxicity are numerous in plants, complicating research: they include reduced cell division, decreased shoot and root growth, decreased production of leaf chlorophyll, inhibition of photosynthesis, lowering of stomata conductance, reduced proton extrusion from roots, and deposition of lignin and suberin.
Aluminium does not present a prominent toxicity hazard in small quantities, but very large doses are slightly toxic. Gallium is not considered toxic, although it may have some minor effects. Indium is not toxic and can be handled with nearly the same precautions as gallium, but some of its compounds are slightly to moderately toxic.
Thallium, unlike gallium and indium, is extremely toxic, and has caused many poisoning deaths. Its most noticeable effect, apparent even from tiny doses, is hair loss all over the body, but it causes a wide range of other symptoms, disrupting and eventually halting the functions of many organs. The nearly colorless, odorless and tasteless nature of thallium compounds has led to their use by murderers. The incidence of thallium poisoning, intentional and accidental, increased when thallium (with its similarly toxic compound, thallium sulfate) was introduced to control rats and other pests. The use of thallium pesticides has therefore been prohibited since 1975 in many countries, including the USA.
Nihonium is a highly unstable element and decays by emitting alpha particles. Due to its strong radioactivity, it would definitely be extremely toxic, although significant quantities of nihonium (larger than a few atoms) have not yet been assembled.
Notes
References
Bibliography
External links
oxide (chemical compound) – Britannica Online Encyclopedia. Britannica.com. Retrieved on 2011-05-16.
Visual Elements: Group 13. Rsc.org. Retrieved on 2011-05-16.
Trends In Chemical Reactivity Of Group 13 Elements. Tutorvista.com. Retrieved on 2011-05-16.
etymonline.com Retrieved on 2011-07-27
Periodic table
Groups (periodic table)
Biostasis
Biostasis is the ability of an organism to tolerate environmental changes without having to actively adapt to them. Biostasis is found in organisms that live in habitats that likely encounter unfavorable living conditions, such as drought, freezing temperatures, change in pH levels, pressure, or temperature. Insects undergo a type of dormancy to survive these conditions, called diapause. Diapause may be obligatory for these insects to survive. The insect may also be able to undergo change prior to the arrival of the initiating event.
Microorganisms
Biostasis in this context is also synonymous with the viable but nonculturable (VBNC) state. In the past, when bacteria were no longer growing on culture media it was assumed that they were dead. It is now understood that there are many instances where bacterial cells may go into biostasis or suspended animation, fail to grow on media, and on resuscitation become culturable again. The VBNC state differs from the 'starvation survival state' (where a cell just reduces its metabolism significantly). Bacterial cells may enter the VBNC state as a result of some outside stressor such as "starvation, incubation outside the temperature range of growth, elevated osmotic concentrations (seawater), oxygen concentrations, or exposure to white light". Any of these instances could very easily mean death for the bacteria if they were not able to enter this state of dormancy. It has also been observed in many instances that bacteria thought to have been destroyed (for example by pasteurization of milk) later caused spoilage or harmful effects to consumers because they had entered the VBNC state.
Effects on cells entering the VBNC state include "dwarfing, changes in metabolic activity, reduced nutrient transport, respiration rates and macromolecular synthesis". Yet biosynthesis continues, and shock proteins are made. Most importantly, it has been observed that ATP levels and generation remain high, in complete contrast to dying cells, which show rapid decreases in generation and retention. Changes to the cell walls of bacteria in the VBNC state have also been observed. In Escherichia coli a large amount of cross-linking was observed in the peptidoglycan. The autolytic capability was also observed to be much higher in VBNC cells than in those in the growth state.
It is far easier to induce bacteria into the VBNC state than to return them to a culturable state once they have entered it. "They examined nonculturability and resuscitation in Legionella pneumophila and while entry into this state was easily induced by nutrient starvation, resuscitation could only be demonstrated following co-incubation of the VBNC cells with the amoeba, Acanthamoeba castellanii"
Fungistasis or mycostasis is a naturally occurring VBNC (viable but nonculturable) state found in fungi in soil. Watson and Ford defined fungistasis as "when viable fungal propagules, which are not subject to endogenous or constitutive dormancy do not germinate in soil at their favorable temperature or moisture conditions or growth of fungal hyphae is retarded or terminated by conditions of the soil environment other than temperature or moisture". Essentially (and mostly observed naturally occurring in soil), several types of fungi have been found to enter the VBNC state as a result of outside stressors (temperature, available nutrients, oxygen availability etc.) or with no observable stressors at all.
Current research
On March 1, 2018, the Defense Advanced Research Projects Agency (DARPA) announced their new Biostasis program under the direction of Dr. Tristan McClure-Begley. The aim of the Biostasis program is to develop new possibilities for extending the golden hour in patients who suffered a traumatic injury by slowing down the human body at the cellular level, addressing the need for additional time in continuously operating biological systems faced with catastrophic, life-threatening events. By leveraging molecular biology, the program aims to control the speed at which living systems operate and figure out a way to "slow life to save life."
On March 20, 2018, the Biostasis team held a Webinar which, along with a Broad Agency Announcement (BAA), solicited five-year research proposals from outside organizations. The full proposals were due on May 22, 2018.
Possible approaches
In their Webinar, DARPA outlined a number of possible research approaches for the Biostasis project. These approaches are based on research into diapause in tardigrades and wood frogs which suggests that selective stabilization of intracellular machinery occurs at the protein level.
Protein chaperoning
In molecular biology, molecular chaperones are proteins that assist in the folding, unfolding, assembly, or disassembly of other macromolecular structures. Under typical conditions, molecular chaperones facilitate changes in shape (conformational change) of macromolecules in response to changes in environmental factors like temperature, pH, and voltage. By reducing conformational flexibility, scientists can constrain the function of certain proteins. Recent research has shown that proteins are promiscuous, or able to do jobs in addition to the ones they evolved to carry out. Additionally, protein promiscuity plays a key role in the adaptation of species to new environments. It is possible that finding a way to control conformational change in promiscuous proteins could allow scientists to induce biostasis in living organisms.
Intracellular crowding
The crowdedness of cells is a critical aspect of biological systems. Intracellular crowding refers to the fact that protein function and interaction with water is constrained when the interior of the cell is overcrowded. Intracellular organelles are either membrane-bound vesicles or membrane-less compartments that compartmentalize the cell and enable spatiotemporal control of biological reactions. By introducing these intracellular polymers to a biological system and manipulating the crowdedness of a cell, scientists may be able to slow down the rate of biological reactions in the system.
Tardigrade-disordered proteins
Tardigrades are microscopic animals that are able to enter a state of diapause and survive a remarkable array of environmental stressors, including freezing and desiccation. Research has shown that intrinsically disordered proteins in these organisms may work to stabilize cell function and protect against these extreme environmental stressors. By using peptide engineering, it is possible that scientists may be able to introduce intrinsically disordered proteins to the biological systems of larger animal organisms. This could allow larger animals to enter a state of biostasis similar to that of tardigrades under extreme biological stress.
References
Oliver, James D. "The viable but nonculturable state in bacteria." The Journal of Microbiology 43.1 (2005): 93-100.
Garbeva, P., Hol, W.H.G., Termorshuizen, A.J., Kowalchuk, G.A., de Boer, W. "Fungistasis and general soil biostasis – A new synthesis."
Watson, A.G., Ford E.J. 1972 Soil Fungistasis—a reappraisal. Annual Review of Phytopathology 10, 327.
Ecology
Physiology
Primary nutritional groups
Primary nutritional groups are groups of organisms, divided in relation to the nutrition mode according to the sources of energy and carbon, needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin.
The terms aerobic respiration, anaerobic respiration and fermentation (substrate-level phosphorylation) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as oxygen in aerobic respiration, or nitrate, sulfate or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation.
Primary sources of energy
Phototrophs absorb light in photoreceptors and transform it into chemical energy.
Chemotrophs release chemical energy.
The freed energy is stored as potential energy in ATP, carbohydrates, or proteins. Eventually, the energy is used for life processes such as moving, growth and reproduction.
Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light.
Primary sources of reducing equivalents
Organotrophs use organic compounds as electron/hydrogen donors.
Lithotrophs use inorganic compounds as electron/hydrogen donors.
The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment.
Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and carbon dioxide as their inorganic carbon source.
Some lithotrophic bacteria can utilize diverse sources of electrons, depending on the availability of possible donors.
The organic or inorganic substances (e.g., oxygen) used as electron acceptors needed in the catabolic processes of aerobic or anaerobic respiration and fermentation are not taken into account here.
For example, plants are lithotrophs because they use water as their electron donor for the electron transport chain across the thylakoid membrane. Animals are organotrophs because they use organic compounds as electron donors to synthesize ATP (plants also do this, but this is not taken into account). Both use oxygen in respiration as electron acceptor, but this character is not used to define them as lithotrophs.
Primary sources of carbon
Heterotrophs metabolize organic compounds to obtain carbon for growth and development.
Autotrophs use carbon dioxide as their source of carbon.
Energy and carbon
{| class="wikitable float-right" style="text-align:center" width="50%"
|+Classification of organisms based on their metabolism
|-
| rowspan=2 bgcolor="#FFFF00" |Energy source || bgcolor="#FFFF00" | Light || bgcolor="#FFFF00" | photo- || rowspan=2 colspan=2 | || rowspan=6 bgcolor="#7FC31C" | -troph
|-
| bgcolor="#FFFF00" | Molecules || bgcolor="#FFFF00" | chemo-
|-
| rowspan=2 bgcolor="#FFB300" | Electron donor || bgcolor="#FFB300" | Organic compounds || rowspan=2 | || bgcolor="#FFB300" | organo- || rowspan=2 |
|-
| bgcolor="#FFB300" | Inorganic compounds || bgcolor="#FFB300" | litho-
|-
| rowspan=2 bgcolor="#FB805F" | Carbon source || bgcolor="#FB805F" | Organic compounds' || rowspan=2 colspan=2 | || bgcolor="#FB805F" | hetero-
|-
| bgcolor="#FB805F" | Carbon dioxide || bgcolor="#FB805F" | auto-
|}
A chemoorganoheterotrophic organism is one that requires organic substrates to get its carbon for growth and development, and that obtains its energy from the decomposition of an organic compound. This group of organisms may be further subdivided according to what kind of organic substrate and compound they use. Decomposers are examples of chemoorganoheterotrophs which obtain carbon and electrons or hydrogen from dead organic matter. Herbivores and carnivores are examples of organisms that obtain carbon and electrons or hydrogen from living organic matter.
Chemoorganotrophs are organisms which use the chemical energy in organic compounds as their energy source and obtain electrons or hydrogen from the organic compounds, including sugars (i.e. glucose), fats and proteins. Chemoheterotrophs also obtain the carbon atoms that they need for cellular function from these organic compounds.
All animals are chemoheterotrophs (meaning they oxidize chemical compounds as a source of energy and carbon), as are fungi, protozoa, and some bacteria. The important differentiation amongst this group is that chemoorganotrophs oxidize only organic compounds while chemolithotrophs instead use oxidation of inorganic compounds as a source of energy.
Primary metabolism table
The following table gives some examples for each nutritional group:
*Some authors use -hydro- when the source is water.
The common final part -troph is from Ancient Greek "nutrition".
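As a small illustration of how the prefixes described above compose into a full trophic name (the function and its dictionary keys below are hypothetical labels written only to mirror the three criteria; they are not standard terminology):

# Build a trophic name from the three criteria: energy source, electron donor
# and carbon source, using the prefixes defined in the sections above.
ENERGY = {"light": "photo", "chemical compounds": "chemo"}
ELECTRON_DONOR = {"organic": "organo", "inorganic": "litho"}
CARBON = {"organic": "hetero", "carbon dioxide": "auto"}

def trophic_name(energy, electron_donor, carbon):
    return ENERGY[energy] + ELECTRON_DONOR[electron_donor] + CARBON[carbon] + "troph"

# Most plants: light energy, water (inorganic) electron donor, CO2 carbon source.
print(trophic_name("light", "inorganic", "carbon dioxide"))      # photolithoautotroph
# Animals and fungi: chemical energy, organic electron donors, organic carbon.
print(trophic_name("chemical compounds", "organic", "organic"))  # chemoorganoheterotroph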
Mixotrophs
Some, usually unicellular, organisms can switch between different metabolic modes, for example between photoautotrophy, photoheterotrophy, and chemoheterotrophy in Chroococcales. Rhodopseudomonas palustris – another example – can grow with or without oxygen, use either light, inorganic or organic compounds for energy. Such mixotrophic organisms may dominate their habitat, due to their capability to use more resources than either photoautotrophic or organoheterotrophic organisms.
Examples
All sorts of combinations may exist in nature, but some are more common than others. For example, most plants are photolithoautotrophic, since they use light as an energy source, water as electron donor, and carbon dioxide as a carbon source. All animals and fungi are chemoorganoheterotrophic, since they use organic substances both as chemical energy sources and as electron/hydrogen donors and carbon sources. Some eukaryotic microorganisms, however, are not limited to just one nutritional mode. For example, some algae live photoautotrophically in the light, but shift to chemoorganoheterotrophy in the dark. Even higher plants retain the ability to respire heterotrophically at night on starch that was synthesised phototrophically during the day.
Prokaryotes show a great diversity of nutritional categories. For example, cyanobacteria and many purple sulfur bacteria can be photolithoautotrophic, using light for energy, water or sulfide as electron/hydrogen donors, and carbon dioxide as a carbon source, whereas green non-sulfur bacteria can be photoorganoheterotrophic, using organic molecules as both electron/hydrogen donors and carbon sources. Many bacteria are chemoorganoheterotrophic, using organic molecules as energy, electron/hydrogen and carbon sources. Some bacteria are limited to only one nutritional group, whereas others are facultative and switch from one mode to the other, depending on the nutrient sources available. Sulfur-oxidizing, iron, and anammox bacteria as well as methanogens are chemolithoautotrophs, using inorganic energy, electron, and carbon sources. Chemolithoheterotrophs are rare because heterotrophy implies the availability of organic substrates, which can also serve as easy electron sources, making lithotrophy unnecessary. Photoorganoautotrophs are uncommon since their organic source of electrons/hydrogens would provide an easy carbon source, resulting in heterotrophy.
Synthetic biology efforts enabled the transformation of the trophic mode of two model microorganisms from heterotrophy to chemoorganoautotrophy:
Escherichia coli was genetically engineered and then evolved in the laboratory to use carbon dioxide as the sole carbon source while using the one-carbon molecule formate as the source of electrons.
The methylotrophic yeast Pichia pastoris was genetically engineered to use carbon dioxide as the carbon source instead of methanol, while the latter remained the source of electrons for the cells.
See also
Autotrophic
Chemosynthesis
Chemotrophic
Heterotrophic
Lithotrophic
Metabolism
Mixotrophic
Organotrophic
Phototrophic
Notes and references
Trophic ecology
Physiology
Putrefaction
Putrefaction is the fifth stage of death, following pallor mortis, livor mortis, algor mortis, and rigor mortis. The term refers to the breakdown of the body of an animal post-mortem. In broad terms, it can be viewed as the decomposition of proteins, the eventual breakdown of the cohesiveness between tissues, and the liquefaction of most organs. It is caused by the decomposition of organic matter by bacterial or fungal digestion, which releases gases that infiltrate the body's tissues and leads to the deterioration of the tissues and organs.
The approximate time it takes putrefaction to occur is dependent on various factors. Internal factors that affect the rate of putrefaction include the age at which death has occurred, the overall structure and condition of the body, the cause of death, and external injuries arising before or after death. External factors include environmental temperature, moisture and air exposure, clothing, burial factors, and light exposure. Body farms are facilities that study the way various factors affect the putrefaction process.
The first signs of putrefaction are signified by a greenish discoloration on the outside of the skin, on the abdominal wall corresponding to where the large intestine begins, as well as under the surface of the liver.
Certain substances, such as carbolic acid, arsenic, strychnine, and zinc chloride, can be used to delay the process of putrefaction in various ways based on their chemical makeup.
Description
In thermodynamic terms, all organic tissue stores chemical energy; when it is no longer maintained by the constant biochemical upkeep of the living organism, it begins to break down through reaction with water into amino acids, a process known as hydrolysis. The breakdown of the proteins of a decomposing body is a spontaneous process. Protein hydrolysis is accelerated as the anaerobic bacteria of the digestive tract consume, digest, and excrete the cellular proteins of the body.
The bacterial digestion of the cellular proteins weakens the tissues of the body. As the proteins are continuously broken down to smaller components, the bacteria excrete gases and organic compounds, such as the functional-group amines putrescine (from ornithine) and cadaverine (from lysine), which carry the noxious odor of rotten flesh. Initially, the gases of putrefaction are constrained within the body cavities, but eventually diffuse through the adjacent tissues, and then into the circulatory system. Once in the blood vessels, the putrid gases infiltrate and diffuse to other parts of the body and the limbs.
The visual result of gaseous tissue-infiltration is notable bloating of the torso and limbs. The increased internal pressure of the continually rising volume of gas further stresses, weakens, and separates the tissues constraining the gas. In the course of putrefaction, the skin tissues of the body eventually rupture and release the bacterial gas. As the anaerobic bacteria continue consuming, digesting, and excreting the tissue proteins, the body's decomposition progresses to the stage of skeletonization. This continued consumption also results in the production of ethanol by the bacteria, which can make it difficult to determine the blood alcohol content (BAC) in autopsies, particularly in bodies recovered from water.
Generally, the term decomposition encompasses the biochemical processes that occur from the physical death of the person (or animal) until the skeletonization of the body. Putrefaction is one of seven stages of decomposition; as such, the term putrescible identifies all organic matter (animal and human) that is biochemically subject to putrefaction. In the matter of death by poisoning, the putrefaction of the body is chemically delayed by poisons such as antimony, arsenic, carbolic acid (phenol), nux vomica (plant), strychnine (pesticide), and zinc chloride.
Approximate timeline
The rough timeline of events during the putrefaction stage is as follows:
1–2 days: Pallor mortis, algor mortis, rigor mortis, and livor mortis are the first steps in the process of decomposition before the process of putrefaction.
2–3 days: Discoloration appears on the skin of the abdomen. The abdomen begins to swell due to gas formation.
3–4 days: The discoloration spreads and discolored veins become visible.
5–6 days: The abdomen swells noticeably and the skin blisters.
10–20 days: Black putrefaction occurs, which is when noxious odors are released from the body and the parts of the body undergo a black discoloration.
2 weeks: The abdomen is bloated; internal gas pressure nears maximum capacity.
3 weeks: Tissues have softened. Organs and cavities are bursting. The nails and hair fall off.
4 weeks: Soft tissues such as the internal organs begin to liquefy and the face becomes unrecognizable. The skin, muscles, tendons and ligaments degrade exposing the skeleton.
Order of organs' decomposition in the body:
Larynx and trachea
Infant brain
Stomach
Intestines
Spleen
Omentum and mesentery
Liver
Adult brain
Heart
Lungs
Kidneys
Bladder
Esophagus
Pancreas
Diaphragm
Blood vessels
Uterus
The rate of putrefaction is greatest in air, slower in water, and slowest in buried remains. The exact rate of putrefaction is dependent upon many factors such as weather, exposure and location. Thus, refrigeration at a morgue or funeral home can retard the process, allowing for burial in three days or so following death without embalming. The rate increases dramatically in tropical climates. The first external sign of putrefaction in a body lying in air is usually a greenish discoloration of the skin over the region of the cecum, which appears in 12–24 hours. The first internal sign is usually a greenish discoloration on the undersurface of the liver.
Factors affecting putrefaction
Various factors affect the rate of putrefaction.
Exogenous (external)
Environmental temperature: Decomposition is accelerated by high atmospheric or environmental temperature; putrefaction proceeds fastest over a warm intermediate temperature range and is further sped along by high levels of humidity. Such temperatures assist in the chemical breakdown of the tissue and promote microorganism growth. Decomposition nearly stops at temperatures that are either very low or very high.
Moisture and air exposure: Putrefaction is ordinarily slowed by the body being submerged in water, due to diminished exposure to air. Air exposure and moisture can both contribute to the introduction and growth of microorganisms, speeding degradation. In a hot and dry environment, the body can undergo a process called mummification where the body is completely dehydrated and bacterial decay is inhibited.
Clothing: Loose-fitting clothing can speed up the rate of putrefaction, as it helps to retain body heat. Tight-fitting clothing can delay the process by cutting off blood supply to tissues and eliminating nutrients for bacteria to feed on.
Manner of burial: Speedy burial can slow putrefaction. Bodies within deep graves tend to decompose more slowly due to the diminished influences of changes in temperature. The composition of graves can also be a significant contributing factor, with dense, clay-like soil tending to speed putrefaction while dry and sandy soil slows it.
Light exposure: Light can also contribute indirectly, as flies and insects prefer to lay eggs in areas of the body not exposed to light, such as the crevices formed by the eyelids and nostrils.
Endogenous (internal)
Age at time of death: Stillborn fetuses and infants putrefy slowly due to their sterility. Otherwise, however, younger people generally putrefy more quickly than older people.
Condition of the body: A body with a greater fat percentage and less lean body mass will have a faster rate of putrefaction, as fat retains more heat and it carries a larger amount of fluid in the tissues.
Cause of death: The cause of death has a direct relationship to putrefaction speed, with bodies that died from acute violence or accident generally putrefying slower than those that died from infectious diseases. Certain poisons, such as potassium cyanide or strychnine, may also delay putrefaction, while chronic alcoholism and cocaine use will speed it.
External injuries: Antemortem or postmortem injuries can speed putrefaction as injured areas can be more susceptible to invasion by bacteria.
Delayed putrefaction
Certain poisonous substances to the body can delay the process of putrefaction. They include:
Carbolic acid (Phenol)
Arsenic and antimony
Strychnine
Nux vomica (plant)
Zinc chloride, ZnCl2
Morphine
Aconitine
Embalming
Embalming is the process of preserving human remains by delaying decomposition. This is acquired through the use of embalming fluid, which is a mixture of formaldehyde, methanol, and various other solvents. The most common reasons to preserve the body are for viewing purposes at a funeral, for above-ground interment or distant transportation of the deceased, and for medical or religious practices.
Research
Body farms subject donated cadavers to various environmental conditions to study the process of human decomposition. These include the University of Tennessee's Forensic Anthropology Facility, Western Carolina University's Forensic Osteology Research Station (FOREST), Texas State University's Forensic Anthropology Research Facility (FARF), Sam Houston State University's Southeast Texas Applied Forensic Science Facility (STAFS), Southern Illinois University's Complex for Forensic Anthropology Research, and Colorado Mesa University's Forensic Investigation Research Station. The Australian Facility for Taphonomic Experimental Research, near Sydney, is the first body farm located outside of the United States. In the United Kingdom there are several facilities which, instead of using human remains or cadavers, use dead pigs to study the decomposition process. Pigs are less likely than human cadavers to carry infectious diseases and are more readily available without ethical concerns, but a human body farm is still highly sought after for further research. Each body farm is unique in its environmental make-up, giving researchers broader knowledge and allowing research into how different environmental factors, such as humidity, sun exposure, rain or snow, and altitude, can significantly affect the rate of decomposition.
Other uses
In alchemy, putrefaction is the same as fermentation, whereby a substance is allowed to rot or decompose undisturbed. In some cases, the commencement of the process is facilitated with a small sample of the desired material to act as a "seed", a technique akin to the use of a seed crystal in crystallization.
See also
Cryopreservation
Corpse decomposition
Decomposition
Forensic entomological decomposition
Maceration (bone)
Promession
Putrefying bacteria
Rancidification
Cotard delusion
References
External links
Putrefaction: Dr. Dinesh Rao's Forensic Pathology
The Rate of Decay in a Corpse
Alchemical processes
Food science
Medical aspects of death
Necrosis
Forensic pathology
Ecology
Biochemistry
Metabolism
Digestive system
Reductive amination
Reductive amination (also known as reductive alkylation) is a form of amination that involves the conversion of a carbonyl group to an amine via an intermediate imine. The carbonyl group is most commonly a ketone or an aldehyde. It is a common method to make amines and is widely used in green chemistry since it can be done catalytically in one pot under mild conditions. In biochemistry, dehydrogenase enzymes use reductive amination to produce the amino acid glutamate. Additionally, there is ongoing research on alternative synthesis mechanisms with various metal catalysts which allow the reaction to be less energy-intensive and to proceed under milder reaction conditions. Investigation into biocatalysts, such as imine reductases, has allowed for higher selectivity in the reduction of chiral amines, which is an important factor in pharmaceutical synthesis.
Reaction process
Reductive amination occurs between a carbonyl such as an aldehyde or ketone and an amine in the presence of a reducing agent. The reaction conditions are neutral or weakly acidic.
The amine first reacts with the carbonyl group to form a hemiaminal species which subsequently loses one molecule of water in a reversible manner by alkylimino-de-oxo-bisubstitution to form the imine intermediate. The equilibrium between aldehyde/ketone and imine is shifted toward imine formation by dehydration. This intermediate imine can then be isolated and reduced with a suitable reducing agent (e.g., sodium borohydride) to produce the final amine product. Intramolecular reductive amination can also occur to afford a cyclic amine product if the amine and the carbonyl are on the same molecule of the starting material.
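As an illustrative sketch (not taken from the source), the net transformation, carbonyl plus primary amine giving the reduced amine with loss of the carbonyl oxygen as water, can be expressed as a reaction SMARTS in RDKit. The hemiaminal and imine steps are not modelled, and the substrates are arbitrary examples:

```python
# Net outcome of a reductive amination sketched with RDKit (illustrative only;
# the stepwise hemiaminal/imine mechanism is not represented).
from rdkit import Chem
from rdkit.Chem import AllChem

# The carbonyl carbon (map 1) loses its double-bonded oxygen (map 2, absent from the
# product template, so it is removed) and gains a single bond to the amine nitrogen (map 3).
# The carbon's freed valence is filled with an implicit H, standing in for the added hydride.
rxn = AllChem.ReactionFromSmarts("[C:1]=[O:2].[NX3;H2:3]>>[C:1][N:3]")

ketone = Chem.MolFromSmiles("CC(=O)c1ccccc1")  # acetophenone
amine = Chem.MolFromSmiles("CN")               # methylamine

for (product,) in rxn.RunReactants((ketone, amine)):
    Chem.SanitizeMol(product)
    print(Chem.MolToSmiles(product))  # expected: the N-methyl-1-phenylethylamine skeleton
```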
There are two ways to conduct a reductive amination reaction: direct and indirect.
Direct Reductive Amination
In a direct reaction, the carbonyl and amine starting materials and the reducing agent are combined and the reductions are done sequentially. These are often one-pot reactions since the imine intermediate is not isolated before the final reduction to the product. Instead, as the reaction proceeds, the imine becomes favoured for reduction over the carbonyl starting material. The two most common methods for direct reductive amination are hydrogenation with catalytic platinum, palladium, or nickel catalysts and the use of hydride reducing agents like sodium cyanoborohydride (NaBH3CN).
Indirect Reductive Amination
Indirect reductive amination, also called a stepwise reduction, isolates the imine intermediate. In a separate step, the isolated imine intermediate is reduced to form the amine product.
Designing a reductive amination reaction
There are many considerations to be made when designing a reductive amination reaction.
Chemoselectivity issues may arise since the carbonyl group is also reducible.
The reaction between the carbonyl and the amine is an equilibrium that favours the carbonyl side unless water is removed from the system.
Reducible intermediates may appear in the reaction which can affect chemoselectivity.
The amine substrate, imine intermediate or amine product might deactivate the catalyst.
Acyclic imines have E/Z isomers. This makes it difficult to create enantiopure chiral compounds through stereoselective reductions.
To solve the last issue, asymmetric reductive amination can be used to synthesize an enantiopure product of chiral amines. In asymmetric reductive amination, a carbonyl that can be converted from achiral to chiral is used. The carbonyl undergoes condensation with an amine in the presence of H2 and a chiral catalyst to form the imine intermediate, which is then reduced to form the amine. However, this approach remains limited for the synthesis of primary amines, where the reaction is non-selective and prone to overalkylation.
Common reducing agents
Sodium Borohydride
NaBH4 reduces both imines and carbonyl groups. However, it is not very selective and can reduce other reducible functional groups present in the reaction. To ensure that this does not occur, reagents with weak electrophilic carbonyl groups, poor nucleophilic amines and sterically hindered reactive centres should not be used, as these properties do not favour the reduction of the carbonyl to form an imine and increases the chance that other functional groups will be reduced instead.
Sodium Cyanoborohydride
Sodium cyanoborohydride is soluble in hydroxylic solvents, stable in acidic solutions, and has different selectivities depending on the pH. At low pH values, it efficiently reduces aldehydes and ketones. As the pH increases, the reduction rate slows and instead, the imine intermediate becomes preferential for reduction. For this reason, NaBH3CN is an ideal reducing agent for one-pot direct reductive amination reactions that don't isolate the intermediate imine.
When used as a reducing agent, NaBH3CN can release toxic by-products like HCN and NaCN during work up.
Variations and related reactions
This reaction is related to the Eschweiler–Clarke reaction, in which amines are methylated to tertiary amines, the Leuckart–Wallach reaction, or by other amine alkylation methods such as the Mannich reaction and Petasis reaction.
A classic named reaction is the Mignonac reaction (1921), involving the reaction of a ketone with ammonia over a nickel catalyst, for example in a synthesis of 1-phenylethylamine starting from acetophenone.
Additionally, there exist many systems which catalyze reductive amination with a hydrogenation catalyst. Generally, catalysis is preferred to stoichiometric reactions because it makes the reaction more efficient and more atom-economical and produces less waste. This can be either a homogeneous or a heterogeneous catalytic system. These systems provide an alternative method which is efficient, requires fewer volatile reagents and is redox-economical. This method can also be applied to alcohols, along with aldehydes and ketones, to form the amine product. One example of a heterogeneous catalytic system is the reductive amination of alcohols using a Ni-catalyzed system.
Nickel is commonly used as a catalyst for reductive amination because of its abundance and relatively good catalytic activity. An example of a homogeneous catalytic system is the reductive amination of ketones with an iridium catalyst. Additionally, a homogeneous iridium(III) catalyst system has been shown to be effective for the reductive amination of carboxylic acids, which has historically been more difficult than that of aldehydes and ketones. Homogeneous catalysts are often favored because they are more environmentally and economically friendly compared to most heterogeneous systems.
In industry, tertiary amines such as triethylamine and diisopropylethylamine are formed directly from ketones with a gaseous mixture of ammonia and hydrogen and a suitable catalyst.
In green chemistry
Reductive amination is commonly used over other methods for introducing amines to alkyl substrates, such as SN2-type reactions with halides, since it can be done in mild conditions and has high selectivity for nitrogen-containing compounds. Reductive amination can occur sequentially in one-pot reactions, which eliminates the need for intermediate purifications and reduces waste. Some multistep synthetic pathways have been reduced to one step through one-pot reductive amination. This makes it a highly appealing method to produce amines in green chemistry.
Biochemistry
In biochemistry, dehydrogenase enzymes can catalyze the reductive amination of α-keto acids and ammonia to yield α-amino acids. Reductive amination is predominantly used for the synthesis of the amino acid glutamate starting from α-ketoglutarate, while biochemistry largely relies on transamination to introduce nitrogen in the other amino acids. The use of enzymes as a catalyst is advantageous because the enzyme active sites are often stereospecific and have the ability to selectively synthesize a certain enantiomer. This is useful in the pharmaceutical industry, particularly for drug-development, because enantiomer pairs can have different reactivities in the body. Additionally, enzyme biocatalysts are often quite selective in reactivity so they can be used in the presence of other functional groups, without the use of protecting groups. For instance a class of enzymes called imine reductases, IREDs, can be used to catalyze direct asymmetric reductive amination to form chiral amines.
In popular culture
In the critically acclaimed drama Breaking Bad, main character Walter White uses the reductive amination reaction to produce his high purity methamphetamine, relying on phenyl-2-propanone and methylamine.
See also
Forster–Decker method
Leuckart reaction
References
External links
Current methods for reductive amination
Industrial reductive amination at BASF
Organic redox reactions
SEMMA
SEMMA is an acronym that stands for Sample, Explore, Modify, Model, and Assess. It is a list of sequential steps developed by SAS Institute, one of the largest producers of statistics and business intelligence software. It guides the implementation of data mining applications. Although SEMMA is often considered to be a general data mining methodology, SAS claims that it is "rather a logical organization of the functional tool set of" one of their products, SAS Enterprise Miner, "for carrying out the core tasks of data mining".
Background
In the expanding field of data mining, there has been a call for a standard methodology or a simple list of best practices for the diversified and iterative process of data mining that users can apply to their data mining projects regardless of industry. While the Cross Industry Standard Process for Data Mining or CRISP-DM, founded by the European Strategic Program on Research in Information Technology initiative, aimed to create a neutral methodology, SAS also offered a pattern to follow in its data mining tools.
Phases of SEMMA
The phases of SEMMA and the related tasks are the following (a minimal code sketch of the workflow appears after the list):
Sample. The process starts with data sampling, e.g., selecting the data set for modeling. The data set should be large enough to contain sufficient information to retrieve, yet small enough to be used efficiently. This phase also deals with data partitioning.
Explore. This phase covers the understanding of the data by discovering anticipated and unanticipated relationships between the variables, and also abnormalities, with the help of data visualization.
Modify. The Modify phase contains methods to select, create and transform variables in preparation for data modeling.
Model. In the Model phase the focus is on applying various modeling (data mining) techniques on the prepared variables in order to create models that possibly provide the desired outcome.
Assess. The last phase is Assess. The evaluation of the modeling results shows the reliability and usefulness of the created models.
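The sketch below maps the five phases onto common open-source tooling rather than Enterprise Miner. It is a rough, hypothetical illustration: the file name, the "churn" target and the feature columns are invented for the example.

```python
# Hypothetical SEMMA-style workflow using pandas and scikit-learn (illustrative only).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sample: select a manageable data set and partition it.
df = pd.read_csv("customers.csv").sample(n=10_000, random_state=0)  # hypothetical file
train, test = train_test_split(df, test_size=0.3, random_state=0)

# Explore: look for anticipated and unanticipated relationships and abnormalities.
print(train.describe())
print(train.corr(numeric_only=True)["churn"].sort_values())  # "churn" is a hypothetical target

# Modify: select, create and transform variables in preparation for modeling.
features = ["tenure", "monthly_spend", "support_calls"]  # hypothetical columns
scaler = StandardScaler().fit(train[features])
X_train, X_test = scaler.transform(train[features]), scaler.transform(test[features])

# Model: apply a modeling technique to the prepared variables.
model = LogisticRegression().fit(X_train, train["churn"])

# Assess: evaluate the reliability and usefulness of the model.
print("test AUC:", roc_auc_score(test["churn"], model.predict_proba(X_test)[:, 1]))
```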
Criticism
SEMMA mainly focuses on the modeling tasks of data mining projects, leaving the business aspects out (unlike, e.g., CRISP-DM and its Business Understanding phase). Additionally, SEMMA is designed to help the users of the SAS Enterprise Miner software, so applying it outside Enterprise Miner may be ambiguous. However, effective sampling in the "Sample" phase already presupposes a deep understanding of the business aspects, so in practice business understanding is still required to complete SEMMA.
See also
Cross Industry Standard Process for Data Mining
References
Applied data mining
Pharmaceutical formulation
Pharmaceutical formulation, in pharmaceutics, is the process in which different chemical substances, including the active drug, are combined to produce a final medicinal product. The word formulation is often used in a way that includes dosage form.
Stages and timeline
Formulation studies involve developing a preparation of the drug which is both stable and acceptable to the patients. For orally administered drugs, this usually involves incorporating the drug into a tablet or a capsule. It is important to make the distinction that a tablet contains a variety of other potentially inert substances apart from the drug itself, and studies have to be carried out to ensure that the encapsulated drug is compatible with these other substances in a way that does not cause harm, whether direct or indirect.
Preformulation involves the characterization of a drug's physical, chemical, and mechanical properties in order to choose what other ingredients (excipients) should be used in the preparation. In dealing with protein pre-formulation, the important aspect is to understand the solution behavior of a given protein under a variety of stress conditions such as freeze/thaw, temperature, shear stress among others to identify mechanisms of degradation and therefore its mitigation.
Formulation studies then consider such factors as particle size, polymorphism, pH, and solubility, as all of these can influence bioavailability and hence the activity of a drug. The drug must be combined with inactive ingredients by a method that ensures that the quantity of drug present is consistent in each dosage unit e.g. each tablet. The dosage should have a uniform appearance, with an acceptable taste, tablet hardness, and capsule disintegration.
It is unlikely that formulation studies will be complete by the time clinical trials commence. This means that simple preparations are developed initially for use in phase I clinical trials. These typically consist of hand-filled capsules containing a small amount of the drug and a diluent. Proof of the long-term stability of these formulations is not required, as they will be used (tested) in a matter of days. Consideration has to be given to what is known as "drug loading" - the ratio of the active drug to the total contents of the dose. A low drug load may cause homogeneity problems. A high drug load may pose flow problems or require large capsules if the compound has a low bulk density.
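As a simple numerical illustration of drug loading (the figures below are invented, not taken from the source), the ratio can be computed directly:

```python
# Illustrative drug-load calculation; all quantities are hypothetical.
def drug_load(active_mg: float, total_fill_mg: float) -> float:
    """Drug load as the fraction of active drug in the total dose contents."""
    return active_mg / total_fill_mg

capsule = drug_load(active_mg=5.0, total_fill_mg=250.0)   # 2% load: blend homogeneity is the concern
tablet = drug_load(active_mg=400.0, total_fill_mg=500.0)  # 80% load: powder flow / capsule size is the concern
print(f"capsule load: {capsule:.1%}, tablet load: {tablet:.1%}")
```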
By the time phase III clinical trials are reached, the formulation of the drug should have been developed to be close to the preparation that will ultimately be used in the market. A knowledge of stability is essential by this stage, and conditions must have been developed to ensure that the drug is stable in the preparation. If the drug proves unstable, it will invalidate the results from clinical trials since it would be impossible to know what the administered dose actually was. Stability studies are carried out to test whether temperature, humidity, oxidation, or photolysis (ultraviolet light or visible light) have any effect, and the preparation is analysed to see if any degradation products have been formed.
Container closure
Formulated drugs are stored in container closure systems for extended periods of time. These include blisters, bottles, vials, ampules, syringes, and cartridges. The containers can be made from a variety of materials including glass, plastic, and metal. The drug may be stored as a solid, liquid, or gas.
It is important to check whether there are any undesired interactions between the preparation and the container. For instance, if a plastic container is used, tests are carried out to see whether any of the ingredients become adsorbed on to the plastic, and whether any plasticizer, lubricants, pigments, or stabilizers leach out of the plastic into the preparation. Even the adhesives for the container label need to be tested, to ensure they do not leach through the plastic container into the preparation.
Formulation types
The dosage form varies by the route of administration; examples include capsules, tablets, and pills.
Enteral formulations
Oral drugs are normally taken as tablets or capsules.
The drug (active substance) itself needs to be soluble in aqueous solution at a controlled rate. Such factors as particle size and crystal form can significantly affect dissolution. Fast dissolution is not always ideal. For example, slow dissolution rates can prolong the duration of action or avoid initial high plasma levels. Treatment of active ingredient by special ways such as spherical crystallization can have some advantages for drug formulation.
Tablet
A tablet is usually a compressed preparation that contains:
5-10% of the drug (active substance);
80% of fillers, disintegrants, lubricants, glidants, and binders; and
10% of compounds which ensure easy disintegration, disaggregation, and dissolution of the tablet in the stomach or the intestine.
The dissolution time can be modified for a rapid effect or for sustained release.
Special coatings can make the tablet resistant to the stomach acids such that it only disintegrates in the duodenum, jejunum and colon as a result of enzyme action or alkaline pH.
Pills can be coated with sugar, varnish, or wax to disguise the taste.
Capsule
A capsule is a gelatinous envelope enclosing the active substance. Capsules can be designed to remain intact for some hours after ingestion in order to delay absorption. They may also contain a mixture of slow- and fast-release particles to produce rapid and sustained absorption in the same dose.
Sustained release
There are a number of methods by which tablets and capsules can be modified in order to allow for sustained release of the active compound as it moves through the digestive tract. One of the most common methods is to embed the active ingredient in an insoluble porous matrix, such that the dissolving drug must make its way out of the matrix before it can be absorbed. In other sustained release formulations the matrix swells to form a gel through which the drug exits.
Another method by which sustained release is achieved is through an osmotic controlled-release oral delivery system, where the active compound is encased in a water-permeable membrane with a laser drilled hole at one end. As water passes through the membrane the drug is pushed out through the hole and into the digestive tract where it can be absorbed.
Parenteral formulations
These are also called injectable formulations and are used with intravenous, subcutaneous, intramuscular, and intra-articular administration. The drug is stored in liquid or if unstable, lyophilized form.
Many parenteral formulations are unstable at higher temperatures and require storage at refrigerated or sometimes frozen conditions. The logistics process of delivering these drugs to the patient is called the cold chain. The cold chain can interfere with delivery of drugs, especially vaccines, to communities where electricity is unpredictable or nonexistent. NGOs like the Gates Foundation are actively working to find solutions. These may include lyophilized formulations which are easier to stabilize at room temperature.
Most protein formulations are parenteral due to the fragile nature of the molecule which would be destroyed by enteric administration. Proteins have tertiary and quaternary structures that can be degraded or cause aggregation at room temperature. This can impact the safety and efficacy of the medicine.
Liquid
Liquid drugs are stored in vials, IV bags, ampoules, cartridges, and prefilled syringes.
As with solid formulations, liquid formulations combine the drug product with a variety of compounds to ensure a stable active medication following storage. These include solubilizers, stabilizers, buffers, tonicity modifiers, bulking agents, viscosity enhancers/reducers, surfactants, chelating agents, and adjuvants.
If concentrated by evaporation, the drug may be diluted before administration. For IV administration, the drug may be transferred from a vial to an IV bag and mixed with other materials.
Lyophilized
Lyophilized drugs are stored in vials, cartridges, dual chamber syringes, and prefilled mixing systems.
Lyophilization, or freeze drying, is a process that removes water from a liquid drug creating a solid powder, or cake. The lyophilized product is stable for extended periods of time and could allow storage at higher temperatures. In protein formulations, stabilizers are added to replace the water and preserve the structure of the molecule.
Before administration, a lyophilized drug is reconstituted as a liquid before being administered. This is done by combining a liquid diluent with the freeze-dried powder, mixing, then injecting. Reconstitution usually requires a reconstitution and delivery system to ensure that the drug is correctly mixed and administered.
Topical formulations
Cutaneous
Options for topical formulation include:
Cream – Emulsion of oil and water in approximately equal proportions. Penetrates stratum corneum outer layers of skin well.
Ointment – Combines oil (80%) and water (20%). Effective barrier against moisture loss.
Gel – Liquefies upon contact with the skin.
Paste – Combines three agents – oil, water, and powder; an ointment in which a powder is suspended.
Powder – A finely subdivided solid substance.
See also
Pesticide formulation
Drug development
Drug delivery
Drug design
Drug discovery
Galenic formulation
References
External links
Comparison Table of Pharmaceutical Dosage Forms
FDA database for Inactive Ingredient Search for Approved Drug Products
Medicinal chemistry
Crystallinity
Crystallinity refers to the degree of structural order in a solid. In a crystal, the atoms or molecules are arranged in a regular, periodic manner. The degree of crystallinity has a large influence on hardness, density, transparency and diffusion. In an ideal gas, the relative positions of the atoms or molecules are completely random. Amorphous materials, such as liquids and glasses, represent an intermediate case, having order over short distances (a few atomic or molecular spacings) but not over longer distances.
Many materials, such as glass-ceramics and some polymers, can be prepared in such a way as to produce a mixture of crystalline and amorphous regions. In such cases, crystallinity is usually specified as a percentage of the volume of the material that is crystalline. Even within materials that are completely crystalline, however, the degree of structural perfection can vary. For instance, most metallic alloys are crystalline, but they usually comprise many independent crystalline regions (grains or crystallites) in various orientations separated by grain boundaries; furthermore, they contain other crystallographic defects (notably dislocations) that reduce the degree of structural perfection. The most highly perfect crystals are silicon boules produced for semiconductor electronics; these are large single crystals (so they have no grain boundaries), are nearly free of dislocations, and have precisely controlled concentrations of defect atoms.
Crystallinity can be measured using x-ray diffraction, but calorimetric techniques are also commonly used.
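For semicrystalline polymers, a common calorimetric estimate takes the ratio of the measured melting enthalpy (corrected for any cold crystallization) to the literature enthalpy of a hypothetical 100% crystalline sample. The sketch below assumes that convention; the numbers are illustrative only:

```python
# Percent crystallinity from DSC enthalpies, using the common enthalpy-ratio convention.
def percent_crystallinity(dH_melt, dH_cold_cryst, dH_100, weight_fraction=1.0):
    """
    dH_melt:         measured melting enthalpy (J/g)
    dH_cold_cryst:   cold-crystallization enthalpy to subtract, if any (J/g)
    dH_100:          melting enthalpy of the 100% crystalline reference material (J/g)
    weight_fraction: mass fraction of the polymer in the sample (for blends or composites)
    """
    return 100.0 * (dH_melt - dH_cold_cryst) / (dH_100 * weight_fraction)

# Hypothetical example: 40 J/g melting peak, 5 J/g cold crystallization, 140 J/g reference.
print(percent_crystallinity(40.0, 5.0, 140.0))  # -> 25.0 (percent crystalline)
```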
Rock crystallinity
Geologists describe four qualitative levels of crystallinity:
holocrystalline rocks are completely crystalline;
hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix;
hypohyaline rocks are partially glassy;
holohyaline rocks (such as obsidian) are completely glassy.
References
Crystals
Physical quantities
Phases of matter
Chemically inert
In chemistry, the term chemically inert is used to describe a substance that is not chemically reactive. From a thermodynamic perspective, a substance is inert, or nonlabile, if it is thermodynamically unstable (positive standard Gibbs free energy of formation) yet decomposes at a slow or negligible rate.
Most of the noble gases, which appear in the last column of the periodic table, are classified as inert (or unreactive). These elements are stable in their naturally occurring form (gaseous form) and they are called inert gases.
Noble gas
The noble gases (helium, neon, argon, krypton, xenon and radon) were previously known as 'inert gases' because of their perceived lack of participation in any chemical reactions. The reason for this is that their outermost electron shells (valence shells) are completely filled, so that they have little tendency to gain or lose electrons. They are said to acquire a noble gas configuration, or a full electron configuration.
It is now known that most of these gases in fact do react to form chemical compounds, such as xenon tetrafluoride. Hence, they have been renamed to 'noble gases', as the only two of these we know truly to be inert are helium and neon. However, a large amount of energy is required to drive such reactions, usually in the form of heat, pressure, or radiation, often assisted by catalysts. The resulting compounds often need to be kept in moisture-free conditions at low temperatures to prevent rapid decomposition back into their elements.
Inert gas
The term inert may also be applied in a relative sense. For example, molecular nitrogen is an inert gas under ordinary conditions, existing as diatomic molecules, . The presence of a strong triple covalent bond in the molecule renders it unreactive under normal circumstances. Nevertheless, nitrogen gas does react with the alkali metal lithium to form compound lithium nitride (Li3N), even under ordinary conditions. Under high pressures and temperatures and with the right catalysts, nitrogen becomes more reactive; the Haber process uses such conditions to produce ammonia from atmospheric nitrogen.
Main uses
Inert atmospheres consisting of gases such as argon, nitrogen, or helium are commonly used in chemical reaction chambers and in storage containers for oxygen- or water-sensitive substances, to prevent unwanted reactions of these substances with oxygen or water.
Argon is widely used in fluorescence tubes and low energy light bulbs. Argon gas helps to protect the metal filament inside the bulb from reacting with oxygen and corroding the filament under high temperature.
Neon is used in making advertising signs. Neon gas in a vacuum tube glows bright red in colour when electricity is passed through. Different coloured neon lights can also be made by using other gases.
Helium gas is mainly used to fill party balloons and other lighter-than-air balloons. Balloons filled with helium float upwards because helium gas is less dense than air.
See also
Noble metal
References
Chemical nomenclature
Chemical properties
Gases
Industrial gases
Noble gases
Crystal polymorphism
In crystallography, polymorphism is the phenomenon where a compound or element can crystallize into more than one crystal structure.
The preceding definition has evolved over many years and is still under discussion today. Discussion of the defining characteristics of polymorphism involves distinguishing among types of transitions and structural changes occurring in polymorphism versus those in other phenomena.
Overview
Phase transitions (phase changes) that help describe polymorphism include polymorphic transitions as well as melting and vaporization transitions. According to IUPAC, a polymorphic transition is "A reversible transition of a solid crystalline phase at a certain temperature and pressure (the inversion point) to another phase of the same chemical composition with a different crystal structure." Additionally, Walter McCrone described the phases in polymorphic matter as "different in crystal structure but identical in the liquid or vapor states." McCrone also defines a polymorph as “a crystalline phase of a given compound resulting from the possibility of at least two different arrangements of the molecules of that compound in the solid state.” These defining facts imply that polymorphism involves changes in physical properties but cannot include chemical change. Some early definitions do not make this distinction.
Eliminating chemical change from those changes permissible during a polymorphic transition delineates polymorphism. For example, isomerization can often lead to polymorphic transitions. However, tautomerism (dynamic isomerization) leads to chemical change, not polymorphism. Likewise, allotropy of elements and polymorphism have been linked historically. However, allotropes of an element are not always polymorphs. A common example is the allotropes of carbon, which include graphite, diamond, and lonsdaleite. While all three forms are allotropes, graphite is not a polymorph of diamond and lonsdaleite. Isomerization and allotropy are only two of the phenomena linked to polymorphism. For additional information about identifying polymorphism and distinguishing it from other phenomena, see the review by Brog et al.
It is also useful to note that materials with two polymorphic phases can be called dimorphic, those with three polymorphic phases, trimorphic, etc.
Polymorphism is of practical relevance to pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives.
Detection
Experimental methods
Early records of the discovery of polymorphism credit Eilhard Mitscherlich and Jöns Jacob Berzelius for their studies of phosphates and arsenates in the early 1800s. The studies involved measuring the interfacial angles of the crystals to show that chemically identical salts could have two different forms. Mitscherlich originally called this discovery isomorphism. The measurement of crystal density was also used by Wilhelm Ostwald and expressed in Ostwald's Ratio.
The development of the microscope enhanced observations of polymorphism and aided Moritz Ludwig Frankenheim’s studies in the 1830s. He was able to demonstrate methods to induce crystal phase changes and formally summarized his findings on the nature of polymorphism. Soon after, the more sophisticated polarized light microscope came into use, and it provided better visualization of crystalline phases allowing crystallographers to distinguish between different polymorphs. The hot stage was invented and fitted to a polarized light microscope by Otto Lehmann in about 1877. This invention helped crystallographers determine melting points and observe polymorphic transitions.
While the use of hot stage microscopes continued throughout the 1900s, thermal methods also became commonly used to observe the heat flow that occurs during phase changes such as melting and polymorphic transitions. One such technique, differential scanning calorimetry (DSC), continues to be used for determining the enthalpy of polymorphic transitions.
In the 20th century, X-ray crystallography became commonly used for studying the crystal structure of polymorphs. Both single crystal x-ray diffraction and powder x-ray diffraction techniques are used to obtain measurements of the crystal unit cell. Each polymorph of a compound has a unique crystal structure. As a result, different polymorphs will produce different x-ray diffraction patterns.
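The link between unit cell and pattern comes from Bragg's law, nλ = 2d sin θ: each polymorph has its own set of lattice d-spacings and therefore its own peak positions. The sketch below converts d-spacings to 2θ angles for Cu Kα radiation; the two d-spacing lists are made-up placeholders for two hypothetical polymorphs:

```python
# Convert lattice d-spacings (angstroms) to powder-diffraction 2-theta angles (degrees)
# via Bragg's law n*lambda = 2*d*sin(theta), assuming Cu K-alpha radiation.
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha1

def two_theta(d_spacing: float, wavelength: float = WAVELENGTH, n: int = 1) -> float:
    """Diffraction angle 2-theta (degrees) for a given d-spacing."""
    return 2.0 * math.degrees(math.asin(n * wavelength / (2.0 * d_spacing)))

# Two hypothetical polymorphs with slightly different unit cells give shifted peak lists.
form_A = [5.20, 3.80, 2.95]  # d-spacings in angstroms (placeholders)
form_B = [5.45, 3.65, 3.05]
print([round(two_theta(d), 2) for d in form_A])
print([round(two_theta(d), 2) for d in form_B])
```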
Vibrational spectroscopic methods came into use for investigating polymorphism in the second half of the twentieth century and have become more commonly used as optical, computer, and semiconductor technologies improved. These techniques include infrared (IR) spectroscopy, terahertz spectroscopy and Raman spectroscopy. Mid-frequency IR and Raman spectroscopies are sensitive to changes in hydrogen bonding patterns. Such changes can subsequently be related to structural differences. Additionally, terahertz and low frequency Raman spectroscopies reveal vibrational modes resulting from intermolecular interactions in crystalline solids. Again, these vibrational modes are related to crystal structure and can be used to uncover differences in 3-dimensional structure among polymorphs.
Computational methods
Computational chemistry may be used in combination with vibrational spectroscopy techniques to understand the origins of vibrations within crystals. The combination of techniques provides detailed information about crystal structures, similar to what can be achieved with x-ray crystallography. In addition to using computational methods for enhancing the understanding of spectroscopic data, the latest development in identifying polymorphism in crystals is the field of crystal structure prediction. This technique uses computational chemistry to model the formation of crystals and predict the existence of specific polymorphs of a compound before they have been observed experimentally by scientists.
Examples
Many compounds exhibit polymorphism. It has been claimed that "every compound has different polymorphic forms, and that, in general, the number of forms known for a given compound is proportional to the time and money spent in research on that compound."
Organic compounds
Benzamide
The phenomenon was discovered in 1832 by Friedrich Wöhler and Justus von Liebig. They observed that the silky needles of freshly crystallized benzamide slowly converted to rhombic crystals. Present-day analysis identifies three polymorphs of benzamide: the least stable one, formed by flash cooling, is the orthorhombic form II. This is followed by the monoclinic form III (observed by Wöhler and Liebig). The most stable form is the monoclinic form I. The hydrogen bonding mechanisms are the same for all three phases; however, they differ strongly in their pi-pi interactions.
Maleic acid
In 2006 a new polymorph of maleic acid was discovered, 124 years after the first crystal form was studied. Maleic acid is manufactured on an industrial scale in the chemical industry, and its salts are found in medicines. The new crystal type is produced when a co-crystal of caffeine and maleic acid (2:1) is dissolved in chloroform and the solvent is allowed to evaporate slowly. Whereas form I has monoclinic space group P21/c, the new form has space group Pc. Both polymorphs consist of sheets of molecules connected through hydrogen bonding of the carboxylic acid groups: in form I, the sheets alternate with respect to the net dipole moment, while in form II, the sheets are oriented in the same direction.
1,3,5-Trinitrobenzene
After 125 years of study, 1,3,5-trinitrobenzene yielded a second polymorph. The usual form has the space group Pbca, but in 2004, a second polymorph was obtained in the space group Pca21 when the compound was crystallised in the presence of an additive, trisindane. This experiment shows that additives can induce the appearance of polymorphic forms.
Other organic compounds
Acridine has been obtained as eight polymorphs and aripiprazole has nine. The record for the largest number of well-characterised polymorphs is held by a compound known as ROY. Glycine crystallizes as both monoclinic and hexagonal crystals. Polymorphism in organic compounds is often the result of conformational polymorphism.
Inorganic matter
Elements
Elements including metals may exhibit polymorphism. Allotropy is the term used when describing elements having different forms and is used commonly in the field of metallurgy. Some (but not all) allotropes are also polymorphs. For example, iron has three allotropes that are also polymorphs. Alpha-iron, which exists at room temperature, has a bcc structure. Above about 910 °C gamma-iron exists, which has an fcc structure. Above about 1390 °C delta-iron exists, with a bcc structure.
Another metallic example is tin, which has two allotropes that are also polymorphs. At room temperature, beta-tin exists as a white tetragonal form. When cooled below 13.2 °C, alpha-tin forms, which is gray in color and has a cubic diamond structure.
A classic example of a nonmetal that exhibits polymorphism is carbon. Carbon has many allotropes, including graphite, diamond, and lonsdaleite. However, these are not all polymorphs of each other. Graphite is not a polymorph of diamond and lonsdaleite, since it is chemically distinct, having sp2 hybridized bonding. Diamond and lonsdaleite are chemically identical, both having sp3 hybridized bonding, and they differ only in their crystal structures, making them polymorphs. Additionally, graphite has two polymorphs, a hexagonal (alpha) form and a rhombohedral (beta) form.
Binary metal oxides
Polymorphism in binary metal oxides has attracted much attention because these materials are of significant economic value. One set of famous examples have the composition SiO2, which form many polymorphs. Important ones include: α-quartz, β-quartz, tridymite, cristobalite, moganite, coesite, and stishovite.
Other inorganic compounds
A classical example of polymorphism is the pair of minerals calcite, which is rhombohedral, and aragonite, which is orthorhombic. Both are forms of calcium carbonate. A third form of calcium carbonate is vaterite, which is hexagonal and relatively unstable.
β-HgS precipitates as a black solid when Hg(II) salts are treated with H2S. With gentle heating of the slurry, the black polymorph converts to the red form.
Factors affecting polymorphism
According to Ostwald's rule, usually less stable polymorphs crystallize before the stable form. The concept hinges on the idea that unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. The founding example of fibrous versus rhombic benzamide illustrates this. Another example is provided by two polymorphs of titanium dioxide. Nevertheless, there are known systems, such as metacetamol, where only a narrow range of cooling rates favors obtaining the metastable form II.
Polymorphs have disparate stabilities. Some convert rapidly at room (or any) temperature. Most polymorphs of organic molecules only differ by a few kJ/mol in lattice energy. Approximately 50% of known polymorph pairs differ by less than 2 kJ/mol and stability differences of more than 10 kJ/mol are rare. Polymorph stability may change upon temperature or pressure. Importantly, structural and thermodynamic stability are different. Thermodynamic stability may be studied using experimental or computational methods.
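To put such small energy differences in perspective, one can treat a lattice-energy difference as a rough free-energy difference and apply the standard relation ΔG = −RT ln K. This back-of-the-envelope illustration is added here and is not a calculation from the source:

```python
# Rough equilibrium population of the less stable polymorph for a given energy difference,
# treating the lattice-energy difference as an approximate free-energy difference.
import math

R = 8.314  # J/(mol*K)

def fraction_less_stable(delta_G_kJ_per_mol: float, T: float = 298.15) -> float:
    """Equilibrium fraction of the less stable form, from delta_G = -R*T*ln(K)."""
    K = math.exp(-delta_G_kJ_per_mol * 1000.0 / (R * T))  # ratio less-stable : more-stable
    return K / (1.0 + K)

for dG in (1.0, 2.0, 10.0):
    print(f"dG = {dG:>4.1f} kJ/mol -> {fraction_less_stable(dG):.1%} of the less stable form")
```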
Polymorphism is affected by the details of crystallisation. The solvent in all respects affects the nature of the polymorph, including concentration, other components of the solvent, i.e., species that inhibiting or promote certain growth patterns. A decisive factor is often the temperature of the solvent from which crystallisation is carried out.
Metastable polymorphs are not always reproducibly obtained, leading to cases of "disappearing polymorphs", with usually negative implications on law and business.
In pharmaceuticals
Legal aspects
Drugs receive regulatory approval and are granted patents for only a single polymorph.
In a classic patent dispute, GlaxoSmithKline defended its patent for the Type II polymorph of the active ingredient in Zantac against competitors, while that for the Type I polymorph had already expired.
Polymorphism in drugs can also have direct medical implications since dissolution rates depend on the polymorph. Polymorphic purity of drug samples can be checked using techniques such as powder X-ray diffraction, IR/Raman spectroscopy, and, in some cases, differences in their optical properties.
Case studies
The known cases up to 2015 are discussed in a review article by Bučar, Lancaster, and Bernstein.
Dibenzoxazepines
Multidisciplinary studies involving experimental and computational approaches were applied to pharmaceutical molecules to facilitate the comparison of their solid-state structures. Specifically, this study focused on exploring how changes in molecular structure affect the molecular conformation, the packing motifs, the interactions in the resultant crystal lattices and the extent of solid-state diversity of these compounds. The results highlight the value of crystal structure prediction (CSP) studies and PIXEL calculations in interpreting the observed solid-state behaviour, quantifying the intermolecular interactions in the packed structures and identifying the key stabilising interactions.
An experimental screen yielded 4 physical forms for clozapine, as compared to 60 distinct physical forms for olanzapine. The experimental screening results for clozapine are consistent with its crystal energy landscape, which confirms that no alternative packing arrangement is thermodynamically competitive with the experimentally obtained structure. In the case of olanzapine, the crystal energy landscape highlights that the extensive experimental screening has probably not found all possible polymorphs, and further solid-form diversity could be targeted with a better understanding of the role of kinetics in its crystallisation. CSP studies were able to offer an explanation for the absence of the centrosymmetric dimer in anhydrous clozapine. PIXEL calculations on all the crystal structures of clozapine revealed that, as for olanzapine, the intermolecular interaction energy in each structure is dominated by the dispersion term (Ed).
Despite the molecular structure similarity between amoxapine and loxapine (molecules in group 2), the crystal packing observed in the polymorphs of loxapine differs significantly from that of amoxapine. A combined experimental and computational study demonstrated that the methyl group in loxapine has a significant influence in increasing the range of accessible solid forms and favouring various alternative packing arrangements. CSP studies have again helped in explaining the observed solid-state diversity of loxapine and amoxapine. PIXEL calculations showed that, in the absence of strong H-bonds, weak H-bonds such as C–H...O and C–H...N and dispersion interactions play a key role in stabilising the crystal lattices of both molecules. The efficient crystal packing of amoxapine seems to contribute to its monomorphic behaviour, as compared with the less efficient packing of loxapine molecules in both of its polymorphs.
The combination of experimental and computational approaches has provided a deeper understanding of the factors influencing the solid-state structure and diversity of these compounds. Hirshfeld surfaces computed with CrystalExplorer represent another way of exploring packing modes and intermolecular interactions in molecular crystals. The influence of changes in small substituents on shape and electron distribution can also be investigated by mapping the total electron density onto the electrostatic potential for molecules in the gas phase. This allows straightforward visualisation and comparison of overall shape and of electron-rich and electron-deficient regions within molecules, and the shape of these molecules can be further investigated to study its influence on their diverse solid-state behaviour.
Posaconazole
The original formulations of posaconazole on the market, licensed as Noxafil, were formulated using form I of posaconazole. The number of known polymorphs of posaconazole then grew rapidly, prompting much research into the crystallography of posaconazole. A methanol solvate and a 1,4-dioxane co-crystal were added to the Cambridge Structural Database (CSD).
Ritonavir
The antiviral drug ritonavir exists as two polymorphs, which differ greatly in efficacy. Such issues were solved by reformulating the medicine into gelcaps and tablets, rather than the original capsules.
Aspirin
For a long time, Form I was the only proven polymorph of aspirin, though the existence of another polymorph had been debated since the 1960s, and one report from 1981 noted that when aspirin is crystallized in the presence of aspirin anhydride, its diffractogram shows weak additional peaks. Though dismissed at the time as mere impurity, these peaks were, in retrospect, due to Form II aspirin.
Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile.
In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes.
Pure Form II aspirin could be prepared by seeding the batch with aspirin anhydrate at 15% by weight.
Paracetamol
Paracetamol powder has poor compression properties, which poses difficulty in making tablets. A second polymorph was found with more suitable compressive properties.
Cortisone acetate
Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form.
Carbamazepine
Carbamazepine, estrogen, paroxetine, and chloramphenicol also show polymorphism.
Pyrazinamide
Pyrazinamide has at least 4 polymorphs. All of them transform to the stable α form at room temperature upon storage or mechanical treatment. Recent studies confirm that the α form is thermodynamically stable at room temperature.
Polytypism
Polytypes are a special case of polymorphs, where multiple close-packed crystal structures differ in one dimension only. Polytypes have identical close-packed planes, but differ in the stacking sequence in the third dimension perpendicular to these planes. Silicon carbide (SiC) has more than 170 known polytypes, although most are rare. All the polytypes of SiC have virtually the same density and Gibbs free energy. The most common SiC polytypes are shown in Table 1.
Table 1: Some polytypes of SiC.
A second group of materials with different polytypes are the transition metal dichalcogenides, layered materials such as molybdenum disulfide (MoS2). For these materials the polytypes have more distinct effects on material properties, e.g. for MoS2, the 1T polytype is metallic in character, while the 2H form is more semiconducting.
Another example is tantalum disulfide, where the common 1T as well as 2H polytypes occur, but also more complex 'mixed coordination' types such as 4Hb and 6R, where the trigonal prismatic and the octahedral geometry layers are mixed. Here, the 1T polytype exhibits a charge density wave, with distinct influence on the conductivity as a function of temperature, while the 2H polytype exhibits superconductivity.
ZnS and CdI2 are also polytypical. It has been suggested that this type of polymorphism is due to kinetics where screw dislocations rapidly reproduce partly disordered sequences in a periodic fashion.
Theory
In terms of thermodynamics, two types of polymorphic behaviour are recognized. For a monotropic system, plots of the free energies of the various polymorphs against temperature do not cross before all polymorphs melt. As a result, any transition from one polymorph to another below the melting point will be irreversible. For an enantiotropic system, a plot of the free energy against temperature shows a crossing point before the various melting points. It may also be possible to convert interchangeably between the two polymorphs by heating or cooling, or through physical contact with a lower energy polymorph.
A simple model of polymorphism is to model the Gibbs free energy of a ball-shaped crystal of radius r as G(r) = ar^2 - br^3. Here, the first term is the surface energy, and the second term is the volume energy. Both parameters a and b are positive. The function rises to a maximum before dropping, crossing zero at r = a/b. In order to crystallize, a ball of crystal must overcome the energetic barrier to reach the r > a/b part of the energy landscape.
Now, suppose there are two kinds of crystals, with different free energies G1(r) and G2(r). If the two curves have the same general shape, they intersect at some radius, and the system then has three regimes:
At small sizes, crystals tend to dissolve and the material remains in the amorphous phase.
At intermediate sizes, crystals tend to grow as form 1.
At large sizes, crystals tend to grow as form 2.
If the crystal is grown slowly, it could be kinetically stuck in form 1.
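As a rough illustration of this model, the sketch below evaluates G(r) = ar^2 - br^3, locates the nucleation barrier at r = 2a/(3b), and confirms the zero crossing at r = a/b. The particular parameter values are illustrative assumptions, not values taken from the text.

# Sketch of the ball-shaped crystal free-energy model G(r) = a*r**2 - b*r**3.
# The parameter values below are illustrative assumptions, not data from the text.

def gibbs(r, a, b):
    """Free energy of a spherical crystal of radius r: surface term minus volume term."""
    return a * r**2 - b * r**3

a, b = 3.0, 1.0               # both positive, as the model requires
r_barrier = 2 * a / (3 * b)   # radius at the maximum (the nucleation barrier)
r_zero = a / b                # radius at which G crosses zero

print("barrier radius:", r_barrier, "barrier height:", gibbs(r_barrier, a, b))
print("zero crossing :", r_zero, "G:", gibbs(r_zero, a, b))   # ~0 by construction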
See also
Allotropy
Isomorphism (crystallography)
Dimorphism (Wiktionary)
Polyamorphism
References
External links
"Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website
"SiC and Polytpism"
Mineralogy
Gemology
Crystallography
Steady state

In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so:

∂p/∂t = 0
In discrete time, it means that the first difference of each property is zero and remains so:

p_t - p_(t-1) = 0
The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. See for example Linear difference equation#Conversion to homogeneous form for the derivation of the steady state.
In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time.
Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability.
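As a minimal illustration of these definitions, the sketch below iterates a first-order linear difference equation and watches its first difference; the coefficients and tolerance are made-up values chosen only to show the transient dying out and the state settling asymptotically at the fixed point b/(1 - a).

# Illustrative sketch: a first-order system x[t+1] = a*x[t] + b approaching steady state.
# The coefficients and tolerance are assumptions chosen for demonstration only.

a, b = 0.8, 2.0               # |a| < 1, so the system is stable
x = 0.0                       # initial (transient) condition
steady_state = b / (1 - a)    # analytical fixed point, here 10.0

for t in range(1, 101):
    x_next = a * x + b
    first_difference = x_next - x
    if abs(first_difference) < 1e-9:        # first difference ~ 0: steady state reached
        print(f"steady state ~ {x_next:.6f} reached at t = {t}")
        break
    x = x_next

print("analytical steady state:", steady_state)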
In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible. In other words, dynamic equilibrium is just one manifestation of a steady state.
Applications
Economics
A steady state economy is an economy (especially a national economy but possibly that of a city, a region, or the world) of stable size featuring a stable population and stable consumption that remain at or below carrying capacity. In the economic growth model of Robert Solow and Trevor Swan, the steady state occurs when gross investment in physical capital equals depreciation and the economy reaches economic equilibrium, which may occur during a period of growth.
Electrical engineering
In electrical engineering and electronic engineering, steady state is an equilibrium condition of a circuit or network that occurs as the effects of transients are no longer important. Steady state is also used as an approximation in systems with on-going transient signals, such as audio systems, to allow simplified analysis of first order performance.
Sinusoidal Steady State Analysis is a method for analyzing alternating current circuits using the same techniques as for solving DC circuits.
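For example, the steady-state current in a series RC circuit driven by a sinusoidal source can be computed with complex impedances, treated exactly like resistances in a DC circuit. The component values, frequency and source amplitude below are arbitrary assumptions chosen only for illustration.

# Sketch of sinusoidal steady-state (phasor) analysis for a series RC circuit.
# Component values, frequency and source amplitude are illustrative assumptions.
import cmath, math

R = 1_000.0   # ohms
C = 1e-6      # farads
f = 50.0      # hertz
V = 10.0      # volts (amplitude of the sinusoidal source, taken as phase 0)

omega = 2 * math.pi * f
Z_C = 1 / (1j * omega * C)    # impedance of the capacitor
Z_total = R + Z_C             # series combination, handled like resistances in a DC circuit
I = V / Z_total               # phasor current

print(f"steady-state current amplitude: {abs(I) * 1000:.3f} mA")
print(f"phase lead relative to source : {math.degrees(cmath.phase(I)):.1f} degrees")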
The ability of an electrical machine or power system to regain its original/previous state is called Steady State Stability.
The stability of a system refers to the ability of a system to return to its steady state when subjected to a disturbance. As mentioned before, power is generated by synchronous generators that operate in synchronism with the rest of the system. A generator is synchronized with a bus when both of them have the same frequency, voltage and phase sequence. Power system stability can thus be defined as the ability of the power system to return to steady state without losing synchronism. Power system stability is usually categorized into Steady State, Transient and Dynamic Stability.
Steady State Stability studies are restricted to small and gradual changes in the system operating conditions. They concentrate mainly on keeping bus voltages close to their nominal values, on ensuring that phase angles between buses are not too large, and on checking for overloading of power equipment and transmission lines. These checks are usually done using power flow studies.
Transient Stability involves the study of the power system following a major disturbance. Following a large disturbance in the synchronous alternator the machine power (load) angle changes due to sudden acceleration of the rotor shaft. The objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance.
The ability of a power system to maintain stability under continuous small disturbances is investigated under the name of Dynamic Stability (also known as small-signal stability). These small disturbances occur due to random fluctuations in loads and generation levels. In an interconnected power system, these random variations can lead to catastrophic failure, as they may force the rotor angle to increase steadily.
Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. Periodic steady-state solution is also a prerequisite for small signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process.
In some cases, it is useful to consider constant envelope vibration—vibration that never settles down to motionlessness, but continues to move at constant amplitude—a kind of steady-state condition.
Chemical engineering
In chemistry, thermodynamics, and other areas of chemical engineering, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). One of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug: after a certain time the water flows in and out at the same rate, so the water level (the state variable being volume) stabilizes and the system is at steady state. Of course the volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the flow rate of water in. Since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain.
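A crude numerical sketch of the bathtub example is shown below. The inflow rate, the outlet law (outflow proportional to the square root of the water level, a common Torricelli-type assumption) and the time step are illustrative choices, not values from the text.

# Sketch: tank with constant inflow and level-dependent outflow reaching steady state.
# Inflow, outlet coefficient and time step are assumptions for illustration only.

inflow = 2.0      # volume units per second
k_out = 0.5       # outlet coefficient: outflow = k_out * sqrt(volume)
dt = 0.1          # integration time step, seconds
volume = 0.0      # tank starts empty (transient state)

for step in range(200_000):
    outflow = k_out * volume**0.5
    volume += (inflow - outflow) * dt
    if abs(inflow - outflow) < 1e-9:          # net accumulation ~ 0: steady state
        break

print("steady-state volume:", round(volume, 3))   # analytically (inflow / k_out)**2 = 16.0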
A steady state flow process requires conditions at all points in an apparatus remain constant as time changes. There must be no accumulation of mass or energy over the time period of interest. The same mass flow rate will remain constant in the flow path through each element of the system. Thermodynamic properties may vary from point to point, but will remain unchanged at any given point.
Mechanical engineering
When a periodic force is applied to a mechanical system, it will typically reach a steady state after going through some transient behavior. This is often observed in vibrating systems, such as a clock pendulum, but can happen with any type of stable or semi-stable dynamic system. The length of the transient state will depend on the initial conditions of the system. Given certain initial conditions, a system may be in steady state from the beginning.
Biochemistry
In biochemistry, the study of biochemical pathways is an important topic. Such pathways will often display steady-state behavior where the chemical species are unchanging, but there is a continuous dissipation of flux through the pathway. Many, but not all, biochemical pathways evolve to stable, steady states. As a result, the steady state represents an important reference state to study. This is also related to the concept of homeostasis, however, in biochemistry, a steady state can be stable or unstable such as in the case of sustained oscillations or bistable behavior.
Physiology
Homeostasis (from Greek ὅμοιος, hómoios, "similar" and στάσις, stásis, "standing still") is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition. Typically used to refer to a living organism, the concept came from that of milieu interieur that was created by Claude Bernard and published in 1865. Multiple dynamic equilibrium adjustment and regulation mechanisms make homeostasis possible.
Fiber optics
In fiber optics, "steady state" is a synonym for equilibrium mode distribution.
Pharmacokinetics
In Pharmacokinetics, steady state is a dynamic equilibrium in the body where drug concentrations consistently stay within a therapeutic limit over time.
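As an illustration (not a dosing recommendation), the sketch below uses a simple one-compartment model with repeated dosing; the dose increment, dosing interval and elimination half-life are hypothetical values chosen only to show the peak concentrations plateauing at a steady-state level after a few half-lives.

# Sketch of drug accumulation to steady state under repeated dosing
# (one-compartment model; all numbers are hypothetical illustrations).
import math

half_life_h = 6.0        # hypothetical elimination half-life, hours
tau_h = 12.0             # dosing interval, hours
dose_increment = 1.0     # concentration added per dose, arbitrary units

k = math.log(2) / half_life_h    # first-order elimination rate constant
conc = 0.0

for dose_number in range(1, 11):
    conc += dose_increment                 # dose given
    print(f"dose {dose_number:2d}: peak ~ {conc:.3f}")
    conc *= math.exp(-k * tau_h)           # decay until the next dose

# The peaks approach dose_increment / (1 - exp(-k*tau)), the steady-state peak level.
print("predicted steady-state peak:", round(dose_increment / (1 - math.exp(-k * tau_h)), 3))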
See also
Attractor
Carrying capacity
Control theory
Dynamical system
Ecological footprint
Economic growth
Engine test stand
Equilibrium point
List of types of equilibrium
Evolutionary economics
Growth curve
Herman Daly
Homeostasis
Limit cycle
Limits to Growth
Population dynamics
Simulation
State function
Steady state economy
Steady State theory
Systems theory
Thermodynamic equilibrium
Transient state
References
Systems theory
Control theory
S-Adenosyl methionine

S-Adenosyl methionine (SAM), also known under the commercial names of SAMe, SAM-e, or AdoMet, is a common cosubstrate involved in methyl group transfers, transsulfuration, and aminopropylation. Although these anabolic reactions occur throughout the body, most SAM is produced and consumed in the liver. More than 40 methyl transfers from SAM are known, to various substrates such as nucleic acids, proteins, lipids and secondary metabolites. It is made from adenosine triphosphate (ATP) and methionine by methionine adenosyltransferase. SAM was first discovered by Giulio Cantoni in 1952.
In bacteria, SAM is bound by the SAM riboswitch, which regulates genes involved in methionine or cysteine biosynthesis. In eukaryotic cells, SAM serves as a regulator of a variety of processes including DNA, tRNA, and rRNA methylation; immune response; amino acid metabolism; transsulfuration; and more. In plants, SAM is crucial to the biosynthesis of ethylene, an important plant hormone and signaling molecule.
Structure
S-Adenosyl methionine consists of the adenosyl group attached to the sulfur of methionine, providing it with a positive charge. It is synthesized from ATP and methionine by S-Adenosylmethionine synthetase enzyme through the following reaction:
ATP + L-methionine + H2O ⇌ phosphate + diphosphate + S-adenosyl-L-methionine
The sulfonium functional group present in S-adenosyl methionine is the center of its peculiar reactivity. Depending on the enzyme, S-adenosyl methionine can be converted into one of three products:
adenosyl radical, which converts to deoxyadenosine (AdO): classic rSAM reaction, also cogenerates methionine
S-adenosyl homocysteine, releasing methyl radical
methylthioadenosine (SMT), homoalanine radical
Biochemistry
SAM cycle
The reactions that produce, consume, and regenerate SAM are called the SAM cycle. In the first step of this cycle, the SAM-dependent methylases (EC 2.1.1) that use SAM as a substrate produce S-adenosyl homocysteine as a product. S-Adenosyl homocysteine is a strong negative regulator of nearly all SAM-dependent methylases despite their biological diversity. This is hydrolysed to homocysteine and adenosine by S-adenosylhomocysteine hydrolase EC 3.3.1.1 and the homocysteine recycled back to methionine through transfer of a methyl group from 5-methyltetrahydrofolate, by one of the two classes of methionine synthases (i.e. cobalamin-dependent (EC 2.1.1.13) or cobalamin-independent (EC 2.1.1.14)). This methionine can then be converted back to SAM, completing the cycle. In the rate-limiting step of the SAM cycle, MTHFR (methylenetetrahydrofolate reductase) irreversibly reduces 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate.
Radical SAM enzymes
A large number of enzymes cleave SAM reductively to produce radicals: the 5′-deoxyadenosyl 5′-radical, methyl radical, and others. These enzymes are called radical SAM enzymes. They all feature an iron-sulfur cluster at their active sites. Most enzymes with this capability share a region of sequence homology that includes the motif CxxxCxxC or a close variant. This sequence provides three cysteinyl thiolate ligands that bind to three of the four metals in the 4Fe-4S cluster. The fourth Fe binds the SAM.
The radical intermediates generated by these enzymes perform a wide variety of unusual chemical reactions. Examples of radical SAM enzymes include spore photoproduct lyase, activases of pyruvate formate lyase and anaerobic sulfatases, lysine 2,3-aminomutase, and various enzymes of cofactor biosynthesis, peptide modification, metalloprotein cluster formation, tRNA modification, lipid metabolism, etc. Some radical SAM enzymes use a second SAM as a methyl donor. Radical SAM enzymes are much more abundant in anaerobic bacteria than in aerobic organisms. They can be found in all domains of life and are largely unexplored. A recent bioinformatics study concluded that this family of enzymes includes at least 114,000 sequences including 65 unique reactions.
Deficiencies in radical SAM enzymes have been associated with a variety of diseases including congenital heart disease, amyotrophic lateral sclerosis, and increased viral susceptibility.
Polyamine biosynthesis
Another major role of SAM is in polyamine biosynthesis. Here, SAM is decarboxylated by adenosylmethionine decarboxylase (EC 4.1.1.50) to form S-adenosylmethioninamine. This compound then donates its n-propylamine group in the biosynthesis of polyamines such as spermidine and spermine from putrescine.
SAM is required for cellular growth and repair. It is also involved in the biosynthesis of several hormones and neurotransmitters that affect mood, such as epinephrine. Methyltransferases are also responsible for the addition of methyl groups to the 2′ hydroxyls of the first and second nucleotides next to the 5′ cap in messenger RNA.
Therapeutic uses
Osteoarthritis pain
As of 2012, the evidence was inconclusive as to whether SAM can mitigate the pain of osteoarthritis; the clinical trials that had been conducted were too small to generalize from.
Liver disease
The SAM cycle has been closely tied to the liver since 1947 because people with alcoholic cirrhosis of the liver would accumulate large amounts of methionine in their blood. While multiple lines of evidence from laboratory tests on cells and animal models suggest that SAM might be useful to treat various liver diseases, as of 2012 SAM had not been studied in any large randomized placebo-controlled clinical trials that would allow an assessment of its efficacy and safety.
Depression
A 2016 Cochrane review concluded that for major depressive disorder, "Given the absence of high quality evidence and the inability to draw firm conclusions based on that evidence, the use of SAMe for the treatment of depression in adults should be investigated further."
A 2020 systematic review found that it performed significantly better than placebo, and had similar outcomes to other commonly used antidepressants (imipramine and escitalopram).
Anti-cancer treatment
SAM has recently been shown to play a role in epigenetic regulation. DNA methylation is a key regulator in epigenetic modification during mammalian cell development and differentiation. In mouse models, excess levels of SAM have been implicated in erroneous methylation patterns associated with diabetic neuropathy. SAM serves as the methyl donor in cytosine methylation, which is a key epigenetic regulatory process. Because of this impact on epigenetic regulation, SAM has been tested as an anti-cancer treatment. In many cancers, proliferation is dependent on having low levels of DNA methylation. In vitro addition in such cancers has been shown to remethylate oncogene promoter sequences and decrease the production of proto-oncogenes. In cancers such as colorectal cancer, aberrant global hypermethylation can inhibit promoter regions of tumor-suppressing genes. Contrary to the former information, colorectal cancers (CRCs) are characterized by global hypomethylation and promoter-specific DNA methylation.
Pharmacokinetics
Oral SAM achieves peak plasma concentrations three to five hours after ingestion of an enteric-coated tablet (400–1000 mg). The half-life is about 100 minutes.
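As a rough back-of-the-envelope worked example, and assuming simple first-order elimination, the stated half-life of about 100 minutes implies that the fraction of the peak concentration remaining t minutes after the peak is roughly (1/2)^(t/100); five hours (300 minutes) after the peak, only about (1/2)^3 ≈ 12.5% would remain.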
Availability in different countries
In Canada, the UK, and the United States, SAM is sold as a dietary supplement under the marketing name SAM-e (also spelled SAME or SAMe). It was introduced in the US in 1999, after the Dietary Supplement Health and Education Act was passed in 1994.
It was introduced as a prescription drug in Italy in 1979, in Spain in 1985, and in Germany in 1989. As of 2012, it was sold as a prescription drug in Russia, India, China, Italy, Germany, Vietnam, and Mexico.
Adverse effects
Gastrointestinal disorder, dyspepsia and anxiety can occur with SAM consumption. Long-term effects are unknown. SAM is a weak DNA-alkylating agent.
Another reported side effect of SAM is insomnia; therefore, the supplement is often taken in the morning. Other reports of mild side effects include lack of appetite, constipation, nausea, dry mouth, sweating, and anxiety/nervousness, but in placebo-controlled studies, these side effects occur at about the same incidence in the placebo groups.
Interactions and contraindications
Taking SAM at the same time as some drugs may increase the risk of serotonin syndrome, a potentially dangerous condition caused by having too much serotonin. These drugs include, but are certainly not limited to, dextromethorphan (Robitussin), meperidine (Demerol), pentazocine (Talwin), and tramadol (Ultram).
SAM can also interact with many antidepressant medications — including tryptophan and the herbal medicine Hypericum perforatum (St. John's wort) — increasing the potential for serotonin syndrome or other side effects, and may reduce the effectiveness of levodopa for Parkinson's disease. SAM can increase the risk of manic episodes in people who have bipolar disorder.
Toxicity
A 2022 study concluded that SAMe could be toxic. Jean-Michel Fustin of Manchester University said that the researchers found that excess SAMe breaks down into adenine and methylthioadenosine in the body, both producing the paradoxical effect of inhibiting methylation. This was found in laboratory mice, causing harm to health, and in in vitro tests on human cells.
See also
DNA methyltransferase
SAM-I riboswitch
SAM-II riboswitch
SAM-III riboswitch
SAM-IV riboswitch
SAM-V riboswitch
SAM-VI riboswitch
List of investigational antidepressants
References
External links
Alpha-Amino acids
Coenzymes
Dietary supplements
Biology of bipolar disorder
Psychopharmacology
Sulfonium compounds
Heterologous

The term heterologous has several meanings in biology.
Gene expression
In cell biology and protein biochemistry, heterologous expression means that a protein is experimentally put into a cell that does not normally make (i.e., express) that protein. Heterologous (meaning 'derived from a different organism') refers to the fact that often the transferred protein was initially cloned from or derived from a different cell type or a different species from the recipient.
Typically the protein itself is not transferred, but instead the 'correctly edited' genetic material coding for the protein (the complementary DNA or cDNA) is added to the recipient cell. The genetic material that is transferred typically must be within a format that encourages the recipient cell to express the cDNA as a protein (i.e., it is put in an expression vector).
Methods for transferring foreign genetic material into a recipient cell include transfection and transduction. The choice of recipient cell type is often based on an experimental need to examine the protein's function in detail, and the most prevalent recipients, known as heterologous expression systems, are chosen usually because they are easy to transfer DNA into or because they allow for a simpler assessment of the protein's function.
Stem cells
In stem cell biology, a heterologous transplant refers to cells from a mixed population of donor cells. This is in contrast to an autologous transplant where the cells are derived from the same individual or an allogenic transplant where the donor cells are HLA matched to the recipient. A heterologous source of therapeutic cells will have a much greater availability than either autologous or allogenic cellular therapies.
Structural biology
In structural biology, a heterologous association is a binding mode between the protomers of a protein structure. In a heterologous association, each protomer contributes a different set of residues to the binding interface. In contrast, two protomers form an isologous association when they contribute the same set of residues to the protomer-protomer interface.
See also
Autologous
Homologous
Homology (biology)
Heterogeneous
References
Protein structure
Pleiotropy

Pleiotropy (from Greek πλείων (pleíōn), 'more', and τρόπος (trópos), 'way') occurs when one gene influences two or more seemingly unrelated phenotypic traits. Such a gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, due to the gene coding for a product used by a myriad of cells or different targets that have the same signaling function.
Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and gender).
An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for enzyme phenylalanine hydroxylase, that affects multiple systems, such as the nervous and integumentary system.
Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
History
Pleiotropic traits had been previously recognized in the scientific community but had not been experimented on until Gregor Mendel's 1866 pea plant experiment. Mendel recognized that certain pea plant traits (seed coat color, flower color, and axial spots) seemed to be inherited together; however, their correlation to a single gene has never been proven. The term "pleiotropie" was first coined by Ludwig Plate in his Festschrift, which was published in 1910. He originally defined pleiotropy as occurring when "several characteristics are dependent upon ... [inheritance]; these characteristics will then always appear together and may thus appear correlated". This definition is still used today.
After Plate's definition, Hans Gruneberg was the first to study the mechanisms of pleiotropy. In 1938 Gruneberg published an article dividing pleiotropy into two distinct types: "genuine" and "spurious" pleiotropy. "Genuine" pleiotropy is when two distinct primary products arise from one locus. "Spurious" pleiotropy, on the other hand, is either when one primary product is utilized in different ways or when one primary product initiates a cascade of events with different phenotypic consequences. Gruneberg came to these distinctions after experimenting on rats with skeletal mutations. He recognized that "spurious" pleiotropy was present in the mutation, while "genuine" pleiotropy was not, thus partially invalidating his own original theory. Through subsequent research, it has been established that Gruneberg's definition of "spurious" pleiotropy is what we now identify simply as "pleiotropy".
In 1941 American geneticists George Beadle and Edward Tatum further invalidated Gruneberg's definition of "genuine" pleiotropy, advocating instead for the "one gene-one enzyme" hypothesis that was originally introduced by French biologist Lucien Cuénot in 1903. This hypothesis shifted future research regarding pleiotropy towards how a single gene can produce various phenotypes.
In the mid-1950s Richard Goldschmidt and Ernst Hadorn, through separate individual research, reinforced the faultiness of "genuine" pleiotropy. A few years later, Hadorn partitioned pleiotropy into a "mosaic" model (which states that one locus directly affects two phenotypic traits) and a "relational" model (which is analogous to "spurious" pleiotropy). These terms are no longer in use but have contributed to the current understanding of pleiotropy.
By accepting the one gene-one enzyme hypothesis, scientists instead focused on how uncoupled phenotypic traits can be affected by genetic recombination and mutations, applying it to populations and evolution. This view of pleiotropy, "universal pleiotropy", defined as locus mutations being capable of affecting essentially all traits, was first implied by Ronald Fisher's Geometric Model in 1930. This mathematical model illustrates how evolutionary fitness depends on the independence of phenotypic variation from random changes (that is, mutations). It theorizes that an increasing phenotypic independence corresponds to a decrease in the likelihood that a given mutation will result in an increase in fitness. Expanding on Fisher's work, Sewall Wright provided more evidence in his 1968 book Evolution and the Genetics of Populations: Genetic and Biometric Foundations by using molecular genetics to support the idea of "universal pleiotropy". The concepts of these various studies on evolution have seeded numerous other research projects relating to individual fitness.
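The intuition behind Fisher's geometric model can be illustrated with a small simulation. The sketch below (with arbitrary choices of mutation size, distance to the optimum and sample size, none of which come from the text) estimates the fraction of random mutations that bring a phenotype closer to its optimum, and shows that this fraction shrinks as the number of independent traits grows.

# Sketch of Fisher's geometric model: fraction of random mutations that are
# "beneficial" (move the phenotype closer to the optimum) vs. trait dimensionality.
# Mutation size, starting distance and sample size are illustrative assumptions.
import random, math

def beneficial_fraction(n_traits, mutation_size=0.3, distance=1.0, trials=20_000):
    better = 0
    for _ in range(trials):
        # Random mutation direction: an isotropic vector of fixed length in n dimensions.
        direction = [random.gauss(0.0, 1.0) for _ in range(n_traits)]
        norm = math.sqrt(sum(d * d for d in direction))
        step = [mutation_size * d / norm for d in direction]
        # Phenotype starts at (distance, 0, ..., 0); the optimum is at the origin.
        new_point = [distance + step[0]] + step[1:]
        if math.sqrt(sum(x * x for x in new_point)) < distance:
            better += 1
    return better / trials

for n in (1, 2, 5, 10, 50):
    print(f"{n:3d} traits: ~{beneficial_fraction(n):.3f} of random mutations are beneficial")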
In 1957 evolutionary biologist George C. Williams theorized that antagonistic effects will be exhibited during an organism's life cycle if it is closely linked and pleiotropic. Natural selection favors genes that are more beneficial prior to reproduction than after (leading to an increase in reproductive success). Knowing this, Williams argued that if only close linkage were present, then beneficial traits would occur both before and after reproduction due to natural selection. This, however, is not observed in nature, and thus antagonistic pleiotropy contributes to the slow deterioration with age (senescence).
Mechanism
Pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. The underlying mechanism is genes that code for a product that is either used by various cells or has a cascade-like signaling function that affects various targets.
Polygenic traits
Most genetic traits are polygenic in nature: controlled by many genetic variants, each of small effect. These genetic variants can reside in protein coding or non-coding regions of the genome. In this context pleiotropy refers to the influence that a specific genetic variant, e.g., a single nucleotide polymorphism or SNP, has on two or more distinct traits.
Genome-wide association studies (GWAS) and machine learning analysis of large genomic datasets have led to the construction of SNP based polygenic predictors for human traits such as height, bone density, and many disease risks. Similar predictors exist for plant and animal species and are used in agricultural breeding.
One measure of pleiotropy is the fraction of genetic variance that is common between two distinct complex human traits: e.g., height vs bone density, breast cancer vs heart attack risk, or diabetes vs hypothyroidism risk. This has been calculated for hundreds of pairs of traits, with results shown in the Table. In most cases examined the genomic regions controlling each trait are largely disjoint, with only modest overlap.
Thus, at least for complex human traits so far examined, pleiotropy is limited in extent.
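One crude way to quantify this kind of overlap, sketched below with simulated data (the per-SNP effect sizes are randomly generated and purely hypothetical, not real GWAS results), is to correlate the effect sizes estimated for two traits across the same set of SNPs: a correlation near zero indicates largely disjoint genetic architectures, while a large magnitude suggests substantial pleiotropy.

# Sketch: correlating per-SNP effect sizes of two traits as a crude pleiotropy measure.
# All effect sizes here are simulated, hypothetical numbers - not real GWAS data.
import random

random.seed(0)
n_snps = 5_000
shared_fraction = 0.1                 # assume 10% of SNPs affect both traits

effects_trait_a, effects_trait_b = [], []
for i in range(n_snps):
    shared = random.gauss(0.0, 1.0) if i < shared_fraction * n_snps else 0.0
    effects_trait_a.append(shared + random.gauss(0.0, 1.0))   # trait-specific noise
    effects_trait_b.append(shared + random.gauss(0.0, 1.0))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print("effect-size correlation:", round(pearson(effects_trait_a, effects_trait_b), 3))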
Models for the origin
One basic model of pleiotropy's origin describes a single gene locus to the expression of a certain trait. The locus affects the expressed trait only through changing the expression of other loci. Over time, that locus would affect two traits by interacting with a second locus. Directional selection for both traits during the same time period would increase the positive correlation between the traits, while selection on only one trait would decrease the positive correlation between the two traits. Eventually, traits that underwent directional selection simultaneously were linked by a single gene, resulting in pleiotropy.
The "pleiotropy-barrier" model proposes a logistic growth pattern for the increase of pleiotropy over time. This model differentiates between the levels of pleiotropy in evolutionarily younger and older genes subjected to natural selection. It suggests a higher potential for phenotypic innovation in evolutionarily newer genes due to their lower levels of pleiotropy.
Other more complex models compensate for some of the basic model's oversights, such as multiple traits or assumptions about how the loci affect the traits. They also propose the idea that pleiotropy increases the phenotypic variation of both traits since a single mutation on a gene would have twice the effect.
Evolution
Pleiotropy can have an effect on the evolutionary rate of genes and allele frequencies. Traditionally, models of pleiotropy have predicted that the evolutionary rate of genes is related negatively to pleiotropy: as the number of traits of an organism increases, the evolutionary rates of genes in the organism's population decrease. This relationship was not clearly found in empirical studies for a long time. However, a study based on human disease genes revealed evidence of a lower evolutionary rate in genes with higher pleiotropy.
In mating, for many animals the signals and receptors of sexual communication may have evolved simultaneously as the expression of a single gene, instead of the result of selection on two independent genes, one that affects the signaling trait and one that affects the receptor trait. In such a case, pleiotropy would facilitate mating and survival. However, pleiotropy can act negatively as well. A study on seed beetles found that intralocus sexual conflict arises when selection for certain alleles of a gene that are beneficial for one sex causes expression of potentially harmful traits by the same gene in the other sex, especially if the gene is located on an autosomal chromosome.
Pleiotropic genes act as an arbitrating force in speciation. William R. Rice and Ellen E. Hostert (1993) conclude that the observed prezygotic isolation in their studies is a product of pleiotropy's balancing role in indirect selection. By imitating the traits of all-infertile hybridized species, they noticed that the fertilization of eggs was prevented in all eight of their separate studies, a likely effect of pleiotropic genes on speciation. Likewise, pleiotropic gene's stabilizing selection allows for the allele frequency to be altered.
Studies on fungal evolutionary genomics have shown pleiotropic traits that simultaneously affect adaptation and reproductive isolation, converting adaptations directly to speciation. A particularly telling case of this effect is host specificity in pathogenic ascomycetes and specifically, in venturia, the fungus responsible for apple scab. These parasitic fungi each adapts to a host, and are only able to mate within a shared host after obtaining resources. Since a single toxin gene or virulence allele can grant the ability to colonize the host, adaptation and reproductive isolation are instantly facilitated, and in turn, pleiotropically causes adaptive speciation. The studies on fungal evolutionary genomics will further elucidate the earliest stages of divergence as a result of gene flow, and provide insight into pleiotropically induced adaptive divergence in other eukaryotes.
Antagonistic pleiotropy
Sometimes, a pleiotropic gene may be both harmful and beneficial to an organism, which is referred to as antagonistic pleiotropy. This may occur when the trait is beneficial for the organism's early life, but not its late life. Such "trade-offs" are possible since natural selection affects traits expressed earlier in life, when most organisms are most fertile, more than traits expressed later in life.
This idea is central to the antagonistic pleiotropy hypothesis, which was first developed by G.C. Williams in 1957. Williams suggested that some genes responsible for increased fitness in the younger, fertile organism contribute to decreased fitness later in life, which may give an evolutionary explanation for senescence. An example is the p53 gene, which suppresses cancer but also suppresses stem cells, which replenish worn-out tissue.
Unfortunately, the process of antagonistic pleiotropy may result in an altered evolutionary path with delayed adaptation, in addition to effectively cutting the overall benefit of any alleles by roughly half. However, antagonistic pleiotropy also lends greater evolutionary "staying power" to genes controlling beneficial traits, since an organism with a mutation to those genes would have a decreased chance of successfully reproducing, as multiple traits would be affected, potentially for the worse.
Sickle cell anemia is a classic example of the mixed benefit given by the staying power of pleiotropic genes, as the mutation to Hb-S provides the fitness benefit of malaria resistance to heterozygotes as sickle cell trait, while homozygotes have significantly lowered life expectancy—what is known as "heterozygote advantage". Since both of these states are linked to the same mutated gene, large populations today are susceptible to sickle cell despite it being a fitness-impairing genetic disorder.
Examples
Albinism
Albinism is the mutation of the TYR gene, also termed tyrosinase. This mutation causes the most common form of albinism. The mutation alters the production of melanin, thereby affecting melanin-related and other dependent traits throughout the organism. Melanin is a substance made by the body that is used to absorb light and provides coloration to the skin. Indications of albinism are the absence of color in an organism's eyes, hair, and skin, due to the lack of melanin. Some forms of albinism are also known to have symptoms that manifest themselves through rapid-eye movement, light sensitivity, and strabismus.
Autism and schizophrenia
Pleiotropy in genes has been linked between certain psychiatric disorders as well. Deletion in the 22q11.2 region of chromosome 22 has been associated with schizophrenia and autism. Schizophrenia and autism are linked to the same gene deletion but manifest very differently from each other. The resulting phenotype depends on the stage of life at which the individual develops the disorder. Childhood manifestation of the gene deletion is typically associated with autism, while adolescent and later expression of the gene deletion often manifests in schizophrenia or other psychotic disorders. Though the disorders are linked by genetics, there is no increased risk found for adult schizophrenia in patients who experienced autism in childhood.
A 2013 study also genetically linked five psychiatric disorders, including schizophrenia and autism. The link was a single nucleotide polymorphism of two genes involved in calcium channel signaling with neurons. One of these genes, CACNA1C, has been found to influence cognition. It has been associated with autism, as well as linked in studies to schizophrenia and bipolar disorder. These particular studies show clustering of these diseases within patients themselves or families. The estimated heritability of schizophrenia is 70% to 90%, therefore the pleiotropy of genes is crucial since it causes an increased risk for certain psychotic disorders and can aid psychiatric diagnosis.
Phenylketonuria (PKU)
A common example of pleiotropy is the human disease phenylketonuria (PKU). This disease causes mental retardation and reduced hair and skin pigmentation, and can be caused by any of a large number of mutations in the single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, which converts the amino acid phenylalanine to tyrosine. Depending on the mutation involved, this conversion is reduced or ceases entirely. Unconverted phenylalanine builds up in the bloodstream and can lead to levels that are toxic to the developing nervous system of newborn and infant children. The most dangerous form of this is called classic PKU, which is common in infants. The baby seems normal at first but actually incurs permanent intellectual disability. This can cause symptoms such as mental retardation, abnormal gait and posture, and delayed growth. Because tyrosine is used by the body to make melanin (a component of the pigment found in the hair and skin), failure to convert normal levels of phenylalanine to tyrosine can lead to fair hair and skin.
The frequency of this disease varies greatly. Specifically, in the United States, PKU is found at a rate of nearly 1 in 10,000 births. Due to newborn screening, doctors are able to detect PKU in a baby sooner. This allows them to start treatment early, preventing the baby from suffering from the severe effects of PKU. PKU is caused by a mutation in the PAH gene, whose role is to instruct the body on how to make phenylalanine hydroxylase. Phenylalanine hydroxylase is what converts the phenylalanine, taken in through diet, into other things that the body can use. The mutation often decreases the effectiveness or rate at which the hydroxylase breaks down the phenylalanine. This is what causes the phenylalanine to build up in the body.
Sickle cell anemia
Sickle cell anemia is a genetic disease that causes deformed red blood cells with a rigid, crescent shape instead of the normal flexible, round shape. It is caused by a change in one nucleotide, a point mutation in the HBB gene. The HBB gene encodes information to make the beta-globin subunit of hemoglobin, which is the protein red blood cells use to carry oxygen throughout the body. Sickle cell anemia occurs when the HBB gene mutation causes both beta-globin subunits of hemoglobin to change into hemoglobinS (HbS).
Sickle cell anemia is a pleiotropic disease because the expression of a single mutated HBB gene produces numerous consequences throughout the body. The mutated hemoglobin forms polymers and clumps together causing the deoxygenated sickle red blood cells to assume the disfigured sickle shape. As a result, the cells are inflexible and cannot easily flow through blood vessels, increasing the risk of blood clots and possibly depriving vital organs of oxygen. Some complications associated with sickle cell anemia include pain, damaged organs, strokes, high blood pressure, and loss of vision. Sickle red blood cells also have a shortened lifespan and die prematurely.
Marfan syndrome
Marfan syndrome (MFS) is an autosomal dominant disorder which affects 1 in 5–10,000 people. MFS arises from a mutation in the FBN1 gene, which encodes for the glycoprotein fibrillin-1, a major constituent of extracellular microfibrils which form connective tissues. Over 1,000 different mutations in FBN1 have been found to result in abnormal function of fibrillin, which consequently relates to connective tissues elongating progressively and weakening. Because these fibers are found in tissues throughout the body, mutations in this gene can have a widespread effect on certain systems, including the skeletal, cardiovascular, and nervous system, as well as the eyes and lungs.
Without medical intervention, prognosis of Marfan syndrome can range from moderate to life-threatening, with 90% of known causes of death in diagnosed patients relating to cardiovascular complications and congestive cardiac failure. Other characteristics of MFS include an increased arm span and decreased upper to lower body ratio.
"Mini-muscle" allele
A gene recently discovered in laboratory house mice, termed "mini-muscle", causes, when mutated, a 50% reduction in hindlimb muscle mass as its primary effect (the phenotypic effect by which it was originally identified). In addition to smaller hindlimb muscle mass, the mutant mice exhibit lower heart rates during physical activity and higher endurance. Mini-muscle mice also exhibit larger kidneys and livers. All of these morphological deviations influence the behavior and metabolism of the mouse. For example, mice with the mini-muscle mutation were observed to have a higher per-gram aerobic capacity. The mini-muscle allele shows a Mendelian recessive behavior. The mutation is a single nucleotide polymorphism (SNP) in an intron of the myosin heavy polypeptide 4 gene.
DNA repair proteins
DNA repair pathways that repair damage to cellular DNA use many different proteins. These proteins often have other functions in addition to DNA repair. In humans, defects in some of these multifunctional proteins can cause widely differing clinical phenotypes. As an example, mutations in the XPB gene that encodes the largest subunit of the basal Transcription factor II H have several pleiotropic effects. XPB mutations are known to be deficient in nucleotide excision repair of DNA and in the quite separate process of gene transcription. In humans, XPB mutations can give rise to the cancer-prone disorder xeroderma pigmentosum or the noncancer-prone multisystem disorder trichothiodystrophy. Another example in humans is the ERCC6 gene, which encodes a protein that mediates DNA repair, transcription, and other cellular processes throughout the body. Mutations in ERCC6 are associated with disorders of the eye (retinal dystrophy), heart (cardiac arrhythmias), and immune system (lymphocyte immunodeficiency).
Chickens
Chickens exhibit various traits affected by pleiotropic genes. Some chickens exhibit frizzle feather trait, where their feathers all curl outward and upward rather than lying flat against the body. Frizzle feather was found to stem from a deletion in the genomic region coding for α-Keratin. This gene seems to pleiotropically lead to other abnormalities like increased metabolism, higher food consumption, accelerated heart rate, and delayed sexual maturity.
Domesticated chickens underwent a rapid selection process that led to unrelated phenotypes having high correlations, suggesting pleiotropic, or at least close linkage, effects between comb mass and physiological structures related to reproductive abilities. Both males and females with larger combs have higher bone density and strength, which allows females to deposit more calcium into eggshells. This linkage is further evidenced by the fact that two of the genes, HAO1 and BMP2, affecting medullary bone (the part of the bone that transfers calcium into developing eggshells) are located at the same locus as the gene affecting comb mass. HAO1 and BMP2 also display pleiotropic effects with commonly desired domestic chicken behavior; those chickens who express higher levels of these two genes in bone tissue produce more eggs and display less egg incubation behavior.
See also
cis-regulatory element
Enhancer (genetics)
Epistasis
Genetic correlation
Metabolic network
Metabolic supermice
Polygene
References
External links
Pleiotropy is 100 years old
Evolutionary developmental biology
Genetics concepts
Food

Food is any substance consumed by an organism for nutritional support. Food is usually of plant, animal, or fungal origin and contains essential nutrients such as carbohydrates, fats, proteins, vitamins, or minerals. The substance is ingested by an organism and assimilated by the organism's cells to provide energy, maintain life, or stimulate growth. Different species of animals have different feeding behaviours that satisfy the needs of their metabolisms and have evolved to fill a specific ecological niche within specific geographical contexts.
Omnivorous humans are highly adaptable and have adapted to obtain food in many different ecosystems. Humans generally use cooking to prepare food for consumption. The majority of the food energy required is supplied by the industrial food industry, which produces food through intensive agriculture and distributes it through complex food processing and food distribution systems. This system of conventional agriculture relies heavily on fossil fuels, which means that the food and agricultural systems are one of the major contributors to climate change, accounting for as much as 37% of total greenhouse gas emissions.
The food system has significant impacts on a wide range of other social and political issues, including sustainability, biological diversity, economics, population growth, water supply, and food security. Food safety and security are monitored by international agencies like the International Association for Food Protection, the World Resources Institute, the World Food Programme, the Food and Agriculture Organization, and the International Food Information Council.
Definition and classification
Food is any substance consumed to provide nutritional support and energy to an organism. It can be raw, processed, or formulated and is consumed orally by animals for growth, health, or pleasure. Food is mainly composed of water, lipids, proteins, and carbohydrates. Minerals (e.g., salts) and organic substances (e.g., vitamins) can also be found in food. Plants, algae, and some microorganisms use photosynthesis to make some of their own nutrients. Water is found in many foods and has been defined as a food by itself. Water and fiber have low energy densities, or calories, while fat is the most energy-dense component. Some inorganic (non-food) elements are also essential for plant and animal functioning.
Human food can be classified in various ways, either by related content or by how it is processed. The number and composition of food groups can vary. Most systems include four basic groups that describe their origin and relative nutritional function: Vegetables and Fruit, Cereals and Bread, Dairy, and Meat. Studies that look into diet quality group food into whole grains/cereals, refined grains/cereals, vegetables, fruits, nuts, legumes, eggs, dairy products, fish, red meat, processed meat, and sugar-sweetened beverages. The Food and Agriculture Organization and World Health Organization use a system with nineteen food classifications: cereals, roots, pulses and nuts, milk, eggs, fish and shellfish, meat, insects, vegetables, fruits, fats and oils, sweets and sugars, spices and condiments, beverages, foods for nutritional uses, food additives, composite dishes and savoury snacks.
Food sources
In a given ecosystem, food forms a web of interlocking chains with primary producers at the bottom and apex predators at the top. Other aspects of the web include detritivores (that eat detritus) and decomposers (that break down dead organisms). Primary producers include algae, plants, bacteria and protists that acquire their energy from sunlight. Primary consumers are the herbivores that consume the plants, and secondary consumers are the carnivores that consume those herbivores. Some organisms, including most mammals and birds, have a diet consisting of both animals and plants, and they are considered omnivores. The chain ends with the apex predators, the animals that have no known predators in their ecosystem. Humans are considered apex predators.
Humans are omnivores, finding sustenance in vegetables, fruits, cooked meat, milk, eggs, mushrooms and seaweed. Cereal grain is a staple food that provides more food energy worldwide than any other type of crop. Corn (maize), wheat, and rice account for 87% of all grain production worldwide. Just over half of the world's crops are used to feed humans (55 percent), with 36 percent grown as animal feed and 9 percent for biofuels. Fungi and bacteria are also used in the preparation of fermented foods like bread, wine, cheese and yogurt.
Photosynthesis
During photosynthesis, energy from the sun is absorbed and used to transform water and carbon dioxide in the air or soil into oxygen and glucose. The oxygen is then released, and the glucose stored as an energy reserve. Photosynthetic plants, algae and certain bacteria often represent the lowest point of the food chains, making photosynthesis the primary source of energy and food for nearly all life on earth.
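In its simplest overall form, this process can be summarized by the familiar balanced equation for the synthesis of one molecule of glucose:

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2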
Plants also absorb important nutrients and minerals from the air, natural waters, and soil. Carbon, oxygen and hydrogen are absorbed from the air or water and are the basic nutrients needed for plant survival. The three main nutrients absorbed from the soil for plant growth are nitrogen, phosphorus and potassium, with other important nutrients including calcium, sulfur, magnesium, iron, boron, chlorine, manganese, zinc, copper, molybdenum and nickel.
Microorganisms
Bacteria and other microorganisms also form the lower rungs of the food chain. They obtain their energy from photosynthesis or by breaking down dead organisms, waste or chemical compounds. Some form symbiotic relationships with other organisms to obtain their nutrients. Bacteria provide a source of food for protozoa, who in turn provide a source of food for other organisms such as small invertebrates. Other organisms that feed on bacteria include nematodes, fan worms, shellfish and a species of snail.
In the marine environment, plankton (which includes bacteria, archaea, algae, protozoa and microscopic fungi) provide a crucial source of food to many small and large aquatic organisms.
Without bacteria, life would scarcely exist, because bacteria convert atmospheric nitrogen into nutritious ammonia. Ammonia is the precursor to proteins, nucleic acids, and most vitamins. Since the advent of the industrial process for nitrogen fixation, the Haber-Bosch process, the majority of ammonia in the world is human-made.
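The overall industrial fixation reaction referred to here can be written as the balanced equation:

N2 + 3 H2 → 2 NH3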
Plants
Plants as a food source are divided into seeds, fruits, vegetables, legumes, grains and nuts. Where plants fall within these categories can vary, with botanically described fruits such as the tomato, squash, pepper and eggplant or seeds like peas commonly considered vegetables. Food is a fruit if the part eaten is derived from the reproductive tissue, so seeds, nuts and grains are technically fruit. From a culinary perspective, fruits are generally considered the remains of botanically described fruits after grains, nuts, seeds and fruits used as vegetables are removed. Grains can be defined as seeds that humans eat or harvest, with cereal grains (oats, wheat, rice, corn, barley, rye, sorghum and millet) belonging to the Poaceae (grass) family and pulses coming from the Fabaceae (legume) family. Whole grains are foods that contain all the elements of the original seed (bran, germ, and endosperm). Nuts are dry fruits, distinguishable by their woody shell.
Fleshy fruits (distinguishable from dry fruits like grain, seeds and nuts) can be further classified as stone fruits (cherries and peaches), pome fruits (apples, pears), berries (blackberry, strawberry), citrus (oranges, lemon), melons (watermelon, cantaloupe), Mediterranean fruits (grapes, fig), tropical fruits (banana, pineapple). Vegetables refer to any other part of the plant that can be eaten, including roots, stems, leaves, flowers, bark or the entire plant itself. These include root vegetables (potatoes and carrots), bulbs (onion family), flowers (cauliflower and broccoli), leaf vegetables (spinach and lettuce) and stem vegetables (celery and asparagus).
The carbohydrate, protein and lipid content of plants is highly variable. Carbohydrates are mainly in the form of starch, fructose, glucose and other sugars. Most vitamins are found in plant sources, with the exceptions of vitamin D and vitamin B12. Mineral content can also vary widely. Fruit can consist of up to 90% water, contain high levels of simple sugars that contribute to their sweet taste, and have a high vitamin C content. Compared to fleshy fruit (excepting bananas), vegetables are high in starch, potassium, dietary fiber, folate and vitamins and low in fat and calories. Grains are more starch-based, and nuts have a high protein, fibre, vitamin E and B content. Seeds are a good source of food for animals because they are abundant and contain fibre and healthful fats, such as omega-3 fats. Complicated chemical interactions can enhance or depress the bioavailability of certain nutrients; phytates, for example, can prevent the release of some sugars and vitamins.
Animals that only eat plants are called herbivores: those that mostly eat fruits are known as frugivores, leaf and shoot eaters are folivores (pandas), and wood eaters are termed xylophages (termites). Frugivores include a diverse range of species from annelids to elephants, chimpanzees and many birds. About 182 fish species consume seeds or fruit. Animals (domesticated and wild) use the many types of grasses that have adapted to different locations as their main source of nutrients.
Humans eat thousands of plant species; there may be as many as 75,000 edible species of angiosperms, of which perhaps 7,000 are often eaten. Plants can be processed into breads, pasta, cereals, juices and jams, or raw ingredients such as sugar, herbs, spices and oils can be extracted. Oilseeds are pressed to produce rich oils, such as sunflower, flaxseed, rapeseed (including canola oil) and sesame.
Many plants and animals have coevolved in such a way that the fruit is a good source of nutrition to the animal, which then excretes the seeds some distance away, allowing greater dispersal. Even seed predation can be mutually beneficial, as some seeds can survive the digestion process. Insects are major eaters of seeds, with ants being the only real seed dispersers. Birds, although they are major dispersers, only rarely eat seeds as a source of food; those that do can be identified by the thick beak used to crack open the seed coat. Mammals eat a more diverse range of seeds, as they are able to crush harder and larger seeds with their teeth.
Animals
Animals are used as food either directly or indirectly. This includes meat, eggs, shellfish and dairy products like milk and cheese. They are an important source of protein and are considered complete proteins for human consumption as they contain all the essential amino acids that the human body needs. One steak, chicken breast or pork chop contains about 30 grams of protein. One large egg has 7 grams of protein. A serving of cheese has about 15 grams of protein. And 1 cup of milk has about 8 grams of protein. Other nutrients found in animal products include calories, fat, essential vitamins (including B12) and minerals (including zinc, iron, calcium, magnesium).
Food products produced by animals include milk produced by mammary glands, which in many cultures is drunk or processed into dairy products (cheese, butter, etc.). Eggs laid by birds and other animals are eaten and bees produce honey, a reduced nectar from flowers that is used as a popular sweetener in many cultures. Some cultures consume blood, such as in blood sausage, as a thickener for sauces, or in a cured, salted form for times of food scarcity, and others use blood in stews such as jugged hare.
Taste
Animals, specifically humans, typically have five different types of tastes: sweet, sour, salty, bitter, and umami. The differing tastes are important for distinguishing between foods that are nutritionally beneficial and those which may contain harmful toxins. As animals have evolved, the tastes that provide the most energy are the most pleasant to eat while others are not enjoyable, although humans in particular can acquire a preference for some substances which are initially unenjoyable. Water, while important for survival, has no taste.
Sweetness is almost always caused by a type of simple sugar such as glucose or fructose, or disaccharides such as sucrose, a molecule combining glucose and fructose. Sourness is caused by acids, such as vinegar in alcoholic beverages. Sour foods include citrus, specifically lemons and limes. Sourness is evolutionarily significant as it can signal a food that may have gone rancid due to bacteria. Saltiness is the taste of alkali metal ions such as sodium and potassium. It is found in almost every food in low to moderate proportions to enhance flavor. Bitterness is a sensation generally considered unpleasant, characterised by a sharp, pungent taste. Unsweetened dark chocolate, caffeine, lemon rind, and some types of fruit are known to be bitter. Umami, commonly described as savory, is a marker of proteins and characteristic of broths and cooked meats. Foods that have a strong umami flavor include cheese, meat and mushrooms.
While most animals' taste buds are located in their mouths, some insects' taste receptors are located on their legs and some fish have taste buds along their entire body. Dogs, cats and birds have relatively few taste buds (chickens have about 30), adult humans have between 2000 and 4000, while catfish can have more than a million. Herbivores generally have more than carnivores as they need to tell which plants may be poisonous. Not all mammals share the same tastes: some rodents can taste starch, cats cannot taste sweetness, and several carnivores (including hyenas, dolphins, and sea lions) have lost the ability to sense up to four of the five taste modalities found in humans.
Digestion
Food is broken into nutrient components through the digestive process. Proper digestion consists of mechanical processes (chewing, peristalsis) and chemical processes (digestive enzymes and microorganisms). The digestive systems of herbivores and carnivores are very different, as plant matter is harder to digest. Carnivores' mouths are designed for tearing and biting, compared to the grinding action found in herbivores. Herbivores, however, have comparatively longer digestive tracts and larger stomachs to aid in digesting the cellulose in plants.
Food safety
According to the World Health Organization (WHO), about 600 million people worldwide get sick and 420,000 die each year from eating contaminated food. Diarrhea is the most common illness caused by consuming contaminated food, with about 550 million cases and 230,000 deaths from diarrhea each year. Children under five years of age account for 40% of the burden of foodborne illness, with 125,000 deaths each year.
A 2003 World Health Organization (WHO) report concluded that about 30% of reported food poisoning outbreaks in the WHO European Region occur in private homes. According to the WHO and CDC, in the USA alone, annually, there are 76 million cases of foodborne illness leading to 325,000 hospitalizations and 5,000 deaths.
From 2011 to 2016, on average, there were 668,673 cases of foodborne illness and 21 deaths each year. In addition, during this period, 1,007 food poisoning outbreaks with 30,395 cases of food poisoning were reported.
See also
Food pairing
List of food and drink monuments
References
Further reading
Collingham, E. M. (2011). The Taste of War: World War Two and the Battle for Food
Katz, Solomon (2003). The Encyclopedia of Food and Culture, Scribner
Mobbs, Michael (2012). Sustainable Food. Sydney: NewSouth Publishing
Nestle, Marion (2007). Food Politics: How the Food Industry Influences Nutrition and Health, University Presses of California, revised and expanded edition
The Future of Food (2015). A panel discussion at the 2015 Digital Life Design (DLD) Annual Conference. "How can we grow and enjoy food, closer to home, further into the future? MIT Media Lab's Kevin Slavin hosts a conversation with food artist, educator, and entrepreneur Emilie Baltz, professor Caleb Harper from MIT Media Lab's CityFarm project, the Barbarian Group's Benjamin Palmer, and Andras Forgacs, the co-founder and CEO of Modern Meadow, who is growing 'victimless' meat in a lab. The discussion addresses issues of sustainable urban farming, ecosystems, technology, food supply chains and their broad environmental and humanitarian implications, and how these changes in food production may change what people may find delicious ... and the other way around." Posted on the official YouTube Channel of DLD
External links
Food Timeline
Food, BBC Radio 4 discussion with Rebecca Spang, Ivan Day and Felipe Fernandez-Armesto (In Our Time, 27 December 2001)
Food watchlist articles
Decarboxylation | Decarboxylation is a chemical reaction that removes a carboxyl group and releases carbon dioxide (CO2). Usually, decarboxylation refers to a reaction of carboxylic acids, removing a carbon atom from a carbon chain. The reverse process, which is the first chemical step in photosynthesis, is called carboxylation, the addition of CO2 to a compound. Enzymes that catalyze decarboxylations are called decarboxylases or, the more formal term, carboxy-lyases (EC number 4.1.1).
In organic chemistry
The term "decarboxylation" usually means replacement of a carboxyl group with a hydrogen atom:
Decarboxylation is one of the oldest known organic reactions. It is one of the processes assumed to accompany pyrolysis and destructive distillation.
Overall, decarboxylation depends upon the stability of the carbanion synthon, although the anion may not be a true chemical intermediate. Typically, carboxylic acids decarboxylate slowly, but carboxylic acids with an α electron-withdrawing group (e.g. β-keto acids, β-nitriles, α-nitro acids, or arylcarboxylic acids) decarboxylate easily. Decarboxylation of sodium chlorodifluoroacetate generates difluorocarbene: ClCF2CO2Na → :CF2 + NaCl + CO2.
Decarboxylations are an important feature of the malonic and acetoacetic ester syntheses and of the Knoevenagel condensation; they allow keto acids to serve as a stabilizing protecting group for carboxylic acid enols.
For the free acids, conditions that deprotonate the carboxyl group (possibly protonating the electron-withdrawing group to form a zwitterionic tautomer) accelerate decarboxylation. A strong base is key to ketonization, in which a pair of carboxylic acids combine to form the eponymous functional group: 2 RCO2H → RCOR + CO2 + H2O.
Transition metal salts, especially copper compounds, facilitate decarboxylation via carboxylate complex intermediates. Metals that catalyze cross-coupling reactions thus treat aryl carboxylates as an aryl anion synthon; this synthetic strategy is the decarboxylative cross-coupling reaction.
Upon heating in cyclohexanone, amino acids decarboxylate. In the related Hammick reaction, the uncatalyzed decarboxylation of a picolinic acid gives a stable carbene that attacks a carbonyl electrophile.
Oxidative decarboxylations are generally radical reactions. These include the Kolbe electrolysis and Hunsdiecker-Kochi reactions. The Barton decarboxylation is an unusual radical reductive decarboxylation.
As described above, most decarboxylations start with a carboxylic acid or its alkali metal salt, but the Krapcho decarboxylation starts with methyl esters. In this case, the reaction begins with halide-mediated cleavage of the ester, forming the carboxylate.
In biochemistry
Decarboxylations are pervasive in biology. They are often classified according to the cofactors that catalyze the transformations. Biotin-coupled processes effect the decarboxylation of malonyl-CoA to acetyl-CoA. Thiamine (T:) is the active component for decarboxylation of alpha-ketoacids, including pyruvate:
Pyridoxal phosphate promotes decarboxylation of amino acids. Flavin-dependent decarboxylases are involved in transformations of cysteine.
Iron-based hydroxylases operate by reductive activation of dioxygen (O2), using the decarboxylation of alpha-ketoglutarate as an electron donor.
Decarboxylation of amino acids
Common biosynthetic oxidative decarboxylations of amino acids to amines are:
tryptophan to tryptamine
phenylalanine to phenylethylamine
tyrosine to tyramine
histidine to histamine
serine to ethanolamine
glutamic acid to GABA
lysine to cadaverine
arginine to agmatine
ornithine to putrescine
5-HTP to serotonin
L-DOPA to dopamine
Other decarboxylation reactions from the citric acid cycle include:
pyruvate to acetyl-CoA (see pyruvate decarboxylation)
oxalosuccinate to α-ketoglutarate
α-ketoglutarate to succinyl-CoA.
Fatty acid synthesis
Straight-chain fatty acid synthesis occurs by recurring reactions involving decarboxylation of malonyl-CoA.
Case studies
Upon heating, Δ9-tetrahydrocannabinolic acid decarboxylates to give the psychoactive compound Δ9-tetrahydrocannabinol. When cannabis is heated in a vacuum, the decarboxylation of tetrahydrocannabinolic acid (THCA) appears to follow first-order kinetics. The log fraction of THCA present decreases steadily over time, and the rate of decrease varies according to temperature. At 10-degree increments from 100 to 140 °C, half of the THCA is consumed in 30, 11, 6, 3, and 2 minutes; hence the rate constant follows Arrhenius' law, ranging between 10⁻⁸ and 10⁻⁵ in a linear relationship between the logarithm of the rate constant and inverse temperature. However, modelling of the decarboxylation of salicylic acid with a water molecule had suggested an activation barrier of 150 kJ/mol for a single molecule in solvent, much too high for the observed rate. Therefore, it was concluded that this reaction, conducted in the solid phase in plant material with a high fraction of carboxylic acids, follows pseudo-first-order kinetics in which a nearby carboxylic acid participates without affecting the observed rate constant. Two transition states corresponding to indirect and direct keto-enol routes are possible, with energies of 93 and 104 kJ/mol. Both intermediates involve protonation of the alpha carbon, disrupting one of the double bonds of the aromatic ring and permitting the beta-keto group (which takes the form of an enol in THCA and THC) to participate in decarboxylation.
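The rate constants implied by those half-lives can be fitted to the Arrhenius equation to estimate an apparent activation energy. The following is a minimal sketch, assuming strictly first-order behaviour and using only the half-lives and temperatures quoted above; the choice of NumPy and the conversion details are illustrative, not part of the original study:

```python
import numpy as np

# Half-lives quoted above for THCA decarboxylation (minutes),
# at 10-degree increments from 100 to 140 °C.
temps_c = np.array([100.0, 110.0, 120.0, 130.0, 140.0])
half_lives_min = np.array([30.0, 11.0, 6.0, 3.0, 2.0])

# First-order rate constants from half-lives: k = ln(2) / t_half
k = np.log(2) / half_lives_min          # min^-1

# Arrhenius fit: ln k = ln A - Ea / (R * T)
T = temps_c + 273.15                    # kelvin
R = 8.314                               # J mol^-1 K^-1
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
activation_energy = -slope * R          # J/mol

print(f"Apparent activation energy ≈ {activation_energy / 1000:.0f} kJ/mol")
```

With these numbers the fit gives an apparent activation energy of roughly 85–90 kJ/mol, which is in the same range as the computed transition-state barriers of 93 and 104 kJ/mol mentioned above.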
In beverages stored for long periods, very small amounts of benzene may form from benzoic acid by decarboxylation catalyzed by the presence of ascorbic acid.
The addition of catalytic amounts of cyclohexenone has been reported to catalyze the decarboxylation of amino acids. However, using such catalysts may also yield an amount of unwanted by-products.
References
Substitution reactions
Digestion | Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion starts with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogen carbonate (bicarbonate), which provides the ideal pH conditions for amylase to work; and other electrolytes. About 30% of starch is hydrolyzed into disaccharides in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid while also aiding lubrication. Hydrochloric acid provides the acidic pH needed for pepsin. At the same time as protein digestion occurs, mechanical mixing takes place by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which are further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake.
When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum, where it mixes with digestive enzymes from the pancreas and bile juice from the liver, and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic (about 5.6 ~ 6.9). Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon, are also absorbed into the blood in the colon. Absorption of water, simple sugars and alcohol also takes place in the stomach. Waste material (feces) is eliminated from the rectum during defecation.
Digestive system
Digestive systems take many forms. There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled.
Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted in a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient.
Secretion systems
Bacteria use several systems to obtain nutrients from other organisms in the environment.
Channel transport system
In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer membrane protein. This secretion system transports various chemical species, from ions and drugs to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa.
Molecular syringe
A type III secretion system means that a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium.
Conjugation machinery
The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system.
In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria.
Release of outer membrane vesicles
In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective.
Gastrovascular cavity
The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut.
A plant such as the Venus flytrap, which can make its own food through photosynthesis, does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat.
Phagosome
A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells.
Specialised organs and behaviours
To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others.
Beaks
Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak.
The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid.
Tongue
The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is used to roll food particles into a bolus before being transported down the esophagus through peristalsis.
The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract.
Teeth
Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape, mill and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This is the ability of sensation when chewing; for example, if we were to bite into something too hard for our teeth, such as a chipped plate mixed in food, our teeth send a message to our brain and we realise that it cannot be chewed, so we stop trying.
The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat.
Crop
A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds.
Certain insects may have a crop or enlarged esophagus.
Abomasum
Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size.
Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.
The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine.
Specialised behaviours
Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation.
Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins).
Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten.
Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components.
In earthworms
An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically break down the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue the chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body.
Overview of vertebrate digestion
In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps:
Ingestion: placing food into the mouth (entry of food in the digestive system),
Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures,
Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and
Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation.
Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.).
The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digests before excretion.
In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation.
Human digestion process
The human gastrointestinal tract is several metres long. Food digestion physiology varies between individuals and depends upon other factors such as the characteristics of the food and the size of the meal, and the process of digestion normally takes between 24 and 72 hours.
Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by mechanical mastication and swallowed into the esophagus, from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin, which would damage the walls of the stomach, so mucus and bicarbonates are secreted for protection. In the stomach, the release of further enzymes breaks the food down, and this is combined with the churning action of the stomach. Proteins are mainly digested in the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place, and this is helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in the emulsification of fats and also activates lipases.
In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.
Neural and biochemical control mechanisms
Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase.
The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata. The signal is then routed through the vagus nerve, triggering the release of acetylcholine. Gastric secretion at this phase rises to 40% of the maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal cell (acid-secreting) and G cell (gastrin-secreting) activity via D cell secretion of somatostatin.
The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, presence of food in stomach and decrease in pH. Distention activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine.
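The quoted acid strength can be checked with a rough back-of-the-envelope calculation. This is a minimal sketch, assuming "0.5%" means 0.5 g of HCl per 100 mL (a weight/volume interpretation) and complete dissociation; in vivo, dilution by food and other secretions raises the value into the 1–3 range:

```python
import math

# Assumed interpretation: 0.5% w/v HCl = 5 g per litre, fully dissociated.
mass_per_litre_g = 5.0
molar_mass_hcl_g_per_mol = 36.46
concentration_mol_per_l = mass_per_litre_g / molar_mass_hcl_g_per_mol

ph = -math.log10(concentration_mol_per_l)
print(f"pH of the undiluted secretion ≈ {ph:.1f}")   # about 0.9
```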
The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
Breakdown into nutrients
Protein digestion
Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides.
Fat digestion
Digestion of some fats can begin in the mouth, where lingual lipase breaks down some short-chain lipids into diglycerides. However, fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver, which helps in the emulsification of fats for absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids and mono- and di-glycerides, but no glycerol.
Carbohydrate digestion
In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine.
Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.
Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine.
DNA and RNA digestion
DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas.
Non-destructive digestion
Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes.
After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood.
Digestive hormones
There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. Connections to metabolic control (largely the glucose-insulin system) have been uncovered.
Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in stomach. The secretion is inhibited by low pH.
Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme.
Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme.
Gastric inhibitory peptide (GIP) – is in the duodenum and decreases the stomach churning in turn slowing the emptying in the stomach. Another function is to induce insulin secretion.
Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin.
Significance of pH
Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment.
The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens.
In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.
See also
Digestive system of gastropods
Digestive system of humpback whales
Evolution of the mammalian digestive system
Discovery and development of proton pump inhibitors
Erepsin
Gastroesophageal reflux disease
References
External links
Human Physiology – Digestion
NIH guide to digestive system
The Digestive System
How does the Digestive System Work?
Digestive system
Metabolism | 0.765635 | 0.997311 | 0.763576 |
Elution | In analytical and organic chemistry, elution is the process of extracting one material from another by washing with a solvent: washing of loaded ion-exchange resins to remove captured ions, or eluting proteins or other biopolymers from a gel electrophoresis or chromatography column.
In a liquid chromatography experiment, for example, an analyte is generally adsorbed by ("bound to") an adsorbent in a liquid chromatography column. The adsorbent, a solid phase, called a "stationary phase", is a powder which is coated onto a solid support. Based on an adsorbent's composition, it can have varying affinities to "hold onto" other molecules—forming a thin film on the surface of its particles. Elution then is the process of removing analytes from the adsorbent by running a solvent, called an "eluent", past the adsorbent–analyte complex. As the solvent molecules "elute", or travel down through the chromatography column, they can either pass by the adsorbent–analyte complex or displace the analyte by binding to the adsorbent in its place. After the solvent molecules displace the analyte, the analyte can be carried out of the column for analysis. This is why as the mobile phase, called an "eluate", passes out of the column, it typically flows into a detector or is collected by a fraction collector for compositional analysis.
Predicting and controlling the order of elution is a key aspect of column chromatographic and column electrophoretic methods.
Eluotropic series
An eluotropic series is a listing of various compounds in order of eluting power for a given adsorbent. The "eluting power" of a solvent is largely a measure of how well the solvent can "pull" an analyte off the adsorbent to which it is attached. This often happens when the eluent adsorbs onto the stationary phase, displacing the analyte. Such series are useful for determining necessary solvents needed for chromatography of chemical compounds. Normally such a series progresses from non-polar solvents, such as n-hexane, to polar solvents such as methanol or water. The order of solvents in an eluotropic series depends both on the stationary phase as well as on the compound used to determine the order.
Eluent
The eluent or eluant is the "carrier" portion of the mobile phase. It moves the analytes through the chromatograph. In liquid chromatography, the eluent is the liquid solvent; in gas chromatography, it is the carrier gas.
Eluate
The eluate contains the analyte material that emerges from the chromatograph. It specifically includes both the analytes and coeluting solutes passing through the column, while the eluent is only the carrier.
Elution time and elution volume
The "elution time" of a solute is the time between the start of the separation (the time at which the solute enters the column) and the time at which the solute elutes. In the same way, the elution volume is the volume of eluent required to cause elution. Under standard conditions for a known mix of solutes in a certain technique, the elution volume may be enough information to identify solutes. For instance, a mixture of amino acids may be separated by ion-exchange chromatography. Under a particular set of conditions, the amino acids will elute in the same order and at the same elution volume.
Antibody elution
Antibody elution is the process of removing antibodies that are attached to their targets, such as the surface of red blood cells. Techniques include using heat, a freeze-thaw cycle, ultrasound, acids or organic solvents. No single method is best in all situations.
See also
Chromatography
Desorption
Electroelution
Gradient elution in high performance liquid chromatography
Leaching
References
External links
Chemistry glossary
Eluotropic series
Analytical chemistry
Chromatography
Energy transformation | Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy between all possible states because it is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work). This is because the remainder of the heat must be reserved to be transferred to a thermal reservoir at a lower temperature. The increase in entropy for this process is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy.
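The fraction of heat that can be turned into work between two reservoirs is bounded by the Carnot efficiency, 1 − Tc/Th. Below is a minimal sketch of that bound; the reservoir temperatures are arbitrary example values, not figures from the text:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between a hot
    and a cold reservoir (absolute temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Example: heat supplied at 800 K, rejected to surroundings at 300 K.
print(f"{carnot_efficiency(800.0, 300.0):.1%}")   # 62.5% at best
```

Real heat engines fall well short of this bound because of friction, finite-rate heat transfer and other irreversibilities.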
In order to make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the nuclei is first converted to thermal energy and then to electrical energy, lies at around 35%. By direct conversion of kinetic energy to electric energy, effected by eliminating the intermediate thermal energy transformation, the efficiency of the energy transformation process can be dramatically improved.
History of energy transformation
Energy transformations in the universe over time are usually characterized by various kinds of energy, which have been available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.
Release of energy from gravitational potential
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, a process during which part of the gravitational potential energy is converted directly into heat. In Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. However, on Uranus, little of this process occurs.
On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.
Release of energy from radioactive potential
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes, such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements. This process uses the gravitational potential energy released from the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.
Release of energy from hydrogen fusion potential
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This resulted in hydrogen representing a store of potential energy which can be released by nuclear fusion. Such a fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into starlight. Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity).
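The energy stored in elevated water follows directly from the gravitational potential energy formula E = mgh. A minimal sketch with hypothetical example values (the mass and height below are illustrative assumptions, not figures from the text):

```python
# Gravitational potential energy of elevated water: E = m * g * h
mass_kg = 1000.0       # 1 tonne of water (assumed example value)
g = 9.81               # gravitational acceleration, m/s^2
height_m = 100.0       # head above the turbine (assumed example value)

energy_joules = mass_kg * g * height_m
print(f"{energy_joules / 1e3:.0f} kJ, about {energy_joules / 3.6e6:.2f} kWh before losses")
```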
Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as a chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.
Examples
Examples of sets of energy conversions in machines
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases converted into thermal energy of steam through heat exchange
Kinetic energy of steam converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
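Because the four steps act in series, the overall plant efficiency is the product of the individual stage efficiencies, which is why a single weak stage dominates the result. The sketch below is illustrative only; the per-stage figures are assumed values chosen for the example, not measured data:

```python
# Assumed (illustrative) efficiencies for each conversion stage in series.
stage_efficiencies = {
    "combustion: chemical -> thermal": 0.90,
    "boiler heat exchange: thermal -> steam": 0.85,
    "turbine: steam -> mechanical": 0.45,
    "generator: mechanical -> electrical": 0.98,
}

overall = 1.0
for stage, efficiency in stage_efficiencies.items():
    overall *= efficiency

print(f"Overall conversion efficiency ≈ {overall:.0%}")   # roughly a third
```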
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas converted to the linear piston movement
Linear piston movement converted to rotary crankshaft movement
Rotary crankshaft movement passed into transmission assembly
Rotary movement passed out of transmission assembly
Rotary movement passed through a differential
Rotary movement passed out of differential to drive wheels
Rotary movement of drive wheels converted to linear motion of the vehicle
Other energy conversions
There are many different machines and transducers that convert one energy form into another. A short list of examples follows:
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (electricity) (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electric energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat→ electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
See also
Chaos theory
Conservation law
Conservation of energy
Conservation of mass
Energy accounting
Energy quality
Groundwater energy balance
Laws of thermodynamics
Noether's theorem
Ocean thermal energy conversion
Thermodynamic equilibrium
Thermoeconomics
Uncertainty principle
References
Further reading
Energy Transfer and Transformation | Core knowledge science
Energy (physics)
Reading comprehension | Reading comprehension is the ability to process written text, understand its meaning, and to integrate with what the reader already knows. Reading comprehension relies on two abilities that are connected to each other: word reading and language comprehension. Comprehension specifically is a "creative, multifaceted process" that is dependent upon four language skills: phonology, syntax, semantics, and pragmatics.
Some of the fundamental skills required in efficient reading comprehension are the ability to:
know the meaning of words,
understand the meaning of a word from a discourse context,
follow the organization of a passage and to identify antecedents and references in it,
draw inferences from a passage about its contents,
identify the main thought of a passage,
ask questions about the text,
answer questions asked in a passage,
visualize the text,
recall prior knowledge connected to text,
recognize confusion or attention problems,
recognize the literary devices or propositional structures used in a passage and determine its tone,
understand the situational mood (agents, objects, temporal and spatial reference points, causal and intentional inflections, etc.) conveyed for assertions, questioning, commanding, refraining, etc., and
determine the writer's purpose, intent, and point of view, and draw inferences about the writer (discourse-semantics).
Comprehension skills that can be applied as well as taught to all reading situations include:
Summarizing
Sequencing
Inferencing
Comparing and contrasting
Drawing conclusions
Self-questioning
Problem-solving
Relating background knowledge
Distinguishing between fact and opinion
Finding the main idea, important facts, and supporting details.
There are many reading strategies that can be used to improve reading comprehension and inference-making; these include improving one's vocabulary, analyzing the text critically (intertextuality, actual events vs. narration of events, etc.), and practicing deep reading.
The ability to comprehend text is influenced by readers' skills and their ability to process information. If word recognition is difficult, students tend to use too much of their processing capacity to read individual words, which interferes with their ability to comprehend what is read.
Overview
Some people learn comprehension skills through education or instruction and others learn through direct experiences. Proficient reading depends on the ability to recognize words quickly and effortlessly. It is also determined by an individual's cognitive development, which is "the construction of thought processes".
There are specific characteristics that determine how successfully an individual will comprehend text, including prior knowledge about the subject, well-developed language, and the ability to make inferences through methodical questioning and comprehension monitoring; questions such as "Why is this important?" and "Do I need to read the entire text?" are examples of passage questioning.
Instruction in comprehension strategies often begins with social and imitation learning, wherein teachers explain genre styles and model both top-down and bottom-up strategies, familiarizing students with the required complexity of text comprehension. After this modeling stage, the second stage involves the gradual release of responsibility, in which teachers give students increasing individual responsibility for using the learned strategies independently, with remedial instruction as required; this helps with error management.
The final stage involves leading the students to a self-regulated learning state; with more and more practice and assessment, this leads to overlearning, and the learned skills become reflexive or "second nature". The teacher, as reading instructor, is a role model of a reader for students, demonstrating what it means to be an effective reader and the rewards of being one.
Reading comprehension levels
Reading comprehension involves two levels of processing, shallow (low-level) processing and deep (high-level) processing.
Deep processing involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words. Shallow processing involves structural and phonemic recognition, the processing of sentence and word structure, i.e. first-order logic, and their associated sounds. This theory was first identified by Fergus I. M. Craik and Robert S. Lockhart.
Comprehension levels are observed through neuroimaging techniques like functional magnetic resonance imaging (fMRI). fMRI has been used to determine the specific neural pathways of activation across two conditions: narrative-level comprehension and sentence-level comprehension. Images showed that there was less brain-region activation during sentence-level comprehension, suggesting a shared reliance on comprehension pathways. The scans also showed enhanced temporal activation during narrative-level tests, indicating that this approach activates situation and spatial processing.
In general, neuroimaging studies have found that reading involves three overlapping neural systems: networks active in visual, orthography-phonology (angular gyrus), and semantic functions (anterior temporal lobe with Broca's and Wernicke's areas). However, these neural networks are not discrete, meaning these areas have several other functions as well. The Broca's area involved in executive functions helps the reader to vary depth of reading comprehension and textual engagement in accordance with reading goals.
The role of vocabulary
Reading comprehension and vocabulary are inextricably linked. The ability to decode or identify and pronounce words is self-evidently important, but knowing what the words mean has a major and direct effect on knowing what any specific passage means when skimming reading material. It has been shown that students with a smaller vocabulary than other students comprehend less of what they read. It has also been suggested that comprehension improves with practice on word groups and complex vocabulary, such as homonyms and other words with multiple meanings, and on words with figurative meanings like idioms, similes, collocations, and metaphors.
Andrew Biemiller argues that teachers should give out topic-related words and phrases before reading a book to students, and that teaching should include topic-related word groups, synonyms of words, and their meanings in context. He further says teachers should familiarize students with sentence structures in which these words commonly occur. According to Biemiller, this intensive approach gives students opportunities to explore the topic beyond its discourse – freedom of conceptual expansion. However, there is no evidence to suggest the primacy of this approach. Incidental morphemic analysis of words – prefixes, suffixes, and roots – has also been considered to improve understanding of vocabulary, though it has proved to be an unreliable strategy for improving comprehension and is no longer used to teach students.
Vocabulary is important because it is what connects a reader to the text, while helping develop background knowledge, the reader's own ideas, communication, and the learning of new concepts. Vocabulary has been described as "the glue that holds stories, ideas, and content together...making comprehension accessible". This reflects the important role that vocabulary plays. Especially when studying various pieces of literature, it is important to have this background vocabulary; otherwise readers will become lost rather quickly. Because of this, teachers devote a great deal of attention to vocabulary programs and to implementing them in their weekly lesson plans.
History
Initially, most comprehension teaching was based on imparting selected techniques for each genre that, taken together, would allow students to be strategic readers. However, from the 1930s onward, testing of the various methods never seemed to win support in empirical research. One such strategy for improving reading comprehension is the technique called SQ3R, introduced by Francis Pleasant Robinson in his 1946 book Effective Study.
Between 1969 and 2000, a number of "strategies" were devised for teaching students to employ self-guided methods for improving reading comprehension. In 1969 Anthony V. Manzo designed, and found empirical support for, the ReQuest, or Reciprocal Questioning Procedure, which moved away from the traditional teacher-centered approach through its sharing of "cognitive secrets". It was the first method to convert a fundamental theory, such as social learning, into teaching methods through the use of cognitive modeling between teachers and students.
Since the turn of the 20th century, comprehension lessons have usually consisted of students answering teachers' questions or writing responses to questions of their own or to prompts from the teacher. This detached, whole-group version only helped students individually respond to portions of the text (content-area reading) and improve their writing skills. In the last quarter of the 20th century, evidence accumulated that academic reading-test methods were more successful at assessing comprehension than at imparting it or giving realistic insight. Instead of the prior response-registering method, research studies concluded that an effective way to teach comprehension is to teach novice readers a bank of "practical reading strategies", or tools to interpret and analyze various categories and styles of text.
Common Core State Standards (CCSS) have been implemented in the hope that students' test scores would improve. Some of the goals of CCSS relate directly to students' reading comprehension skills: they are concerned with students learning and noticing key ideas and details, considering the structure of the text, looking at how ideas are integrated, and reading texts of varying difficulty and complexity.
Reading strategies
There are a variety of strategies used to teach reading. Strategies are key to helping with reading comprehension. They vary according to challenges like new concepts, unfamiliar vocabulary, and long and complex sentences. Trying to deal with all of these challenges at the same time may be unrealistic. Strategies should also fit the ability, aptitude, and age level of the learner. Some of the strategies teachers use are reading aloud, group work, and additional reading exercises.
Reciprocal teaching
In the 1980s, Annemarie Sullivan Palincsar and Ann L. Brown developed a technique called reciprocal teaching that taught students to predict, summarize, clarify, and ask questions for sections of a text. The use of strategies like summarizing after each paragraph has come to be seen as effective for building students' comprehension. The idea is that students will develop stronger reading comprehension skills on their own if the teacher gives them explicit mental tools for unpacking text.
Instructional conversations
"Instructional conversations", or comprehension through discussion, create higher-level thinking opportunities for students by promoting critical and aesthetic thinking about the text. According to Vivian Thayer, class discussions help students to generate ideas and new questions. (Goldenberg, p. 317).
Dr. Neil Postman has said, "All our knowledge results from questions, which is another way of saying that question-asking is our most important intellectual tool" (Response to Intervention). There are several types of questions that a teacher should focus on: those that call for remembering, those that test understanding, those that require application or solving, those that invite synthesis or creating, and those that involve evaluation and judging. Teachers should model these types of questions through "think-alouds" before, during, and after reading a text. When a student can relate a passage to an experience, another book, or other facts about the world, they are "making a connection". Making connections helps students understand the author's purpose and the fiction or non-fiction story.
Text factors
There are factors that, once discerned, make it easier for the reader to understand the written text. One such factor is the genre, like folktales, historical fiction, biographies, or poetry. Each genre has its own characteristic text structure that, once understood, helps the reader comprehend it. A story is composed of a plot, characters, setting, point of view, and theme. Informational books provide real-world knowledge for students and have unique features such as headings, maps, vocabulary, and an index. Poems are written in different forms, the most commonly used being rhymed verse, haiku, free verse, and narrative. Poetry uses devices such as alliteration, repetition, rhyme, metaphors, and similes. "When children are familiar with genres, organizational patterns, and text features in books they're reading, they're better able to create those text factors in their own writing." Another factor is arranging the text according to the reader's perceptual span and using a text display suited to the reader's age level.
Non-verbal imagery
Non-verbal imagery refers to media that utilize schemata to make planned or unplanned connections within a context such as a passage, an experience, or one's imagination. Some notable examples are emoticons, cropped and uncropped images, and, more recently, emojis, which are images used to elicit humor and aid comprehension.
Visualization
Visualization is a "mental image" created in a person's mind while reading text. This "brings words to life" and helps improve reading comprehension. Asking sensory questions will help students become better visualizers.
Students can practice visualizing before seeing the picture of what they are reading by imagining what they "see, hear, smell, taste, or feel" when they are reading a page of a picture book aloud. They can share their visualizations, then check their level of detail against the illustrations.
Partner reading
Partner reading is a strategy created for reading pairs. The teacher chooses two appropriate books for the students to read. First, the pupils and their partners must read their own book. Once they have completed this, they are given the opportunity to write down their own comprehension questions for their partner. The students swap books, read them out loud to one another and ask one another questions about the book they have read.
There are different levels of this strategy:
1) The lower level, where readers need extra help recording the strategies.
2) The average level, where readers still need some help.
3) The good level, where readers require no help.
Students at a very good level are a few years ahead of the other students.
This strategy:
Provides a model of fluent reading and helps students learn decoding skills by offering positive feedback.
Provides direct opportunities for a teacher to circulate in the class, observe students, and offer individual remediation.
Multiple reading strategies
There are a wide range of reading strategies suggested by reading programs and educators. Effective reading strategies may differ for second language learners, as opposed to native speakers. The National Reading Panel identified positive effects only for a subset, particularly summarizing, asking questions, answering questions, comprehension monitoring, graphic organizers, and cooperative learning. The Panel also emphasized that a combination of strategies, as used in Reciprocal Teaching, can be effective. The use of effective comprehension strategies that provide specific instructions for developing and retaining comprehension skills, with intermittent feedback, has been found to improve reading comprehension across all ages, specifically those affected by mental disabilities.
Reading different types of texts requires the use of different reading strategies and approaches. Making reading an active, observable process can be very beneficial to struggling readers. A good reader interacts with the text in order to develop an understanding of the information before them. Some good reader strategies are predicting, connecting, inferring, summarizing, analyzing and critiquing. There are many resources and activities educators and instructors of reading can use to help with reading strategies in specific content areas and disciplines. Some examples are graphic organizers, talking to the text, anticipation guides, double entry journals, interactive reading and note taking guides, chunking, and summarizing.
The use of effective comprehension strategies is highly important when learning to improve reading comprehension. These strategies provide specific instructions for developing and retaining comprehension skills across all ages. Applying methods to attain an overt phonemic awareness with intermittent practice has been found to improve reading in early ages, specifically those affected by mental disabilities.
The importance of interest
A common finding among researchers is the importance of readers, and specifically students, being interested in what they are reading. Students report that they are more likely to finish books they have chosen themselves. They are also more likely to remember what they read if they were interested, as interest causes them to pay attention to the minute details.
Reading strategies
There are various reading strategies that help readers recognize what they are learning, which allows them to better understand themselves as readers and to recognize what information they have comprehended. These strategies also activate the strategies that good readers use when reading and understanding a text.
Think-Alouds
When reading a passage, it is good to vocalize what one is reading and the mental processes that occur while reading. This can take many different forms, including asking oneself questions about the reading or the text, making connections with prior knowledge or previously read texts, noticing when one struggles, and rereading what needs to be reread. These tasks help readers think about their reading and whether they have understood it fully, which helps them notice what changes or tactics might need to be considered.
Know, Want to know, Learned
Know, Want to know, and Learned (KWL) is often used by teachers and their students, but it is a useful tactic for all readers when considering their own knowledge. The reader first goes through the knowledge that they already have, then thinks about what they want to know or the knowledge they want to gain, and finally, after reading, thinks about what they have learned. This allows readers to reflect on the prior knowledge they have and to recognize what knowledge they have gained and comprehended from their reading.
Comprehension strategies
Research studies on reading and comprehension have shown that highly proficient, effective readers utilize a number of different strategies to comprehend various types of texts, strategies that can also be used by less proficient readers in order to improve their comprehension. These include:
Making Inferences: In everyday terms we refer to this as "reading between the lines". It involves connecting various parts of texts that are not directly linked in order to form a sensible conclusion. A form of assumption, the reader speculates what connections lie within the texts. They also make predictions about what might occur next.
Planning and Monitoring: This strategy centers on the reader's mental awareness and their ability to control their comprehension by way of that awareness. By previewing text (via outlines, tables of contents, etc.) one can establish a goal for reading: "What do I need to get out of this?" Readers use context clues and other evaluation strategies to clarify texts and ideas, and thus monitor their level of understanding.
Asking Questions: To solidify one's understanding of passages of texts, readers inquire and develop their own opinion of the author's writing, character motivations, relationships, etc. This strategy involves allowing oneself to be completely objective in order to find various meanings within the text.
Self-Monitoring: Asking oneself questions about reading strategies, whether they are getting confused or having trouble paying attention.
Determining Importance: Pinpointing the important ideas and messages within the text. Readers are taught to identify direct and indirect ideas and to summarize the relevance of each.
Visualizing: With this sensory-driven strategy, readers form mental and visual images of the contents of text. Being able to connect visually allows for a better understanding of the text through emotional responses.
Synthesizing: This method involves marrying multiple ideas from various texts in order to draw conclusions and make comparisons across different texts; with the reader's goal being to understand how they all fit together.
Making Connections: A cognitive approach also referred to as "reading beyond the lines", which involves:
(A) finding a personal connection to reading, such as personal experience, previously read texts, etc. to help establish a deeper understanding of the context of the text, or (B) thinking about implications that have no immediate connection with the theme of the text.
Assessment
There are informal and formal assessments to monitor an individual's comprehension ability and use of comprehension strategies. Informal assessments are generally conducted through observation and the use of tools like story boards, word sorts, and interactive writing. Many teachers use formative assessments to determine whether a student has mastered the content of the lesson. Formative assessments can be verbal, as in a "Think-Pair-Share" or "Partner Share", or can take the form of a "ticket out the door" or a digital summarizer. Formal assessments are district or state assessments that evaluate all students on important skills and concepts. Summative assessments are typically given at the end of a unit to measure a student's learning.
Running records
A popular assessment undertaken in numerous primary schools around the world is the running record. Running records are a helpful tool for reading comprehension: they assist teachers in analyzing specific patterns in student behaviors and in planning appropriate instruction. By conducting running records, teachers gain an overview of students' reading abilities and learning over a period of time.
In order to conduct a running record properly, the teacher must sit beside a student and make sure that the environment is as relaxed as possible so the student does not feel pressured or intimidated. It is best if the running record assessment is conducted during reading, to avoid distractions. Another alternative is asking an education assistant to conduct the running record in a separate room while the teacher teaches or supervises the class. The teacher quietly observes the student's reading and records it during this time, using a specific recording code that most teachers understand. Once the student has finished reading, the teacher asks them to retell the story as best as they can and then asks comprehension questions to test their understanding of the book. At the end of the assessment the running record score is totalled and the assessment sheet filed away. After the completion of the running record assessment, the teacher plans strategies to improve the student's ability to read and understand the text.
Overview of the steps taken when conducting a Running Record assessment:
Select the text
Introduce the text
Take a running record
Ask for retelling of the story
Ask comprehension questions
Check fluency
Analyze the record
Plan strategies to improve the student's reading/understanding ability
File results away.
Difficult or complex content
Reading difficult texts
Some texts, as in philosophy, literature, or scientific research, may appear more difficult to read because of the prior knowledge they assume, the tradition from which they come, or their tone, such as criticizing or parodying. The philosopher Jacques Derrida explained his opinion about complicated texts: "In order to unfold what is implicit in so many discourses, one would have each time to make a pedagogical outlay that is just not reasonable to expect from every book. Here the responsibility has to be shared out, mediated; the reading has to do its work and the work has to make its reader." Other philosophers, however, believe that if one has something to say, one should be able to make the message readable to a wide audience.
Hyperlinks
Embedded hyperlinks in documents or Internet pages have been found to make different demands on the reader than traditional text. Authors such as Nicholas Carr and psychologists such as Maryanne Wolf contend that the internet may have a negative impact on attention and reading comprehension. Some studies report increased demands from reading hyperlinked text in terms of cognitive load, or the amount of information actively maintained in one's mind (see also working memory). One study showed that going from about 5 hyperlinks per page to about 11 per page reduced college students' understanding (assessed by multiple-choice tests) of articles about alternative energy. This can be attributed to the decision-making process (deciding whether to click on a link) required by each hyperlink, which may reduce comprehension of the surrounding text.
On the other hand, other studies have shown that if a short summary of the link's content is provided when the mouse pointer hovers over it, then comprehension of the text is improved. "Navigation hints" about which links are most relevant improved comprehension. Finally, the background knowledge of the reader can partially determine the effect hyperlinks have on comprehension. In a study of reading comprehension with subjects who were familiar or unfamiliar with art history, texts which were hyperlinked to one another hierarchically were easier for novices to understand than texts which were hyperlinked semantically. In contrast, those already familiar with the topic understood the content equally well with both types of organization.
In interpreting these results, it may be useful to note that the studies mentioned were all performed in closed content environments, not on the internet. That is, the texts used only linked to a predetermined set of other texts which was offline. Furthermore, the participants were explicitly instructed to read on a certain topic in a limited amount of time. Reading text on the internet may not have these constraints.
Professional development
The National Reading Panel noted that comprehension strategy instruction is difficult for many teachers as well as for students, particularly because they were not taught this way and because it is a demanding task. They suggested that professional development can increase teachers' and students' willingness to use reading strategies, but admitted that much remains to be done in this area.
The directed listening and thinking activity is a technique available to teachers to aid students in learning how to read and in reading comprehension. It can also be difficult for students who are new to it. There is often some debate when considering the relationship between reading fluency and reading comprehension. There is evidence of a direct correlation between fluency and comprehension, leading to better understanding of written material across all ages. The National Assessment of Educational Progress assessed U.S. student performance in reading at grade 12 from both the public and private school populations and found that only 37 percent of students had proficient skills. The majority, 72 percent of the students, were only at or above basic skills, and 28 percent of the students were below the basic level.
See also
Balanced literacy
Baseball Study
Directed listening and thinking activity
English as a second or foreign language
Fluency
Levels-of-processing
Phonics
Readability
Reading
Reading for special needs
Simple view of reading
SQ3R
Synthetic phonics
Whole language
Notes
References
Sources
Further reading
External links
Info, Tips, and Strategies for PTE Read Aloud, Express English Language Training Center
English Reading Comprehension Skills, Andrews University
SQ3R Reading Strategy And How to Apply It, ProductiveFish
Vocabulary Instruction and Reading comprehension – From the ERIC Clearinghouse on Reading English and Communication.
ReadWorks.org | The Solution to Reading Comprehension
Education in the United States
Learning to read
Comprehension